title
stringlengths 10
147
| abstract
stringlengths 3
2.41k
| tldr_text
stringlengths 96
508
⌀ | content_markdown
stringlengths 0
464k
⌀ | authors
sequencelengths 0
41
⌀ | date
timestamp[ms] | publish_info
stringclasses 114
values | publish_is_top
bool 2
classes | citation_count
uint32 0
1k
| citation_count_filtered_math_and_top_conf
uint32 0
129
| theorem_provers
sequencelengths 1
4
⌀ | url
stringlengths 31
152
⌀ | arxiv_url
stringlengths 32
32
⌀ | semantics_scholar_url
stringlengths 78
78
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ART: Automatic multi-step reasoning and tool-use for large language models | Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings by generating intermediate chain of thought (CoT) reasoning steps. Further, each reasoning step can rely on external tools to support computation beyond the core LLM capabilities (e.g. search/running code). Prior work on CoT prompting and tool use typically requires hand-crafting task-specific demonstrations and carefully scripted interleaving of model generations with tool use. We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program. Given a new task to solve, ART selects demonstrations of multi-step reasoning and tool use from a task library. At test time, ART seamlessly pauses generation whenever external tools are called, and integrates their output before resuming generation. ART achieves a substantial improvement over few-shot prompting and automatic CoT on unseen tasks in the BigBench and MMLU benchmarks, and matches performance of hand-crafted CoT prompts on a majority of these tasks. ART is also extensible, and makes it easy for humans to improve performance by correcting errors in task-specific programs or incorporating new tools, which we demonstrate by drastically improving performance on select tasks with minimal human intervention. | Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program, achieves a substantial improvement over few-shot prompting and automatic CoT on unseen tasks in the BigBench and MMLU benchmarks, and matches performance of hand-crafted CoT prompts on a majority of these tasks. | ## ART: Automatic multi-step reasoning and tool-use for large language models
**Bhargavi Paranjape[1]** **Scott Lundberg[2]** **Sameer Singh[3]** **Hannaneh Hajishirzi[1][,][4]**
**Luke Zettlemoyer[1][,][5]** **Marco Tulio Ribeiro[2]**
1University of Washington, 2Microsoft Research, 3University of California, Irvine,
4Allen Institute of Artificial Intelligence, 5Meta AI
**Abstract**
Large language models (LLMs) can perform
complex reasoning in few- and zero-shot settings by generating intermediate chain of
thought (CoT) reasoning steps. Further, each
reasoning step can rely on external tools to support computation beyond the core LLM capabilities (e.g. search/running code). Prior work
on CoT prompting and tool use typically requires hand-crafting task-specific demonstrations and carefully scripted interleaving of
model generations with tool use. We introduce Automatic Reasoning and Tool-use
(ART), a framework that uses frozen LLMs
to automatically generate intermediate reasoning steps as a program. Given a new task to
solve, ART selects demonstrations of multistep reasoning and tool use from a task library. At test time, ART seamlessly pauses
generation whenever external tools are called,
and integrates their output before resuming
generation. ART achieves a substantial improvement over few-shot prompting and automatic CoT on unseen tasks in the BigBench
and MMLU benchmarks, and matches performance of hand-crafted CoT prompts on a majority of these tasks. ART is also extensible,
and makes it easy for humans to improve performance by correcting errors in task-specific
programs or incorporating new tools, which
we demonstrate by drastically improving performance on select tasks with minimal human
intervention.
A
B
**LLM Output**
**Tool Output**
C
**Human Feedback**
|Task: Translate into Pig Latin Input: albert goes home|Col2|Col3|
|---|---|---|
|Task Library Select Examples|||
|Task: Anachronisms Input: George HW ... Gulf War Q1: [search] When was George H. W. Bush, president? #1: From 1989-1993 ... Q2: [EOQ] Ans: True|Task: Arithmetic Input: Viola bought 167 books... Q1: [gen code] Write arithmetic as python code #1: viola =167, nancy = 137 ans = viola - nancy Q2: [exec code] Execute code Q3: [EOQ] Ans: 30||
|Run Program Frozen LLM|||
|Q1: [search] How to write english as pig latin? #1: Add "yay" if it starts with a vowel ... Q2: [gen code] Write code to translate "albert goes driving" to pig latin. #2: for w in ["albert", "goes", "home"]: if w[0] in "aeiou": print(w + "yay") ... Q3: [exec] Execute snippet #3: albertyay oesgay rivingday Q4: [EOQ] Ans: albertyay oesgay rivingday|||
|Fix Mistakes (optional)|||
|Q1: [search] How to write english as pig latin? #1: Add "yay" if it starts with a vowel ... Q2: [gen code] Write code to translate "albert ... #2: for w in ["albert", "goes", "home"]: if w[0] in "aeiou": print(w + "yay") ...consonent_cluster = find_clstr(w) Q3: [exec code] Execute snippet #3: albertyay oesgay ivingdray Q4: [EOQ] Ans: albertyay oesgay ivingdray|||
Figure 1: ART generates automatic multi-step decompositions for new tasks by selecting decompositions of
related tasks in the task libray (A) and selecting and
using tools in the tool library alongside LLM generation (B). Humans can optionally edit decompositions
(eg. correcting and editing code) to improve performance (C).
**1** **Introduction**
In-context learning allows large language models
(LLMs) to quickly adapt to new tasks simply by using natural language instructions and a few demonstrations as a prompt to the LLM (Xie et al., 2021;
Brown et al., 2020; Chowdhery et al., 2022). While
this circumvents annotating large datasets or even
hosting the LLM itself (since many are available
through APIs), there are severe performance limitations around multi-step reasoning (Liu et al., 2022),
math (Patel et al., 2021), having up-to-date information (Komeili et al., 2022), and others. To address
these limitations, recent work proposes prompting
LLMs to mimic a chain of thought (CoT) for multistep reasoning (Wei et al.; Zhou et al., 2022; Wang
et al., 2022; Press et al., 2022; Khot et al., 2022;
Arora et al., 2022) or providing them with access
to tools (e.g. a calculator or QA model) to enable
more complex reasoning steps (Gao et al., 2022;
-----
Chen et al., 2022; Press et al., 2022; Wei et al.;
Schick et al., 2023). However, existing methods
for chained reasoning with tool use are difficult to
extend to new tasks and tools, requiring fine-tuning
or prompt-engineering tailored for a specific task
(Parisi et al., 2022) or tool (Schick et al., 2023).
In this paper, we present Automatic Reasoning
and Tool use (ART), a framework that automatically generates decompositions (multi-step reasoning) for instances of new tasks. The framework
also selects and uses the most appropriate available tools (like search engines, and code execution)
in individual steps. Given a new task, ART retrieves demonstrations of related tasks from a task
_library to enable few-shot decomposition and tool_
use. These demonstrations follow a flexible but
structured query language (Beurer-Kellner et al.,
2022), such that it is easy to parse intermediate
steps, stop generation to call external tools, and
resume it after including the output of such tools
(Figure 1). ART provides the LLM with demonstrations of how to decompose instances of several related tasks, and how to select and use any tool from
the tool library that is represented in these demonstrations. This encourages the model to generalize
from demonstrations to decompose a new task and
use tools in appropriate places, zero-shot. It also
enables users to fix any mistakes in the reasoning
chain or add new tools by simply updating the task
and tool libraries, providing new demonstrations
where necessary (e.g. for the task at hand).
We construct a task library for 15 diverse BigBench (Srivastava et al., 2022) tasks, and evaluate ART on 19 unseen test tasks from BigBench, 6 MMLU tasks, and various tasks used
by related work on tool use (SQUAD, TriviaQA,
SVAMP, MAWPS). ART consistently matches or
outperforms automatically generated CoT reasoning chains on 32 / 34 BigBench and all MMLU
tasks, by an average of over 22 percentage points.
Tool-use in particular improves performance on test
tasks by an average of over 12.3 percentage points,
as compared to when no tools are allowed (Table 3).
ART improves over direct few-shot prompting by
10.8% percentage points on average across unseen
BigBench and MMLU tasks. Improvements are
particularly notable on unseen tasks requiring arithmetic and algorithmic reasoning, where ART improves over direct few-shot prompting by 12.5%
and previous best-known results for GPT3 that use
supervision for decomposition and/or tool use by
6.1% percentage points (Table 3).
Finally, ART enables human intervention and improvement of the reasoning process by simply updating the task and tool libraries with new demonstrations, making it very easy to improve performance on any specific task with minor human feedback. On 12 test tasks, ART with additional human feedback surpasses the best-known results for
GPT3 by an average of over 20% points (Table 6).[1]
**2** **Related Work**
**Scaled finetuning for low-resource adaptation**
Recent work has shown that finetuning LLMs on
a broad range of public NLP datasets (with prefixed instructions) is an effective technique for
cross-task generalization (Mishra et al., 2021; Sanh
et al., 2021; Khashabi et al., 2020; Wei et al.,
2021) in both the zero-shot and few-shot settings.
Ouyang et al. (2022) show that aligning language
models with user intent on a wide range of tasks
by fine-tuning with human feedback for desired
model behavior (InstructGPT) further improves
in-context learning performance on complex NLP
tasks. Chung et al. (2022) show that finetuning
on an aggregated mixture of tasks (T0, CoT, dialog, and code datasets) together with scaling models to 540B parameters achieves state-of-the-art
in-context learning performance on several benchmarks such as BigBench and MMLU. ART uses
API access to InstructGPT and Codex (LLM finetuned on code (Chen et al., 2021)) to leverage their
emergent in-context learning abilities. Future improvements in scaled finetuning in LLMs will likely
improve the performance on ART.
**Prompting with intermediate reasoning steps**
Chain-of-thought (CoT) prompting (Wei et al.,
2022; Suzgun et al., 2022) is a popular gradientfree technique that encourages LLMs to generate intermediate reasoning steps prior to the final
answer, with multiple task-specific variants (e.g.
Least-to-most prompting (Zhou et al., 2022), SelfAsk (Press et al., 2022), Ask-me-anything (Arora
et al., 2022), Successive prompting (Dua et al.,
2022), decomposed prompting (Khot et al., 2022)).
While such prompts were initially hand-crafted, recent work (Kojima et al., 2022) showed that LLMs
can generate CoT-style multi-step reasoning in a
zero-shot manner, when prompted with the prefix
“Let’s think step-by-step". Zhang et al. (2022) use
1Code is available at [https://github.com/](https://github.com/bhargaviparanjape/language-programmes/)
[bhargaviparanjape/language-programmes/](https://github.com/bhargaviparanjape/language-programmes/)
-----
Table 1: Comparing ART with related approaches for
multi-step reasoning and tool-use
|Feature|CoT Auto Tool- ART|
|---|---|
||CoT former|
|Multi-step reasoning Limited supervision Tool use Extendable libraries Cross-task transfer Human feedback|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|
LLMs to automatically generate such CoT-style
prompts—AutoCoT—which are competitive with
hand-crafted prompts in their performance on arithmetic and commonsense reasoning tasks. We compare ART, CoT and AutoCoT in Table 1. ART
builds on this line of work, introducing a common
language that enables cross-task demonstrations
and flexible and extensible tool use, improving accuracy of intermediate reasoning steps.
**Tool Use** There is growing interest in overcoming LLM limitations with external tools such as
search engines, web browsers, calculators, translation systems, and python interpreters (Komeili
et al., 2022; Thoppilan et al., 2022; Lazaridou et al.,
2022; Shuster et al., 2022; Nakano et al., 2021;
Thoppilan et al., 2022; Cobbe et al., 2021; Thoppilan et al., 2022; Gao et al., 2022; Chen et al.,
2022). Most of these approaches either require
large amounts of human supervision (Thoppilan
et al., 2022; Komeili et al., 2022) or carefully constructed prompts tailored to specific tasks and particular tools. An alternative line of recent work
uses self-supervision to teach LLMs to use search,
translation, and a calculator (Schick et al., 2023)—
Toolformer. In contrast, since ART does not require
any additional training or tool-specific prompts, it
allows users flexibility both in terms of replacing
the underlying LLM (e.g. when a new version
of GPT-3 is released), and in replacing or adding
_new tools (either general-purpose tools or tools that_
are important for a specific task of interest). We
compare ART and Toolformer in Table 1. In Section 3.4, we show how human-in-the-loop feedback
— analyzing and debugging LLM generations and
extending tool-use — can provide a large boost in
the performance of ART while also extending it
with new tools. This built-in feedback loop and
adaptive capability of ART extends the capabilities
of LLMs that are finetuning to follow instructions
and use tools.
**3** **ART**
With ART, a frozen LLM decomposes instances
of a new task into multiple steps (using external
tools whenever appropriate), despite not having explicit supervision for decomposition or tool use.
In this section, we present an overview of ART,
followed by more thorough descriptions of each
individual component. We use the Physics Question Answering (PQA) task as a running example,
which consists of high-school physics problems.
**3.1** **Overview**
In Figure 2, ART is presented with a new task
description and input instance. We also assume
access to a few input-output pairs (not shown), with
no decomposition or tool use supervision.
**Prompt building.** ART retrieves similar tasks
from a task library (Figure 2(A); Section 3.2), and
adds instances of those tasks as demonstrations in
the prompt.
A demonstration in the task library is written in a
specific format, defined by a custom parsing expres_sion grammar (PeG) (Section 3.2). The grammar_
is defined such that each task instance is decomposed into a sequence of sub-steps. Some of these
sub-steps contain symbols corresponding to tools
in a tool library (Section 3.3). We refer to these
decompositions as programs, since the sequential
reasoning steps and symbolic calls to tools are similar to a conventional program with function calls.
The resultant prompt consists of programs from
related tasks and teaches the LLM how to effectively decompose instances of a new task—related
sub-steps and tools in these programs can be used
by the LLM for cross-task generalization.
In Figure 2(A), the demonstrations include calls
to both search and code tools.
**Generation.** At generation time (Figure 2(B)),
the LLM writes its own program. ART parses the
program as it is generated, and pauses generation
whenever a tool call is encountered in the generated
text, resuming generation after the tool is called and
its output is integrated back into the program. As
illustrated in the figure, a search engine is used
to find the appropriate physics formula, and then
the LLM uses code generation and execution to
substitute the given values and compute the answer.
**Human feedback (optional).** Humans can add
new decomposition demonstrations to the task library, or add/edit tools in the tool library in order
-----
**New Task (Physics QA) Answer this high-school physics question**
**Input: Hector yanks on the chain with a 72.0 N force at an angle of 35.0° above the horizontal. Determine the horizontal components of the tension force.**
|Code CoT-style Search Arithmetic String B TOOL LIBRARY operations reasoning operations A TASK LIBRARY Input: Hector yanks on the chain with a 72.0 N force at an angle of 35.0° above the horizontal. Determine the horizontal components of the tension force. Solve these arithmetic problems using python code Input: Viola had 167 breads. Nancy took 137from him. How Q1: [search] What is the formula for the horizontal component of the tension force? many does Viola have now? #1: The formula for the horizontal component of the tension force is Tcosθ. The horizontal Q1: [generate code] Write down arithmetic as python code component (Fx) can be calculated as Ftens*cosine(θ) where θ is the angle which the force make #1: viola_bought = 167, nancy_took = 137 s with the horizontal in radians. ans = viola_bought - nancy_took Input: ... Q1: [search] ... LLM Q2: [code execute] Execute snippet #2: 30 Q3: [EOQ] Ans: No #1: ... can be calculated as Ftens*cosine(θ)where θ is ... Q2: [generate code] Use the formula Fx = Ftens*cosine(θ) to solve: Hank ... Does the sentence contain an anachrornism? Yes/No. Input: President George H. W. Bush called his generals at the outset of the #2:T = 72.0, theta = 35.0 Gulf War. radians= math.pi*theta/180 Q1: [search] When was President George H. W. Bush, president? Fx = T*math.cos(radians) #1: George H. W. Bush's tenure started on January 20, 1989, Input: ...Q1: [search] ...#1: ... and ended on January 20, 1993. Q2: [generate code] Use the formula Fx = Ftens*cosine(θ) to solve: Hank ... Q2: [search] When was the Gulf War fought? #2: The Gulf War was a 1990–1991 #2: ... Fx = T*math.cos(radians) Q3: [subquestion] Could these entities have co-existed? #3: Yes. Their time Q3: [code execute] Execute the python code and get the value of "Fx" periods intersect. #3: 58.9789 Q4: [generate output] Is this an anachronism? #4: No Q5: [EOQ] Ans: No Q4: [EOQ] Ans: 58.9789|Col2|
|---|---|
|Solve these arithmetic problems using python code Input: Viola had 167 breads. Nancy took 137from him. How many does Viola have now? Q1: [generate code] Write down arithmetic as python code #1: viola_bought = 167, nancy_took = 137 ans = viola_bought - nancy_took Q2: [code execute] Execute snippet #2: 30 Q3: [EOQ] Ans: No||
|Does the sentence contain an anachrornism? Yes/No. Input: President George H. W. Bush called his generals at the outset of the Gulf War. Q1: [search] When was President George H. W. Bush, president? #1: George H. W. Bush's tenure started on January 20, 1989, and ended on January 20, 1993. Q2: [search] When was the Gulf War fought? #2: The Gulf War was a 1990–1991 Q3: [subquestion] Could these entities have co-existed? #3: Yes. Their time periods intersect. Q4: [generate output] Is this an anachronism? #4: No Q5: [EOQ] Ans: No||
||Input: ...Q1: [search] ...#1: ... Q2: [generate code] Use the formula Fx = Ftens*cosine(θ) to solve: Hank ... #2: ... Fx = T*math.cos(radians) Q3: [code execute] Execute the python code and get the value of "Fx" #3: 58.9789 Q4: [EOQ] Ans: 58.9789|
Q3: [code execute] Execute the python code and get the value of "Fx"
Figure 2: A run-through of ART on a new task, Physics QA. (A) Programs of related tasks like anachronisms and
Math QA provide few-shot supervision to the LLM — related sub-steps and tools in these programs can be used
by the LLM for cross-task generalization (shown in purple). (B) Tool use: Search is used to find the appropriate
physics formula, and code generation and execution are used to substitute given values and compute the answer
(shown in orange).
to improve performance on a particular task of interest, or in general. In Figure 3(C) a user corrects
a specific program by including a step that adds
the unit of measurement, and adds this (modified)
program to the task library. While most of our experiments do not use such feedback, we show that
it is very effective at drastically improving performance when task generalization does not happen
automatically. Further, it gives users flexibility to
add custom tools without retraining of the LLM.
**3.2** **Task Library**
We construct a library of programs for a small
seed set of tasks from Big-Bench (Srivastava et al.,
2022), a collaborative benchmark that measures
the capabilities and limitations of language models. Big-Bench tasks span categories of traditional
NLP, mathematics, commonsense reasoning, and
question-answering.
**Constructing the task library.** We identify five
skills that are useful across more than half of the
tasks in BigBench that encompass text classification or generation of short answers in English (see
A.1). We group tasks in the benchmark by these
skills into the following clusters:
- Arithmetic: arithmetic and algebra problems.
- Code: Generating and executing python code.
- Search and question decomposition: Single or
multi-step questions that require search
- Free-form reasoning: Explaining step-by-step
reasoning in natural language
- String Operations: Reformatting/editing
strings, checking string entailment, etc.
We then select 2-4 tasks from each cluster and write
programs (decompositions) for a few instances of
each task, including calls to external tools and real
outputs of those tools. Examples of programs in
each cluster are in Appendix A.1. These programs
follow a specific grammar, as outlined below.
**Program grammar** The program format must
be flexible in terms of task inputs, steps, and tool
calls, such that a wide variety of NLP tasks can
be covered. To do so, we define a query language
(Beurer-Kellner et al., 2022) that extends the decomposed prompting format of Khot et al. (2022),
since it can represent decomposed reasoning steps
sequentially and incorporates function calls to external tools (like other LLMs). Each program consists of a series of nodes — a task input node, several sub-step nodes, and an answer node. The input
_node contains the task name, a simple instruction_
describing the task, and the input for an instance
of the task: “Answer this high-school Physics question.
**Input: Hector yanks...”. The input node is fol-**
lowed by a sequence of sub-task nodes, represented
as a (query, answer) pair “Qi : ..., #i : ...”. The
sub-task query Qi has a sub-task name and sub
-----
|Human feedback C|Col2|
|---|---|
|Q1: [search]...What is the formula for the horizontal component of the tension force? #1: ... calculated as Ftens*cosine(θ)where θ is ... Q2: [generate code] Use formula Fx = Ftens*cosine(θ) to solve: Hanks... #2: Fx = T*math.cos(radians) ... print(Fx) Q3: [code execute] Execute snippet get the value of "Fx" #3: 58.9789 Q4: [arithmetic] Round the answer to the nearest integer #4: 59 Q5: [add unit] Add the appropriate unit of measurement to the answer. #5: 59 N Q4: [EOQ] Ans: 59 N|Q1: [string split] What are the letters in "nwist" #1: %s Q2: [string permutation] What are the possible permutations of 'nwisr'? #2: ['nwist', 'nwits', 'nwsit', 'nwsti', 'nwtis', 'nwtsi', 'niwst', 'niwts', 'niswt',... Q3: [lookup] which word in the list is a common English word ? #3: twins Q4: [EOQ] Ans: twins|
**(a) Correcting generated programs**
**by adding additional reasoning steps**
**(b) Adding additional tool use examples and**
**new tool definitions**
**TASK LIBRARY**
Figure 3: Human feedback to ART shown for (a) PQA where reasoning steps are added to the program and; (b)
Word unscrambling where tool library is augmented with a new lookup tool.
**3.3** **Tool Library**
Whenever a sub-task query name matches a tool
name in the task library (e.g. “Qi: [search]”), generation is stopped and resumed after the tool is
called and its output is incorporated into the partially completed program. We seed the tool library
with the following tools (all of which have demonstrations in the task library). In particular, we describe the symbols used to represent these tools and
their inputs. We also specify how the tool output is
incorporated back into the program. Tool-specific
implementation details and other tools added to
ART during feedback (3.4) are in Appendix A.3.
**Search** We use SerpAPI[2], which provides an API
for Google search. The input to search is the sequence generated by the LLM after “Qi: [search]”.
We extract answer box snippets when they are available or combine the top-2 search result snippets together. For PQA in Figure 2(B), the search query is
the original input followed by “What is the formula
for the horizontal component of tension force?”,
and the output is ““... horizontal component (Fx)
can be calculated as Ftens*cosine(θ) ...””.
**Code Generation** We use the Codex (Chen
et al., 2021) model for code generation. Input
to code generation is the sequence generated by
the LM after the sub-task query symbol “Qi :
[generate python code]”. This argument is an
instruction for code generation and is prompted
to Codex as a multi-line comment in Python. For
example, in Figure 2, Codex is prompted the instruction ““Use the formula Fx = Ftens * cosine(θ)
to solve...”” as a comment and generates “T = 72.0,
[2https://serpapi.com](https://serpapi.com)
task input (“Q1: [search] What is the formula...”),
while the sub-task answer #i is simply the output
of the sub-task (“#1: The horizontal component
(Fx) can be calculated...”). The program ends with
a dummy sub-task (“Q3: [EOQ]”), followed by a
final answer node (“Ans: 59N”). All examples in
Figures 1 and 2 follow this format.
**Task Retrieval** Given a new task, ART retrieves
_N tasks from the task library to construct a dy-_
namic multi-task prompt. We explore two strategies to retrieve similar tasks, depending on what
data is available. If a small number of labeled
examples for the new task is available (≈50), we iterate over all five task clusters and select a few task
programs from each cluster to compose the prompt.
Ultimately, the task cluster with the highest performance on the held-out set of examples is chosen
when predicting on all unlabeled examples from
the task. While this strategy requires a held-out set
of input-output pairs, no additional supervision is
needed to generate a decomposed program.
In the second strategy, we craft a few-shot
prompt (Appendix A.2) with task pairs, where each
task includes a name, instructions, and a few inputoutput examples. For each pair, we provide a label of “Similar” or “Not similar”, and reasoning
(e.g. “These are related because they require solving arithmetic word problems”). At run time, we
pair the test task with every task in the task library,
and choose the highest-ranked ones based on the
log probability ratio between “Similar” and “Not
similar”. We explore both strategies in Section A.2.
-----
theta = 35.0, ..., Fx = T*math.cos(radians)”, which
is appended to the incomplete program.
**Code Execution** We run Python code in a virtual
Python environment with arithmetic, symbolic, and
scientific computing packages pre-installed. The
argument to code execute is the previous sub-task’s
answer sequence “#(i − 1) : . . . ”, i.e. the python
code snippet to be executed. For i = 1, the task
input is used as the argument since it potentially
contains the code snippet to be executed. In Figure 2, the code snippet generated in the previous
step is executed and the value of variable “Fx” is
added to the incomplete program.
**3.4** **Human feedback**
ART is specifically designed to be amenable to human feedback since it does not require additional
finetuning. Consequently, users can incorporate
feedback immediately into ART, by editing the
task library and/or the tool library. Since ART
generates multi-step reasoning programs that are
interpretable, we explore feedback in the form of
debugging, i.e. users edit existing programs rather
than creating programs from scratch. These edits
can be in the form of correcting sub-step outputs,
adding/removing sub-steps (with appropriate inputs and answers), adding calls to new tools, etc.
For example, in Figure 3(a) the user edits a program by adding two sub-steps, in order to round
the answer to the nearest integer and include the appropriate unit of measurement to the answer. This
feedback demonstrates appropriate decompositions
for the task, as these operations are still performed
by the LLM (the tool library does not have “[arithmetic]” or “[add unit]” APIs). In contrast, in Figure
3(b) the user demonstrates the use of a dictionary
“[lookup]” and implements it as a tool in the tool
library. While most of our experiments do not rely
on such feedback (and thus measure “zero shot”
task transfer with no supervision for reasoning/tooluse), we show that simple operations like these can
drastically improve performance on target tasks.
**4** **Experimental Setup**
**Evaluation Datasets** In addition to 15 tasks in
the task library (Section 3.2), we evaluate ART on
19 additional test tasks from BigBench which also
belong to the five task clusters identified in Section 3.2. To check for cross-benchmark generalization, we further evaluate ART on a random subset
of tasks from the MMLU benchmark (Hendrycks
et al., 2020). Finally, we also evaluate on a subset of tasks used to evaluate Toolformer (Schick
et al., 2023), in order to compare ART to a model
fine-tuned for tool use.
**Details** We use InstructGPT (text-davinci-002)
as the frozen LLM, and Codex as the code generation tool, with temperature set to 0.3. We set
the number of seed tasks in the prompt to N = 3
and use 2 demonstration programs from each task.
We measure the preferred scoring metric for each
task as in Srivastava et al. (2022), and report performance averaged over 5 runs.
**Baselines** ART proposes an automatic framework to generate multi-step reasoning decompositions and use relevant available external tools
within those decompositions. We compare with the
following baselines:
- Few-shot/Direct: Prompting LLMs with
input-output pairs (but no intermediate reasoning). We use 3 examples for BigBench
and 5 examples for MMLU, as done in prior
work (Suzgun et al., 2022). We evaluate this
baseline for both, GPT-3 and Codex, and report the higher of the two.
- Auto-CoT: A baseline that automatically generates multi-step reasoning in natural language. A random subset of 5 examples is
first used to elicit CoT-style reasoning (Input
_+ Let’s think step-by-step.). These examples_
and their generated output form the prompt for
other unseen examples of the task. This baseline is free-form and does not include tools,
and thus allows us to verify the effectiveness
of our query language and task library. We
evaluate this baseline for GPT-3.
- ART-tool: ART with tool-use turned off, i.e.
the LLM generates the output of every substep, to verify the gains from tool use.
- GPT-3 Best: Best published GPT-3/Codex
(175B) result with multi-step decomposition
and/or tool use. These often include additional
human supervision to decompose reasoning
steps, and external tools to boost performance
(with carefully constructed prompts).
Additional details about baselines and GPT-3 best
models are in Appendix A.4.
**5** **Results**
We evaluate ART (without human feedback) on
tasks in the task library (5.1), and on a variety
-----
**Task Name (Cluster)** **Few Shot** **AutoCot** **ART** **ART** **GPT-3**
**w/o Tool Use** **Best**
Anachronisms (Search) 71.3[5] 51.48 70.87 75.66 -
|Musique (Search) Hindu Knowledge (Search) Known Unknown (Search) ∆with ART (Search) Elementary Math QA (Arithmetic) Aqua-rat (Arithmetic) GSM8K (Arithmetic) Navigate (Arithmetic) ∆with ART (Arithmetic) K’th letter concatenation (String) Language games (String) Date Understanding (String) Auto Debugging (Code) Code Description (Code) Formal Fallacies (CoT) Hyperbation (CoT) ∆with ART (Misc)|2.035 12.88 85.02 5 73.03 68.90 5 56.09 +9.0 +17.44 56.407 74.52 20.547 34.41 7.797 21.99 60.77 61.7 +30.0 +18.25 3.25 0.64 35.145 18.58 37.535 38.90 62.945 38.24 97.997 88.67 44.845 56.4 62.725 55.4 +9.6 +16.4|10.04 19.19 83.42 87.98 80.43 80.43 +4.6 58.04 68.04 36.29 54.20 53.4 71.00 72.4 72.4 +11.4 8.19 40.00 11.19 23.08 52.05 - 55.29 62.94 84.67 88.00 64.76 - 80.80 - +13.7|15.23 - - +4.0 - 54.14 71.64 85.901 -4.7 98.02 - 70.411 - - 58.41 72.41 -15.4|
|---|---|---|---|
|∆with ART (Overall)|+14.90 +17.17|+7.91|-9.0|
Table 2: ART performance on tasks in the task library. ([1]Human-crafted CoT (Wei et al., 2022; Suzgun et al., 2022),
2Decomposed Prompting (Khot et al., 2022), 3Self-Ask (Press et al., 2022), 4PoT (Chen et al., 2022), 5InstructGPT
(Ouyang et al., 2022), [7]Code-davinci-002 (Chen et al., 2021)). (-) For tasks using CoT reasoning, no tool use is
used.
of test tasks from BigBench, MMLU, and QA
benchmarks (5.2). Then, we show that ART can
be further improved with more compute (selfconsistency) and with human feedback (5.3).
**5.1** **Results on the task library**
For tasks in the task library, demonstrations in
the prompt include two instances of the task itself, along with other instances from tasks in the
same cluster. We present results in Table 2, where
tasks are organized by skill cluster. Even with decomposition demonstrations for only two instances,
ART drastically improves performance over fewshot learning (+14.9 % points on average), in line
with prior work on CoT. It does not do as well on
language games, code description, and auto debugging — tasks that use code generation and/or code
editing models. We observe that code generation
errors often lead to cascading errors in reasoning.
Similarly, ART outperforms AutoCoT on most
tasks even without any tool use (by 8% points on
average). We hypothesize that the program format
(and PeG grammar) is better at eliciting multi-step
reasoning from models than free-form CoT due to
the added structure to the reasoning. When tool
use is turned on, ART outperforms AutoCoT on
all tasks (+17.7 % points) minus one. Tools are
called in ≈ 95% of test instances, and significantly
improve performance (+7.91 % points). Gains from
tool use are particularly significant for arithmetic
tasks that benefit from representing the arithmetic
problem as code that executes complex arithmetic
accurately (+21.85 on average). This has also been
noted in prior work (Chen et al., 2022; Gao et al.,
2022).
Compared to the best published GPT-3 results,
ART is stronger or comparable in 5/8 tasks. For
the others, further investigation indicates that the
demonstrations provided by Khot et al. (2022) and
Suzgun et al. (2022) are just more effective than the
two programs we author for these tasks (we explore
further human feedback for these in Appendix A.5).
In sum, ART is stronger than few-shot learning and
AutoCoT on the library tasks (where we provide
2 labeled decompositions), and comparable to the
best published GPT-3 results.
**5.2** **Test tasks (cross-task transfer)**
We measure cross-task generalization on test tasks
where ART does not use explicit supervision for
decomposition and tool use. ART retrieves demonstrations from the task library according to the first
strategy in Section 3.2, which uses a small amount
-----
**Task Name (Cluster)** **Few Shot** **AutoCot** **ART** **ART** **GPT-3**
**w/o Tool Use** **Best**
**Test Tasks**
Sentence Ambiguity (Search) 70.67[5] 51.47 71.00 73.33 -
Strategy QA (Search) 55.49[5] 27.22 59.37 66.44 -
Physics (Search) 70.09[5] 61.83 59.13 67.55 -
|∆with ART (Search)|+3.7 +22.27|+ 5.9|Col4|
|---|---|---|---|
|Physics Questions (Arithmetic) Operators (Arithmetic) Unit interpretation (Arithmetic) Repeat copy logic (Arithmetic) Object Counting (Arithmetic) Penguins in a table (Arithmetic) Reasoning about objects (Arithmetic) Tracking shuffled objects (Arithmetic) ∆with ART (Arithmetic) Word Unscramble (String) Simple Text Editing (Code) CS Algorithms (Code) Sports Understanding (CoT) Snarks (CoT) Disambiguation QA (Free-form) Temporal sequences (CoT) Ruin names (CoT) ∆with ART (Misc) ∆with ART (Overall)|7.025 5.56 71.237 75.52 58.27 41.20 50.017 15.63 39.27 26.80 58.237 40.40 71.007 33.33 22.397 19.44 +19.0 +36.7 40.727 32.44 35.315 30.21 73.487 0.0 69.745 51.47 54.585 57.24 55.035 48.45 55.807 19.70 71.015 55.28 2.4 22.5 +6.9 +24.6|6.30 20.37 71.80 92.00 51.4 53.99 31.25 44.38 42.2 87.00 68.86 77.85 45.35 64.34 18.14 37.67 + 23.1 23.03 42.7 20.74 27.65 41.59 88.11 92.89 - 57.13 - 55.89 - 49.5 - 60.22 - 24.37 +16.7|- - - - 81.201 72.341 52.691 36.321 +6.1 - - - 86.591 65.21 60.621 81.81 - -9.4 -1.7|
||MMLU|||
|College Computer Science (Search)|41.00 43.99|63.40 67.80|63.66|
|---|---|---|---|
|Astronomy (Search) Business Ethics (Search) Virology (Search) Geography (Search) Mathematics (Arithmetic)|62.10 41.48 61.60 48.8 50.03 49.52 77.67 57.07 36.67 33.77|76.71 79.1 77.17 81.16 71.60 71.49 70.30 71.71 39.50 45.66|62.56 72.76 50.726 81.86 34.56|
|∆with ART (MMLU)|+14.6 +23.7|+3.0|+8.5|
Sentence Ambiguity (Search)
Strategy QA (Search)
Physics (Search)
Table 3: ART performance on BigBench tasks and MMLU tasks. ([1] Human-crafted CoT (Wei et al., 2022; Suzgun
et al., 2022), [5] InstructGPT (Ouyang et al., 2022), [6] Scaled instruction finetuning (Chung et al., 2022), [7] Codedavinci-002 (Chen et al., 2021)).
|Col1|SQuAD T-REx SVAMP MAWPS NQ TriviaQA|
|---|---|
|||
|GPT3 (175B) Toolformer ART|29.90 39.8 10.0 19.8 22.6 65.9 33.8 53.5 29.4 44.0 17.7 48.8 39.34(+5.5) 50.4(-3.1) 76.2(+46.8) 71.00(+27.0) 33.8(+16.1) 66.13(+17.33)|
Table 4: Comparing ART results on GPT3 (175B) model and (Schick et al., 2023), which is a smaller GPT-J model
finetuned for tool-use. Results are reported from their paper (their code and models are not publicly available).
of labeled input-output pairs to pick a task cluster and sample demonstration programs from that
cluster.[3]
**BigBench test tasks** Even though there is no decomposition or tool use supervision, the results
in Table 3 are similar to those for tasks in the
3We compare both strategies in Appendix A.2
task library. ART outperforms few-shot learning
(6.9 % points). In particular, ART has significant
improvements on arithmetic tasks (+19.0) and is
comparable to the few-shot performance on search
tasks. Non-grammatical choices in ruin names and
choices not in the input in temporal sequences are
often incorrect, which the few-shot baseline may
potentially learn to ignore, while ART attempts to
-----
|Col1|Simple Text CS Strategy QA Physics Unit Reasoning about|
|---|---|
||Editing Algorithms Questions Interpretation colored objects|
|ART + Self Consistency|27.65 88.11 66.44 20.37 53.99 64.34 30.67(+3.0) 90.99(+2.9) 70.76(+4.3) 24.07(+3.7) 57.20(+3.2) 69.11(+4.8)|
Table 5: Improving ART via self-consistency (Wang et al., 2022). Ensembling model generations over 15 runs
further boosts performance.
|Task|CoT ART GPT-3 +Human + Human Best|Col3|Col4|Human Feedback|
|---|---|---|---|---|
||+Human + Human Best|||Feedback|
|CS Algorithms Reasong about objs. Repeat Copy Logic* Sentence Ambiguity Simple Text editing* Strategy QA* Physics* Temporal Sequences Track Shuffled objs. Unit Interpretation* Word Unscrambling*|0.0 23.0 33.33 67.75 15.63 45.22 51.47 72.33 30.21 35.31 27.22 29.19 61.83 68.21 19.70 30.22 19.44 36.48 41.2 41.2 32.44 33.40|88.11 92.73 64.34 98.90 44.38 80.31 73.33 83.67 27.65 36.11 66.44 69.15 67.55 72.55 49.5 88.00 37.67 99.86 53.99 95.0 42.70 62.11|73.48 71.00 50.01 70.67 35.31 55.49 70.09 81.8 36.32 58.2 40.72|C: longest common subsequence code C: Define object, color, count data structure C: string edit operation C: Constrain queries to extract relevant info. C: string edit operation C: Constrain queries to extract relevant info. A: [search] Formula that connects mass, ... A: [subquestion] Is X free Yam to Zam? C: Define object pair data struct, swap logic A: [add unit] Add the right unit to the answer T: lookup permutations in dictionary|
|Average|30.2 43.8|56.0 79.85|58.5||
Table 6: Improving ART and free-form CoT via self-consistency and human-in-the-loop feedback. (*) indicates
that human-in-the-loop improvement was done over automatically generated CoT reasoning for these tasks. Feedback for ART includes correcting sub-steps in programs (“C:”), adding additional sub-steps(“A:”), and defining
new tools(“T:”). Note that only five examples were edited for each task.
explicitly reason about them. As with library tasks,
we observe that string manipulation tasks like simple text editing, word unscrambling, and repeat
copy logic suffer from code generation errors.
As observed in the case of library tasks, ART
is better than AutoCoT on almost all tasks (24.6
% points). Tools are once again called very frequently (89% of instances), and are responsible for
a significant fraction of the gains over baselines.
When compared to the best published GPT-3
results, ART performs favorably on average, especially on arithmetic tasks (+6.1 % points). As
before, it does worse in tasks where good human
demonstrations of how to decompose the task it_self (provided by Suzgun et al. (2022)) have a big_
impact. We re-evaluate ART with more human
feedback on these tasks in 5.3, but even without
that we conclude that ART is competitive on BigBench even when we do not have supervision for
decompositions for the task at hand (i.e. there is
cross-task generalization).
**Other benchmarks** To make sure ART does not
overfit to BigBench-style tasks, we evaluate performance on additional benchmarks. We report
performance on randomly selected tasks from the
MMLU benchmark (Hendrycks et al., 2020) in
Table 3, where ART is more effective than all baselines on 5/6 tasks (+8.5 points better than few-shot
baseline on average), despite having no supervision
for demonstrations or tool use. MMLU requires
extensive world knowledge, and thus most of these
tasks benefit the most from the search tool.
In Table 4, we compare ART to a random subset
of tasks used to evaluate Toolformer (Schick et al.,
2023), a model finetuned to use a variety of tools.
The comparison is not exact since Toolformer uses
a smaller GPT-J model, but it is informative that
ART outperforms Toolformer by a large margin on
5/6 of these tasks. To make sure these gains are not
simply a result of model scale, we also use vanilla
GPT-3 as a baseline, which yields much worse results than ART on all tasks. Besides improved
performance, we note again that ART does not require additional fine-tuning when new tools or new
base LLMs are introduced, and also is amenable
to further improvement at the cost of compute or
human feedback.
**5.3** **Improving ART**
**Self-consistency** Previous work has noted benefits in generating multiple LLM outputs and taking the most frequent answer (a process known
as self-consistency), particularly for settings with
-----
multi-step reasoning (Khot et al., 2022; Wang et al.,
2022). In Table 5, we present self-consistency results (generating 15 outputs) for ART on a subset
of tasks and see that it consistently improves performance, at the cost of extra computation.
**Human feedback** We also pilot the use of taskspecific feedback in Table 6, by having one of
the authors edit 5 random instances of modelgenerated programs that resulted in errors for each
task. When editing, we correct errors in sub-steps
(denoted as “C:”), adds missing substeps (“A:”), or
defines a new tool and demonstrates its use (“T:”).
For example, this involved introducing an “add
unit” sub-step for the PQA task, and implementing
a dictionary lookup function as a tool for the “Word
Unscrambling” task (both illustrated by Figure 3).
We also compare human feedback applied to
CoT-style reasoning. Suzgun et al. (2022) already
provide reference CoT-style reasoning for some
tasks. For datasets where human-authored CoT
reasoning is unavailable, we correct the output of
the automatic CoT baseline, as indicated in Table 6.
The same author edits 5 random instances of AutoCoT decompositions that lead to errors on the same
tasks, correcting errors in sub-steps or adding new
sub-steps. As a reference, the edits included 35%
of tokens in the baseline, and 15.7% of tokens in
the ART programs. This included correcting substep arguments and outputs in 72% of the chosen
tasks and adding additional sub-steps in 44% of
the tasks. New tool definitions were added for two
tasks — dictionary lookup for word unscrambling
and a Prolog engine for formal fallacies.
In both cases, editing programs and adding them
as demonstrations leads to significant gains in performance on the task at hand. However, the gain is
much more dramatic in ART, leading it to consistently outperform the best published GPT-3 baseline for the task at hand. Further, these corrected
programs and tools can be added to the task and
tool libraries, and our prior results in Table 3 suggest that they potentially help improve ART on
other tasks as well. This pilot indicates that besides being competitive on cross-task generalization, ART is very amenable to task-specific improvement with minimal human intervention. We
report similar results in the task library in A.5.
**6** **Conclusion**
We introduce ART, a gradient-free approach for
automatic multi-step reasoning generation and
automatic tool-use for a large black-box language model. Our main contributions include a
lightweight grammar to represent multi-step reasoning as a program (with tool calls and arguments),
an extensible library of seed tasks for which programs are authored, and a tool library that consists of useful external utilities like search, code
generation, and execution. The interpretable reasoning framework also allows humans to improve
task decomposition and tool use to boost performance. ART achieves a substantial improvement
over few-shot prompting and automatic generation
of CoT reasoning on unseen tasks in the BigBench
and MMLU benchmarks, and substantially exceeds
performance on hand-crafted CoT prompts when
human feedback is incorporated. ART also benefits
from approaches such as self-consistency, or from
new and more powerful LLMs trained for tool use.
**References**
Simran Arora, Avanika Narayan, Mayee F Chen, Laurel J Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, and Christopher Ré. 2022. Ask me anything: A simple strategy for prompting language
models. arXiv preprint arXiv:2210.02441.
Luca Beurer-Kellner, Marc Fischer, and Martin Vechev.
2022. Prompting is programming: A query language for large language models. _arXiv preprint_
_arXiv:2212.06094._
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. arXiv preprint arXiv:2005.14165.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde de Oliveira Pinto, Jared Kaplan,
Harri Edwards, Yuri Burda, Nicholas Joseph, Greg
Brockman, et al. 2021. Evaluating large language models trained on code. _arXiv preprint_
_arXiv:2107.03374._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. _arXiv preprint_
_arXiv:2211.12588._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
-----
Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Dheeru Dua, Shivanshu Gupta, Sameer Singh, and
Matt Gardner. 2022. Successive prompting for
decomposing complex questions. _arXiv preprint_
_arXiv:2212.04092._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language
models. arXiv preprint arXiv:2211.10435.
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language
understanding. arXiv preprint arXiv:2009.03300.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish
Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format
boundaries with a single qa system. arXiv preprint
_arXiv:2005.00700._
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao
Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2022. Decomposed prompting: A modular
approach for solving complex tasks. arXiv preprint
_arXiv:2210.02406._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. _arXiv_
_preprint arXiv:2205.11916._
Mojtaba Komeili, Kurt Shuster, and Jason Weston.
[2022. Internet-augmented dialogue generation. In](https://doi.org/10.18653/v1/2022.acl-long.579)
_Proceedings of the 60th Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 8460–8478, Dublin, Ireland._
Association for Computational Linguistics.
Angeliki Lazaridou, Elena Gribovskaya, Wojciech
Stokowiec, and Nikolai Grigorev. 2022. Internetaugmented language models through few-shot
prompting for open-domain question answering.
_arXiv preprint arXiv:2203.05115._
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay
Mohta, Tenghao Huang, Mohit Bansal, and Colin
Raffel. 2022. Few-shot parameter-efficient finetuning is better and cheaper than in-context learning.
_arXiv preprint arXiv:2205.05638._
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and
Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions.
_arXiv preprint arXiv:2104.08773._
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu,
Long Ouyang, Christina Kim, Christopher Hesse,
Shantanu Jain, Vineet Kosaraju, William Saunders,
et al. 2021. Webgpt: Browser-assisted questionanswering with human feedback. _arXiv preprint_
_arXiv:2112.09332._
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _arXiv preprint_
_arXiv:2203.02155._
Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm:
Tool augmented language models. _arXiv preprint_
_arXiv:2205.12255._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems?](https://doi.org/10.18653/v1/2021.naacl-main.168) In Proceedings of the 2021
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt,
Noah A Smith, and Mike Lewis. 2022. Measuring
and narrowing the compositionality gap in language
models. arXiv preprint arXiv:2210.03350.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Teven Le Scao, Arun
Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint
_arXiv:2110.08207._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì,
Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761.
Kurt Shuster, Mojtaba Komeili, Leonard Adolphs,
Stephen Roller, Arthur Szlam, and Jason Weston. 2022. Language models that seek for
knowledge: Modular search & generation for dialogue and prompt completion. _arXiv preprint_
_arXiv:2203.13224._
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta,
Adrià Garriga-Alonso, et al. 2022. Beyond the
imitation game: Quantifying and extrapolating the
capabilities of language models. _arXiv preprint_
_arXiv:2206.04615._
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V Le, Ed H Chi,
Denny Zhou, et al. 2022. Challenging big-bench
tasks and whether chain-of-thought can solve them.
_arXiv preprint arXiv:2210.09261._
-----
understanding, Database Operations, Algebra and
Arithmetic, Code Generation and Editing, Text Tagging/Annotation(linguistic markers), Specialized
Search(eg. looking up linguistic knowledge, scientific knowledge etc), String editing, Recursive
operations over multiple choices, Topic classification, Evidence extraction, conditional Text Generation/Editing, and Sentence similarity.
In this work, we choose to focus on the five most
used skills that cover a significant proportion of
BigBench tasks for classification (over 50 of the
91 tasks that remained after filtrating out long-text
understanding, generation, and multi-lingual tasks).
We randomly select 2-4 tasks from each of these 5
task clusters and author decomposed programs with
appropriate tool use for these tasks. This results in
a total of 15 tasks that compose the task library.
- Arithmetic: Elementary MathQA, Grade
school math (GSM8K), arithmetic Questions
about ratios (Aqua-Rat), Navigate
- Code: Auto Debugging, Code Description
- Search and question decomposition:
Anachronims, Multi-step question answering
(Musique), Hindu Knowledge, Known
Unknown
- Free-form reasoning: Formal fallacies, Hyperbation
- String Operations: Kth letter concatenation,
Language games, Date understanding
**Cluster Programs** The programs written for
tasks in each task cluster are shown in Table 7
for tasks involving string editing and manipulation,
in Table 8 for arithmetic and algebra tasks, in Table 10 for code generation, editing and debugging
tasks, in Table 9 for tasks benefit from search of
world knowledge, and in Table 11 for tasks that
benefit from eliciting chain-of-thought reasoning
following the prompt “Let’s think step-by-step”.
**Program Format** We define a parsing expression grammar (PEG) (shown in Figure 4) that describes the language used to write multi-step reasoning programs. This grammar is designed to
parse full programs of the form “Input: ... Q1: ...
#1:... Qn: [EOQ] Ans: ”. We use the python library parsimoneous[4] to construct the grammar and
parse programs generated by LLMs.
[4https://pypi.org/project/parsimonious/](https://pypi.org/project/parsimonious/)
Romal Thoppilan, Daniel De Freitas, Jamie Hall,
Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du,
et al. 2022. Lamda: Language models for dialog
applications. arXiv preprint arXiv:2201.08239.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, and Denny Zhou. 2022. Self-consistency
improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin
Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint
_arXiv:2109.01652._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large
language models. arXiv preprint arXiv:2201.11903.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou,
et al. Chain-of-thought prompting elicits reasoning
in large language models. In Advances in Neural
_Information Processing Systems._
Sang Michael Xie, Aditi Raghunathan, Percy Liang,
and Tengyu Ma. 2021. An explanation of in-context
learning as implicit bayesian inference. _arXiv_
_preprint arXiv:2111.02080._
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
Smola. 2022. Automatic chain of thought prompting in large language models. _arXiv preprint_
_arXiv:2210.03493._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
**A** **Appendix**
**A.1** **Task Library**
**Library Design** We analyzed input-output instances of all 200 tasks in BigBench, filtered out
text classification and short answer generation tasks
in English, and created a list of reasoning skills that
were relevant to solving each task. We do not focus
on long text understanding, long text generation,
and multi-lingual tasks in this work. We find that
most of these tasks rely on a few common skills
mentioned below:
Visual Reasoning, Temporal Reasoning, Propositional logic, Natural Logic, Machine Translation,
Web Search, Knowledge Base or Database lookup,
Recursive sub-question decomposition, Long text
-----
String Operations
In these examples, you are given a task description and an input. Break the input down into subtasks in order to solve the
task. You can use string operations like splitting, reformatting, editing or merging. You can also use other operations like
arithmetic and logic.
Description: (Date Understanding) Find the required date in MM/DD/YYYY using information about related events and
dates in the input. Clue: First find what day is today.
Input: The deadline is Jun 1, 2021, which is 2 days away from now. What is the date 24 hours later in MM/DD/YYYY?
Q1: [string reformat] Jun 1, 2021 in MM/DD/YYYY
#1: 06/01/2021
Q2: [arithmetic] 06/01/2021 is 2 days away from now. What date is today?
#2: Today is 04/01/2021
Q3: [arithmetic] What date is 24 hours later than today?
#3: 05/01/2021
Q4: [EOQ]
Ans: 05/31/2021
—Description: (Language games) Translate English into Pig Latin.
Input: (English) Sami made his way across the bar and hugged Layla.
Q1: [string split] What are the words in "Sami made his way across the bar and hugged Layla."?
#1: ["Sami", "made", "his", "way", "across", "the", "bar", "and", "hugged", "Layla", "."]
Q2: [string edit] Transfer the initial consonant of each word to the end of the word and adding "ay" after it.
#2: ["Amisay", "ademay", "ishay", "ayway", "acrossyay", "ethay", "arbay", "andyay", "uggedhay", "Aylalay", "."]
Q3: [string merge] Concatenate #2 into a full sentence.
#3: Amisay ademay ishay ayway acrossyay ethay arbay andyay uggedhay Aylalay.
Q4: [EOQ]
Ans: Amisay ademay ishay ayway acrossyay ethay arbay andyay uggedhay Aylalay.
—Description: (Kth letter concatenation) Take the letters at position 3 of the words in a list of words and concatenate them
using a space.
Input: Take the letters at position 3 of the words in "Savita Saeed Ramos Sato Yadav" and concatenate them using a space.
Q1: [string split] What are the words in "Savita Saeed Ramos Sato Yadav"?
#1: ["Savita", "Saeed", "Ramos", "Sato", "Yadav"]
Q2: [string index] What is the third letter of words in the list in #1?
#2: ["v", "e", "m", "t", "d"]
Q3: [string merge] Concatenate #2 with spaces
#3: "v e m t d"
Q4: [EOQ]
Ans: v e m t d
—Descripton: %s
Input: %s
Q1:
Table 7: Programs in the task library for tasks requiring string manipulation.
-----
Arithmetic
In these examples, you are given a task description and an input. Break the input down into subtasks in order to solve the task.
You can generate python code to solve arithmetic and algebra equations in using functions from sympy.
from sympy import Symbol
from sympy import simplify
import math
from sympy import solve_it
# solve_it(equations, variable): solving the equations and return the variable value.
Description: (Aqua-rat) Solve the following arithmetic problems on ratios and fractions, writing out intermediate arithmetic
calculations as python code. Store your result as a variable named ’ans’.
Input: In a flight of 600 km, an aircraft was slowed down due to bad weather. Its average speed for the trip was reduced by
200 km/hr and the time of flight increased by 30 minutes. The duration of the flight is: A)1 hour B)2 hours C)3 hours D)4
hours E)5 hours
Q1: [generate python code] write python code to solve the problem, using math and sympy.
#1:
duration = Symbol(’duration’, positive=True)
delay = 30 / 60
total_disntace = 600
original_speed = total_disntace / duration
reduced_speed = total_disntace / (duration + delay)
solution = solve_it(original_speed - reduced_speed - 200, duration)
ans = solution[duration]
print(ans)
Q2: [code execute] Execute the python code in #1 and get the value of "ans"
#2:
1.0
Q3: [compare] Which of the options among A)1 hour B)2 hours C)3 hours D)4 hours E)5 hours is most similar to the answer?
#3: A
Q4: [EOQ]
Ans: A
—Description: (Elementary Math) Solve the following middle-school arithmetic problems, writing out intermediate arithmetic
calculations as python code. Store your result as a variable named ’ans’.
Input: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every
day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she
make every day at the farmers’ market?
Q1: [generate python code] write down the arithmetic or algebra equations as python code, storing the answer as ’ans’
#1:
total_eggs = 16
eaten_eggs = 3
baked_eggs = 4
sold_eggs = total_eggs - eaten_eggs - baked_eggs
dollars_per_egg = 2
ans = sold_eggs * dollars_per_egg
print(ans)
Q2: [code execute] Execute the python code in #1 and get the value of "ans"
#2: 18
Q3: [EOQ]
Ans:18
—Description: (Grage school Math) Solve the following middle-school arithmetic problems, writing out intermediate arithmetic
calculations as python code. Store your result as a variable named ’ans’.
Input: Joseph and Getty went to buy ice creams, they together bought 36 ice creams. On the way back, Joseph ate 12 of the
ice creasm, and he has 2 ice creams left now.
Q1: [generate python code] write down the arithmetic or algebra equations as python code, storing the answer as ’ans’
#1:
num_ice_creams_bought_by_joseph = 2 + 12
total_ice_creams = 36
ans = total_ice_creams - num_ice_creams_bought_by_joseph
print(ans)
Q2: [code execute] Execute the python code in #1 and get the value of "ans"
#2: 22
Q3: [EOQ]
Ans: 22
—Descripton: %s
Input: %s
Q1:
Table 8: Programs in the task library for tasks requiring arithmetic operations.
-----
Search
In these examples, you are given a task description and an input. Break the input down into subtasks in order to solve the task.
You can use search functions like Google search in one or more of your substeps, if there in insufficient information. Other
functions like arithmetic and logical operations can also be used.
Description: (Knwon or Unknwon) Choose the option that best answers the question. If the question does not have a known
answer, choose "Unknown".
Input: How many hairs were on Neil Armstrong’s head when he landed on the moon?
choice: Unknown
choice: Five million
Q1: [search] How many hairs were on Neil Armstrong’s head when he landed on the moon?
#1:
Apollo 11 (July 16–24, 1969) was the American spaceflight that first landed humans on the Moon. Commander Neil
Armstrong and lunar module pilot Buzz Aldrin.
Neil Alden Armstrong (August 5, 1930 – August 25, 2012) was an American astronaut and aeronautical engineer who became
the first person to walk on the Moon.
Q2: [subquestion] Does the information help answer the question? There could be no definitive answer because the question is
too specific, about personal details not in public record, because the answer is not yet known, or the question is opinion-based.
#2: No. The question is too specific
Q3: [compare] What is the final answer?
#3: Unknown
Q4: [EOQ]
Ans: Unknown
—Description: (Anachronisms) An anachronism is a mistake in chronology, or a person, thing, or event that is out of its proper
time. Does the sentence contain an anachrornism? Answer Yes/No.
Input: President George H. W. Bush called his generals to the Oval Office at the outset of the Gulf War.
Q1: [tag] What are the entities in this sentence?
#1:
President George H. W. Bush
Gulf War
Q2: [search] When was President George H. W. Bush president?
#2: George H. W. Bush’s tenure as the 41st president of the United States began with his inauguration on January 20, 1989,
and ended on January 20, 1993.
Q3: [search] When was the Gulf War fought?
#3: The Gulf War[b] was a 1990–1991 armed campaign waged by a 35-country military coalition in response to the Iraqi
invasion of Kuwait.
#4: [subquestion] Could these entities have co-existed based on thier time periods alone?
Yes. Their time periods intersect.
Q5: [generate output] Is this an anachronism?
#5: No
Q6: [EOQ]
Ans: No
—Description: (Hindu Knowledge) Answer questions about Hindu mythology by choosing the option that best answers the
question.
Input: In the Mahabharata, Karna is cursed to forget the incantations needed to use which weapon?
choice: Anjalikastra
choice: Narayanastra
choice: Agneyastra
choice: Brahmastra
Q1: [search] In the Mahabharata, Karna is cursed to forget the incantations needed to use which weapon?
#1: As a result, he cursed Karna, saying that HIS MARTIAL SKILLS, including the use of BRAHMASTRA, would abandon
him when he needed them most. Indra, the King of Gods, stung Karna in the form of a bee to get him cursed by Parshuram.
Karna walked through the woods in despair, feeling dejected by the curse. A skilled & devoted warrior...
Q2: [compare] Which option is the answer in #3 most similar to?
#2: Brahmastra
Q3: [EOQ]
Ans: Brahmastra
—Descripton: %s
Input: %s
Q1:
Table 9: Programs in the task library for tasks requiring search for world knowledge.
-----
Code Generation and Editing
In these examples, you are given a task description and an input. Break the input down into subtasks in order to solve the task.
You can use a python code generation and execution function in one or more of your substeps, if required. Other functions
like arithmetic and logical operations can also be used.
Description: (Auto Debugging) Debug the following code snippets by finding the answer or the error message.
Input:
“‘
if x < 5:
pass
“‘
What error does this program surface?
Q1: [execute] Execute the following python code snippet.
if x < 5:
pass
#1:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name ’x’ is not defined
Q2: [generate answer] What is the final error message?
#2: NameError: name ’x’ is not defined
Q3: [EOQ]
Ans: NameError: name ’x’ is not defined
—Description: (Code description) Given a python code snippet, choose the option that is the best description of the code
snippet.
Input:
Python code:
try:
n = int(input())
m = int(input())
integer_sum = int(n) + int(m)
print(integer_sum)
except:
print(’error’)
choice: prints number between 5 and 6
choice: try input and except error
choice: inputs the string ’try’
choice: prints sum of two input numbers only if they are integers otherwise raises error
Q1: [code generate] prints number between 5 and 6
#1:
import random
print(random.uniform(5,6))
Q2: [code generate] try input and except error
#2:
try:
file = open(file_name, "r")
data = file.read()
file.close()
except:
print(’error’)
Q3: [code generate] inputs the string ’try’
#3: print(’try’)
Q4: [code generate] prints sum of two input numbers only if they are integers otherwise raises error
#4:
#!/usr/bin/python
a=raw_input("enter first number: ")
b=raw_input("enter second number: ")
try:
sum=int(a)+int(b)
print "sum is: ",sum
except:
print "enter integer values only"
Q5: [compare] Which of the generated code snippets are most like the original one?
#5: prints sum of two input numbers only if they are integers otherwise raises error
Q6: [EOQ]
Ans: prints sum of two input numbers only if they are integers otherwise raises error
—Descripton: %s
Input: %s
Q1:
-----
CoT Reasoning
In these examples, you are given a task description and an input. Break the input down into subtasks in order to solve the task.
Thinking though the problem explicitly can be one of the substeps you use.
Description: (Sports Understanding) Determine whether an artificially constructed sentence relating to sports is plausible.
The final answer should be "yes" or "no".
Input: Is the following sentence plausible? "Santi Cazorla scored a touchdown."
Q1: [think step-by-step]
#1: Let’s think step-by-step. Santi Cazorla is a soccer player. Touchdown is part of American football and rugby. So the
answer is no.
Q2: [EOQ]
Ans: no
—Description: (Hyperbation) Identify correct adjective ordering from the two choices. This involves selecting what would be
considered the more inexplicably "intuitive" sentence by a native English speaker.
Input: Which sentence has the correct adjective order:
Options:
(A) repulsive small Brazilian exercise ship
(B) Brazilian repulsive exercise small ship
Q1: [think step-by-step]
#1: Let’s think step-by-step. When there is more than one adjective before a noun, the adjectives need to respect the following
order before a noun: "[1. opinion] [2. size] [3. age] [4. shape] [5. color] [6. origin] [7. material] [8. purpose] noun".
Option (A): "repulsive small Brazilian exercise ship". (1) "repulsive" falls into the opinion category. (2) "small" falls into the
size category. (3) "Brazilian" falls into the origin category. (4) "exercise" falls into the purpose category. Option (A) has the
following adjective order: [1. opinion] [2. size] [6. origin] [8. purpose] (or, in numeric terms, 1 2 6 8). Because 1 < 2 < 6 < 8
is correct, (A) has the correct ordering.
Option (B): "Brazilian repulsive exercise small ship". Option (B) has the following adjective order: [6. origin] [1. opinion] [8.
purpose] [2. size] (or, in numeric terms, 6 1 8 2). Because 6 < 1 < 8 < 2 is not correct, (B) does not have the correct ordering.
So the answer is (A).
Q2: [EOQ]
Ans: (A)
—Description: (Formal Fallacies) Distinguish deductively valid syllogistic arguments from formal fallacies, paying specific
attention to negations.
Input: "It is not always easy to see who is related to whom – and in which ways. The following argument pertains to this
question: To begin with, Lesley is a close friend of Fernando. Moreover, being a close friend of Fernando or a schoolmate of
Lowell is sufficient for being a great-grandfather of Leroy. It follows that Lesley is a great-grandfather of Leroy."
Is the argument, given the explicitly stated premises, deductively valid or invalid?
Options:
- valid
- invalid
Q1: [think step-by-step]
#1:
Let’s think step-by-step.
(1) Lesley is a close friend of Fernando: Lesley = friend(Fernando).
(2) Being a close friend of Fernando or a schoolmate of Lowell is sufficient for being a great-grandfather of Leroy: If X =
friend(Fernando) OR SCHOOLMATE(Lowell), then X = great-grandfather(Leroy).
Hypothesis: Does it follow that Lesley is a great-grandfather of Leroy: Lesley = great-grandfather(Leroy)?
Let’s see whether the Hypothesis can be deduced from the arguments (1) and (2) by logical reasoning?
By (1), we have Lesley = friend(Fernando). By (2), we have if Lesley = friend(Fernando), then Lesley = greatgrandfather(Leroy).
So, it is true that Lesley is a great-grandfather of Leroy. So the answer is valid.
Q2: [EOQ]
Ans: valid
—Description: (Reasoning about colored objects) Given a collection of colored objects in the text input, answer the question at
the end of the input.
Input: On the nightstand, there is a red pencil, a purple mug, a burgundy keychain, a fuchsia teddy bear, a black plate, and a
blue stress ball. What color is the stress ball?
Q1: [think step-by-step]
#1: Let’s think step-by-step. According to this question, the color of the stress ball is blue. So the answer is blue.
Q2: [EOQ]
Ans: blue
—Descripton: %s
Input: %s
Q1:"""
Table 11: Programs in the task library for tasks requiring free-form chain-of-thought style reasoning about logic
and lingusitics.
-----
**A.2** **Task Selection**
When provided new task description and input instance, ART retrieves N tasks from the task library
to constructs a dynamic multi-task prompt. We
explore two strategies for task selection.
**Task-Cluster based** 50 examples used for tuning except in cases with fewer than 100 examples,
where we reduce this number to 10.
We iterate over all five task clusters in the library,
prompting the LLM with demonstration programs
from just one cluster at a time. For example, we
only use programs from arithmetic tasks as demonstrations in the prompt in one such iteration. The
task cluster with the highest performance on the
held-out set of examples ( 50) is chosen. This strategy requires as many API calls as there are task
clusters, and a held-out set of input-output pairs for
the new task. Note that no additional supervision is
needed for the new task to generate a decomposed
program.
**LLM-Similarity based** The LLM is prompted
with pairs of tasks. Some pairs contain two tasks
from the same cluster and are labeled "Similar"
while some pairs don’t and are labeled "Not similar". Additionally, we also provide reasoning for
the decision — “Elementary math QA and GSM8K
_are related tasks because they both require solving_
_arithmetic word problems". A task in this prompt_
is represented by its name, an instruction, and a few
input-output pairs. We use the prompt in Table 13
to prompt LLMs.
The LLM is prompted for a decision for every
library task paired with the new task. We choose
the top-N tasks ranked by the ratio of log probabilities of "Similar" to "Not similar". This strategy
requires fewer held-out examples but is prone to
high variance in performance based on the tasks
chosen in every experimental run. For PQA, the
most similar tasks chosen based on the LLM-based
similarity are anachronisms and GSM8K.
In Table 12, we examine the effect of changing the task selection strategy in ART. Instead of
choosing the task cluster with the highest held-out
performance over 50 examples, we use the LLMbased similarity score to choose task programs for
the prompt. This strategy is worse on average compared to tuning performance on a held-out set and
has high variance over several runs where different tasks are chosen by the LLM. Selecting similar
tasks that share sub-tasks and tools (without any su
pervision) is still a challenging task for LLMs, and
will explore this direction further in future work.
**A.3** **Tool Use**
**Code Generation** We use the Codex (Chen
et al., 2021) model for code generation. Argument
for code generation is the previous sub-task’s
answer sequence ““#i − 1 : _. . . "” and the_
sequence generated by the LM after the sub-task
query symbol ““Qi : [generate python code]"”.
When i = 1, the instance input is used as the first
argument. We include the previous answer/input
since it often contains information relevant to
generating accurate code, like the arithmetic word
problem for which code needs to be generated
(see Table 8 for examples). Both arguments are
provided to Codex as a multi-line python comment,
while maintaining their original formatting. To
keep the answer variable consistent, we also
append an additional instruction: Store the final
answer in variable ’ans’ and print it. For example:
Janet’s ducks lay 16 eggs per day. She
eats three for breakfast every morning
and bakes muffins for her friends every
day with four. She sells the remainder
at the farmers market daily for \$2 per
fresh duck egg. How much in dollars does
she make every day at the farmers
market?
is used to prompt Codex as follows:
"""
Janet’s ducks lay 16 eggs per day. She
eats three for breakfast every morning
and bakes muffins for her friends every
day with four. She sells the remainder
at the farmers market daily for \$2 per
fresh duck egg. How much in dollars does
she make every day at the farmers
market?
Write down the arithmetic or algebra
equations as python code, storing the
answer as ’ans’ and print it.
"""
Codex generation temperature is set to 0.3 and the
maximum length to 500 tokens, with “print(ans)”
used as the stopping criterion.
**Code Editing** We use the Codex (Chen et al.,
2021) model for code generation and code editing. Arguments for both include the previous
-----
|Col1|Simple Text CS Strategy QA Physics Unit Reasoning about|
|---|---|
||Editing Algorithms Questions Interpretation colored objects|
|Best task cluster LLM-based task sim.|27.65 88.11 66.44 20.37 53.99 64.34 38.30 83.71 60.39 14.06 43.56 62.00|
Table 12: Comparing ART results on GPT3 (175B) model with two similar task selection strategies. LLM-based
similarity is worse on average compared to just choosing the best task cluster.
Prompt to LLM for selecting similar tasks
Give two tasks with their descriptions and examples of inputs and outputs for the tasks, determine if they are similar. Two
tasks are similar if require common subtasks like string operations, web search, translation, arithmetic, code execution, etc.
—Task1: [Date understanding] Find the required date in MM/DD/YYYY using information about related events and dates in the
input. Input: The deadline is Jun 1, 2021, which is 2 days away from now. What is the date 24 hours later in MM/DD/YYYY?
The final answer is 05/01/2021.
Task2: [Language Games] Translate English into Pig Latin. Input: English sentence is "Sami made his way across the bar
and hugged Layla". The final answer is "Amisay ademay ishay ayway acrossyay ethay arbay andyay uggedhay Aylalay."
Are these similar? Yes. They both require answering in a spcific string format.
—Task1: [K’th letter concatenation] Take the letters at position 3 of the words in a list of words and concatenate them using a
space. Input: What are the words in "Savita Saeed Ramos Sato Yadav"? The final answer is "v e m t d".
Task2: [Language Games] Translate English into Pig Latin. Input: English sentence is "Sami made his way across the bar
and hugged Layla". The final answer is "Amisay ademay ishay ayway acrossyay ethay arbay andyay uggedhay Aylalay."
Are these similar? Yes. They both require accessing and manipulating characters in strings.
—Task1: [K’th letter concatenation] Take the letters at position 3 of the words in a list of words and concatenate them using a
space. Input: What are the words in "Savita Saeed Ramos Sato Yadav"? The final answer is "v e m t d".
Task2: [Known Unknown] Choose the option that best answers the question. If the question does not have a known answer,
choose "Unknown". Input: How many hairs were on Neil Armstrong’s head when he landed on the moon? The final answer
is "Unknown".
Are these similar? No. Task 1 requires manipulating strings and Task 2 requires answering a question by possibly looking up
information on the web.
—Task1: [Anachronisms] An anachronism is a mistake in chronology, or a person, thing, or event that is out of its proper time.
Does the sentence contain an anachrornism? Input: Kurt Cobain starred in the 1980 television show "Twin Peaks". The final
answer is "Yes".
Task2: [Known Unknown] Choose the option that best answers the question. If the question does not have a known answer,
choose "Unknown". Input: Where was Mark Twain born? The final answer is Florida, Missouri.
Are these similar? Yes. They both require searching information about entities mentioned in the text, like Kurt Cobain or
Mark Twain.
—Task1: [Hindu Knowledge] Answer questions about Hindu mythology by choosing the option that best answers the question.
Input: In the Mahabharata, Karna is cursed to forget the incantations needed to use which weapon? Choices: Anjalikastra,
Narayanastra, Agneyastra, Brahmastra. The final answer is Brahmastra.
Task2: [Code Debugging] Debug the following code snippets by finding the answer or the error message. Input:
if x < 5:
pass
The final answer is
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name ’x’ is not defined
Are these similar? No. Task 1 is about asnswering a question and requires searching information about entities mentioned in
the text. Task 2 is a question about debugging code and may require a Python interpreter.
Task 1: %s
Task 2: %s
Are these similar?
Table 13: Programs in the task library.
-----
```
grammar = parsimonious.grammar.Grammar(
r"""
program = program_start*node*partial_command*final_answer
program_start = input_start~r"( |\n)"text~r"\n"
input_start = ~r"Input:"
text = ~r"(?<=Input:( |\n))(.|\n|\t)*?(?=\nQ[0-9]+:)"
node = command_node~r"\n"output_node~r"\n"
command_node = command_start~r"( |\n)"command_instruction
output_node = begin_answer~r"( |\n)"output
command_instruction = ~r"(?<=\]( |\n))(.|\n|\t)*?(?=\n\#[0-9]+)"
command_start = ~r"Q[0-9]+: \[[A-Za-z_ ]+\]"
begin_answer = ~r"\#[0-9]+:"
output = ~r"(?<=\#[0-9]+:( |\n))(.|\n|\t)*?(?=\nQ[0-9]+:)"
partial_command = command_start~r"\n"
final_answer = ~r"Ans:( |\n)(.|\n)*$"
""")
```
Figure 4: PeG Grammar used to parse ART programs
from sympy.solvers import solve
from sympy import Symbol, Eq, simplify
import math
import numpy as np
import cvxpy as cp
import statistics
def solve_it(equation, variable):
solution=solve(equation, variable, dict=True)
if not solution:
if isinstance(variable, list):
solution=v: None for v in variable
else:
solution=variable: None
return solution
else:
solution = solution[0]
return solution
Table 14: Code prefix appended before a code snippet
prior to execution.
simply to the code snippet as a comment). Again, to
encourage executable code with consistent variable
usage, we also append the sequence "Store your
final answer is variable ’ans’" to the comment. The
results of the execution call are used to replace the
answer sequence generated by the language model.
Finally, we prepend a code snippet consisting of
useful module and function imports so that function
calls external modules like numpy and scipy are
executed successfully. This code prefix is shown in
Table 14. We use the exec native python function to
execute the code snippet and access the ’ans’ local
variable if it exists.
**Knowledge Base lookup** This tool is added in
the Word Unscrambling task. This function call is
used to look up data by keys in a relational knowledge base. For example, we use dictionary lookup
for the Word Unscrambling task. The input to this
function is again the previous sub-task’s answer sequence (if it exists, or the original input is used) and
sub-task’s answer sequence “#i − 1 : . . . ” (or
the input if i = 1), and the sequence generated by the LM after the sub-task query symbol
“Qi : [generate python code]”. The first argument is the code snippet that needs to be edited
and the second argument is a multi-line comment
in Python used as the instruction for editing/generation. To ensure that subsequent code execution
results in the generation of an answer string independent of variable name, the edit instruction is to
_print the required variable. For example, for the_
auto debugging task in the task library, the following program snippet:
Input:
‘‘‘
x = set([1, 1, 2, 3])
‘‘‘
What is the value of x after this
program executes?
Q1: [code edit] Edit the code to print
the value of x
is used to prompt Codex in edit mode as follows.
For code input:
x = set([1, 1, 2, 3])
For edit instruction:
Edit the code to print the value of x
**Code Execution** We run python code in a virtual
python environment with arithmetic, symbolic, and
scientific computing packages pre-installed. The
arguments to code execute include the previous subtask’s answer sequence “#i−1 : . . . ", which is the
python code snippet that requires executing. If i =
1, the input contains the code. The other argument
is the sequence generated by the LM after the subtask query symbol “Qi : [execute code]" (which is
-----
the sequence generated by the LM after the function name symbol. The first argument is parsed as
a python code snippet and interpreted as a list of
lookup keys. The second argument is parsed as a
code generation prompt which is consequently executed. For example, the first argument of l = [’yob’,
_’boy’, ’oyb’] and the second argument Check which_
_of these list of words is a word in English. Store_
_the final answer is ’ans’ and print it. results in the_
following code snippet and final answer ’boy’:
def lookup(word_list):
import enchant
d = enchant.Dict("en_US")
valid_list = []
for word in word_list:
if d.check(word):
valid_list.append(word)
return valid_list
While this is a restricted definition for a general
knowledge base lookup or query, we explore how
human-in-the-loop feedback can be used to create
custom lookup tools.
**Prolog Engine** This tool is added in the formal
fallacies task. This task consits of first-order logic
statements stated in natural language, as follows:
To begin with, Bonnie is a schoolmate of
Miranda. Moreover, whoever is a
workmate of Aubrey is not a schoolmate
of Miranda. All this entails that Bonnie
is not a workmate of Aubrey.
Is the argument, given the explicitly
stated premises, deductively valid or
invalid?
This can be written in Prolog [5] as:
workmate(X, aubrey) :- \+ schoolmate(X,
miranda).
schoolmate(bonnie, miranda).
?- workmate(bonnie, aubrey).
Humans provide feedback by authoring such prolog statements for a few instances with a new tool
symbol “[translate to prolog]”. They then author a
new tool that calls a python prolog parsing engine
to execute the prolog code and determine the binary value of the final expression. This is integrated
back into the program.
[5https://en.wikipedia.org/wiki/Prolog](https://en.wikipedia.org/wiki/Prolog)
**A.4** **Baselines**
**Few-shot baseline** This is the direct prompting
baseline where the prompt consists of input-output
pairs only and no additional intermediate reasoning steps. Following prior work that reports results with direct prompting (Suzgun et al., 2022;
Wei et al., 2022), we use 3 randomly chosen
input-output instances. We run direct prompting
for both, InstructGPT (text-davinci-002) (Ouyang
et al., 2022) and Codex (code-davinci-002) (Chen
et al., 2021) and report the higher performance.
This follows from (Chung et al., 2022), where they
find that Codex models are better at analytical tasks
than text models, even with direct prompting.
**Auto CoT** A baseline that generates automatic
CoT-style multi-step reasoning in a free-form natural language (as done in AutoCoT (Zhang et al.,
2022)). A randomly selected subset of examples in
the dataset is used to prompt the LLM to elicit CoTstyle reasoning (Input + Let’s think step-by-step.).
Since CoT-style generation is free-form and parsing potential tool use symbols is harder, we don’t
use tools for this baseline. This baseline specifically measures the effectiveness of a custom query
language (and PeG grammar) we use to write programs and parse tool calls; While (Zhang et al.,
2022) cluster training examples to provide diverse
demonstrations to the LLM, we choose a random
selection of 5 examples. A careful selection of
demonstration examples may also be used for ART,
and we leave an exploration of this choice to future
work. We parse the generated CoT-style reasoning to extract the answer string and add the phrase
“The final answer i” along with the answer string to
the end of the reasoning. This pattern is used for
evaluation.
**Best GPT-3 Approaches** We briefly describe the
GPT-3 best results reported in Tables 2 and Tables 3, which correspond to the best GPT-3 results
reported in approaches that use multi-step reasoning (like CoT) and tool use, with human supervision for both.
- (Suzgun et al., 2022): Human-authored CoT
reasoning for several tasks in BigBench.
A closer inspection of their hand-crafted
prompts revealed that they cast BigBench
tasks to multiple-choice tasks (selecting between options A,B,C,...), which differs from
the more challenging format proposed originally and used in this work. Hence, we modify
-----
|Task|CoT ART Human feedback|Col3|Col4|
|---|---|---|---|
||+Human + Human|||
|Kth letter concat* Language Games* Anachronisms* Auto Debugging* Navigate Date Understanding Formal Fallacies|0.64 59.40 18.58 26.08 51.48 49.82 38.24 61.18 61.7 85.9 38.9 70.4 56.4 56.4|40.0 100.0 23.08 35.38 75.66 82.91 62.94 67.05 72.4 80.89 52.05 65.45 64.76 74.39|Code C: k’th letter extraction and merge for a list of words Code C: Eng->Pig Latin and vice-versa C: search query constrained to extract time-periods Code C: Code edit fixed to print variable asked in input A: “[generate answer] What is the final error message?” Code C: correct forward, backward, right, left distances A: First find what date is today T: Translate to Prolog and add prolog engine|
Table 15: Improving ART and free-form CoT via self-consistency and human-in-the-loop feedback on library tasks.
(*) indicates that human-in-the-loop improvement was done over automatically generated CoT reasoning for these
tasks.
their prompt to generate answers in the output
space, which is a fair comparison with ART.
Hence, the results reported in Table 2 and Table 3 may differ from the results reported in
(Suzgun et al., 2022)
- (Khot et al., 2022): Decomposed prompting
for list reveral and kth letter comaparison.
- (Press et al., 2022): Decomposed multi-step
QA for musique.
- (Chen et al., 2022): Program-of-though:
Promtps are designed to convert Arithmetic
problems into code and execute them.
- (Ouyang et al., 2022): InstructGPT
- (Chung et al., 2022): PaLM language model.
Note that we only consider the highest reported text-davinci-002 or code-davinci-002
results from this paper to compare with ART.
- (Chen et al., 2021): Codex
**A.5** **Human Feedback**
We present feedback provided by humans for five
BigBench tasks.
- Table 16 for reasoning about colored objects,
where code is edited introduces a consistent
data structure to represent the input.
- Table 17 for physics, where substeps is introduced that prompt for given physical quantities, required quantity and a physics formula
that connects them together.
- Table 18 for temporal sequences, where substeps are introduced that go over all options
one by one to determine whether the subject
in the input is busy with an activity at those
times.
- Table 19 for tracking shuffled objects, where
code is edited introduces a consistent data
structure to represent the input and swap logic
is corrected.
- Table 20 for unscrambling english words,
where code is edited in step 2 to get correct
permutations and a new tool is defined in step
3 to lookup the english dictionary and return
permutations that are actual words.
In Table 15, we present human feedback results for
training tasks in the task library.
-----
Reasoning about colored objects
Before
Input: On the nightstand, there are two silver textbooks and three silver cat toys. If I remove all the cat toys from the
nightstand, how many fuchsia things remain on it?
Q1: [generate python code] write down the arithmetic or algebra equations as python code, storing the answer as ’ans’
1:
num_of_silver_textbooks = 2
num_of_silver_cat_toys = 3
num_of_fuchsia_things = num_of_silver_textbooks
num_of_things_on_nightstand = num_of_silver_textbooks + num_of_silver_cat_toys
ans = num_of_things_on_nightstand - num_of_silver_cat_toys
print(ans)
Q2: [code execute] Execute the python code and get the value of "ans"
2: 2
Q3: [EOQ]
Ans: 2
After
Input: On the nightstand, there are two silver textbooks and three silver cat toys. If I remove all the cat toys from the
nightstand, how many fuchsia things remain on it?
Q1: [generate python code] Write python code to find if I remove all the cat toys from the nightstand, how many fuchsia
things remain on it
#1:
color_object_count = [("silver", "textbooks", 2), ("silver", "cat toys", 3)]
removed_objects = [("silver", "cat toys", 3)]
def remove_objects(color_object_count, removed_objects):
color_object_count_updated = []
for color, object, count in color_object_count:
for color2, object2, count2 in removed_objects:
if color == color2 and object == object2:
count -= count2
color_object_count_updated.append([color, object, count])
return color_object_count_updated
color_object_count = remove_objects(color_object_count, removed_objects)
ans = 0
for color, object, count in color_object_count:
if color == "fuchsia":
ans = count
print(ans)
Q2: [code execute] Execute the python snippet.
#2: 0
Q3: [EOQ]
Ans: 0
Table 16: Human feedback: Reasoning about colored objects
-----
Physics
Before
Input: Identify the physics formula that would be most useful for finding the answer to each of the following word problems.A
particle of mass 6.8 * 10 ^ -5 g and charge 58 mC moves in a region of space where the electric field is uniform and is 7.2
N/C in the x direction and zero in the y and z direction. If the initial velocity of the particle is given by vy = 1.7 * 10 ^ 6 m/s,
vx = vz = 0, what is the speed of the particle at 0.7 s?
choice: F = m * a
choice: v = dx / dt
choice: E = q / (ϵ * A * cos(θ))
choice: a = dv / dt
Q1: [subquestion] Can this question be answered step-by-step?
#1: Yes.
Q2: [search] A particle of mass 6.8 * 10 ^ -5 g and charge 58 mC moves in a region of space where the electric field is
uniform and is 7.2 N/C in the x direction and zero in the y and z direction. If the initial velocity of the particle is given by vy
= 1.7 * 10 ^ 6 m/s, vx = vz = 0, what is the speed of the particle at 0.7 s?
#2: v = dx / dt A particle of mass 6.6 × 10-5 g and charge 22 mC moves in a region of space where the electric field is uniform
and is 5.6 N/C in the x direction and zero ...
Q3: [compare] Which option is the answer in #2 most similar to?
#3: v = dx / dt
Q4: [EOQ]
Ans: v = dx / dt
After
Input: Identify the physics formula that would be most useful for finding the answer to each of the following word problems.A
particle of mass 6.8 * 10 ^ -5 g and charge 58 mC moves in a region of space where the electric field is uniform and is 7.2
N/C in the x direction and zero in the y and z direction. If the initial velocity of the particle is given by vy = 1.7 * 10 ^ 6 m/s,
vx = vz = 0, what is the speed of the particle at 0.7 s?
choice: F = m * a
choice: v = dx / dt
choice: E = q / (ϵ * A * cos(θ))
choice: a = dv / dt
Q1: [subquestion] What quantity is being asked for in "A particle of mass 6.8 * 10 ^ -5 g and charge 58 mC moves in a region
of space where the electric field is uniform and is 7.2 N/C in the x direction and zero in the y and z direction. If the initial
velocity of the particle is given by vy = 1.7 * 10 ^ 6 m/s, vx = vz = 0, what is the speed of the particle at 0.7 s"
#1: speed of the particle
Q2: [subquestion] What other information is given in "A particle of mass 6.8 * 10 ^ -5 g and charge 58 mC moves in a region
of space where the electric field is uniform and is 7.2 N/C in the x direction and zero in the y and z direction. If the initial
velocity of the particle is given by vy = 1.7 * 10 ^ 6 m/s, vx = vz = 0, what is the speed of the particle at 0.7 s"
#2: mass of particle (6.8 * 10 ^ -5 g), charge(58 mC), electric field (7.2 N/C), initial velocity (vy = 1.7 * 10 ^ 6 m/s, vx = vz =
0)
Q3: [search] Given mass of the particle, charge and initial velocity of particle, and electric field, what can we compute for the
particle? Give the physics formula.
#3:
We can compute acceleration of the particle, as follows:
a = (q/m) * E
This formula is derived from the Lorentz force law, which states that the force (F) on a particle of charge q in an electric field
E is given by the equation:
F = ma = qE
Q4: [subquestion] What is the physics formula in the choices that is closest to F = ma = qE, a = (q/m) * E
#4: F = m * a
Q5: [EOQ]
Ans: F = m * a
Table 17: Human feedback: Physics
-----
Temporal sequences
Before
Input: Today, Tiffany went to the swimming pool. Between what times could they have gone?
We know that:
Tiffany woke up at 6am.
Jessica saw Tiffany waiting at the airport from 6am to 10am.
William saw Tiffany attending class at the school from 12pm to 5pm.
David saw Tiffany walking towards the Statue of Liberty from 5pm to 6pm.
James saw Tiffany reading at the library from 6pm to 8pm.
The swimming pool was closed after 8pm.
Between what times could Tiffany have gone to the swimming pool?
choice: 12pm to 5pm
choice: 6pm to 8pm
choice: 10am to 12pm
choice: 6am to 10am
Q1: [think step-by-step]
#1: Let’s think step-by-step. Tiffany went to the swimming pool today. We need to find out when she could have gone.
- Tiffany woke up at 6am.
- Jessica saw Tiffany waiting at the airport from 6am to 10am.
- William saw Tiffany attending class at the school from 12pm to 5pm.
- David saw Tiffany walking towards the Statue of Liberty from 5pm to 6pm.
- James saw Tiffany reading at the library from 6pm to 8pm.
- The swimming pool was closed after 8pm.
The swimming pool was closed after 8pm, so Tiffany couldn’t have gone at 6pm to 8pm. She was reading at the library from
6pm to 8pm, so she couldn’t have gone at 10am to 12pm. She was waiting at the airport from 6am to 10am, so she couldn’t
have gone at 6am to 10am. That leaves 12pm to 5pm. So the answer is 12pm to 5pm.
Q2: [EOQ]
Ans: 12pm to 5pm
After
Q1: [subquestion] Is Tiffany free 12pm to 5pm?
#1: No, she is attending classes at the school.
Q2: [subquestion] Is Tiffany free 6pm to 8pm?
#2: No, she is reading at the library.
Q3: [subquestion] Is Tiffany free 10am to 12pm?
#3: Yes, there is no other activity in her scehdule.
Q4: [subquestion] Is Tiffany free 6am to 10am?
#4: No, she is waiting at the airport.
Q5: [subquestion] Is the swimming pool open from 10am to 12pm?
#5: Yes, the swimming pool is open from 10am to 8pm.
Q6: [get answer] In conclusion, when should she go to the swimming pool?
#6: Tiffany should go to the swimming pool from 10am to 12pm.
Q7: [EOQ]
Ans: 10am to 12pm
Table 18: Human feedback: Temporal sequences
-----
Tracking shuffled objects
Before
Input: Alice, Bob, Claire, Dave, and Eve are playing a game. At the start of the game, they are each holding a ball: Alice has
a pink ball, Bob has a white ball, Claire has a red ball, Dave has a purple ball, and Eve has a yellow ball.
As the game progresses, pairs of players trade balls. First, Alice and Dave swap balls. Then, Claire and Eve swap balls. Then,
Alice and Bob swap balls. Then, Dave and Claire swap balls. Finally, Alice and Claire swap balls. At the end of the game,
Claire has the
Q1: [generate python code] write down the arithmetic or algebra equations as python code, storing the answer as ’ans’
#1:
Alice = "pink"
Bob = "white"
Claire = "red"
Dave = "purple"
Eve = "yellow"
Alice_and_Dave = [Alice, Dave]
Claire_and_Eve = [Claire, Eve]
Alice_and_Bob = [Alice, Bob]
Dave_and_Claire = [Dave, Claire]
Alice_and_Claire = [Alice, Claire]
ans = Claire
print(ans)
Q2: [code execute] Execute the python code and get the value of "ans"
#2: red
Q3: [EOQ]
Ans: red
After
Q1: [generate python code] write down the arithmetic or algebra equations as python code, storing the answer as ’ans’
#1:
players = [’Alice’, ’Bob’, ’Claire’, ’Dave’, ’Eve’]
balls = [’pink’, ’white’, ’red’, ’purple’, ’yellow’]
swap_list = [[0, 3], [2, 4], [0, 1], [3, 2], [0, 2]]
for swap in swap_list:
balls[swap[0]], balls[swap[1]] = balls[swap[1]], balls[swap[0]]
ans = [players[i] + ’ has a ’ + balls[i] for i in range(5)]
print(ans)
Q2: [code execute] Execute the python code and get the value of "ans"
#2: [’Alice has a pink’, ’Bob has a purple’, ’Claire has a white’, ’Dave has a yellow’, ’Eve has a red’]
Q3: [subquestion] At the end, what color does Claire have?
#3: white
Q3: [EOQ]
Ans: white
Table 19: Human feedback: Tracking shuffled objects
Tracking shuffled objects
Before
Input: The word illoctnecos is a scrambled version of the English word
Q1: [string split] What are the letters in "illoctnecos"?
#1: ["i", "l", "l", "o", "c", "t", "n", "e", "c", "o", "s"]
Q2: [string permutation] What are the possible permutations of the letters in #1?
#2: ["i", "l", "l", "o", "c", "t", "n", "e", "c", "o", "s"], ["i", "l", "l", "o", "c", "t", "n", "e", "o", "c", "s"], ["i", "l", "l", "o", "c", "t",
"e", "n", "c", "o", "s"], ["i", "l", "l", "o", "c", "t", "e", "n", "o", "c", "s"], ["i", "l", "l", "o", "c", "n", "t", "e", "c", "o", "s"], ...
Q3: [string check] Among the permutations in #2, which one is an English word?
#3: ["i", "l", "l", "o", "c", "t", "n", "e", "c", "o", "s"]
Q4: [EOQ]
Ans: ["i", "l", "l", "o", "c", "t", "n", "e", "c", "o", "s"]
Word Unscrambling: After
Q1: [string split] What are the letters in "illoctnecos"?
#1: ["i", "l", "l", "o", "c", "t", "n", "e", "c", "o", "s"]
Q2: [string permutation] What are the possible permutations of the letters in #1?
#2: ["illoctnecos", "illoctneocs", "illoctenocs", "illoctencos", "illotnencos", ...]
Q3: [word lookup] Among the permutations in #2, which one is an English word?
#3: collections
Q4: [EOQ]
Ans: collections
Table 20: Human feedback: Word Unscrambling
-----
| [
"Bhargavi, Paranjape",
"Hannaneh, Hajishirzi",
"Sameer, Singh",
"Luke, Zettlemoyer",
"Scott, Lundberg",
"Marco Tulio, Ribeiro"
] | 2023-03-15T00:00:00 | null | false | 106 | 5 | null | http://arxiv.org/abs/2303.09014 | https://arxiv.org/abs/2303.09014 | https://www.semanticscholar.org/paper/0d42221038c05cee8443c5b5af838505ee137dc3 |
Learning To Use Formulas To Solve Simple Arithmetic Problems | No summary was provided. | A novel method to learn to use formulas to solve simple arithmetic word problems and beats the state-of-the-art by 86.07% of the problems in a corpus of standard primary school test questions. | # Learning To Use Formulas To Solve Simple Arithmetic Problems
**Arindam Mitra**
Arizona State University
[email protected]
**Abstract**
Solving simple arithmetic word problems
is one of the challenges in Natural Language Understanding. This paper presents
a novel method to learn to use formulas
to solve simple arithmetic word problems.
Our system, analyzes each of the sentences to identify the variables and their
attributes; and automatically maps this information into a higher level representation. It then uses that representation to
recognize the presence of a formula along
with its associated variables. An equation is then generated from the formal description of the formula. In the training
phase, it learns to score the <formula,
variables> pair from the systematically
generated higher level representation. It is
able to solve 86.07% of the problems in
a corpus of standard primary school test
questions and beats the state-of-the-art by
a margin of 8.07%.
**1** **Introduction**
Developing algorithms to solve math word problems (Table 1) has been an interest of NLP researchers for a long time (Feigenbaum and Feldman, 1963). It is an interesting topic of study from
the point of view of natural language understanding and reasoning for several reasons. First, it incorporates rigorous standards of accurate comprehension. Second, we know of a good representation to solve the word problems, namely algebraic
equations. Finally, the evaluation is straightforward and the problems can be collected easily.
In the recent years several challenges have
been proposed for natural language understanding.
This includes the Winograd Schema challenge
for commonsense reasoning (Levesque, 2011),
**Chitta Baral**
Arizona State University
[email protected]
Story Comprehension Challenge (Richardson et
al., 2013), Facebook bAbl task (Weston et al.,
2015), Semantic Textual Similarity (Agirre et al.,
2012) and Textual Entailment (Bowman et al.,
2015; Dagan et al., 2010). The study of word math
problems is also an important problem as quantitative reasoning is inextricably related to human life.
Clark & Etzioni (Clark, 2015; Clark and Etzioni,
2016) discuss various properties of math word
(and science) problems emphasizing elementary
school science and math tests as a driver for AI.
Researchers at Allen AI Institute have published
two standard datasets as part of the Project Euclid[1]
for future endeavors in this regard. One of them
contains simple addition-subtraction arithmetic
problems (Hosseini et al., 2014) and the other
contains general arithmetic problems (KoncelKedziorski et al., 2015). In this research, we focus
on the former one, namely the AddSub dataset.
_Dan grew 42 turnips and 38 cantelopes . Jes-_
_sica grew 47 turnips . How many turnips did_
_they grow in total ?_
**Formula** **Associated variables**
_part-whole_ _whole: x, parts: {42, 47}_
**Equation** _x = 42 + 47_
Table 1: Solving a word problem using part-whole
Broadly speaking, common to the existing approaches (Kushman et al., 2014; Hosseini et al.,
2014; Zhou et al., 2015; Shi et al., 2015; Roy and
Roth, 2015) is the task of grounding, that takes as
input a word problem in the natural language and
represents it in a formal language, such as, a system of equations, expression trees or states (Hosseini et al., 2014), from which the answer can be
easily computed. In this work, we divide this task
of grounding into two parts as follows:
1http://allenai.org/euclid.html
2144
-----
In the first step, the system learns to connect the
assertions in a word problem to abstract mathematical concepts or formulas. In the second step,
it maps that formula into an algebraic equation.
Examples of such formulas in the arithmetic domain includes part whole which says, ‘the whole
is equal to the sum of its parts’, or the Unitary
_Method that is used to solve problems like ‘A man_
_walks seven miles in two hours. What is his aver-_
_age speed?’._
Consider the problem in Table 1. If the system
can determine it is a ‘part whole’ problem where
the unknown quantity X plays the role of whole
and its parts are 42 and 47, it can easily express
the relation as X = 42 + 47. The translation of
a formula to an equation requires only the knowledge of the formula and can be formally encoded.
Thus, we are interested in the question, ‘how can
an agent learn to apply the formulas for the word
problems?’ Solving a word problem in general,
requires several such applications in series or parallel, generating multiple equations. However, in
this research, we restrict the problems to be of a
single equation which requires only one application.
Our system currently considers three mathematical concepts: 1) the concept of part whole, 2) the
concept of change and 3) the concept of compar_ison. These concepts are sufficient to solve the_
arithmetic word problems in AddSub. Table 2 illustrates each of these three concepts with examples. The part whole problems deal with the part
whole relationships and ask for either the part or
the whole. The change problems make use of the
relationship between the new value of a quantity
and its original value after the occurrence of a series of increase or decrease. The question then
asks for either the initial value of the quantity or
the final value of the quantity or the change. In
case of comparison problems, the equation can be
visualized as a comparison between two quantities and the question typically looks for either the
larger quantity or the smaller quantity or the difference. While the equations are simple, the problems describe a wide variety of scenarios and the
system needs to make sense of multiple sentences
without a priori restrictions on the syntax or the
vocabulary to solve the problem.
Training has been done in a supervised fashion. For each example problem, we specify the
formula that should be applied to generate the ap
**_Change_**
RESULT UNKNOWN
Mary had 18 baseball cards, and 8 were torn .
Fred gave Mary 26 new baseball cards . Mary
bought 40 baseball cards . How many baseball
cards does Mary have now ?
CHANGE UNKNOWN
There were 28 bales of hay in the barn . Tim
stacked bales in the barn today . There are now
54 bales of hay in the barn . How many bales
did he store in the barn ?
START UNKNOWN
Sam ’s dog had puppies and 8 had spots . He
gave 2 to his friends . He now has 6 puppies .
How many puppies did he have to start with?
**_Part Whole_**
TOTAL SET UNKNOWN
Tom went to 4 hockey games this year, but
missed 7 . He went to 9 games last year . How
many hockey games did Tom go to in all ?
PART UNKNOWN
Sara ’s high school played 12 basketball games
this year . The team won most of their games
. They were defeated during 4 games . How
many games did they win ?
**_Comparision_**
DIFFERENCE UNKNOWN
Last year, egg producers in Douglas County
produced 1416 eggs . This year, those same
farms produced 4636 eggs . How many more
eggs did the farms produce this year ?
LARGE QUANTITY UNKNOWN
Bill has 9 marbles. Jim has 7 more marbles than
Bill. How many marbles does Jim have?
SMALL QUANTITY UNKNOWN
Bill has 9 marbles. He has 7 more marbles than
Jim. How many marbles does Jim have?
Table 2: Examples of Add-Sub Word Problems
propriate equation and the relevant variables. The
system then learns to apply the formulas for new
problems. It achieves an accuracy of 86.07% on
the AddSub corpus containing 395 word arithmetic
problems with a margin of 8.07% with the current
state-of-the-art (Roy and Roth, 2015).
Our contributions are three-fold: (a) We model
the application of a formula and present a novel
method to learn to apply a formula; (b) We annotate the publicly available AddSub corpus with the
2145
-----
correct formula and its associated variables; and
(c) We make the code publicly available. [2]
The rest of the paper is organized as follows. In
section 2, we formally define the problem and describe our learning algorithm. In section 3, we define our feature function. In section 4, we discuss
related works. Section 5 provides a detailed description of the experimental evaluation. Finally,
we conclude the paper in section 6.
**2** **Problem Formulation**
A single equation word arithmetic problem P is
a sequence of k words _w1, ..., wk_ and contains
_⟨_ _⟩_
a set of variables VP = _v0, v1, ..., vn_ 1, x
_{_ _−_ _}_
where v0, v1, ..., vn 1 are numbers in P and x is
_−_
the unknown whose value is the answer we seek
(Koncel-Kedziorski et al., 2015). Let Paddsub be
the set of all such problems, where each problem P ∈Paddsub can be solved by a evaluating
a valid mathematical equation E formed by combining the elements of VP and the binary operators
from O = {+, −}.
We assume that each target equation E of
_P_ _∈_ _Paddsub is generated by applying one_
of the possible mathematical formulas from
= _Cpartwhole, Cchange, Ccomparision_ . Let
_C_ _{_ _}_
_addsub_
_Pwhere the target equation[1]_ _[⊆P][addsub][ be the set of all problems] E can be generated by a_
single application of one of the possible formulas
from C. The goal is then to find the correct application of a formula for the problem P ∈Paddsub[1] [.]
**2.1** **Modelling Formulas And their**
**Applications**
We model each formula as a template that has predefined slots and can be mapped to an equation
when the slots are filled with variables. Application of a formula C ∈C to the problem P, is then
defined as the instantiation of the template by a
subset of VP that contains the unknown.
**Part Whole** The concept of part whole has
two slots, one for the whole that accepts a single
variable and the other for its parts that accepts a
set of variables of size at least two. If the value
of the whole is w and the value of the parts are
_p1, p2, ..., pm, then that application is mapped to_
the equation, w = p1 + p2 + ... + pm, denoting
that whole is equal to the sum of its parts.
2The code and data is publicly available at
_https://github.com/ari9dam/MathStudent._
**Change** The change concept has four slots,
namely start, end, gains, losses which respectively
denote the original value of a variable, the final
value of that variable, and the set of increments
and decrements that happen to the original value
of the variable. The start slot can be empty; in
that case it is assumed to be 0. For example, consider the problem, ‘Joan found 70 seashells on the
_beach . she gave Sam some of her seashells. She_
_has 27 seashell . How many seashells did she give_
_to Sam?’. In this case, our assumption is that be-_
fore finding the 70 seashells Joan had an empty
hand. Given an instantiation of change concept
the equation is generated as follows:
_valstart +_ _valg =_ _vall + valend_
_g∈Xgains_ _l∈Xlosses_
**Comparision** The comparision concept has
three slots namely the large quantity, the small
quantity and their difference. An instantiation of
the comparision concept is mapped to the following equation: large = small + difference.
**2.2** **The Space of Possible Applications**
Consider the problem in Table 1. Even though the
correct application is an instance of part whole
formula with whole = x and the parts being
_{42, 47}, there are many other possible applica-_
tions, such as, partWhole(whole=47, parts=x,42),
_change(start=47,_ _losses={x},_ _gains={},_ _end_
_= 42), comparison(large=47, small=x, differ-_
_ence=42)._ Note that, _comparison(large=47,_
_small=38, difference=42) is not a valid applica-_
tion since none of the associated variables is an
_unknown. Let AP be the set of all possible appli-_
cations to the problem P . The following lemma
characterizes the size of AP as a function of the
number of variables in P .
**Lemma 2.2.1. Let P ∈Paddsub[1]** _[be an arithmetic]_
_word problem with n variables (|VP | = n), then_
_the following are true:_
_1. The number of possible applications of part_
_whole formula to the problem P_ _, Npartwhole_
_is (n + 1)2[n][−][2]_ + 1.
_2. The number of possible applications of_
_change formula to the problem P_ _, Nchange_
_is 3[n][−][3](2n[2]_ + 6n + 1) − 2n + 1.
_3. The number of possible applications of_
_comparison formula to the problem P_ _,_
_Ncomparison is 3(n −_ 1)(n − 2).
2146
-----
_4. The number of all possible applications to_
_the problem P is Npartwhole + Nchange +_
_Ncomparison._
Proof of lemma 2.2.1 is provided in the Appendix. The total number of applications for problems having 3, 6, 7, 8 number of variables are 47,
3, 105, 11, 755, 43, 699 respectively. AdditionSubtraction arithmetic problems hardly contain
more than 6 variables. So, the number of possible applications is not intractable in practice.
The total number of applications increases
rapidly mainly due to the change concept. Since,
the template involves two sets, there is a 3[n][−][3] factor present in the formula of Nchange. However,
any application of change concept with gains and
_losses slots containing a collection of variables can_
be broken down into multiple instances of change
concept where the gains and losses slots accepts
only a single variable by introducing more intermediate unknown variables. Since, for any formula that does not have a slot that accepts a set,
the number of applications is polynomial in the
number of variables, there is a possibility to reduce the application space. We plan to explore
this possibility in our future work. For the part
_whole concept, even though there is a exponen-_
tial term involved, it is practically tractable (for
_n = 10, Npartwhole = 2, 817 ). In practice, we_
believe that there will hardly be any part whole
application involving more than 10 variables. For
formulas that are used for other categories of word
math problems (algebraic or arithmetic), such as
the unitary method, formulas for ratio, percentage,
_time-distance and rate of interest, none of them_
have any slot that accepts sets of variables. Thus,
further increase in the space of possible applications will be polynomial.
**2.3** **Probabilistic Model**
For each problem P there are different possible
applications y ∈ _AP, however not all of them are_
meaningful. To capture the semantics of the word
problem to discriminate between competing applications we use the log-linear model, which has a
feature function φ and parameter vector θ ∈ R[d].
The feature function φ : H → R[d] takes as input a problem P and a possible application y and
maps it to a d-dimensional real vector (feature
_vector) that aims to capture the important infor-_
mation required to discriminate between competing applications. Here, the set H is defined as
(P, y) : P _addsub_
_{date the dependency of the possible applications ∈P_ [1] _[∧]_ _[y][ ∈]_ _[A][P][ }][, to accommo-]_
on the problem instance. Given the definition of
the feature function φ and the parameter vector θ,
the probability of an application y given a problem
_P is defined as,_
_e[θ.φ][(][P,y][)]_
_p(y|P_ ; θ) = _y[′]_ _AP_ _[e][θ.φ][(][P,y][′][)]_
_∈_
Here, . denotes dot product. Section 3 definesP
the feature function. Assuming that the parameter θ is known, the function f that computes the
correct application is defined as,
_f_ (P ) = arg max
_y_ _AP_ _[p][(][y][|][P]_ [;][ θ][)]
_∈_
**2.4** **Parameter Estimation**
To learn the function f, we need to estimate the
parameter vector θ. For that, we assume access to
_n training examples, {Pi, yi[∗]_ [:][ i][ = 1][ . . . n][}][, each]
containing a word problem Pi and the correct application yi[∗] [for the problem][ P][i][. We estimate][ θ]
by minimizing the negative of the conditional loglikelihood of the data:
_O(θ) = −_
_log p(yi[∗]_
_i=1_ _[|][P][i][;][ θ][)]_
X
= −
[θ.φ(Pi, yi[∗][)][ −] _[log]_
_i=1_
X
_e[θ.φ][(][P][i][,y][)]]_
_y∈APi_
We use stochastic gradient descent to optimize
the parameters. The gradient of the objective function is given by:
_∇O_
_θ_ [=][ −]
_∇_
[φ(Pi, yi[∗][)][ −]
Xi=1 (1)
_p(y_ _Pi; θ)_ _φ(Pi, y)]_
_|_ _×_
_y∈XAPi_
Note that, even though the space of possible applications vary with the problem Pi, the gradient
for the example containing the problem Pi can be
easily computed.
**3** **Feature Function φ**
A formula captures the relationship between variables in a compact way which is sufficient to generate an appropriate equation. In a word problem, those relations are hidden in the assertions
2147
-----
of the story. The goal of the feature function is
thus to gather enough information from the story
so that underlying mathematical relation between
the variables can be discovered. The feature function thus needs to be aware of the mathematical relations so that it knows what information it
needs to find. It should also be “familiar” with
the word problem language so that it can extract
the information from the text. In this research,
the feature function has access to machine readable dictionaries such as WordNet (Miller, 1995),
ConceptNet (Liu and Singh, 2004) which captures
inter word relationships such as hypernymy, synonymy, antonymy etc, and syntactic and dependency parsers that help to extract the subject, verb,
object, preposition and temporal information from
the sentences in the text. Given these resources,
the feature function first computes a list of attributes for each variable. Then, for each application y it uses that information, to compute if some
aspects of the expected relationship described in y
is satisfied by the variables in y.
Let the first b dimensions of the feature vector
contain part whole related features, the next c dimensions are for change related features and the
remaining d features are for comparison concept.
Then the feature vector for a problem P and an
application of a formula y is computed in the following way:
**Data: A word problem P**, an application y
**Result: d-dimensional feature vector, fv**
**Initialize fv := 0**
**if y is instance of part whole then**
compute fv[1 : b]
**end**
**if y is instance of change then**
compute fv[b + 1 : b + c]
**end**
**if y is instance of comparision then**
compute fv[b + c + 1 : b + c + d]
**end**
**Algorithm 1: Skeleton of the feature function φ**
**3.1** **Attributes of Variables**
For each occurrence of a number in the text a variable is created with the attribute value referring
to that numeric value. An unknown variable is
created corresponding to the question. A special
attribute type denotes the kind of object the variable refers to. Table 3 shows several examples
of the type attribute. It plays an important role
in identifying irrelevant numbers while answering
|the question.|Col2|
|---|---|
|Text|Type|
|John had 70 seashells|seashells|
|70 seashells and 8 were broken|seashells|
|61 male and 78 female salmon|male, salmon|
|35 pears and 27 apples|pear|
Table 3: Example of type for highlighted variables.
The other attributes of a variable captures its
linguistic context to surrogate the meaning of the
variable. This includes the verb attribute i.e.
the verb attached to the variable, and attributes
corresponding to Stanford dependency relations
(De Marneffe and Manning, 2008), such as nsubj,
_tmod, prep in, that spans from either the words in_
associated verb or words in the type. These attributes were computed using Stanford Core NLP
(Manning et al., 2014). For the sentence, “John
found 70 seashells on the beach.” the attributes of
the variable are the following: { value : {70},
**verb : {found}, nsubj : {John}, prep on :**
_{beach }}._
**3.2** **Cross Attribute Relations**
Once the variables are created and their attributes
are extracted, our system computes a set of
boolean variables, each denoting whether the attribute a1 of the variable v1 has the same value
as the attribute a2 of the variable v2. The value
of each attribute is a set of words, consequently
set equality is used to calculate attribute equality.
Two words are considered equal if their lemma
matches.
Four more boolean variables are computed for
each pair of variables based on the attribute type
and they are defined as follows:
**subType:** Variable v1 is a subType of variable v2 if v2.type ⊂ _v1.type or their type consists_
of a single word and there exists the IsA relation
between them in ConceptNet (Speer and Havasi,
2013; Liu and Singh, 2004).
The rest of the section is organized as follows.
We first describe the attributes of the variables that
are computed from the text. Then, we define a list
of boolean variables which computes semantic relations between the attributes of each pair of variables. Finally, we present the complete definition
of the feature function using the description of the
attributes and the boolean variables.
2148
-----
**disjointType is true if v1.type** _v2.type = φ_
**intersectingType is true if v1 is neither a**
_subType of v2 nor is disjointType[T]_ nor equal.
We further compute some more variables by utilizing several relations that exist between words:
**antonym: For every pair of variables v1 and**
_v2, we compute an antonym variable that is true if_
there exists a pair of word in (v1.verb _v1.adj)_
_×_
(v2.verb _v2.adj) that are antonym to each other_
in WordNet irrespective of their part of speech tag.[S]
[S]
**relatedVerbs:** The verbs of two variables are
related if there exists a RelatedTo relations in ConceptNet between them.
**subjConsume:** The nsubj of v1 consumes the
_nsubj of v2 if the formers refers to a group and the_
latter is a part of that group. For example, in the
problem, ‘Joan grew 29 carrots and 14 watermel_ons . Jessica grew 11 carrots . How many carrots_
_did they grow in all ?’, the nsubj of the unknown_
variable consumes others. This is computed using
Stanford co-reference resolution. For the situation
where there is a variable with nsubj as ‘they’ and
it does not refer to any entity, the subjConsume
variable is assumed to be implicitly true for any
variable having a nsubj of type person.
**3.3** **Features: Part Whole**
The part whole features look for some combinations of the boolean variables and the presence
of some cue words (e.g. ‘all’) in the attribute
list. These features capture the underlying reasonings that can affect the decision of applying a part
_whole concept. We describe the conditions which_
when satisfied activate the features. If active, the
value of a feature is the number of variables associated with the application y and 0 otherwise. This
is also true for change and comparision features
also. Part whole features are computed only when
the y is an instance of the formula part whole. The
same applies for change and comparision features.
**Generic Word Cue** This feature is activated
if y.whole has a word in its attributes that belongs
to the “total words set” containing the followings
words “all”, “total”, “overall”, “altogether”, “together” and “combine”; and none of the variables
in parts are marked with these words.
**ISA Type Cue** is active if all the part variables
are subType of the whole.
**Type-Verb Cue** is active if the type and verb
attributes of vwhole matches that of all the variables
in the part slot of y.
**Type-Individual Group Cue** is active if the
variable vwhole subjConsume each part variable vp
in y and their type matches.
**Type-Verb-Tmod Cue** is active if the variable in the slot whole is the unknown and for each
part variable vp their verb, type and tmod (time
modifier of the verb) attributes match.
**Type-SubType-Verb Cue** is active if the variable in the slot whole is either the unknown or
marked with a word in “total words set” and for
all parts vp, their verb matches and one of the type
or subType boolean variable is true.
**Type-SubType-Related Verb Cue** is similar
to Type-SubType-Verb Cue however relaxes the
_verb match conditions to related verb match. This_
is helpful in problems like ‘Mary went to the mall.
_She spent $ 13.04 on a shirt and $ 12.27 on a_
_jacket . She went to 2 shops . In total, how much_
_money did Mary spend on clothing ? ’._
**Type-Loose Verb Cue** ConceptNet does not
contain all relations between verbs. For example,
according to ConceptNet ‘buy’ and ‘spend’ are related however there is no relation in ConceptNet
between ‘purchase’ and ‘spend’. To handle these
situations, we use this feature which is similar to
the previous one. The difference is that it assumes
that the verbs of part-whole variable pairs are related if all verbs associated with the parts are same,
even though there is no relation in ConceptNet.
**Type-Verb-Prep Cue** is active if type and
verb matches. The whole does not have a “preposition” but parts have and they are different.
**Other Cues** There are also features that add
_nsubj match criteria to the above ones. The prior_
feature for part whole is that the whole if not unknown, is smaller than the sum of the parts. There
is one more feature that is active if the two part
variables are antonym to each other; one of type
or subType should be true.
**3.4** **Features: Change**
The change features are computed from a set of 10
simple indicator variables, which are computed in
the following way:
2149
-----
**Start Cue** is active if the verb associated with
the variable in start slot has one of the following
possessive verbs : {‘call for’, ‘be’, ‘contain’, ‘remain’, ‘want’, ‘has’, ‘have’, ‘hold’, ...}; the type
and nsubj of start variable match with the end variable and the tense of the end does not precede the
start. The list of ‘possessive verbs’ is automatically constructed by adding all the verbs associated with the start and the end slot variables in
annotated corpus.
**Start Explicit Cue** is active if one of following words, “started with”, “initially”, “begining”,
“originally” appear in the context of the start variable and the type of start and end variables match.
**Start prior** is active if the verb associated
with the variable in start slot is a member of the
set ‘possessive verbs’ and the variable appears in
first sentence.
**Start Default Cue** is active if the start variable has a “possessive verb” with past tense.
**End Cue** is active if the verb associated with
the variable in slot end has a possessive verb with
the tense of the verb not preceding the tense of
the start, in case the start is not missing. The type
and nsubj should match with either the start or the
gains in case the start is missing.
**End Prior** is true if vend has a possessive verb
and an unknown quantity and at least one of vend
or vstart does not have a nsubj attribute.
**Gain Cue** is active if for all variables in the
_gains slot, the type matches with either vend or_
_vstart and one of the following is true: 1) the nsubj_
of the variable matches with vend or vstart and the
verb implies gain (such as ‘find’) and 2) the nsubj
of the variable does not match with vend or vstart
and the verb implies losing (e.g. spend). The set
of gain and loss verbs are collected from the annotated corpus by following the above procedure.
**Gain Prior** is true if the problem contains
only three variables, with vstart < vend and the
only variable in the gain slot, associated with nonpossessive verb is the unknown.
**Loss Cue & Loss prior** are designed in a
fashion similar to the Gain cue and Gain Prior.
Let us say badgains denotes that none of the gain
prior or gain cue is active even though the gain slot
is not empty. badlosses is defined similarly and let
_bad = badgains ∨_ _badlosses . Then the change fea-_
tures are computed from these boolean indicators
using logical operators and, or, not. Table4 shows
some of the change features.
!bad _gaincue_ _startdefault_ _endcue_
_∧_ _∧_ _∧_
!bad !gaincue _losscue_ _startdefault_ _endcue_
_∧_ _∧_ _∧_ _∧_
!bad (gaincue _losscue)_
_∧_ _∨_ _∧_
_startcue_ !startdefault _endcue_
_∧_ _∧_
!bad (gaincue _losscue)_
_∧_ _∨_ _∧_
_startexplicit_ !startdefault _endcue_
_∧_ _∧_
!bad (gaincue _losscue)_ _startprior_
_∧_ _∨_ _∧_ _∧_
(endcue _endprior)_
_||_
!bad (gaincue _losscue)_ (startprior
_∧_ _∨_ _∧_ _∨_
_startcue)_ !startdefault _endprior_
_∧_ _∧_
Table 4: Activation criteria of some change related
features.
**3.5** **Features: Comparison**
The features for the “compare” concept are relatively straight forward.
**Difference Unknown Que** If the application
_y states that the unknown quantity is the differ-_
ence between the larger and smaller quantity, it is
natural to see if the variable in the difference slot is
marked with a comparative adjective or comparative adverb. The prior is that the value of the larger
quantity must be bigger than the small one. Another two features add the type and subject matching criteria along with the previous ones.
**Large & Small Unknown Que** These features can be active only when the variable in the
_large or small slot is unknown. To detect if the ref-_
erent is bigger or smaller, it is important to know
the meaning of the comparative words such as
‘less’ and ‘longer’. Since, the corpus contains only
_33 comparison problems we collect these compar-_
ative words from web which are then divided into
two categories. With these categories, the features
are designed in a fashion similar to change features that looks for type, subject matches.
**3.6** **Handling Arbitrary Number of Variables**
This approach can handle arbitrary number of
variables. To see that consider the problem, ‘Sally
found 9 seashells, Tom found 7 seashells, and
Jessica found 5 seashells on the beach . How
many seashells did they find together ?’. Let us
say that feature vector contains only the ‘Type_Individual Group Cue’ feature and the weight_
2150
-----
of that feature is 1. Consider the two following applications: y1 = partWhole(x,{9,7}) and
_y2 = partWhole(x,{9,7, 5}). For both y1 and y2_
the ‘Type-Individual Group Cue’ feature is active
since the subject of the unknown x refers to a
group that contains the subject of all part variables
in y1 and y2 and their types match. However, as
mentioned in section 3.3, when active, the value
of a feature is the number of variables associated
with the application. Thus _[p]p[(]([y]y[2]1[;];[P,θ]P,θ[)])_ [=] _ee[3][4][ =][ e][.]_
Thus, y2 is more probable than y1.
**4** **Related Works**
Researchers in early years have studied math word
problems in a constrained domain by either limiting the input sentences to a fixed set of patterns (Bobrow, 1964b; Bobrow, 1964a; Hinsley et
al., 1977) or by directly operating on a propositional representation instead of a natural language
text (Kintsch and Greeno, 1985; Fletcher, 1985).
Mukherjee and Garain (2008) survey these works.
Among the recent algorithms, the most general
ones are the work in (Kushman et al., 2014; Zhou
et al., 2015) . Both algorithms try to map a word
math problem to a ‘system template’ that contains
a set of ‘equation templates’ such as ax + by =
_c._ These ‘system templates’ are collected from
the training data. They implicitly assume that
these templates will reoccur in the new examples
which is a major drawback of these algorithms.
Also, Koncel-Kedziorski et al. (2015) show that
the work of Kushman et al. (2014) heavily relies on the overlap between train and test data and
when this overlap is reduced the system performs
poorly.
Work of (Koncel-Kedziorski et al., 2015; Roy
and Roth, 2015) on the other hand try to map the
math word problem to an expression tree. Even
though, these algorithms can handle all the four
arithmetic operators they cannot solve problems
that require more than one equation. Moreover,
experiments show that our system is much more
robust to diversity in the problem types between
training and test data for the problems it handles.
The system ARIS in (Hosseini et al., 2014)
solves the addition-subtraction problems by categorizing the verbs into seven categories such as
‘positive transfer’, ‘loss’ etc. It represents the information in a problem as a state and then updates
the state according to the category of a verb as the
story progresses. Both ARIS and our system share
the property that they give some explanation behind the equation they create. However, the verb
categorization approach of ARIS can only solve a
subset of addition-subtraction problems (see error
analysis in (Hosseini et al., 2014)); whereas the usage of formulas to model the word problem world,
gives our system the ability to accommodate other
math word problems as well.
**5** **Experimental Evaluation**
**5.1** **Dataset**
The AddSub dataset consist of a total of 395
addition-subtraction arithmetic problems for third,
fourth, and fifth graders. The dataset is divided
into three diverse set MA1, MA2, IXL containing
134, 140 and 121 problems respectively. As mentioned in (Hosseini et al., 2014), the problems in
MA2 have more irrelevant information compared
to the other two datasets, and IXL includes more
information gaps.
**5.2** **Result**
Hosseini et al. (2014) evaluate their system using
3-fold cross validation. We follow that same procedure. Table 5 shows the accuracy of our system on each dataset (when trained on the other
two datasets). Table 6 shows the distribution of
the part whole, change, comparison problems and
the accuracy on recognizing the correct formula.
|Col1|MA1|IXL|MA2|Avg|
|---|---|---|---|---|
|ARIS|83.6|75.0|74.4|77.7|
|KAZB|89.6|51.1|51.2|64.0|
|ALGES|-|-|-|77.0|
|Roy & Roth|-|-|-|78.0|
|Majority|45.5|71.4|23.7|48.9|
|Our System|96.27|82.14|79.33|86.07|
Table 5: Comparision with ARIS, KAZB (Kushman et al., 2014), ALGES (Koncel-Kedziorski et
al., 2015) and the state of the art Roy & Roth on
the accuracy of solving arithmetic problems.
As we can see in Table 6 only IXL contains
problems of type ‘comparison’. So, to study the
accuracy in detecting the compare formula we
uniformly distribute the 33 examples over the 3
datasets. Doing that results in only two errors in
the recognition of a compare formula and also increases the overall accuracy of solving arithmetic
problems to 90.38%.
2151
-----
|Type|Col2|MA1|IXL|MA2|
|---|---|---|---|---|
|part whole|Total correct|59 59|89 81|51 40|
|change|Total correct|74 70|18 15|68 56|
|compare|Total correct|0 0|33 0|0 0|
Table 6: Accuracy on recognizing the correct application. None of the MA1 and MA2 dataset contains “compare” problems so the cross validation
accuracy on “IXL” for “compare” problems is 0.
cannot solve this problem and we need to either
use a better representation or a more powerful
learning algorithm to be able to answer correctly.
Another interesting example of this kind is the
following: “For his car, Mike spent $118.54 on
speakers and $106.33 on new tires . Mike wanted
3 CD ’s for $4.58 but decided not to . In total,
how much did Mike spend on car parts?”
**Incomplete IsA Knowledge:** For the problem “Tom bought a skateboard for $ 9.46, and
spent $ 9.56 on marbles . Tom also spent $ 14.50
on shorts . In total, how much did Tom spend
on toys ? ”, it is important to know that ‘skateboard’ and ‘marbles’ are toys but ‘shorts’ are not.
However, such knowledge is not always present in
ConceptNet which results in error.
**Parser Issue:** Error in dependency parsing is
another source of error. Since the attribute values
are computed from the dependency parse tree, a
wrong assignment (mostly for verbs) often makes
the entity irrelevant to the computation.
**6** **Conclusion**
Solving math word problems often requires explicit modeling of the word. In this research, we
use well-known math formulas to model the word
problem and develop an algorithm that learns to
map the assertions in the story to the correct formula. Our future plan is to apply this model to
general arithmetic problems which require multiple applications of formulas.
**7** **Acknowledgement**
We thank NSF for the DataNet Federation Consortium grant OCI-0940841 and ONR for their grant
N00014-13-1-0334 for partially supporting this research.
**5.3** **Error Analysis**
An equation that can be generated from a change
or comparision formula can also be generated by
a part whole formula. Four such errors happened
for the change problems and out of the 33 com_pare problems, 18 were solved by part whole._
Also, there are 3 problems that require two applications. One example of such problem is, “There
_are 48 erasers in the drawer and 30 erasers on the_
_desk. Alyssa placed 39 erasers and 45 rulers on_
_the desk. How many erasers are now there in to-_
_tal ?”. To solve this we need to first combine the_
two numbers 48 and 30 to find the total number of
erasers she initially had. This requires the knowledge of ‘part-whole’. Now, that sum of 48 and
30, 39 and x can be connected together using the
‘change’ formula. With respect to ‘solving’ arithmetic problems, we find the following categories
as the major source of errors:
**Problem Representation:** Solving problems
in this category requires involved representation.
Consider the problem, ‘Sally paid $ 12.32 total for
_peaches, after a ‘3 dollar’ coupon, and $ 11.54_
_for cherries . In total, how much money did Sally_
_spend?’. Since the associated verb for the variable_
3 dollar is ‘pay’, our system incorrectly thinks that
_Sally did spend it._
**Information Gap:** Often, information that is
critical to solve a problem is not present in the text.
E.g. Last year, 90171 people were born in a coun_try, and 16320 people immigrated to it . How_
_many new people began living in the country last_
_year ?. To correctly solve this problem, it is impor-_
tant to know that both the event ‘born’ and ‘immigration’ imply the ‘began living’ event, however
that information is missing in the text. Another
example is the problem, “Keith spent $6.51 on a
_rabbit toy, $5.79 on pet food, and a cage cost_
_him $12.51 . He found a dollar bill on the ground._
_What was the total cost of Keith ’s purchases? ”. It_
is important to know here that if a cage cost Keith
$12.51 then Keith has spent $12.51 for cage.
**Modals:** Consider the question ‘Jason went to
_11 football games this month ._ _He went to 17_
_games last month, and plans to go to 16 games_
_next month . How many games will he attend in_
_all?’ To solve this question one needs to under-_
stand the meanings of the verb “plan” and “will”.
If we replace “will” in the question by “did” the
answer will be different. Currently our algorithm
2152
-----
**References**
Eneko Agirre, Mona Diab, Daniel Cer, and Aitor
Gonzalez-Agirre. 2012. Semeval-2012 task 6: A
pilot on semantic textual similarity. In Proceedings
_of the First Joint Conference on Lexical and Com-_
_putational Semantics-Volume 1: Proceedings of the_
_main conference and the shared task, and Volume_
_2: Proceedings of the Sixth International Workshop_
_on Semantic Evaluation, pages 385–393. Associa-_
tion for Computational Linguistics.
Daniel G Bobrow. 1964a. Natural language input for a
computer problem solving system.
Daniel G. Bobrow. 1964b. A question-answering
system for high school algebra word problems. In
_Proceedings of the October 27-29, 1964, Fall Joint_
_Computer Conference, Part I, AFIPS ’64 (Fall, part_
I), pages 591–614, New York, NY, USA. ACM.
Samuel R Bowman, Gabor Angeli, Christopher Potts,
and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference.
_arXiv preprint arXiv:1508.05326._
Peter Clark and Oren Etzioni. 2016. My computer
is an honor student but how intelligent is it? standardized tests as a measure of ai. AI Magazine.(To
_appear)._
Peter Clark. 2015. Elementary school science and
math tests as a driver for ai: Take the aristo challenge! In AAAI, pages 4019–4021.
Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan
Roth. 2010. Recognizing textual entailment: Rational, evaluation and approaches–erratum. Natural
_Language Engineering, 16(01):105–105._
Marie-Catherine De Marneffe and Christopher D Manning. 2008. Stanford typed dependencies manual.
Technical report, Technical report, Stanford University.
Edward A Feigenbaum and Julian Feldman. 1963.
Computers and thought.
Charles R Fletcher. 1985. Understanding and solving
arithmetic word problems: A computer simulation.
_Behavior Research Methods, Instruments, & Com-_
_puters, 17(5):565–571._
Dan A Hinsley, John R Hayes, and Herbert A Simon.
1977. From words to equations: Meaning and representation in algebra word problems. Cognitive pro_cesses in comprehension, 329._
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533._
Walter Kintsch and James G Greeno. 1985. Understanding and solving word arithmetic problems.
_Psychological review, 92(1):109._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. Association for Computational Linguistics.
Hector J Levesque. 2011. The winograd schema challenge.
Hugo Liu and Push Singh. 2004. Conceptneta practical commonsense reasoning tool-kit. BT technology
_journal, 22(4):211–226._
Christopher D Manning, Mihai Surdeanu, John Bauer,
Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL (System Demon_strations), pages 55–60._
George A Miller. 1995. Wordnet: a lexical
database for english. Communications of the ACM,
38(11):39–41.
Anirban Mukherjee and Utpal Garain. 2008. A review
of methods for automatic understanding of natural
language mathematical problems. Artificial Intelli_gence Review, 29(2):93–122._
Matthew Richardson, Christopher JC Burges, and Erin
Renshaw. 2013. Mctest: A challenge dataset for
the open-domain machine comprehension of text. In
_EMNLP, volume 1, page 2._
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. EMNLP.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and
reasoning. In Proceedings of the Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP), Lisbon, Portugal._
Robert Speer and Catherine Havasi. 2013. Conceptnet
5: A large semantic network for relational knowledge. In The Peoples Web Meets NLP, pages 161–
176. Springer.
Jason Weston, Antoine Bordes, Sumit Chopra, and
Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks.
_arXiv preprint arXiv:1502.05698._
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using
quadratic programming. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 817–822._
2153
-----
| [
"Arindam, Mitra",
"Noah A., Smith",
"Chitta, Baral",
"Katrin, Erk"
] | 2016-08-01T00:00:00 | ACL 2016 Long Papers | true | 106 | 13 | null | https://aclanthology.org/P16-1202 | null | https://www.semanticscholar.org/paper/f6b5335f27b9583dd152d8cd4ea9134e24bd297b |
MathDQN: Solving Arithmetic Word Problems via Deep Reinforcement Learning | Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-of-the-art methods. Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15\%. | This is the first attempt of applying deep reinforcement learning to solve arithmetic word problems and yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15\%. | null | [
"Jingkuan, Song",
"Dongxiang, Zhang",
"Lei, Wang",
"Lianli, Gao",
"Heng Tao, Shen",
"Long, Guo"
] | 2018-04-27T00:00:00 | AAAI 2018 | false | 106 | 14 | null | https://ojs.aaai.org/index.php/AAAI/article/view/11981 | null | https://www.semanticscholar.org/paper/835c6b524b90b1639aba28742f7161137ddf4397 |
Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Recent progress in large language models (LLMs) like GPT-4 and PaLM-2 has brought significant advancements in addressing math reasoning problems. In particular, OpenAI's latest version of GPT-4, known as GPT-4 Code Interpreter, shows remarkable performance on challenging math datasets. In this paper, we explore the effect of code on enhancing LLMs' reasoning capability by introducing different constraints on the Code Usage Frequency of GPT-4 Code Interpreter. We found that its success can be largely attributed to its powerful skills in generating and executing code, evaluating the output of code execution, and rectifying its solution when receiving unreasonable outputs. Based on this insight, we propose a novel and effective prompting method, explicit $\underline{\text{c}}$ode-based $\underline{\text{s}}$elf-$\underline{\text{v}}$erification (CSV), to further boost the mathematical reasoning potential of GPT-4 Code Interpreter. This method employs a zero-shot prompt on GPT-4 Code Interpreter to encourage it to use code to self-verify its answers. In instances where the verification state registers as "False", the model shall automatically amend its solution, analogous to our approach of rectifying errors during a mathematics examination. Furthermore, we recognize that the states of the verification result indicate the confidence of a solution, which can improve the effectiveness of majority voting. With GPT-4 Code Interpreter and CSV, we achieve an impressive zero-shot accuracy on MATH dataset $\textbf{(53.9}$% → $\textbf{84.3}$%$\textbf{)}$. | The effect of code on enhancing LLMs' reasoning capability by introducing different constraints on the Code Usage Frequency of GPT-4 Code Interpreter is explored, and a novel and effective prompting method, explicit \uline{c}ode-based \ULine{s}elf-\uline {v}erification~(CSV), is proposed to further boost the mathematical reasoning potential of GPN. | ### SOLVING CHALLENGING MATH WORD PROBLEMS USING GPT-4 CODE INTERPRETER WITH CODE-BASED SELF-VERIFICATION
**Aojun Zhou[1][∗]** **Ke Wang[2][∗]** **Zimu Lu[3][∗]** **Weikang Shi[4][∗]** **Sichun Luo[5][∗]** **Zipeng Qin[1]**
**Shaoqing Lu** **[6]** **Anya Jia** **[7]** **Linqi Song[5]** **Mingjie Zhan[1][†]** **Hongsheng Li[1][‡]**
1Multimedia Laboratory (MMLab), The Chinese University of Hong Kong
2Nanjing University 3University of Science and Technology of China
4Tsinghua University 5City University of Hong Kong
6Changsha University of Science and Technology 7Tufts University
_{aojunzhou, wangk.gm, sichunluo2, zmjdll}@gmail.com_
[email protected] [email protected]
[email protected] [email protected]
ABSTRACT
Recent progress in large language models (LLMs) like GPT-4 and PaLM-2 has
brought significant advancements in addressing math reasoning problems. In
particular, OpenAI’s latest version of GPT-4, known as GPT-4 Code Interpreter,
shows remarkable performance on challenging math datasets. In this paper, we
explore the effect of code on enhancing LLMs’ reasoning capability by introducing different constraints on the Code Usage Frequency of GPT-4 Code Interpreter. We found that its success can be largely attributed to its powerful skills
in generating and executing code, evaluating the output of code execution, and
rectifying its solution when receiving unreasonable outputs. Based on this insight, we propose a novel and effective prompting method, explicit code-based
self-verification (CSV), to further boost the mathematical reasoning potential of
GPT-4 Code Interpreter. This method employs a zero-shot prompt on GPT-4 Code
Interpreter to encourage it to use code to self-verify its answers. In instances where
the verification state registers as “False”, the model shall automatically amend its
solution, analogous to our approach of rectifying errors during a mathematics examination. Furthermore, we recognize that the states of the verification result
indicate the confidence of a solution, which can improve the effectiveness of majority voting. With GPT-4 Code Interpreter and CSV, we achieve an impressive
zero-shot accuracy on MATH dataset (53.9% → **84.3%).**
1 INTRODUCTION
Large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023) have shown
impressive success in various tasks, such as common sense understanding and code generation.
However, they still fall short in mathematical reasoning, often producing nonsensical or inaccurate content and struggling with complex calculations. Previous attempts to tackle these challenges
include the Chain-of-Thought (CoT) (Wei et al., 2022) framework, which enhances LLMs’ logical reasoning abilities by generating intermediate steps in their reasoning process. Additionally,
PAL (Gao et al., 2023) introduces a novel approach by using the Python programming interpreter to
improve computational accuracy.
In recent advancements, OpenAI has unveiled an improved version of GPT-4, namely the GPT-4
Code Interpreter[12] or GPT4-Code, which is proficient at providing logical natural language reason
_∗Equal contribution._
_†Project lead._
_‡Corresponding author._
[1https://openai.com/blog/chatgpt-plugins#code-interpreter](https://openai.com/blog/chatgpt-plugins##code-interpreter)
[2https://chat.openai.com/?model=GPT4-Code-interpreter](https://chat.openai.com/?model=GPT4-Code-interpreter)
-----
ing, alongside step-by-step Python code. Notably, it can generate and execute code incrementally,
and subsequently present the executed code’s output back to the LLM. The addition of code generation and execution to natural language outputs has shown promising results in solving mathematical
reasoning problems. Our initial experiments show that GPT4-Code achieved an impressive zero-shot
accuracy of 69.7% on the challenging MATH dataset (Hendrycks et al., 2021), marking a significant
improvement of 27.5% over GPT-4’s performance (42.2%).
While GPT4-Code has demonstrated proficiency in solving math problems, there has been a notable
absence of systematic analysis focusing on understanding and further enhancing its mathematical
problem-solving abilities. A key distinction between GPT4-Code and its predecessor, GPT-4, lies in
GPT4-Code’s ability to automatically generate and execute code. Therefore, this paper presents pilot experiments that investigate GPT4-Code’s code generation and execution mechanism using specific code-constrained prompts. The analysis reveals that GPT4-Code’s strong performance is not
solely due to its code generation and execution abilities, but also its capacity to adjust its problemsolving strategies based on feedback from code execution—a process we term self-debugging (illustrated in Tab. 7 and Tab. 8). Due to the fact that code generation evolves its reasoning step-by-step
and performs self-debugging after code execution errors, there is an increased frequency of code
usage. Hence, we introduce the concept of Code Usage Frequency to differentiate these unique
prompting strategies to quantitatively analyze the impact of code-constrained prompts on GPT4Code for mathematical problem-solving.
The step-by-step code generation and self-debugging mechanisms highlight the critical role of code
in mathematical problem-solving. Nevertheless, the self-debugging mechanism only verifies each
step of the generated code while lacks the verification of the reasoning steps and the final answer,
which has been demonstrated to be of vital importance to the math problem-solving abilities of
LLMs (Cobbe et al., 2021; Lightman et al., 2023; Weng et al., 2023).
We therefore ask the question: can we fully exploit the code generation and self-debugging mech_anisms in GPT4-code, so that it can automatically verify and correct its solutions, without extra_
_assistance from other models or users?_
To answer this question, we propose a simple yet effective prompting technique termed the ex**plicit code-based self-verification (CSV), which guides GPT4-Code to generate additional code**
that verifies the answer and adjusts the reasoning steps if there’s a flaw in reasoning. Unlike previous methods that rely on external language models for verification (Lightman et al., 2023; Cobbe
et al., 2021), our approach leverages GPT4-Code’s inherent strengths. This approach offers two key
benefits: (1) When the verification indicates an answer is False, GPT4-Code can rectify its prior
solution and provide an improved alternative. (2) Solutions verified as True tend to be more reliable, akin to human problem-solving. However, even if a solution is self-verified as False, we do
not directly abandon it. Instead, we propose a weighted majority voting strategy that incorporates
the code-based solution verification results, as opposed to relying exclusively on the frequency of
answers. We assign different weights to the solutions according to their verification states, reflecting
the solutions’ varying levels of reliability. In alignment with the Code Usage Frequency analysis
from our pilot experiments, our explicit code-based self-verification prompt boosts GPT4-Code’s
accuracy in mathematical problem-solving with increased code usage.
Empirical study demonstrates the effectiveness of our proposed framework on the MATH, GSM8K,
and MMLU-Math datasets using GPT4-Code. Our approach achieves an impressive accuracy of
**84.32% on the MATH dataset, greatly outperforming the base GPT4-Code and previous state-of-**
the-art methods. Additionally, we are making our experimental data on the MMLU-Math and MATH
datasets publicly available, enabling result replication and facilitating fine-tuning of open-source
LLM models (e.g., LLaMA 2 (Touvron et al., 2023)) to further enhance mathematical problemsolving capabilities with the assistance of code generation.
This paper’s main contributions can be summarized in three key aspects:
- This study provides the first systematic analysis of code generation, execution, and selfdebugging’s role in mathematical problem-solving. Our findings reveal that GPT4-Code’s
impressive mathematical problem-solving proficiency is primarily attributed to its step-bystep code generation and dynamic solution refinement based on code execution outcomes.
-----
Question
**Question:**
The greatest common divisor of positive integers m and n is 6.
The least common multiple of m and n is 126. What is the least
possible value of m + n? Answer: 60
(a) Model output (b) Model output
Prompt 1 Prompt 2
You are not allowed to use
You can use code only 1 time. The problem is:
any code. The problem is:
Question .
Question . _{_ _}_
_{_ _}_
We know that for any two positive First, we should recall that for any two positive inteintegers m and n, the product of gers m and n, their greatest common divisor (gcd)
their greatest common divisor (GCD) and least common multiple (lcm) satisfy the relationand least common multiple (LCM) is ship:
equal to the product of the two num-bers. This can be written as: _m · n = gcd(m, n) · lcm(m, n)_
In this problem, we’re given that gcd(m, n) = 6
Given thatGCD(m, n GCD= m) ·( LCM ·m, n n ) = 6(m, n and) and lcmvalues into the formula to get:(m, n) = 126m · n = 6. We can substitute these · 126
_LCM_ (m, n) = 126, we can We can solve this equation for one variable and then
use the fact that m and n are integers to generate
substitute these values into the equa
possible pairs (m, n).
tion:
After we have the pairs (m, n), we can find the pair
6 · 126 = m · n that minimizes m + n.
Solving this gives usNow, we need to find the pair of fac- m · n = 756. code
tors of 756 that add up to the smallest import sympy as sp
sum, as we want to find the least pos- gcd,lcm = 6,126
sible value of m + n. product = gcd * lcm
The pairs of factors of 756 are pairs = [(d, product // d) for d in
(1,756), (2,378), (3,252), (4,189), sp.divisors(product) if d _<=_ product //
(6,126), (7,108), (9,84), (12,63), d]
(14,54), (18,42), (21,36), (27,28). min pair = min(pairs, key=sum)
We can see that the pair (21,36) gives min pair, sum(min pair)
the smallest sum, which is 21 + >>> ((27, 28), 55)
36 = 57.
So, the least possible value of m+n
is 57. The pair (m, n) that minimizes m + n is
Therefore, the solution to the
(27, 28), and the minimum possible value of m +
problem is 57 .✗ _n is_ 55 .✗
(c) Model output
BasicPrompt
Solve the problem and put your answer in
\\boxed{}. The problem is: {Question}.
One important property relating greatest common divisor (GCD) and least common multiple (LCM) is that
GCD(m, n) × LCM(m, n) = m × n
Given in the problem, we can get
6 × 126 = m × n
Let’s compute the right hand side of the equation first.
code
gcd,lcm = 6, 126
gcd * lcm
>>> 756
The product of m and n is 756.
Since the GCD of m and n is 6, both m and n must
be multiples of 6.
Let’s find all the pairs of multiples of 6 that multiply to
756. We’ll do this by finding all the factors of 756 that
are multiples of 6.
code
[(i, m times n // i) for i in range(6,
int(m times n**0.5) + 1, 6) if m times n %
i == 0]
>>> [(6, 126), (12, 63), (18, 42)]
The pairs of factors of 756 that are multiples of 6 are:
1. (6, 126) 2. (12, 63) 3. (18, 42)
We want to find the pair with the least sum. Let’s compute the sums for each pair,identify the smallest one.
code
sums = [(pair, sum(pair)) for pair in factor pairs]
min(sums, key=lambda x: x[1])
>>> ((18, 42), 60)
The pair with the smallest sum is (18, 42), and the
sum is 60.So, the least possible value is 60 . ✓
Figure 1: Example from the 277th number theory problem in the MATH dataset: (a) Prompt 1 generates intermediate text-only reasoning steps. (b) Prompt 2 generates intermediate text reasoning
steps and post-hoc Python code, akin to PAL (Gao et al., 2023). (c) Basic prompt that set no constrain on code usage. Details of the three prompts are presented in Sec. 3.1. (For more examples,
see Tab. 9 and Tab. 10.)
- We introduce the innovative explicit code-based self-verification (CSV) prompt, which
leverages GPT4-Code’s advanced code generation mechanism. This prompt guides the
model to verify the answer and then reevaluate its solution with code. CSV not only extends the verification to the logic behind problem-solving but also improves the efficacy of
the majority voting method by integrating the verification states.
- Additionally, we have contributed to the LLM community by creating two new instructionfollowing datasets: MATH-code and MMLU-Math-code. These datasets are designed to
enhance the mathematical reasoning capabilities of open-source models.
2 RELATED WORK
**Chain-of-Thought Reasoning. The Chain-of-Thought (CoT) prompting approach proposed by Wei**
et al. (2022) is a notable contribution that showcases the multi-step reasoning capabilities of LLMs.
By simply adding “Let’s think step by step” before questions, Kojima et al. (2022) implements Zero_shot-CoT, which can serve as a strong zero-shot baseline. Further research extends the reasoning_
-----
capabilities of CoT by applying majority voting to improve self-consistency (Wang et al., 2023),
choosing few-shot examples and output chains with more complex reasoning steps (Fu et al., 2022),
breaking down the problem to simpler sub-problems (Zhou et al., 2023), or even expanding Chainof-thought to Tree-of-Thoughts (Yao et al., 2023). Similar to Zero-shot-CoT, our method apply “step
by step”-like prompts to regularize GPT4-Code’s use of code without the careful design of step-bystep few-shot examples. Additionally, We enhance majority voting to verification-guided weighted
majority voting, leveraging the results of CSV as voting weights.
**Solving Math Problems with Code. Large language models have been found to be less accurate in**
performing arithmetic calculations, such as addition, subtraction, multiplication, etc (Cobbe et al.,
2021; Lewkowycz et al., 2022; Gao et al., 2023; Lu et al., 2022). Consequently, previous works
have attempted to solve math problems with the assistance of code. The GSM8K dataset (Cobbe
et al., 2021) uses calculation annotations to extract all arithmetic calculations solved by an external
calculator: the Python eval function. To further leverage the role of code in LLMs, Program-Aided
_Language model (PAL) (Gao et al., 2023) as well as Program of Thoughts (PoT) (Chen et al., 2022)_
interpret the math problems as Python codes and execute the codes with an external Python interpreter to obtain the answer. Although they can get more accurate answers than some non-code
methods, many generated codes have execution errors or get wrong answers due to the lack of verification mechanism. Our approach not only utilizes the ability of GPT4-Code to generate multi-step
codes and refine codes that fail to run, but also uses CSV to enhance the reliability and accuracy of
the answers.
**Self-Verification. Human problem solving is not always a one-time success, but rather requires iter-**
ative thinking, verification, and refinement. Unlike previous studies that train an additional verifier
to verify the correctness of final answers (Cobbe et al., 2021) or intermediate steps (Lightman et al.,
2023; Li et al., 2023), Weng et al. (2023) showed the self-verification abilities of LLMs by generating multiple answers and ranking them by self-verification scores. Furthermore, SELF-REFINE
proposed by Madaan et al. (2023) iteratively refines its output through self-generated feedback.
Unlike these self-verification methods that require LLMs to give verification feedback in natural
language, our method applies generated codes to verify the answers and votes on different answers
based on the verification results, thus improving the accuracy of the verification and making full use
of the information in the verification process.
3 METHOD
We first conduct a pilot experiment with GPT4-Code on the challenging MATH dataset (Hendrycks
et al., 2021). Remarkably, it achieves an accuracy of 69.7%, significantly surpassing the previous state-of-the-art performance of 53.9% (Zheng et al., 2023). Encouraged by the compelling
performance of GPT4-Code, we strive to systematically explore and analyze its underlying code
mechanisms. In Sec. 3.1, we illustrate, via our code-constrained prompts design, that GPT4-Code’s
robust performance in solving math problems derives not only from its ability to generate accurate
step-by-step code, but also from its self-debugging mechanism. In Sec. 3.2, we aim to leverage
GPT4-Code’s self-debugging strengths to further improve its mathematical problem-solving ability.
3.1 PILOT EXPERIMENTS ON ANALYZING CODE USAGE OF GPT4-CODE
To explore the impact of code usage on GPT4-Code’s math problem-solving capabilities, we adopt a
straightforward approach by constraining GPT4-Code’s interaction with code through thoughtfully
constructed prompts. Specifically, we introduce two code-constrained prompts and the basic prompt
for comparison:
- Prompt 1: No code usage is allowed: In response to this prompt, GPT4-Code is prohibited
from incorporating code into its solutions. This prompts GPT4-Code to rely solely on
Natural Language (NL) reasoning chain, resembling solutions in the CoT framework (Wei
et al., 2022). The resulting sequence of reasoning steps is depicted as CNL, with an example
given in Fig. 1 (a).
- Prompt 2: Code can be used only once: In this prompt setting, GPT4-Code is permitted
to employ code exclusively within a single code block to generate the solution, mirroring
-----
steps if the outcomes contain bugs or are deemed illogical, as illustrated in Tab. 7 and Tab. 8.
Verification Prompt
Basic Prompt (Please verify your answer using code interpreter by yourself.)
Prompt 1 Prompt 2
(You are not allowed to use any code.) (You can use code only 1 time.)
Code Usage Frequency
(a) 76 (b) 100
74.48
3.0
74
90
72 4.54 2.0
80
70 69.94
68 67.58 70 1.5
Accuracy 66 Accuracy 60 1.0
64 6.78 50
0.5
62
60.80 40
60 0.1
1 2 3 4 Level 1 Level 2 Level 3 Level 4 Level 5
Prompts Levels
Figure 2: Performance on MATH dataset of different levels by applying different prompts to adjust the frequency of code usage. (a) Comparison of overall accuracy between the 4 prompts. (b) Code usage frequency
is in proportion to accuracy in all five levels and this phenomenon is especially apparent when the problems are
relatively complicated (i.e. with higher level).
the PAL approach introduced by Gao et al. (2023). We denote this sequence as CSL, representing a series of Symbolic Language (SL), such as Python. An example is shown in
Fig. 1 (b).
- Basic Prompt: GPT4-Code is prompted to tackle the problem without any restrictions on
code usage. This prompt leads to GPT4-Code’s typical functioning pattern, which can be
denoted as C = ((c1NL, c1SL), (c2NL, c2SL), . . . ), representing a sequential list of reasoning steps, each consisted of both natural language and Python code, with an example
shown in Fig. 1 (c).
Apart from the specific example in Fig. 1, we introduce Code Usage Frequency to record the number
of the code execution for different prompts. The results of the experiments using these prompts are
shown in Fig. 2 (b). This figure illustrates a positive correlation between the better performance of
GPT4-Code and the higher Code Usage Frequency. More specifically,
**Prompt 1 v.s. Prompt 2: Prompt 1 results in almost negligible code usage, while Prompt 2 results**
in approximately 1 time’s code usage. Prompt 2 yields an accuracy gain of 6.9 percent over Prompt
1. This suggests that the Python code chains CSL, can improve computational capability more
than the natural language chains CNL. This observation is consistent with the findings in previous
Python-based prompting methods (Gao et al., 2023; Chen et al., 2022). However, employing code
only once comes with an inherent drawback – the model lacks the ability to self-debugging when
the code output triggers an error or produces an implausible outcome.
**Prompt 2 v.s. Basic Prompt: The Basic Prompt consistently produces solutions that entail multiple**
instances of code usage, resulting in a large Code Usage Frequency. Additionally, the Basic Prompt
exhibits notably enhanced accuracy. These improvements in Code Usage Frequency and accuracy
might be attributable to two unique advantages: (1) Generating code in brief and frequent segments,
divided among natural language reasoning steps, tends to result in higher accuracy. (2) The model
possesses the capability to evaluate the results of code execution and make corrections to solution
From these observations, it is plausible to enhance and build upon the favorable attributes of GPT4Code, to further improve its precision in tackling math problems.
3.2 EXPLICIT CODE-BASED SELF-VERIFICATION PROMPTING
Inspired by the observations on Code Usage Frequency analysis, we seek to harness the capabilities
of GPT4-Code. These capabilities include the model’s aptitude for generating accurate code, evaluating the outcomes of code execution, and automatically adjusting reasoning steps of solutions when
-----
Question
**Question:**
What is the smallest positive integer solution
to ( [100][,][000]x[4] _[,][000]_ ) < 1?
**Answer: 101**
(b) Example: w/ self verifications
We can firstly change the inequality into x > _√4_ 100000000.
Then let’s evaluate the fourth root of 100,000,000.
code
import math
root = math.pow(100000000, 1/4)
smallest integer = math.ceil(root)
>>> (100.0, 100)
Thus, the smallest positive integer x is 100.
Let’s verify this solution.
code
>>>verification = 1e8 / (smallest integer False _∗∗_ 4) < 1
The smallest integer x should be the next integer greater than 100,
not 100 itself.
Let’s calculate that next integer.
code
smallest integer = 100 + 1
verification = 1e8 / (smallest integer>>> (101, True) _∗∗_ 4) < 1
Upon substituting x = 101 back, it holds true, which verifies our
solution.
The final answer is 101 . ✓
(a) Example: w/o self verification
Breaking this down into subproblems, the first step is to rearrange
the inequality to isolate x.
We can do this by multiplying both sides of the inequality by x[4] and
then taking the fourth root of both sides.
This gives us x > _√4_ 100000000.
Let’s solve for x.
code
import math
root = math.pow(100000000, 1/4)
>>> 100
Since 100 is already a positive integer, the smallest positive integer
solution to the inequality [100][,][000][,][000] _< 1 is x = 100._
_x[4]_
The final answer is 100 . ✗
Figure 3: Question from the 712th intermediate algebra problem in the MATH dataset. (a) Without selfverification, the model generates a wrong answer. (b) With self-verification, the model corrects the error and
generates the correct answer. The CSV prompt: To solve the problem using code interpreter step by step, and
_please verify your answer using code interpreter._
needed. However, despite these advantages, GPT4-Code currently falls short in assuring solution
correctness. Consequently, our objective is to utilize these strengths to augment solution verification.
To achieve this objective, we propose the technique termed as explicit code-based self-verification
(CSV). This method prompts GPT4-Code to explicitly validate its answer through code generation.
By implementing this prompt, we introduce an extra verification stage to the solution C, referred
to as V. The verification outcome V can be classified as True, False, or Uncertain. An Uncertain
classification indicates that GPT4-Code encountered difficulties in identifying an effective method
for answer verification, thereby abstaining from delivering a definitive verification result. Leveraging GPT4-Code’s inherent autonomous capabilities, we can formulate the proposed prompting as
follows:
True _→_ final answer
False _→_ **Cnew →** **V →· · · →** True → final answer
Uncertain _→_ final answer
**C →** **V =**
An example is presented in Fig. 3 (b). Incorporated with CSV, the model becomes capable of using
code to verify answers, then reviewing and adjusting how it arrived at the solution if the verification
result is False, aiming at obtaining the correct answer. Upon refining and correcting the initial
solution, we anticipate a notable increase in accuracy. It is worth noting that both the verification
and rectification stages are code-based. This inevitably results in increased Code Usage Frequency,
akin to the aforementioned analysis, which will be further demonstrated in subsequent experiments.
We perform experiments with CSV, and these results can be found in Fig. 2. The experiment here is
conducted with GPT4-Code on MATH (Hendrycks et al., 2021). In Fig. 2 (b), the accuracy achieved
with our proposed CSV prompt consistently surpasses that of the Basic Prompt across all designated
difficulty levels[3]. Meanwhile, the Code Usage Frequency receives a clear increase.
3Human-perceived easier problems are categorized under Level-1 difficulty as per Hendrycks et al. (2021).
-----
Before the advent of GPT4-Code, prior frameworks (Lightman et al., 2023; Cobbe et al., 2021)
depended on an external LLM to use natural language for verification and well-designed few-shot
example prompts. In contrast, our approach simplifies the process by relying solely on a straightforward prompt for GPT4-Code, all in a zero-shot manner. This enables GPT4-Code to autonomously
verify and independently rectify its solutions using the advanced code execution mechanism, thereby
eliminating the need for customized few-shot examples.
Given that CSV can effectively verify problem-solving answers, we can naturally integrate the verification states into majority voting, akin to the methodology embraced in self-consistency CoT (Wang
et al., 2023). Answers deemed True through verification are generally more trustworthy, reflecting
the problem-solving approach seen in human cognition (Newell & Simon, 1972; Wang & Chiew,
2010). This improved reliability can be leveraged in the widely-used majority voting process. To exploit this insight, we introduce verification-guided weighted majority voting, which assigns different
weights to the states of the verification process.
In practice, it sometimes occurs that once an answer is confirmed as False, no additional verification
is conducted, yielding a False verification state. We allocate corresponding weights these states of
**True, Uncertain, False: wT, wU, and wF, respectively.**
Similar to the Self-consistency with CoT (CoT-SC) (Wang et al., 2023) in Fig. 4 (a)(ii), our framework can sample k paths. For simplicity, we extract pairs of final answers and their corresponding
verification results from k solutions, represented as (v[i], a[i]), i = 1, 2, . . ., k, where v[i] and a[i] denote
the i-th final answer and final verification result, respectively.
So the voting score for each candidate answer a can be expressed as:
_wv(#_ _i_ _a[i]_ = a and v[i] = v ), _v_ True, Uncertain, False _,_ (1)
_{_ _|_ _}_ _∈{_ _}_
_{Xv[i]}_
Score(a) =
Here, a represents a candidate answer, v denotes the state of verification, and wv is an element from
the set {wT, wU, wF}. Each wv signifies the degree of confidence associated with its corresponding
verification state.
Finally, we select the answer with the highest score from all candidate answers,
Output = arg max Score(a), (2)
_a_
where Score(a) refers to the score of answer a according to Eq. 1.
It should be noted that whenthe naive majority voting employed in Self-Consistency with CoT (CoT-SC) (Wang et al., 2023). wv = 1 for all wv ∈{wT, wU, wF}, Eq. 1 becomes equivalent to
Typically, we set wT > wU > wF, which means that an answer verified true has a greater confidence
than the one with an uncertain state of verification, while an answer verified false has the lowest
degree of confidence. An example of the calculation process within verification-guided weighted
majority voting is illustrated in Fig. 4.
4 EXPERIMENTS
4.1 PERFORMANCE ON MATH
The MATH dataset (Hendrycks et al., 2021) is recognized as the most challenging math word problem dataset, as also highlighted by Chen et al. (Chen et al., 2023). Most of our experiments and the
corresponding analyses are performed on the MATH benchmark. Tab. 1 compares the performance
of the GPT4-Code against other models. GPT4-Code reaches 69.69% on MATH (Hendrycks et al.,
2020), largely surpassing the previous state of art result (53.90%), which shows that GPT4-Code exhibits strong abilities in solving math problems and is used as our baseline for ablation study. On
top of GPT4-Code, our method further improves its accuracy, raising the result to 73.54% after
adding explicit code-based self-verification, and 84.32% after adding both explicit code-based selfverification and verification-guided weighted majority voting (the number of sampled paths is 16).
-----
|Col1|Col2|
|---|---|
|||
|Col1|Col2|Col3|
|---|---|---|
||||
|Col1|Col2|Col3|Col4|
|---|---|---|---|
|||||
|True Uncertain False 4 ( 3, 1, 0 ) 6 ( 1, 1, 4 )|𝑠𝑐𝑜𝑟𝑒2 = 1 × 3 + 0.5 × 1 + 0.2 × 0 = 3.5 𝑠𝑐𝑜𝑟𝑒5 = 1 × 1 + 0.5 × 1 + 0.2 × 4 = 2.3|
|---|---|
**Reasoning** **Code** **Reasoning & Code** Question: Given that the repetend in the decimal representation of 19[1] [contains 18 ]
digits, find the 39th digit in the decimal representation.
**Verification** **Answer**
Verified: True Verified: Uncertain Verified: False
**Input** **Input** Given that the sequence repeats every 18 digits, the 39the 3the 3[th] digit will be the same as [rd][rd]. So, we need to find digit in the decimal If we calculate the first few decimal places of see a pattern emerge. Let's calculate this using Python.19[1] [, we will] Since the decimal representation repeats every 18 digits, the 39the same as the 21[th] digit will be [st] digit.
representation of 19[1] Code execution…… Code execution……
The result is 0. Now let’s
Code execution…… The 39th digit will be the same as the 21st, which is verify the answer.
… … … … … … example The 3representation of Now let’s verify if this [rd] digit in the decimal 191 [ is 2.] actually the 3rd digit in our calculated sequence. Let's find this digit. The pattern of 18 repeating digits is Code execution……
solution is equal to the 39[th] Code execution…… "052631578947368421". The
digit. given the repeating nature 21st digit, which is the same
Code execution…… of the decimal as the 39th digit, is '5', not ‘0’. The verification result is False,
The verification result is True. representation of 19[1] [, we can ] Our final result is
Therefore, our final solution confidently state that the
**Majority** 𝑤𝑇 𝑤𝑈 𝑤𝐹 is 2 39th digit will indeed be 2 5
**Voting**
**Verification-guided** 𝑤𝑇 = 1 𝑤𝑈 = 0.5 𝑤𝐹 = 0.2
**weighted majority** True Uncertain False
**Output** **Output** **voting** Candidate answers: **2: 4** ( 3, 1, 0 ) 𝑠𝑐𝑜𝑟𝑒2 = 1 × 3 + 0.5 × 1 + 0.2 × 0 = 3.5
**5: 6** ( 1, 1, 4 ) 𝑠𝑐𝑜𝑟𝑒5 = 1 × 1 + 0.5 × 1 + 0.2 × 4 = 2.3
**(i) SC CoT** **(ii) CSV** 4 < 6 Verification-guided 3.5 > 2.3
Majority voting: **5ⅹ** weighted majority **2 √**
voting:
(a) (b)
Figure 4: (a) Illustration of the Naive majority voting (Wang et al., 2023) and our Verification-guided
weighted majority voting. The full pipeline of the proposed Verification-guided Weighted Majority
Voting framework. We use the model to generate several different solutions. Then we detect the
self-verification state of each solution, and classify them into three states: True, Uncertain, and
_False. According to the state of the verification, we assign each solution a different weight, and use_
the classified result to vote the score of each possible answer.
Table 1: Accuracy (%) on MATH dataset. VW-voting is the abbreviation for the verification-guided
weighted majority voting. (Overall: The results across various MATH subtopics (Hendrycks et al.,
2021))
**Code-based** **VW-** Intermediate Precalculus Geometry Number Counting & PreAlgebra Algebra Overall
**Verification** **Voting** Algebra **–** **–** Theory Probability **–** **–** MATH
GPT-4 ✗ ✗ - - - - - - - 42.20
GPT-3.5 ✗ ✗ 14.6 16.8 22.3 33.4 29.7 53.8 49.1 34.12
GPT-4 (CoT) ✗ ✗ 23.4 26.7 36.5 49.6 53.1 71.6 70.8 50.36
GPT-4 (PHP) ✗ ✗ 26.3 29.8 41.9 55.7 56.3 73.8 74.3 53.90
GPT4-Code ✗ ✗ 50.1 51.5 53.4 77.2 70.6 86.3 83.6 69.69
GPT4-Code + CSV ✓ ✗ 56.6 53.9 54.0 85.6 77.3 86.5 86.9 73.54
_Improvement_ **+6.5** **+2.4** **+0.6** **+7.6** **+6.7** **+0.2** **+3.3** **+3.85**
GPT4-Code + CSV + Voting ✓ ✓ **(k=16)** **74.4** **67.8** **64.9** **94.1** **89.0** **91.6** **95.6** **84.32**
_Improvement_ **+24.3** **+16.3** **+11.5** **+16.9** **+18.4** **+5.3** **+12.0** **+14.63**
Note that this astonishingly high result is based on the strong abilities of the base model GPT4-Code,
and our method amplifies its good qualities of GPT4-Code, with the ability to verify solutions.
Note that although adding Code-based Self-verification can improve the performance of every individual subject, the extent of improvement varies from subject to subject, from 7.6% to only 0.6%.
In particular, the Geometry problem only has an increased accuracy of 0.6%, even though the original GPT4-Code accuracy is only 54.0%, which is low among the subjects. This discrepancy may
be attributed to the fact that solving geometry problems often requires multi-modality (Chen et al.,
2023), a concept beyond the scope of this paper.
4.2 PERFORMANCE ON OTHER DATASETS
In addition to the challenging MATH dataset, we have also performed our method on other reasoning
datasets such as GSM8K (Cobbe et al., 2021), MMLU-Math, and MMLU-STEM (Hendrycks et al.,
2020). The corresponding results can be viewed in Tab. 2 and Tab. 3. When integrated on top of GPT
-----
Table 2: Performance on GSM8K dataset.
Method Sampled paths Accuracy(%)
GPT-3.5 (5-shot) – 57.1
GPT-4 (5-shot CoT) – 92.0
GPT-4 (PHP) 40 96.5
GPT-4 (Model selection) 15 96.8
GPT4-Code - 92.9
GPT4-Code + CSV + Voting **5** **97.0**
Table 3: Performances on MMLU dataset.
Method Dataset Accuracy(%) Few-shot
Chinchilla (Hoffmann et al., 2022) Math 35.7 5-shot
Galactica (Taylor et al., 2022) Math 41.3 5-shot
GPT4-Code Math 87.5 zero-shot
GPT4-Code + CSV + Voting Math **89.2** **zero-shot**
LLaMA 2 STEM 58.0 5-shot
OpenLLM STEM 70.6 5-shot
GPT-4 STEM 82.7 zero-shot
GPT4-Code STEM 86.8 zero-shot
GPT4-Code + CSV + Voting STEM **87.0** **zero-shot**
(a) Level (b) Subject
100 90 Level 1
Level 2
90 80 Level 3
Level 4
80 Level 5
70 OverallOverall
70 number_theory
60 precalculus
60 algebra
Accuracy Accuracy prealgebra
50 50 intermediate
algebra
counting and
40 40 probability
geometry
0 1 2 3 0 1 2 3
Code Usage Frequency Code Usage Frequency
Figure 5: The four points on each curve correspond to results using Prompt 1, Prompt 2, Basic
**Prompt and Code-based Self-verification Prompt, respectively. (a) The accuracy of different**
levels at various code usage frequencies. (b) The accuracy of different subjects at various code
usage frequencies.
4-code, our method outperforms other methods in the competition, achieving state-of-the-art results
across all datasets. Other subjects in MMLU benchmarks are provided in Fig. 8. A comparative
analysis of our results with those of previous state-of-the-art techniques and open-source models is
also provided.
Tab. 2 illustrates that verification-guided majority voting is an effective framework to reduce
the number of sampled paths, compared to GPT-4 with model selection (Zhao et al., 2023) and
PHP (Zheng et al., 2023).
Tab. 3 presents a comparison of our model’s performance with existing models (Hoffmann et al.,
2022; Taylor et al., 2022) on the MMLU-Math dataset and with state-of-the-art open-sourced models[4] on MMLU-STEM. The open-source models remain significantly outpaced by their closedsource counterparts. To address this gap, we have made the dataset and will make it publicly
available in the near future. Our intention is to facilitate the fine-tuning of open-source LLMs.
For example, the open-source model LLaMA 2 (Touvron et al., 2023) can potentially utilize this
data to further bolster its math reasoning capabilities.
4.3 CODE USAGE FREQUENCY OF PROPOSED PROMPTS
Analogous to the approach taken in Sec. 3.1, we gather data to elucidate the correlation between
accuracy and Code Usage Frequency across various dimensions - prompts (proposed CSV prompt
as well as prompts used in pilot experiments), subjects, and difficulty levels. As shown in Fig. 5,
the model’s behavior is in accordance with our expectations when adding the code-based prompts.
Each line in Fig. 5 has an obvious trend of going upwards, proving that the increase of Code Usage
Frequency induces a general improvement in accuracy. The performance gain when using more
code is more obvious in the higher difficulty levels, while in lower levels, the performance gain is
not very prominent, as shown in Fig. 5 (a). Also, the Code Usage Frequency increases steadily with
[4https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-----
Table 4: Comparison Self-verification with/without explicit code-based prompt (Overall:The results
across various MATH subtopics (Hendrycks et al., 2021))
**Verification** Intermediate Precalculus Geometry Number Counting & PreAlgebra Algebra Overall
**Method** Algebra **–** **–** Theory Probability **–** **–** **–**
GPT4-Code without Verification 50.1 51.5 53.4 77.2 70.6 86.3 83.6 69.69
Interpreter Nature Language 52.6 48.7 50.8 79.9 72.5 83.1 82.6 69.29
**+2.5** **-2.8** **-2.6** **+2.7** **+1.9** **-3.2** **-1.0** **-0.40**
Code-based **56.6** **53.9** **54.0** **85.6** **77.3** **86.5** **86.9** **73.54**
**+6.5** **+2.4** **+0.6** **+8.4** **+6.7** **+0.2** **+3.3** **+3.85**
the increase of difficulty levels. This shows that the harder math problems require more frequent
code usage, which implies that invoking code multiple times might be an important reason why
GPT4-Code have such an advantage in solving difficult math problems. There is a similar trend in
Fig. 5 (b).
4.4 ABLATION STUDY AND DISCUSSION
**Comparisons between Natural Language and Code-based Self-Verification: to underscore the**
significance of code in the self-verification stage, we employed a distinct natural language selfverification prompt. In this approach, GPT4-Code is directed to verify the solution through natural
language instead of relying on code-based verification, as presented in Tab. 4. The accuracy achieved
with this method was slightly lower than that of the Basic Prompt. Moreover, we observed a decline
in accuracy for 4 of the 7 subtopics, indicating that relying solely on natural language verification
can not only compromise accuracy but also negatively impact performance. In contrast, code-based
verification enhances accuracy across all 7 subtopics when compared to the Basic Prompt.
**Analysis of Verification-guided Weighted Majority Voting: we initially compiled the confusion**
matrix (TP/TN/FP/FN), capturing solutions with self-verification that matches the True and False
states mentioned in Eq. 1 from five distinct sampled paths. The details of the confusion matrix
are presented in Appendix A.1.1. From this data, we computed Precision, Recall, and Accuracy.
(Solutions in the True state are seen as positive.) The results are presented in Fig. 6. In comparison
to Accuracy, we observed numerical enhancements of 22.3% and 5.6% in the average Precision and
Recall, respectively. In particular, the average Precision registered at 95.88%. This implies that the
Accuracy has the potential to become much higher, if more solutions reach the verified True state
before giving the final answer.
**Hyper-parameters ablation in Verification-guided Weighted Majority Voting: we also per-**
formed ablation studies on the hyper-parameter wv _wT, wU, wF_ in Eq. 1. When the hypermajority voting consistently surpassed that of the naive majority voting methods across all sampledparameter setting satisfied wT > wU ≥ _wF, the performance of the verification-guided weighted ∈{_ _}_
paths. In contrast, when we set the hyper-parameter (wT = 0.5, wU = 0.5, wF = 1), the performance under this configuration was worse than the naive majority voting. Therefore, our proposed
method, verification-guided weighted majority voting, is easy to tune and robust.
5 CONCLUSION AND LIMITATION
In this paper, we begin with pilot experiments on GPT4-Code to explore how its use of code impacts
its performance in mathematical reasoning. By analyzing Code Usage Frequency and accuracy, we
determine that GPT4-Code’s skill in solving math problems can be largely attributed to its ability
to generate and execute code, as well as its effectiveness in adjusting and rectifying solutions when
confronted with implausible execution outputs. Expanding on this understanding, we introduce the
ideas of explicit code-based self-verification and verification-guided weighted majority voting, with
the goal of enhancing GPT4-Code’s mathematical capabilities.
However, there are limitations in our work that we plan to explore further in the future. Firstly,
our analysis and improvements are currently focused on GPT4-Code, which is somewhat restrictive.
We aim to apply the methods to other LLMs. Secondly, our explicit code-based self-verification
and verification-guided weighted majority voting technique could potentially create more accurate
datasets. These datasets would include detailed step-by-step code-based solution generation and
-----
(a) (b)
Precision
Recall 84
Accuracy
100
82
Avg.
95.88
90 80
Value 78
Accuracy
80 Avg.
79.11
76
Avg. 1/0/0
73.54
0.5/0.5/1
70
74 1/0.5/0.2
1/1/1
(Majority Voting)
72
60
0 1 2 3 4 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Different Reasoning Path # Sampled Reasoning Paths
Figure 6: (a). shows the precision, recall, and accuracy on different reasoning paths. (b). shows the
accuracy in response to the number of sampled reasoning paths when the weight is set to different
values.
code-based validation, which could help improve open-source LLMs like LLaMA 2 (Touvron et al.,
2023) and enhance their mathematical abilities. Although we haven’t yet investigated this approach,
we leave it for future work.
-----
REFERENCES
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report.
arXiv preprint arXiv:2305.10403, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks, 2022.
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, and
Tony Xia. Theoremqa: A theorem-driven question answering dataset, 2023.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168,
2021.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting
for multi-step reasoning. arXiv preprint arXiv:2210.00720, 2022.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. In International Conference on Machine
Learning, pp. 10764–10799. PMLR, 2023.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong
Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv,
abs/2009.03300, 2020. [URL https://api.semanticscholar.org/CorpusID:](https://api.semanticscholar.org/CorpusID:221516475)
[221516475.](https://api.semanticscholar.org/CorpusID:221516475)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
preprint arXiv:2103.03874, 2021.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. In Advances in Neural Information Processing Systems,
volume 35, pp. 22199–22213, 2022.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative
reasoning problems with language models. Advances in Neural Information Processing Systems,
35:3843–3857, 2022.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making
language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–
5333, 2023.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
arXiv:2305.20050, 2023.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter
Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured
mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022.
-----
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri
Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement
with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
A. Newell and H.A. Simon. Human Problem Solving. ACS symposium series. Prentice-Hall,
1972. ISBN 9780134454030. [URL https://books.google.com.hk/books?id=](https://books.google.com.hk/books?id=h03uAAAAMAAJ)
[h03uAAAAMAAJ.](https://books.google.com.hk/books?id=h03uAAAAMAAJ)
OpenAI. Gpt-4 technical report, 2023.
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia,
Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for
science. arXiv preprint arXiv:2211.09085, 2022.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2023. URL
[https://openreview.net/forum?id=1PL1NIMMrw.](https://openreview.net/forum?id=1PL1NIMMrw)
Yingxu Wang and Vincent Chiew. On the cognitive process of human problem solving. Cognitive
Systems Research, 11(1):81–92, 2010. ISSN 1389-0417.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi,
Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.),
[Advances in Neural Information Processing Systems, 2022. URL https://openreview.](https://openreview.net/forum?id=_VjQlMeSB_J)
[net/forum?id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Kang Liu, and Jun Zhao. Large language
models are better reasoners with self-verification, 2023.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. ArXiv,
abs/2305.10601, 2023.
Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, and Qizhe Xie. Automatic model selection with
large language models for reasoning. arXiv preprint arXiv:2305.14333, 2023.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting
improves reasoning in large language models. arXiv preprint arXiv:2304.09797, 2023.
Denny Zhou, Nathanael Sch¨arli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables
complex reasoning in large language models, 2023.
-----
APPENDIX
This Appendix contains two sections. The first section provides the experiment details, including
the detailed experiment results on MATH and MMLU datasets. The second section presents some
examples of GPT4-Code.
A EXPERIMENT DETAILS
A.1 DETAILED EXPERIMENT RESULT ON MATH DATASET
A.1.1 CONFUSION MATRIX
A confusion matrix is a specific table layout that allows visualization of the performance of an algorithm. It’s particularly useful for classification problems, and we utilize it to analyze the performance
of our verification process.
The matrix itself is a two-dimensional grid, 2x2, for the binary classification of verification results.
Each row of the matrix represents the instances in a predicted class, which is determined by the
verification results given by the language model, while each column represents the instances in an
actual class, which is determined by the actual correctness of the answer given by the model. Tab. 5
shows how the matrix looks for our verification process:
Table 5: Confusion Matrix of Verification
|Col1|Answer Correct Answer Wrong|
|---|---|
|Verification True Verification False|TP FP FN TN|
|---|---|
Here’s what the four terms mean:
- True Positive (TP): The cases in which the model’s verification result is ‘True’, and the
answer is actually correct.
- True Negative (TN): The cases in which the model’s verification result is ‘False’, and the
answer is actually wrong.
- False Positive (FP): The cases in which the model’s verification result is ‘True’, but the
answer is actually wrong.
- False Negative (FN): The cases in which the model’s verification result is ‘False’, but the
answer is actually correct.
This matrix is very helpful for measuring more than just straightforward accuracy, based on which
Precision and Recall are two important metrics. They are defined in Eq. 3 and their meanings are as
follows:
- Precision is the fraction of relevant instances among the retrieved instances. It is a measure
of the accuracy of the classifier when it predicts the positive class.
- Recall is the fraction of the total amount of relevant instances that were actually retrieved.
It is a measure of the ability of a classifier to find all the positive instances.
TP TP
Precision = (3)
TP + FP _[,][ Recall][ =]_ TP + FN
In other words, precision answers the question “What proportion of verified TRUE answers was
actually correct?”, while recall answers “What proportion of actual correct answers was verified
TRUE?” Given its meaning, verification-guided voting is bounded to be effective when the precision
of verification is high.
-----
A.1.2 PYTHON PACKAGE USAGE ANALYSIS
Tab. 6 outlines the usage of various Python packages in our experiments. Among them, we found
that the sympy package is utilized most frequently, highlighting its central role in the computational
tasks performed.
Table 6: Python package usage frequency on MATH dataset.
Package sympy numpy math fractions itertools cmath scipy matplotlib functools collections statistics
All 0.4168 0.0284 0.1590 0.0094 0.0034 0.0034 0.0016 0.0010 0.0004 0.0004 0.0002
Correct 0.3907 0.0241 0.1638 0.0110 0.0029 0.0026 0.0009 0.0003 0.0003 0.0006 0.0003
correct per code 0.3323 0.0205 0.1393 0.0094 0.0025 0.0022 0.0007 0.0003 0.0003 0.0005 0.0003
Wrong 0.4724 0.0383 0.1493 0.0058 0.0045 0.0052 0.0032 0.0026 0.0007 0 0
wrong per code 0.3194 0.0259 0.1009 0.004 0.0031 0.0035 0.0022 0.0018 0.0004 0 0
c/w per code 104% 79% 138% 238% 80% 63% 34% 14% 57% NaN NaN
A.2 DETAILED EXPERIMENT RESULT ON MMLU DATASET
Fig. 8 illustrates that GPT4-Code performs relatively poorly in certain domains, such as engineering
and the humanities, with a particularly marked deficiency in virology, where it achieves a score of
less than 60%. These observations delineate specific areas that call for further investigation and
refinement, thus outlining the direction for future improvements in the model.
B EXAMPLES
In this section, we provide more examples.
-----
Figure 7: GPT4-Code performance on MMLU dataset on: (a) subsubcategory, (b) category, (c)
subcategory
-----
Figure 8: Model performance on MMLU dataset on: (a) subsubcategory, (b) category, (c) subcategory
-----
**Math Problem: 0.54 as a fraction in lowest terms.**
Sure, we can solve this problem by setting up an equation, solving it, and simplifying the
fraction.
Let’s call 0.54 as x.
The decimal 0.54 means that the digits ”54” repeat indefinitely. So, we can multiply x by 100
(since ”54” is a two-digit number) to get another equation.
This gives us 100x = 54.54.
By subtracting the original equation from this new one, we eliminate the repeating decimal:
100x − _x = 54.54 −_ 0.54.
Let’s perform these steps in the next Python cell.
# define the symbol
x = symbols(’x’)
# define the equations
eq1 = Eq(x, 0.54545454545454545454545454545454) # approximating the
repeating decimal
......
**RESULT**
--------------------------------------------------------------------TypeError Traceback (most recent call last)
......
Apologies for the mistake. We can’t directly subtract two equations in Python. Instead, we
should subtract the right-hand sides and left-hand sides of the equations separately to get a new
equation. Let’s correct that.
# subtract the right hand sides and left hand sides of the equations
separately
rhs = simplify(eq2.rhs - eq1.rhs)
lhs = simplify(eq2.lhs - eq1.lhs)
......
**RESULT**
**Eq(99*x, 54.0)**
After subtraction, we get the equation 99x = 54.
Next, we can solve for x to express 0.54 as a fraction.
# solve for x
solution = solve(new_eq, x)
solution[0]
**RESULT**
**0.545454545454545**
The solution for x is approximately 0.545454545454545, but we need to express it as a fraction.
Let’s convert this decimal into a fraction.
# convert the decimal solution to a fraction
fraction = nsimplify(solution[0])
fraction
**RESULT**
**6/11**
The decimal 0.54 can be expressed as the fraction
6
11 [.]
**Answer:**
11
Table 7: An example in which GPT4-Code self-debugs when the code execution emits an error.
-----
**Math Problem: A bus comes by Jerry’s bus stop every 20 minutes starting at exactly 5:13 a.m. If Jerry**
shows up at exactly 8:35 a.m., how many minutes will he have to wait for the next bus?
Step 1: Calculate the total minutes from 5:13 a.m. to 8:35 a.m.
Total minutes from 5:13 a.m. to 8:35 a.m. is 180 + 22 minutes.
Step 2: Calculate how many buses have come by during this time.
Step 3: Calculate the time when the last bus before 8:35 a.m. came by.
- The time when the last bus came by is 5 : 13 a.m. + 20 × ⌊n⌋ minutes.
Step 4: Calculate the waiting time.
Now, let’s calculate each step.
# Step 1: Calculate the total minutes from 5:13 a.m. to 8:35 a.m.
total_minutes = 180 + 22
......
**RESULT**
**-311**
It seems that there’s a mistake in the calculations. The waiting time cannot be negative. This
error might have occurred in Step 3, where I calculated the time of the last bus in minutes past
midnight, but then in Step 4, I subtracted this from the total time from 5:13 a.m. to 8:35 a.m.
I should correct Step 4 to calculate the time until the next bus, which is either 0 if a bus has just
arrived at 8:35 a.m., or 20 minutes minus the time since the last bus if a bus arrived before 8:35
a.m.
Let’s correct this.
# Corrected Step 4: Calculate the time until the next bus
time_since_last_bus = total_minutes - last_bus_time
......
**RESULT**
**331**
There still seems to be a mistake in the calculations. The waiting time calculated is significantly
more than 20 minutes, which is not possible given that the bus comes every 20 minutes.
Looking back at the calculations, it seems that the error is in Step 3 when calculating the time
the last bus came. The last bus time should be calculated from 5:13 a.m., not from midnight.
Let’s correct this.
# Corrected Step 3: Calculate the time when the last bus before 8:35
a.m. came by.
last_bus_time_corrected = 5 * 60 + 13 + num_buses * 20
......
**RESULT**
**331**
It seems there is still an error. Let’s try a different approach.
1. Calculate the total minutes that have passed since the first bus at 5:13 a.m. until 8:35 a.m.
2. Since the buses come every 20 minutes, calculate how many complete cycles of 20 minutes
have passed in the calculated total minutes.
3. The time Jerry will have to wait is the remainder of the total minutes divided by 20 (which
is the time since the last bus) subtracted from 20 (which is the bus cycle).
This approach should give us the correct waiting time. Let’s calculate it.
# Calculate total minutes from 5:13 a.m. to 8:35 a.m.
total_minutes = 3 * 60 + 22 # 3 hours and 22 minutes
......
**RESULT**
**18**
So, Jerry will have to wait for 18 minutes for the next bus.
**Answer: 18**
Table 8: An example in which GPT4-Code adjusts its solution based on the result of code execution.
-----
**Math Problem: Solve for x : 3[2][x]** + 19 = 10[x].
|code 0 This is a difficult equation to solve and there’s no simple analytical solution using ele- mentary functions. In real world scenarios, such equations are typically solved numerically. For example, using a graphing calculator, a computer, or a mathematical software. However, if you want an approx- imation of the solution, it is ad- visable to use numerical meth- ods such as the bisection method, Newton’s method, or others. If you would like me to provide an algorithm to find the solution numerically, let me know.|code 1 Answer: The equation you’re asking to solve, 32x + 19 = 10x, cannot be solved algebraically due to the nature of the functions and operations involved. Therefore, we’ll need to use numerical methods such as Newton’s method or Bisection method for root finding. from scipy.optimize import fsolve import numpy as np # Defining the function def func(x): return 3**(2*x) + 19 - 10**x # Initial guess for x, it can be anything, 1.0 is chosen arbitrarily here x0 = 1.0 # Using fsolve to find the root root = fsolve(func, x0 ) x = root[0] x RESULT 1.001831315399663 So the answer is 1.00.|code more Answer: Rewrite the equation as: 32x −10 + 19 = 0 Let’s denote 3x = a, and 10x = b so a2 −b + 19 = 0 now we have two equations: 1. a = 3x 2. b = 10x Let’s solve this with Python. from scipy.optimize import root import numpy as np # Define the system of equations def equations(vars): x, a, b = vars eq1 = a - 3**x eq2 = b - 10**x eq3 = a**2 - b + 19 return [eq1, eq2, eq3] ..... RESULT: 0.0 32.0 + 19 = 100 The left-hand side becomes 1+19=20 and the right-hand side becomes 1. Thus, the original equation does not hold true for x=0. Let’s try a different initial guess for the solution to see if that helps. # Adjust initial guess for x, a, b x0 = [1, 3, 10] # Solve system of equations solution = root( equations, x0) # Extract solution x = solution.x[0] x RESULT: 2.0000000000004463 We know that 32·2 = 34 = 81 and 102 = 100. The left-hand side becomes 100 and the right- hand side is also 100. Thus, the original equation holds true for x = 2.|
|---|---|---|
**Answer: 2**
**code 1**
**code more**
Table 9: An example in which no restriction on code use succeeds while not using code and using
-----
**Math Problem: What is the sum of the digits in the terminating decimal representation of the**
fraction 54321[7] 2[8][ ?]
_·_
|code 0 The given fraction is 4321 57 28 · To simplify this fraction, we can notice that 57 · 28 = 107, which would mean the fraction is equiv- alent to 4321 × 10−7, i.e., the number 4321 with the decimal point moved 7 places to the left. This would result in the number 0.0004321. To get the sum of the digits in the terminating decimal representa- tion of this number, we just need to add up the digits in 4321, since the zeros do not contribute to the sum. So, the sum is 4+3+2+1 = 10. Therefore, the answer is 10|code 1 Answer: First, we need to simplify the fraction. Given that 4321 = 25 · 33 · 50 · 70 · 110 · 131, we can simplify the fraction 4321 to 57·28 25 ·33 ·5 50 7·7 ·20 8·110 ·131 = 33 · 131 · 2−3 5−7. · Next, we need to compute the decimal representation and sum the digits. # Compute the decimal representation of the fraction value = (3**3 * 13**1) / (2**3 * 5**7) # Convert to string and replace the decimal point str_value = str(value) .replace(’.’, ’’) # Compute the sum of the digits digit_sum = sum(int( digit) for digit in str_value) digit_sum RESULT: 18|code more Answer: The denominator can be rewritten as 10min(7,8) × 5max(0,7−8) 2max(0,8−7), × where min(a, b) is the minimum of a and b, and max(a, b) is the maximum of a and b. Now let’s calculate the value of min(7, 8), max(0, 7 −8), and max(0, 8 −7). min_ab = min(a, b) max_0_a_minus_b = max (0, a-b) max_0_b_minus_a = max (0, b-a) min_ab, max_0_a_minus_b, max_0_b_minus_a RESULT: (7, 0, 1) Now, let’s simplify the fraction 4321 . 107×50×21 # define the numerator and denominator numerator = 4321 denominator = 10**7 * 5**0 * 2**1 ...... RESULT: 0.00021605 We exclude the zeros before the first non-zero digit and the dec- imal point as they do not con- tribute to the sum. Thus, the sum of the digits is the sum of 2, 1, 6, 0, and 5. Let’s calculate it. # digits in the fraction digits = [2, 1, 6, 0, 5] # calculate the sum of the digits sum_of_digits = sum( digits) sum_of_digits RESULT: 14 So, the final answer is: 14|
|---|---|---|
**Answer: 14**
Table 10: An example in which no restriction on code use succeeds while not using code and using
code only once fails.
-----
**Math Problem: What is the sum of all positive integers r that satisfy lcm[r, 700] = 7000 ?**
|True The least common multiple (LCM) of two numbers a and b is given by the formula: LCM(a, b) = |a × b| GCD(a, b) Hence, the equation can be rewritten as: r × 700 = 7000 GCD(r, 700) We also know that r must be a divisor of 7000. So, we’ll find all the divisors of 7000. # Find all divisors of 7000 divisors_7000 = sympy. divisors(7000) # Filter out the divisors r_values_corrected = [ r for r in divisors_7000 if r == 7000 * sympy. gcd(r, 700) // 700] ...... RESULT: (7000, 1000) Let’s verify our solution by checking whether the LCM of each possible r and 700 is indeed 7000. # Verify the solution lcm_values_corrected = [sympy.lcm(r, 700) for r in r_values_corrected ] ...... RESULT: (True, True) Upon verification, we find that the least common multiple (LCM) of each possible value of r and 700 is indeed 7000. Hence, the sum of all positive integers r that satisfy lcm[r, 700] = 7000 is 1000 + 7000 = 8000. The answer is 8000.|Uncertain First, let’s find the prime factorization of 7000. Then, we’ll generate all the factors of 7000 and find which of those factors are multiples of 700. Finally, we’ll sum up all the positive integers that satisfy the condition. import sympy # Get the prime factorization of 7000 prime_factors = sympy. factorint(7000) # Generate all factors of 7000 factors = set() for factor_2_count in range( prime_factors[2] + 1): for factor_5_count in range( prime_factors [5] + 1): ...... RESULT: (2: 3, 5: 3, 7: 1, [3500, 700, 7000, 1400], 12600) The prime factorization of 7000 is 23 · 53 · 71. The factors of 7000 that are multiples of 700 are: 3500, 700, 7000, and 1400. So, the sum of all positive inte- gers r that satisfy lcm(r, 700) = 7000 is 3500 + 700 + 7000 + 1400 = 12600. So the final answer is 12600.|False In this problem, we are given that lcm(r, 700) = 7000. Let’s first find the prime factors of 700 and 7000. import sympy # Find the prime factorization of 700 and 7000 factors_700 = sympy. factorint(700) ...... RESULT: (2: 2, 5: 2, 7: 1, 2: 3, 5: 3, 7: 1) Since the LCM of r and 700 is 7000, r must contain all the prime factors of 7000 that are not al- ready in 700. # Generate all combinations combinations = list( product(*[[1, i] for i in extra_factors])) ...... RESULT: ([1, 5, 2, 10], 18) The possible values of r are 1, 2, 5, and 10. To verify our answer, we can check that the least common mul- tiple of r and 700 is indeed 7000. # Remove 1 from the possible values of r possible_r_values. remove(1) ...... RESULT: False It seems that there’s a mistake. Not all the calculated values of r satisfy the given condition. Let’s correct the calculation. # All the values of r should have the factors of 700 r_values = [700] ...... RESULT: ([700, 1400, 3500, 7000], 18720) Therefore, answer is 18720.|
|---|---|---|
**Answer: 8000**
**Uncertain**
**False**
Table 11: An example in which True succeeds while Uncertain and False fails.
-----
**Math Problem: Let A = (1, −11, 2), B = (3, −4, 1), and C = (−2, 1, −1). Compute ∠ABC,**
in degrees.
|True The angle ABC is the an- gle between the vectors B⃗A and B⃗C. We can find these vectors as follows: B⃗A = A⃗ −B⃗ = (−2, −7, 1) B⃗C = C⃗ −B⃗ = (−5, 5, −2) The angle θ between two vectors ⃗u and ⃗v can be found using the formula ⃗u · ⃗v cos(θ) = ||⃗u||||⃗v|| Let’s calculate these values. import numpy as np # Define vectors vec_BA = np.array([1, -11, 2]) - np. array([3, -4, 1]) ...... RESULT: 120.00000000000001 Let’s now verify this answer. ⃗u · ⃗v = ||⃗u|| · ||⃗v|| · cos(θ) Substituting our values into this formula, we should get the same dot product (-27) that we calcu- lated earlier. Let’s check this. # Compute the dot product using the formula dot_product_verificatio = magnitude_BA * magnitude_BC * np. cos(theta * np.pi / 180) ...... RESULT: -27 which matches the dot product we calculated earlier. Therefore, the computed angle of 120◦ is correct.|Uncertain the cosine of the angle θ between them is given by u · v cos θ = ||u|| ||v|| In the given problem, we want to find the angle between vectors −→ −−→ AB and BC, where −→ AB = B −A = (2, 7, −1) −−→ BC = C −B = (−5, 5, −2) Now, we’ll calculate the angle using the formula for cos θ. from numpy import array, dot from numpy.linalg import norm import math # Defining the vectors A = array([1, -11, 2]) ...... RESULT: 60 The angle ∠ABC is 60◦. n|False −→ The vector AB is given by −−→ B − A, and the vector BC is given by C −B. The cosine of the angle between two vectors u and v is given by u · v cos(θ) = ∥u∥∥v∥ Let’s calculate. import numpy as np # define the points A = np.array([1, -11, 2]) ...... RESULT: 60.00000000000001 The angle ABC is approxi- mately 60.0◦. Let’s verify this. We can solve for cos(C): a2 + b2 −c2 cos(C) = 2ab # calculate vector AC AC = C - A # calculate the magnitudes of AC magnitude_AC = np. linalg.norm(AC) ...... RESULT: False I apologize for the confusion; there seems to have been a mis- take in the verification. The discrepancy arises due to the directionality of the vectors. Therefore, the initial calculation was correct: the angle ABC is approximately 60.0◦.|
|---|---|---|
**Answer: 120[◦]**
**Uncertain**
**False**
Table 12: An example in which True succeeds while Uncertain and False fails.
-----
| [
"Aojun, Zhou",
"Ke, Wang",
"Zimu, Lu",
"Weikang, Shi",
"Hongsheng, Li",
"Sichun, Luo",
"Zipeng, Qin",
"Shaoqing, Lu",
"Linqi, Song",
"Anya, Jia",
"Mingjie, Zhan"
] | 2023-08-15T00:00:00 | ICLR 2024 Poster | true | 106 | 17 | null | http://arxiv.org/abs/2308.07921 | https://arxiv.org/abs/2308.07921 | https://www.semanticscholar.org/paper/1dbd58bd8768ba0dada2e7c84aa2fe0b9f418ebc |
Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework | As large language models (LLMs) have become the norm in NLP, demonstrating good performance in generation and reasoning tasks, one of its most fatal disadvantages is the lack of factual correctness. Generating unfactual texts not only leads to lower performances but also degrades the trust and validity of their applications. Chain-of-Thought (CoT) prompting improves trust and model performance on complex reasoning tasks by generating interpretable reasoning chains, but still suffers from factuality concerns in knowledge-intensive tasks. In this paper, we propose the Verify-and-Edit framework for CoT prompting, which seeks to increase prediction factuality by post-editing reasoning chains according to external knowledge. Building on top of GPT-3, our framework lead to accuracy improvements in multiple open-domain question-answering tasks. | The Verify-and-Edit framework for CoT prompting is proposed, which seeks to increase prediction factuality by post-editing reasoning chains according to external knowledge and lead to accuracy improvements in multiple open-domain question-answering tasks. | ## Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework
**Ruochen Zhao** [1][∗] **Xingxuan Li** [1][,][2][∗]† Shafiq Joty 1,3‡ Chengwei Qin 1 Lidong Bing 2
1
Nanyang Technological University, Singapore
2
DAMO Academy, Alibaba Group
3
Salesforce AI
{ruochen002, chengwei003}@e.ntu.edu.sg
{xingxuan.li, l.bing}@alibaba-inc.com
[email protected]
**Abstract**
As large language models (LLMs) have
become the norm in NLP, demonstrating good
performance in generation and reasoning
tasks, one of its most fatal disadvantages
is the lack of factual correctness. Generating unfactual texts not only leads to lower
performances but also degrades the trust
and validity of their applications. Chainof-Thought (CoT) prompting improves
trust and model performance on complex
reasoning tasks by generating interpretable
reasoning chains, but still suffers from
factuality concerns in knowledge-intensive
tasks. In this paper, we propose the Verifyand-Edit framework for CoT prompting,
which seeks to increase prediction factuality
by post-editing reasoning chains according
to external knowledge. Building on top
of GPT-3, our framework lead to accuracy
improvements in multiple open-domain
question-answering tasks. For reproducing
our results and extending the framework
further, we make our codebase available at
[https://github.com/RuochenZhao/Verify-and-](https://github.com/RuochenZhao/Verify-and-Edit)
[Edit](https://github.com/RuochenZhao/Verify-and-Edit)
**Question** **Standard**
Of all the teams **Newcastle United.**
John Nyskohus
played for, which **Chain-of-thought**
team was known
as "the Black and First, John Nyskohus played for the Norwegian
Whites?" football team Odd Grenland. Second, Odd
Grenland is known as "the Black and
Whites." The answer is Odd Grenland.
**Self-Consistency:**
less than majority agree
**Verify**
What team did John Nyskohus play for?
What team is known as "the Black and Whites?"
**External Knowledge Retrieval**
John Nyskohus ... is an Australian former soccer player who played club football for
USC Lion ... and Adelaide City in the National Soccer League ...
Adelaide City Football Club is an Australian football (soccer) club based in Adelaide,
South Australia. They are also known as "The Zebras" and "the Black and Whites.
**Edit Rationales** **New Prediction**
First, John Nyskohus played for Adelaide City in
The answer is
the National Soccer League. Second, Adelaide
**Adelaide City Football**
City Football Club is known as "the Black and
**Club.**
Whites".
Figure 1: The Verify-and-Edit framework consists of
five steps: (1) pass predictions with lower-than-average
consistency to the next stages while leaving highly consistent predictions as-is; (2) produce verifying questions; (3) retrieve external knowledge; (4) edit rationales with informed answers; and (5) produce new predictions.
been to improve end-task performance by utilizing generated CoTs as-is. For example, Ye and
Durrett (2022) train a calibrator that tunes prediction probabilities based on rationale scores; Wang
et al. (2022) sample multiple reasoning paths to find
the most common (consistent) prediction. Only a
few, such as Creswell et al. (2022) and Zhou et al.
(2022), have explored ways to improve the quality
of CoTs themselves.
In fact, improving the CoT quality could be beneficial in enhancing both interpretability and endtask performance. Ye and Durrett (2022) point out
that explanations judged as good by humans often indicate more accurate predictions. Intuitively,
a better set of CoT prompts could provide better
grounding and logically consistent thought pro
**1** **Introduction**
Large Language Models (LLMs) have become the
new norm in many downstream NLP tasks. In
utilizing these LLMs, Chain-of-Thought (CoT)
prompting (Wei et al., 2022) is found to improve
performances for tasks that require complex reasoning, such as math word problems, commonsense
reasoning, and symbolic manipulation. At the same
time, it is able to generate interpretable reasoning
chains. Recent work further explored how to use
these reasoning chains to select better predictions.
However, the primary focus of these methods has
_∗_ Equal contribution.
_†_ Xingxuan Li is under the Joint Ph.D. Program between
Alibaba and Nanyang Technological University.
_‡_ Work done when the author was on leave from NTU.
-----
cesses, thus leading to more accurate predictions.
To improve generation quality, one important
aspect is factual correctness, which is currently
one of the most fatal drawbacks of LLMs (OpenAIBlog, 2022; Zhao et al., 2023). In answering user
queries, LLMs such as GPT-3 (Brown et al., 2020)
tend to make up facts and details, which is now
flagged as a primary warning in their API usage.
As a major use case of LLMs is the prospect of
replacing traditional search engines and usage for
more direct information access through questionanswering, factuality concerns could largely undermine their validity and degrade users’ level of
trust (Marcus, 2022). Fixing this issue is challenging and the concerns still persist even after the
models are instruction-tuned with human feedback
(Ouyang et al., 2022). This is because the source
of truth can be unavailable during the finetuning
process (OpenAI-Blog, 2022).
Thus, it is of urgent concern to better control the
generation and increase the factual correctness of
predictions. As LLMs could fail to recall accurate
details when functioning as a knowledge base (Ye
and Durrett, 2022; Creswell et al., 2022), if possible, knowledge from external sources could be
introduced as assistance. Assisted thought process
is also common in human reasoning: when humans
answer questions, they often search (or revisit) external knowledge sources for supporting facts in
order to refresh their (internal) memory.
Inspired by this, in this work we propose a
**Verify-and-Edit (VE) framework to post-edit the**
reasoning chains for more factually aligned predictions. As shown in Fig. 1, we first select uncertain
instances to edit, which have a less-than-majorityagree consistency. These instances, as implied
by Wang et al. (2022), often consist of plausiblesounding statements, such as the sentence “John
Nyskohus played for the Norweigian football team
Odd Greenland" in Fig. 1. When editing, we first
generate a question to verify this detail, such as
“What team did John Nyskohus play for?” Then,
to answer this query, we introduce external knowledge through open-domain retrieval systems. For
example, the fact “John Nyskohus ... played for
Adelaide City..” is retrieved in this instance. Then,
the rationales are edited by providing the retrieved
facts in the prompts as memory refreshments. Thus,
the edited rationales could be updated corresponding to the retrieved facts (Fig. 1). Given the edited
rationales, the new prediction is generated, which
considers more factually aligned reasoning traces.
To our knowledge, our work is the first to postedit CoT-style reasoning chains to enhance prediction performance. We perform experiments on two
open-domain Question Answering (QA) tasks that
require reasoning: Adversarial HotpotQA (Yang
et al., 2018) and 2WikiMultihop (Ho et al., 2020).
We also test its performance on the Fact Verification
task using Fever (Thorne et al., 2018). We find that
the model is able to benefit from more factual reasoning chains, thus generating more accurate predictions. For example, for open-domain QA, our
model demonstrates 3.8x accuracy improvement
compared to similar retrieval-augmented models on
AdvHotpot. On 2WikiMultihop, Verify-and-Edit
reaches 33.6% accuracy with open-domain search,
while CoT Self-Consistency stands at 27.7%.
**2** **Related Work**
Chain-of-Thought or CoT (Wei et al., 2022) is a
prompting method for improving the reasoning
abilities of LLMs, which enables LLMs to decompose complex problems into multiple intermediate
steps. CoT provides interpretability and has been
proven to be more capable of solving complex problems than standard prompting methods.
However, hallucination is a long-standing problem in NLP, especially for LLMs, which has drawn
significant attention from the research communities.
The decoding process of LLMs is auto-regressive,
which unavoidably makes it output nonfactual content without controlled generation (Ye and Durrett,
2022; Wiegreffe et al., 2022). As such, the lack
of supporting facts during the generation process
of CoT could largely undermine the validity of the
final answer (Golovneva et al., 2022). Ye and Durrett (2022) demonstrate that the accuracy of the
final answers largely correlates with the factuality
and consistency of the reasoning explanations. The
commonly proposed methods to improve the factuality of CoT reasoning process can be grouped
into two categories: prompt engineering and result
calibration.
Prompt engineering methods are usually applied
to guide LLMs to generate better intermediate reasoning explanations. _ReAct (Yao et al., 2022),_
which is the most comparable to our work, synergizes reasoning and acting in LLMs, where reasoning steps help the model induce and update
actions, while action steps allow the model to consult additional information from Wikipedia for a
-----
factuality check. Compared to ReAct, we generate
more natural and conversational CoTs for better
interpretability and easier learning. As such, our
framework requires a much shorter prompt to learn.
Press et al. (2022) propose self-ask by instructing
the LLM to explicitly ask itself (and then answer)
follow-up questions before answering the initial
question. One natural way of solving a complex
problem is to decompose the problem into subproblems and solve them sequentially. Zhou et al.
(2022) adopt the idea and propose least-to-most
prompting. However, both self-ask and least-to_most prompting still rely on repetitively retrieving_
internal knowledge learned by the LLM instead
of connecting to external knowledge. Thus, their
ability to improve factuality is limited.
Result calibration functions on the output of the
LLMs. Ye and Durrett (2022) train a calibrator
to calibrate the weights of the final answers based
on the factuality and consistency of the generated
explanations, which efficiently improves the results. The decoding method in CoT is naive greedy,
which simply outputs the next token with the highest probability. Wang et al. (2022) propose a self_consistency decoding method, which samples a_
diverse set of reasoning paths and then selects the
most consistent answer by marginalizing out the
sampled reasoning paths. Selection-Inference (SI)
(Creswell et al., 2022) framework is another stateof-the-art method that exploits LLMs as general
processing modules. Out of all the methods, it
is also the first to systematically improve the factual correctness of CoTs in order to predict more
accurately. It alternates between selection and inference to generate a series of interpretable, causal
reasoning steps leading to the final answer, which
is proven to be efficient. However, it is not designed for open-domain or commonsense question
answering.
Moreover, another comparable line of work
has been exploring retrieval-augmented language
model pretraining (REALM) (Guu et al., 2020),
which first retrieves documents from an external
knowledge source and then utilizes retrieved documents to process question-answering tasks. Lazaridou et al. (2022) propose to include Google search
results of the question in the prompt to improve the
factuality of the generated answer. However, such
methods may fail in complex questions as it does
not utilize the reasoning capability of LLMs. Thus,
we consider retrieval-augmented reasoning paths
as a natural way to increase factual alignment.
**3** **Verify-and-Edit Framework**
Our goal is to make LLMs generate more factual
reasoning chains with CoT prompting assisted with
external knowledge, thereby also improving prediction accuracy of the final answer. We hypothesize
that this can enhance LLMs’ capability to solve
complex knowledge-intensive tasks that require
multiple reasoning steps to arrive at an answer.
Generally, we hope to follow the human reasoning process: when a person answers a question, if
he/she is unsure, he/she would search for a supporting fact and consider it before giving the final
answer. Thus, we could separate the Verify-andEdit (VE) framework into 3 different stages: finding uncertain predictions, editing their rationales
by searching for supporting facts, and using the
edited rationales to generate final answers (Fig. 1).
In designing the stages, we hope to maximally preserve the LLMs’ biggest advantage: their opengeneration and reasoning ability. And we aim to
design tasks and setups as natural and conversational as possible, thus making it easy to understand for humans and LLMs which are trained with
natural texts.
**3.1** **Deciding when to edit**
How can we identify when a model is unsure of
its prediction? The self-consistency method (Wang
et al., 2022) provides a solution. In sampling diverse reasoning paths and answers, self-consistency
is found to be highly correlated with accuracy, suggesting that it could provide an uncertainty estimate
and confer abilities for the model to “know when
it doesn’t know". Thus, we begin the VE framework by using the consistency method to sample n
diverse reasoning paths for a prediction task. The
highly consistent predictions are left as-is. When
consistency is lower than ⌈n/2⌉, i.e. the majority
cannot agree on the same answer, we label it as
“uncertain".
**3.2** **How to edit a specific rationale**
The rationale, i.e. the thought process (CoT), could
be viewed in two parts: facts and reasoning which
combines facts to derive a new claim. Thus, we
consider improving the CoT from both aspects.
_• Facts_ To make the thought process more factually correct, we search for supporting facts in external knowledge sources (e.g. Wikipedia, Google).
-----
**Algorithm 1 Verify-and-Edit**
**Require: The original question q; An n-shot CoT prompt pcot**
**Require: An LLM f** (·); LM number of completions n; LM decoding temperature τ
**Require: An external knowledge retrieval model g(·)**
**Require: n-shot prompts for verifying question generation (pvq) and answer generation (pva)**
_R, A_ _f_ (pcot, q, n, τ ) _▷_ Generate a set of reasonings (R) and answers (A).
_←_
_s[∗]sc_ _▷_ The highest self-consistency score among all answers.
_[←]_ [max][ P] [(][a][|][p][cot][, q][)][, a][ ∈] _[A]_
_r[∗], a[∗]_ arg max P (a _pcot, q), a_ _A_ _▷_ Reasoning and answer with highest self-consistency.
_←_ _|_ _∈_
**if s[∗]sc** _[<][ ⌈]_ _[n]2_ _▷_ Edit reasoning with a less-than-majority-agree consistency.
**for oi ∈[⌉]r[then][∗]** **do** _▷_ Edit each sentence in the reasoning.
_u_ _f_ (pvq, q, oi) _▷_ Generate verifying question.
_←_
_v ←_ _g(u)_ _▷_ Retrieve external knowledge.
_w_ _f_ (pva, u, v) _▷_ Generate verifying answer.
_←_
_oi_ _w_ _▷_ Edit original reasoning sentence with verifying answer.
_←_
**end for**
_a[∗]_ _f_ (pcot, q, r[∗]) _▷_ Generate final answer with edited reasoning.
_←_
**return a[∗]**
**else if s[∗]sc** 2 _▷_ Answer with high consistency is left as-is.
**return[≥⌈] a[∗]** _[n]_ _[⌉]_ **[then]**
**end if**
First, to mimic a human’s query when searching could be longer than desired, we use a pre-trained
for validating facts, a natural question is gener- LM to rank and select the top-k sentences most
ated to verify the rationale. For this, we use the similar to the verifying question query.
in-context learning capability of the same LLM.
The original question and the rationale are both _• Reasoning_ While methods such as Selection
Inference (Creswell et al., 2022) directly use re
provided in the prompt for verifying question gen
trieved facts as rationales, they are usually too ver
eration to ensure that it asks for the most relevant
bose, longer than desired, or contain irrelevant de
information required to answer the original ques
tails. Ye and Durrett (2022) have made similar
tion, instead of other entities in the rationale. For
observations: directly using supporting sentences
example, if the rationale (wrong) is “the US pres
is usually too verbose and not sufficient.
ident born on 4 August 1961 is John Kennedy.”
and the original question is "who is the spouse of To obtain more relevant and logical rationales,
the US president born on 4 August 1961”, we ex- we again utilize a natural and generative approach,
pect the generated verifying question to be: “Who as reasoning abilities are believed to be already
is the US president born on 4 August 1961?” in- built into LLMs (Wei et al., 2022). In particular, by
stead of “When is John Kennedy’s birthday?” By feeding in prompts in the format of “question, ratiogenerating a relevant question instead of directly nale, answer”, the LLM learns to reason for a few
querying with the generated rationale, we eliminate steps before answer generation. Upon investigating
potential noise brought by incorrect fact generation. the original rationales, we observe that, even when
In the example above, if one retrieves using the they contain incorrect facts, the logical reasoning
wrong claim “the US president born on 4 August component seems to be generally intact. Thus, we
1961 is John Kennedy”, the incorrect entity “John use the verifying questions (as logic) and retrieved
Kennedy” may obfusticate the search process. facts (as information) to generate informed answers.
The informed answers are then composed into a
In this paper, we use relevant contexts retrieved from 3 systems: (i) DrQA (Chen et al., new rationale, providing potentially a more factual
CoT.
2017), an open-domain question-answering system; (ii) Wikipedia search of relevant pages; and
**3.3** **Answering again**
(iii) Google search, which demonstrates possibilities of combining LLMs and search engines. Finally, with the post-edited CoT, new answers are
As the retrieved contexts from a retrieval system generated by prompting the LLM. A pseudocode
-----
of the overall procedure is given in Alg. 1, and illustrated with an example in Fig. 1 . We can see
that, by allowing the LLM to incorporate external knowledge, our method could result in more
factually-grounded rationales. When prompted into
the LLM as a CoT, it could bring in the information necessary to make a new prediction, which was
originally not remembered correctly by the model.
Compared to specifically designed prompts such
as ReAct (Yao et al., 2022), the Verify-and-Edit
framework is simple and arguably more natural. Its
conversational nature could allow humans to better
understand the model’s thought processes and have
the potential for users to naturally interfere and
revise at any stage of inference. In the experiments
presented next, we also observe that such a setup
is effective in mitigating factuality concerns and
boosting end-task performances.
**4** **Experiment Setup**
**4.1** **Reasoning tasks**
As the Verify-and-Edit framework offers more
knowledge-grounded reasoning steps, it should
benefit tasks that fulfill the following two properties: (i) reliant on multi-hop reasoning to arrive
at a later prediction, thus depending on rationale
generation, and (ii) open-domain, thus needing to
interact with an external knowledge source.
Therefore, we validate the approach on three
datasets: (i) Adversarial HotpotQA (Yang et al.,
2018), a multi-hop question answering dataset. We
use the challenging subset proposed by Ye and
Durrett (2022), where the correct and incorrect predictions are balanced using their model. (ii) 2Wiki**Multihop (Ho et al., 2020) a multi-hop question-**
answering dataset exploiting the structured format
in Wikidata and use logical rules.[1] (iii) Fever
(Thorne et al., 2018), a fact verification dataset
that labels claims as “SUPPORTS”, “REFUTES”,
or “NOT ENOUGH INFO” based on evidence paragraphs from Wikipedia. Similar to the HotpotQA
setup, we sample a challenging set by balancing
the samples where GPT3 CoT makes correct and
incorrect predictions. Details on the processing and
use of the datasets can be found in Appendix A.
**4.2** **Compared methods**
To provide the most state-of-art performance estimates, we utilize the GPT-3 instruct series API
1We randomly sample 1,000 samples out of 12,576 dev
samples for cost considerations.
text-davinci-003 (Ouyang et al., 2022), the
strongest and most up-to-date model at the time
of experiments, as a backbone. The cost of experiments is stated in Appendix B.
Adversarial HotpotQA and 2WikiMultihop experiments used 6-shot and Fever used 3-shot incontext learning, as Fever questions are shorter
and easier to learn. We use the manual annotations provided for HotpotQA by Ye and Durrett
(2022) and manually annotate few-shot examples
for 2WikiMultihop and Fever in a similar format.
Full prompts for baseline and our methods are provided in Appendix C.
**Baselines** To provide a more comprehensive
overview of where our framework stands, we use
the following baselines:
1. Standard Prediction (Standard): Directly predicting the label based on input, given the same
number of in-context learning examples.
2. Original CoT (Wei et al., 2022): Predicting the
label after generating the explanation.
3. CoT with Self-Consistency (CoT-SC) (Wang
et al., 2022): Sampling 5 CoT trajectories with
a decoding temperature of 0.7, which is recommended by the paper.
4. Calibrator (Calib.) (Ye and Durrett, 2022): A
calibrator that tunes the probabilities of a prediction based on the score of its prediction.
5. ReAct (Yao et al., 2022): A reason-and-act
framework that utilizes an external Wikipedia
API. For this baseline, we use the reported results in the original paper, which uses the PaLM
model (Chowdhery et al., 2022), whose performance is similar to GPT-3.[2] To add a more
justified perspective, we report its performance
improvement gained on top of the CoT-SC baseline. [3]
**Verify-and-Edit (VE)** In implementing the VE
framework, the same consistency baseline is employed to estimate when the model is uncertain.
As stated in §3.1, we edit all instances with a
self-consistency score below ⌈n/2⌉, where n is
the number of sampled paths. Then, the verifying questions are produced using a 2-shot[4] setup
2We could not use PaLM as it is not open-sourced.
3it is worth noting that ReAct conducted experiments on
the entire dataset, where we used a sampled version (see §4.1).
4As we observe that question generation quality does not
vary too much as in-context examples increase, we select the
shortest prompt that is able to generate reasonable questions
to reduce cost.
-----
with in-context learning. The verifying answers are
produced using the same number of examples in
original answer generation and greedy decoding.
To study the effect of knowledge retrieval systems on the results, we use four systems:
1. Wikipedia-API (wiki): Searching for the query
entities and selecting top sentences from their
Wikipedia pages.
2. DrQA (Chen et al., 2017): A pre-trained opendomain QA model that combines bigram hashing, TF-IDF matching, and a multi-layer recurrent neural network model. We only utilize the
contexts retrieved from it.[5]
3. Google: Using top-k search results produced by
Google as assistive contexts. This result is interesting in providing possibilities in combining
search engines and LLMs.
4. Dataset: Selecting from the set of paragraphs
provided in Adversarial HotpotQA and 2WikiMultihopQA, which includes ground-truth supporting contexts and distractor paragraphs. This
is similar to an oracle setup, which provides an
upper bound of the performance boost, assuming we have a good retrieval system.
For 1, 2, and 4, after retrieving, we select the top
3 sentences most similar to the query ranked by the
pre-trained Sentence BERT model (Reimers and
Gurevych, 2019) as context.
**5** **Results and Analysis**
**5.1** **Using Self-Consistency: know when it**
**doesn’t know**
For the first step in the Verify-and-Edit framework,
consistency is used to measure the model’s confidence in a prediction. Aligned with the findings
from Wang et al. (2022), we hypothesize that when
the consistency is low, the model is more uncertain
and thus more likely to generate inaccurate predictions. To test whether this hypothesis holds, we plot
the kernal density estimation plots for consistency
distribution on the Adversarial HotpotQA dataset.
As shown in Fig. 2, the incorrect samples show a
left-skewed consistency distribution, where most
incorrect predictions have low consistencies. On
the other hand, the distribution of correct predictions shows a right-skewed tendency, where there
5We selected DrQA by first conducting small-scale experiments with different open-domain QA models, including
DPR (Karpukhin et al., 2020). DrQA is found to yield better
performance. Thus, we consistently use it.
Figure 2: Kernal density estimation plots for consistency on the Adversarial HotpotQA dataset. With kernal estimation, the curve extends its true distribution’s
range, which is from 0 to 5 (as we sampled 5 paths).
**Method** **knowledge** **EM** ∆EM **AUC**
CoT-SC → ReAct Wiki. 34.2% +0.8% -
ReAct → CoT-SC Wiki. 35.1% +1.7% -
Standard - 23.1% - 43.24
CoT - 31.8% - 38.30
CoT-SC - 31.2% - 34.97
CoT-SC + Calib. Dataset - - 49.00
CoT-SC + VE Wiki. 35.7% +4.5% 45.62
CoT-SC + VE DRQA 36.0% +4.8% 46.06
CoT-SC + VE Google 37.7% +6.5% 47.98
CoT-SC + VE Dataset **56.8%** **+25.6%** **60.94**
Table 1: Results on the Adversarial HotpotQA dataset.
The best result for each model is underlined and the
best result overall is bolded. ∆EM represents the improvement on Exact Match from the CoT-SC baseline.
The top two rows uses the PaLM model and the rest
uses the GPT-3 davinci-003 model.
are very few incorrect samples with higher consistencies. This effectively validates our hypothesis.
In the main experiments, we use ⌈n/2⌉ as a majority threshold and edit all samples below it, which
is at 3. To show the effects of different thresholds
on the framework’s performance, we also provide
an ablation study later.
**5.2** **Results on HotpotQA**
Reported in Table 1, we observe that CoT improves
on top of the Standard few-shot setting. CoT-SC,
on the other hand, does not demonstrate a good
improvement on the baseline. Using the calibrator from Ye and Durrett (2022), AUC is improved
as it learns to calibrate the answer weights based
on ground-truth contexts provided in the dataset.
-----
**Method** **knowledge** **EM** ∆EM **AUC**
Standard - 16.9% - 35.89
CoT - 28.4% - 16.64
CoT-SC - 27.7% - 17.16
CoT-SC + Calib. Dataset - - 24.13
CoT-SC + VE Wiki. 33.1% +5.4% 28.32
CoT-SC + VE DRQA 31.1% +3.4% 27.75
CoT-SC + VE Google 33.6% +5.9% 30.06
CoT-SC + VE Dataset **37.2%** **+9.5%** **32.28**
Table 2: Results on 2WikiMultiHopQA dataset. ∆EM
represents the improvement on Exact Match from the
CoT-SC baseline. All experiment uses the GPT-3
davinci-003 model.
Thus, it should be compared with the last setup
of VE, where we use dataset knowledge. In comparison, the calibrator results in a lower AUC and
cannot improve the accuracy as it does not generate
alternative answers in open-domain settings.
Using the Verify-and-Edit framework, the retrieval systems Wikipedia and DrQA could generate an improvement of 4.5% and 4.8% respectively
on top of the baseline, which is 2x the highest EM
improvement for ReAct (1.7%). When we combine the search engine results from Google into the
framework, the EM is increased by 6.5%, which
is 3.8x the ReAct result. This shows a promising
method for combining search engines and LLMs,
which is a popular direction now. Search engines return factual results, but are less powerful in queries
that require reasoning. On the other hand, LLMs
are powerful in reasoning and abstraction but tend
to generate plausible-sounding but incorrect statements (OpenAI-Blog, 2022; Zhao et al., 2023). To
combine the best of both worlds, we could utilize
the long memory of LLMs, as many users have
reported that GPT is able to remember inputs mentioned earlier in the dialogue. By providing factual
results from the search engines as a memory refreshment, GPT is able to generate better and more
factual predictions.
Then, when we use the adversarially augmented
paragraphs provided in the dataset, the model is
able to demonstrate very high EM (56.8%) and
AUC (60.94) at the same time. This setup shows
that, if we have a highly compressed set of contexts and a nearly-ideal retrieval system, the Verifyand-Edit framework could potentially result in very
strong performances.
**Method** **knowledge** **Accuracy** ∆ **Accuracy**
CoT-SC → ReAct Wiki. - +4.2%
ReAct → CoT-SC Wiki. - +1.6%
Standard - 46.8% -
CoT - 50.0% -
CoT-SC - 52.0% -
CoT-SC + Calib. - 33.7%
CoT-SC + VE Wiki. 53.6% +1.6%
CoT-SC + VE DRQA 53.3% +1.3%
CoT-SC + VE Google 53.9% +1.9%
Table 3: Results on Fever dataset. ∆Accuracy represents the improvement on Accuracy from the CoT-SC
baseline. The top two rows uses the PaLM model and
the rest uses the GPT-3 davinci-003 model.
**5.3** **Results on 2WikiMultiHop**
As shown in Table 2, our method demonstrates even
stronger performances on 2WikiMultiHop compared to HotpotQA. The Verify-and-Edit framework with open-domain retrieval is able to generate
a high accuracy improvement, ranging from 3.4%
to 5.9%. Selecting from paragraphs provided in
the dataset, which includes supporting evidences
and irrelevant paragraphs, the accuracy improvement is further increased to 9.5%. The calibrator,
on the other hand, uses the dataset provided paragraphs but still lags behind all variations of our
Verify-and-Edit framework.
**5.4** **Results on fact verification**
Results on the Fever dataset are shown in Table 3.
As the reasoning required by the Fever dataset is
less multi-hop compared to HotpotQA and 2WikiMultiHop, we anticipate that it should demonstrate
lower improvements compared to the other two.
In the Fever dataset, the calibrator method completely fails, decreasing to 33.7%: it calibrates
the prediction scores based on factuality estimates,
which is produced by examining the overlap between the reasoning path and the provided context.
However, in such Fact Verification datasets, there is
no provided contexts. Thus, we calibrate using the
original claim, which results in bad performances.
It shows here that one limitation of the calibrator
method is that it only applies to cases with provided
relevant contexts.
Even though this task does not require much
reasoning, employing the Verify-and-Edit framework, we are able to observe consistent improvements over the baseline method. Similar to before,
the Wikipedia retrieval is able to result in a larger
improvement over DrQA, and Google search improves further at 1.9%.
-----
**# Examples** **Cohen κ** **CoT-SC** **Ours** **Tie**
50 0.25 17% **53%** 30%
Table 4: Human study for factuality of CoTs on the
HotpotQA dataset. “Ours” refers to the Verify-and-Edit
model with Google retrieval.
Compared to our method, ReAct is able to
demonstrate a larger improvement on Fever. First
of all, it has been mentioned before that Fever is
less suited for the Verify-and-Edit framework as it
requires less reasoning to solve the task. Secondly,
ReAct prompts are much longer than our prompts,
requiring more computational costs.
**5.5** **Cost considerations**
As cost reduction is a main concern when interacting with LLMs, our method takes it into consideration and attempts to reduce computational
costs from two aspects: Firstly, Verify-and-Edit
only makes edits for selected instances, whereas
others edit every time. Specifically, we only revise
when the model is uncertain (judged by consistency), which occurs 40% of the time. As a comparison, other methods, such as ReAct, retrieve
relevant information and edit for every single instance, resulting in higher costs. Secondly, Verifyand-Edit designs tasks that are natural and conversational, requiring only a few demonstrations and
short prompts to learn. For example, other methods
usually learn non-natural calls, such as [thought]
and [action] tags in ReAct and API calls in Toolformer (Schick et al., 2023). Therefore, the LLM
requires longer prompts, more demonstrations, or
even fine-tuning to learn the format. On the other
hand, we design Verify-and-Edit tasks to be as natural as possible, requiring minimal effort to learn.
Our tasks only consist of asking and answering
questions, with no synthetic tags or tasks to be
learned. As a comparison, with the GPT-3 API, for
editing one Fever instance, Verify-and-Edit costs
$0.014, whereas ReAct costs $0.017.
**5.6** **Evaluating the reasoning chains with**
**human study**
To closely examine the faithfulness of the generated reasoning chains, we also conduct a smallscale human study experiment. During the experiment, two human volunteers are shown 50 randomly selected questions with generated reasoning
chains from CoT-SC and Verify-and-Edit on the
HotpotQA dataset. They are then asked to select
Figure 3: Ablation study on the effect of various consistency thresholds on task performances on Adversarial
HotpotQA
the more factually consistent one. Volunteers are
encouraged to use search engines as assistance. A
detailed description on the setup is described in
Appendix D.
Shown in Table 4, humans select the reasoning
chains produced by Verify-and-Edit as more factually consistent 53% of the time, compared to 17%
for the CoT-SC baseline. The Cohen κ is at 0.25,
showing fair agreement between the two annotators (McHugh, 2012). The annotators used Google
search as an assistive tool 100% of the time, which
shows the necessity of introducing external knowledge.
Moreover, human annotations in this case require a lot of efforts. Annotators report 1.5 minutes
on average to validate one data point. Thus, automating the Verify-and-Edit process is of benefits
as an assistive tool to reduce human labor.
To observe the qualitative effects of the Verifyand-Edit framework in detail, we also include several interesting examples in Appendix E, which
show the effectiveness of our framework in correcting the original claims.
**5.7** **Ablation study: editing at different**
**consistency thresholds**
In the Verify-and-Edit framework, the only hyperparameter to select is the consistency threshold.
Similar thresholds also exists in ReAct (Yao et al.,
2022), where the CoT → ReAct method is to employ ReAct-style prompting when “the majority
answer among n CoT-SC samples occurs less than
n/2 times". Using majority counts, however, is less
fine-grained compared to using the original consistency formulated with log probablities. Thus,
we employ the original score proposed by Wang
et al. (2022), which is the unnormalized answer
probabilities marginalized over the rationales’ log
-----
probabilities. To mimic a majority-vote threshold,
we select ⌈n/2⌉, where n is the number of sampled
paths.
To study the effect of adjusting the consistency
threshold on our framework, we show the ablation
results of Adversarial HotpotQA in Fig. 3. As
the threshold increases, accuracy first increases,
reaching a peak close to ⌈n/2⌉, which is 3, before
decreasing. The AUC scores demonstrate a similar
trend.
As shown in Fig. 2, when consistency is larger
than majority (⌈n/2⌉), there are usually more correct predictions rather than incorrect predictions,
and vice versa. Thus, as we increase the consistency threshold from 0 to ⌈n/2⌉, more uncertain
and possibly incorrect samples are getting edited by
introducing external knowledge. As we go beyond
the ideal threshold ⌈n/2⌉, we are mostly re-editing
correct samples, and the introduced noise may disrupt the original reasoning chains.
Thus, we recommend a consistency threshold at
_⌈n/2⌉_ as an ideal level.
**6** **Conclusions**
In this paper, we introduce a Verify-and-Edit framework for open-domain question-answering. It is
a first attempt to post-edit CoT-style reasoning
chains for better end-task performance. By combining knowledge retrieval with reasoning, the framework edits CoTs in a natural and conversational
way, which enhances prediction factuality. Combined with Google search, the framework also
shows a promising direction that combines the
open-generation ability of state-of-art LLMs with
the updated facts provided by search engines.
**Limitations**
There are a few limitations to the current framework. Firstly, Verify-and-Edit works the best for
open-domain question-answering tasks that require
complex reasoning. Less complex datasets or commonsense datasets that do not require knowledge
retrieval may not result in high improvements. Secondly, it is most ideal to edit a group of mostly
incorrect samples, which we try to select by using
consistency. Thus, our method is reliant on the consistency method’s performance and its abilities to
separate correct and incorrect predictions. Most often, it can demonstrate a larger improvement with
a more challenging set of examples.
To address these limitations, we plan to work on
reducing the noise brought in the rationale-editing
stage and utilize more knowledge resources, such
as knowledge bases, as a follow-up.
**Ethics Statement**
The Verify-and-Edit framework can mitigate potential ethical concerns of LLM generation surrounding hallucinations and unfactual details. Some persisting concerns include: (1) As the framework uses
google as one of the retrieval methods, it could retrieve potentially toxic information that exists in
google search results. (2) As the framework uses
GPT3 as a backbone, it could suffer from existing
ethical concerns of GPT3, such as responding to
toxic queries or exhibiting biased behavior.
For knowledge retrieval, we used Wikipedia
corpus and google search results. Permission
is granted to copy, distribute and/or modify
Wikipedia’s text under the terms of the Creative
Commons Attribution-ShareAlike 3.0 Unported License. For google search results, scraping publicly
accessible data is legal considered by the U.S. appeals court.
**7** **Acknowledgement**
This research is supported by the National Research
Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-01001[T]).
**References**
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Danqi Chen, Adam Fisch, Jason Weston, and Antoine
[Bordes. 2017. Reading Wikipedia to answer open-](https://doi.org/10.18653/v1/P17-1171)
[domain questions. In Proceedings of the 55th An-](https://doi.org/10.18653/v1/P17-1171)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 1870–_
1879, Vancouver, Canada. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Antonia Creswell, Murray Shanahan, and Irina Higgins. 2022. Selection-inference: Exploiting large
-----
language models for interpretable logical reasoning.
_arXiv preprint arXiv:2205.09712._
Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam FazelZarandi, and Asli Celikyilmaz. 2022. Roscoe: A
suite of metrics for scoring step-by-step reasoning.
_arXiv preprint arXiv:2212.07919._
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. In Pro_ceedings of the 37th International Conference on_
_Machine Learning, ICML’20. JMLR.org._
Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara,
and Akiko Aizawa. 2020. [Constructing a multi-](https://doi.org/10.18653/v1/2020.coling-main.580)
[hop QA dataset for comprehensive evaluation of](https://doi.org/10.18653/v1/2020.coling-main.580)
[reasoning steps. In Proceedings of the 28th Inter-](https://doi.org/10.18653/v1/2020.coling-main.580)
_national Conference on Computational Linguistics,_
pages 6609–6625, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
Wen-tau Yih. 2020. [Dense passage retrieval for](https://doi.org/10.18653/v1/2020.emnlp-main.550)
[open-domain question answering. In Proceedings of](https://doi.org/10.18653/v1/2020.emnlp-main.550)
_the 2020 Conference on Empirical Methods in Nat-_
_ural Language Processing (EMNLP), pages 6769–_
6781, Online. Association for Computational Linguistics.
Angeliki Lazaridou, Elena Gribovskaya, Wojciech
Stokowiec, and Nikolai Grigorev. 2022. [Internet-](https://doi.org/10.48550/ARXIV.2203.05115)
augmented language [models](https://doi.org/10.48550/ARXIV.2203.05115) through few-shot
[prompting for open-domain question answering.](https://doi.org/10.48550/ARXIV.2203.05115)
[Gary Marcus. 2022. Is chatgpt really a “code red” for](https://garymarcus.substack.com/p/is-chatgpt-really-a-code-red-for)
[google search?](https://garymarcus.substack.com/p/is-chatgpt-really-a-code-red-for)
Mary L McHugh. 2012. Interrater reliability: the
kappa statistic. Biochemia medica, 22(3):276–282.
OpenAI-Blog. 2022. [Chatgpt: Optimizing language](https://openai.com/blog/chatgpt/)
[models for dialogue.](https://openai.com/blog/chatgpt/)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _arXiv preprint_
_arXiv:2203.02155._
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt,
Noah A Smith, and Mike Lewis. 2022. Measuring
and narrowing the compositionality gap in language
models. arXiv preprint arXiv:2210.03350.
Nils Reimers and Iryna Gurevych. 2019. [Sentence-](https://doi.org/10.18653/v1/D19-1410)
[BERT: Sentence embeddings using Siamese BERT-](https://doi.org/10.18653/v1/D19-1410)
[networks. In Proceedings of the 2019 Conference on](https://doi.org/10.18653/v1/D19-1410)
_Empirical Methods in Natural Language Processing_
_and the 9th International Joint Conference on Natu-_
_ral Language Processing (EMNLP-IJCNLP), pages_
3982–3992, Hong Kong, China. Association for
Computational Linguistics.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì,
Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761.
James Thorne, Andreas Vlachos, Christos
Christodoulopoulos, and Arpit Mittal. 2018.
[FEVER: a large-scale dataset for fact extraction](https://doi.org/10.18653/v1/N18-1074)
[and VERification.](https://doi.org/10.18653/v1/N18-1074) In Proceedings of the 2018
_Conference of the North American Chapter of_
_the_ _Association_ _for_ _Computational_ _Linguistics:_
_Human Language Technologies, Volume 1 (Long_
_Papers), pages 809–819, New Orleans, Louisiana._
Association for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, and Denny Zhou. 2022. Self-consistency
improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large
language models. arXiv preprint arXiv:2201.11903.
Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta,
Mark Riedl, and Yejin Choi. 2022. [Reframing](https://doi.org/10.18653/v1/2022.naacl-main.47)
[human-AI collaboration for generating free-text ex-](https://doi.org/10.18653/v1/2022.naacl-main.47)
[planations. In Proceedings of the 2022 Conference](https://doi.org/10.18653/v1/2022.naacl-main.47)
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, pages 632–658, Seattle, United States._
Association for Computational Linguistics.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio,
William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. [HotpotQA: A dataset](https://doi.org/10.18653/v1/D18-1259)
[for diverse, explainable multi-hop question answer-](https://doi.org/10.18653/v1/D18-1259)
[ing. In Proceedings of the 2018 Conference on Em-](https://doi.org/10.18653/v1/D18-1259)
_pirical Methods in Natural Language Processing,_
pages 2369–2380, Brussels, Belgium. Association
for Computational Linguistics.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language
models. arXiv preprint arXiv:2210.03629.
[Xi Ye and Greg Durrett. 2022. The unreliability of ex-](https://openreview.net/forum?id=Bct2f8fRd8S)
[planations in few-shot prompting for textual reason-](https://openreview.net/forum?id=Bct2f8fRd8S)
[ing. In Advances in Neural Information Processing](https://openreview.net/forum?id=Bct2f8fRd8S)
_Systems._
Ruochen Zhao, Xingxuan Li, Yew Ken Chia, Bosheng
Ding, and Lidong Bing. 2023. Can chatgpt-like generative models guarantee factual accuracy? on the
mistakes of new generation search engines. arXiv
_preprint arXiv:2304.11076._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
-----
**Appendix for “Verify-and-Edit: A**
**Knowledge-Enhanced Chain-of-Thought**
**Framework”**
**A** **Dataset Processing**
**A.1** **Adversarial HotpotQA**
The Adversarial HotpotQA subset is formed in Ye
and Durrett (2022), who processed the original set
in a few ways: (1) Context length is reduced to
make it better fit the purpose of testing in-context
learning. (2) Set of adversarial contexts is reduced
to two ground truth supporting paragraphs and two
adversarial paragraphs, instead of using all eight
distractors. Each paragraph is further simplified by
only keeping relevant sentences needed for answering the question (or distracting the prediction) (3)
A challenging test set of 250 examples is formed by
balancing the mix of examples on which prompted
text-davinci-001 (which is used at their time
of experiments) to make correct and incorrect predictions. This is done by first running few-shot
inference over 1000 examples, and then randomly
sampling 125 examples with correct and incorrect
predictions, respectively. The subsampled dataset
is available publicly at the github for Ye and Durrett
(2022).
The HotpotQA dataset is distribued under the CC
BY-SA 4.0 license, which allows for modification
and research use.
**A.2** **2WikiMultihopQA**
For cost concerns, we randomly subsample 1,000
out of the dev set of 12,576 samples, which provides a reasonable estimate. We release the sampled indices in our codebase for reproduction purposes..
The 2wikimultihop dataset is licensed under the
Apache License 2.0, which allows for modification
and research use.
**A.3** **Fever**
To mimic the Adversarial HotpotQA setup, we run
the CoT baseline for 3,000 samples and randomly
sample 1,000 by balancing the number of right and
wrong predictions. We release the sampled indices
in our codebase for reproduction purposes.
Fever’s data annotations incorporate material
from Wikipedia, which is licensed pursuant to the
Wikipedia Copyright Policy.
**B** **Experiment Costs**
For the experiments, we use the API for
text-davinci-003. The costs for inferencing the
LLM is $0.02/1K tokens. We spent in total 273$.
**C** **Prompts Used**
**C.1** **HotpotQA**
**C.1.1** **Few-shot prompt**
**Q: This British racing driver came in third at the**
2014 Bahrain GP2 Series round and was born in
what year
**A: 1991**
**Q: What band did Antony King work with that**
formed in 1985 in Manchester?
**A: Simply Red**
**Q: How many inhabitants were in the city close to**
where Alberta Ferretti’s studios was located?
**A: 146,606**
**Q: TLC: Tables, Ladders & Chairs was a wrestling**
event featuring which American wrestler and rapper in the main event?
**A: John Felix Anthony Cena**
**Q: The person who received the Order of the Ele-**
phant on 31 January 1998 was born on what date?
**A: 27 April 1967**
**Q: III - Odyssey of the Mind is the sixth album by**
a German band formed in what city?
**A: Düsseldorf**
**Q: [Question]**
**A:**
**C.1.2** **CoT, CoT-SC prompt**
**Q: This British racing driver came in third at the**
2014 Bahrain GP2 Series round and was born in
what year
**A: First, at the 2014 Bahrain GP2 Series round,**
DAMS driver Jolyon Palmer came in third. Second,
Jolyon Palmer (born 20 January 1991) is a British
racing driver. The answer is 1991.
**Q: What band did Antony King work with that**
formed in 1985 in Manchester?
**A: First, Antony King worked as house engineer**
for Simply Red. Second, Simply Red formed in
1985 in Manchester. The answer is Simply Red.
-----
**Q: How many inhabitants were in the city close to**
where Alberta Ferretti’s studios was located?
**A: First, Alberta Ferretti’s studio is near Rimini.**
Second, Rimini is a city of 146,606 inhabitants.
The answer is 146,606.
**Q: TLC: Tables, Ladders & Chairs was a wrestling**
event featuring which American wrestler and rapper in the main event?
**A: First, TLC: Tables, Ladders & Chairs was a**
wrestling event featuring John Cena in the main
event. Second, John Cena is an American wrestler
and rapper. The answer is John Felix Anthony
Cena.
**Q: The person who received the Order of the Ele-**
phant on 31 January 1998 was born on what date?
**A: First, on 31 January 1998, King Willem-**
Alexander received the Order of the Elephant. Second, Willem-Alexander was born on 27 April 1967.
The answer is 27 April 1967.
**Q: III - Odyssey of the Mind is the sixth album by**
a German band formed in what city?
**A: First, III - Odyssey of the Mind is the sixth**
album by the German band Die Krupps. Second,
Die Krupps is formed in Düsseldorf. The answer
is Düsseldorf.
**Q: [Question]**
**A:**
**C.1.3** **Verifying Question Generation prompt**
Write a question that asks about the answer to the
overall question.
**Overall Question: The Sentinelese language is the**
language of people of one of which Islands in the
Bay of Bengal?
**Answer: The language of the people of North Sen-**
tinel Island is Sentinelese.
**Question: What people´s language is Sentinelese?**
**Overall Question: Two positions were filled in**
The Voice of Ireland b which British-Irish girl
group based in London, England?
**Answer: Little Mix is based in London, England.**
**Question: What girl group is based in London,**
England?
**Overall Question: [original question]**
**Answer: [rationale sentence to edit]**
**Question:**
**C.1.4** **Verifying Answer Generation**
**(Rationale Editing) prompt**
Barnes House (born 20 January 1969) is a British
racing driver, currently driving for Renault Sport
F1 Team in the Formula One World Championship.
Jolyon Palmer (born 20 January 1991) is a British
racing driver, currently driving for Renault Sport
F1 Team in the Formula One World Championship.
Ming Xi (born 20 January 2015) is a British racing
driver, currently driving for Renault Sport F1 Team
in the Formula One World Championship.
The 2014 Bahrain GP2 Series round was a pair
of motor races held on 6 and 7 April 2014 at the
Bahrain International Circuit in Sakhir, Bahrain
as part of the GP2 Series. Julián Leal finished
second for the Carlin team and DAMS driver
Jolyon Palmer came in third.
**Q: This British racing driver came in third at the**
2014 Bahrain GP2 Series round and was born in
what year
**A: This British racing driver came in third at the**
2014 Bahrain GP2 Series round and was born in
1991..
Antony King (born 1974) is a British live audio engineer for Depeche Mode and Nine Inch Nails. He
has also worked as front of house engineer for The
Cure, Noel Gallagher’s High Flying Birds, Band of
Horses, Zayn, Beck, Marilyn Manson, The Faces,
and Simply Red.
Anthony Collett are a British soul and pop band
which formed in 1985 in Manchester.
Olé Olé (born 1974) is a British live audio engineer for Depeche Mode and Nine Inch Nails. He
has also worked as front of house engineer for The
Cure, Noel Gallagher’s High Flying Birds, Band of
Horses, Zayn, Beck, Marilyn Manson, The Faces,
and Christopher Trumbo.
Simply Red are a British soul and pop band which
formed in 1985 in Manchester.
**Q: What band did Antony King work with that**
formed in 1985 in Manchester?
**A: Antony King work with the band Simply Red,**
which was formed in 1985 in Manchester..
Alberta Ferretti (Cattolica, 1950) is an Italian fashion designer and dressmaker. Her showroom is in
Milan, Italy but her studio is in the village of Cattolica, near Rimini, Italy.
Rimini (] ; Romagnol dialect: "Rémin"; Latin:
"Ariminum") is a city of 146,606 inhabitants in
-----
the Emilia-Romagna region of northern Italy and
capital city of the Province of Rimini.
Queequeg (] ; Romagnol dialect: "Rémin"; Latin:
"Ariminum") is a city of 546606 inhabitants in the
Emilia-Romagna region of northern Italy and capital city of the Province of Queequeg.
Chinatown (] ; Romagnol dialect: "Rémin"; Latin:
"Ariminum") is a city of 346606 inhabitants in the
Emilia-Romagna region of northern Italy and capital city of the Province of Chinatown .
**Q: How many inhabitants were in the city close to**
where Alberta Ferretti’s studios was located?
**A: 146,606 inhabitants were in the city close to**
where Alberta Ferretti’s studios was located..
[contexts]
**Q: [verifying question]**
**A:**
**C.2** **2WikiMultihop**
**C.2.1** **Few-shot prompt**
**Q: Which film was released earlier, Kistimaat or**
I’M Taraneh, 15?
**A: I’M Taraneh, 15**
**Q: What is the date of death of the composer of**
film Baalaraajana Kathe?
**A: 27 May 1980**
**Q: Who is the spouse of the director of film Alive**
(1993 Film)?
**A: Kathleen Kennedy**
**Q: Who lived longer, Edward Frederick Sanderson**
or Forrest Towns?
**A: Edward Frederick Sanderson**
**Q: Which country the director of film Battle Circus**
(Film) is from?
**A: American**
**Q: Which country the director of film In The Fog**
is from?
**A: Ukrainian**
**Q: [question]**
**A:**
**C.2.2** **CoT, CoT-SC prompt**
**Question: Which film was released earlier, Kisti-**
maat or I’M Taraneh, 15?
**A: First, film Kistimaat was released on 6 October**
2014. Second, I’M Taraneh, 15 was released on 13
November 2002. The answer is I’M Taraneh, 15.
**Question: What is the date of death of the com-**
poser of film Baalaraajana Kathe?
**A: First, the composer of film Baalaraajana Kathe**
is S. Hanumantha Rao. Second, S. Hanumantha
Rao died on 27 May 1980. The answer is 27 May
1980.
**Question: Who is the spouse of the director of film**
Alive (1993 Film)?
**A: First, the director of film Alive is Frank Mar-**
shall. Second, Frank Marshall’s spouse is Kathleen
Kennedy. The answer is Kathleen Kennedy.
**Question: Who lived longer, Edward Frederick**
Sanderson or Forrest Towns?
**A: First, Edward Frederick Sanderson died at age**
81. Second, Forrest Towns died at age 77. The
answer is Edward Frederick Sanderson.
**Question: Which country the director of film Bat-**
tle Circus (Film) is from?
**A: First, the director of film Battle Circus (Film)**
is Richard Brooks. Second, Richard Brooks was
American. The answer is American.
**Question: Which country the director of film In**
The Fog is from?
**A: First, the director of film In The Fog is Sergei**
Loznitsa. Second, Sergei Loznitsa is Ukrainian.
The answer is Ukrainian.
**Question: [question]**
**A:**
**C.2.3** **Verifying Question Generation prompt**
Write a question that validates the reason for an
overall question.
**Overall Question: What is the date of death of the**
composer of film Baalaraajana Kathe?
**Reason: First, the composer of film Baalaraajana**
Kathe is S. Hanumantha Rao.
**Question: Who is the composer of film Baalaraa-**
jana Kathe?
**Overall Question: Who lived longer, Edward**
Frederick Sanderson or Forrest Towns?
**Reason: First, Edward Frederick Sanderson died**
at age 81.
**Question: How long did Edward Frederick Sander-**
son live for?
**Overall Question: [original question]**
**Reason: [rationale sentence]**
**Question:**
-----
**C.2.4** **Verifying Answer Generation**
**(Rationale Editing) prompt**
The film was released in 1984 by Essex Films.
Kistimaat is a 2014 Bangladeshi action film directed by Ashiqur Rahman and produced by Tiger
Media Limited and The Abhi Pictures. I’m
Taraneh, 15 is a 2002 Iranian film directed by Rasul
Sadrameli. The film was released on May 4, 2001.
**Question: When was the film Kistimaat released?**
**Answer: The film Kistimaat was released in 2014.**
Dwaram Venkataswami Naidu and also a lyricist.
The film has musical score by S. Hanumantha Rao.
Rao died 27 May 1980. Rao married Raja Mani
with whom he had three daughters and one son.
**Question: Who is the composer of film Baalaraa-**
jana Kathe?
**Answer: The composer of film Baalaraajana Kathe**
is S. Hanumantha Rao.
Adib Kheir was a leading Syrian nationalist of the
1920s. Filmed on location in the Purcell Mountains
in British Columbia, the film was directed by Frank
Marshall, written by John Patrick Shanley, and narrated by John Malkovich. Frank Wilton Marshall(
born September 13, 1946) is an American film producer and director, often working in collaboration
with his wife, Kathleen Kennedy. He received the
Irving G. Thalberg award from the Academy of
Motion Picture Arts and Sciences in 2018.
**Question: Who is the director of film Alive (1993**
Film)?
**Answer: The director of film Alive is Frank Mar-**
shall.
[context]
**Question: [verifying question]**
**Answer:**
**C.3** **Fever**
**C.3.1** **Few-shot prompt**
Determine if there is Observation that SUPPORTS
or REFUTES a Claim, or if there is NOT ENOUGH
INFO.
**Claim: Reg Watson is a current television pro-**
ducer.
**A: REFUTES**
**Claim: The Gadsden flag was named by Christo-**
pher Gadsden.
**A: NOT ENOUGH INFO**
**Claim: Black Mirror is about society.**
**A: SUPPORTS**
**Claim: [question]**
**A:**
**C.3.2** **CoT, CoT-SC prompt**
Determine if there is Observation that SUPPORTS
or REFUTES a Claim, or if there is NOT ENOUGH
INFO.
**Claim: Reg Watson is a current television pro-**
ducer.
**A: First, Reginald James Watson AM was an Aus-**
tralian television producer and screenwriter. Second, Reginald James Watson AM died on 8 October
2019. The answer is REFUTES.
**Claim: The Gadsden flag was named by Christo-**
pher Gadsden.
**A: First, The Gadsden flag is named after politician**
Christopher Gadsden. Second, there is no information on who named the Gadsden flag. The answer
is NOT ENOUGH INFO.
**Claim: Black Mirror is about society.**
**A: First, Black Mirror is a British anthology tele-**
vision series. Second, The series uses technology
to comment on contemporary social issues. The
answer is SUPPORTS.
**Claim: [question]**
**A:**
**C.3.3** **Verifying Question Generation prompt**
Write a question that validates the reason for a
claim.
**Claim: Reg Watson is a current television pro-**
ducer.
**Reason: Reginald James Watson AM was an Aus-**
tralian television producer and screenwriter.
**Question: What is Reg Watson’s occupation?**
**Claim: The Gadsden flag was named by Christo-**
pher Gadsden.
**Reason: there is no information on who named the**
Gadsden flag.
**Question: Who named the Gadsden flag?**
**Claim: [question]**
**Reason: [rationale sentence]**
**Question:**
-----
chain 1 and 2 are CoTs generated by the CoT-SC
baseline and the Verify-and-Edit shown in random
order. On average, each volunteer took 1.25 hours
to finish 50 samples.
**E** **Qualitative Examples**
In Table 5, 3 examples from the Adversarial HotpotQA datasets are shown in detail.
From the first sample, the LLM incorrectly states
that the song is “based on .. Spider-Man.” However, in the Google retrieved facts, it clearly states
that it is based on “Ghost Rider”. Therefore, the
retrieved fact is able to help correct the detail in the
rationale. Moreover, although the original rationale
also covered the brand name “Marvel Comics”, the
generation goes on with the hero name as an answer, instead of the “brand” being asked. Feeding
in again also corrects that logical mistake.
In the second example, the LLM makes up a
plausible-sounding fact that “Tony Robinson has
written seven children’s books”. There is also no indicator on the LLM’s confidence level of this claim.
Thus, if a user is unfamiliar with this knowledge,
it could easily be mistaken as a true fact, which is
highly risky. By introducing Google as an assistive
tool, we retrieve the sentence “he has published 16
children’s books.” With this newly retrieved fact
in mind, the LLM goes on generating the correct
answer.
The third example is an interesting one. The
original CoT already makes mistakes in the first
sentence and goes on making continued mistakes
in the second sentence as well. This is a type of
common mistake in the dataset as well. On correcting them, the Verify-and-Edit framework is able
to correct the first claim with the show “Chelsea
Does”. The second claim, however, is verified but
irrelevant to the original question anymore. In this
case, by feeding in both rationale sentences, the
LLM is able to select the relevant fact as an answer, while disregarding the rest. This example
shows that the CoT setup used by Verify-and-Edit
is important as it allows for models to reason and
abstract for a second time, instead of plainly replacing and correcting.
Figure 4: Example Screenshot of Human Evaluation
User Interface.
**C.3.4** **Verifying Answer Generation**
**(Rationale Editing) prompt**
Reginald James Watson AM (27 August 1926 – 8
October 2019) was an Australian television producer and screenwriter. He was executive producer
on Crossroads and created Australian media exports serials such as Prisoner, Neighbours, The
Young Doctors and Sons and Daughters.
**Question: What is Reg Watson’s occupation?**
**Answer: Reg Watson was an Australian television**
producer and screenwriter
The flag is named after politician Christopher Gadsden (1724–1805), who designed it in 1775 during
the American Revolution.
**Question: Who named the Gadsden flag?**
**Answer: The Gadsden flag is named after Christo-**
pher Gadsden, but there is no information on who
named it.
[context]
**Question: [verifying question]**
**Answer:**
**D** **Human Study**
To conduct the human study, we show the instructions in Fig. 4 to two human volunteers. The volunteers are NLP Ph.D. students who are proficient in
English. The volunteers understand the use for the
data collection and are in consensus. The reasoning
-----
Original Question Suicide’s 1977 released album features a song based on what brand’s comic character?
True Answer Marvel Comics
CoT-SC First, Suicide released their 1977 album titled Suicide: Alan Vega & Martin Rev.
Second, the album features a song based on the Marvel Comics character Spider-Man.
The answer is Spider-Man.
Verifying Questions What album was released by Suicide in 1977?
What comic character is the song on Suicide’s 1977 album based on?
Retrieved Facts Suicide is the debut studio album from the American rock band Suicide. It was released in 1977
on Red Star Records and produced by Craig Leon and Marty Thau.
"Ghost Rider" is a song by the protopunk band Suicide appearing on their debut album. The song
is based on the Marvel Comics character. On Suicide, the noir-movie vibes of engine-revving tunes
“Ghost Rider” (named after Vega’s favorite comic book) and “Rocket U.S.A.” feel scary.
Verify-and-edit First, Suicide released their debut studio album, titled Suicide, in 1977.
Second, The song on Suicide’s 1977 album is based on the Marvel Comics character Ghost Rider.
The answer is Marvel Comics.
Original Question How many children’s books has the writer of the sitcom Maid Marian and her Merry Men written?
True Answer 16
CoT-SC First, the writer of the sitcom Maid Marian and her Merry Men is Tony Robinson.
Second, Tony Robinson has written seven children’s books.
The answer is seven.
Verifying Questions Who is the writer of the sitcom Maid Marian and her Merry Men?
How many children’s books has Tony Robinson written?
Retrieved Facts Maid Marian and her Merry Men is a British children’s television series created and written by Tony
Robinson and directed by David Bell. It began in 1989 on BBC1 and ran for four series, with
the last episode shown in 1994.
He has published 16 children’s books.
Verify-and-Edit First, The writer of the sitcom Maid Marian and her Merry Men is Tony Robinson.
Second, Tony Robinson has written 16 children’s books.
The answer is 16.
Original Question What show other than Hello Ross did Chelsea Handler appear on in January of 2016
True Answer Chelsea Does
CoT-SC First, Chelsea Handler appeared on the show The Tonight Show Starring Jimmy Fallon in January
of 2016.
Second, The Tonight Show Starring Jimmy Fallon is a talk show hosted by Jimmy Fallon.
The answer is The Tonight Show Starring Jimmy Fallon.
Verifying Questions On which show did Chelsea Handler appear in January of 2016?
What talk show is hosted by Jimmy Fallon?
Retrieved Facts Chelsea Does is an American streaming television documentary series first released on Netflix on
January 23, 2016. ... The episodes follow comedian Chelsea Handler as she explores different ...
The Tonight Show Starring Jimmy Fallon is an American late-night talk show hosted by Jimmy
Fallon that airs on NBC.
Verify-and-Edit First, Chelsea Handler appeared on the show Chelsea Does in January of 2016.
Second, The Tonight Show Starring Jimmy Fallon is a talk show hosted by Jimmy Fallon.
The answer is Chelsea Does.
Table 5: Examples from AdvHotpotQA, facts are retrieved with Google.
-----
| [
"Ruochen, Zhao",
"Xingxuan, Li",
"Shafiq, Joty",
"Chengwei, Qin",
"Lidong, Bing"
] | 2023-05-04T00:00:00 | ACL 2023 Long Papers | true | 106 | 0 | null | http://arxiv.org/abs/2305.03268 | https://arxiv.org/abs/2305.03268 | https://www.semanticscholar.org/paper/629c441076da3f8185b1cf85e8036064b714e249 |
Faithful Reasoning Using Large Language Models | Although contemporary large language models (LMs) demonstrate impressive question-answering capabilities, their answers are typically the product of a single call to the model. This entails an unwelcome degree of opacity and compromises performance, especially on problems that are inherently multi-step. To address these limitations, we show how LMs can be made to perform faithful multi-step reasoning via a process whose causal structure mirrors the underlying logical structure of the problem. Our approach works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs, one for selection and one for inference, to produce a valid reasoning trace. Our method carries out a beam search through the space of reasoning traces to improve reasoning quality. We demonstrate the effectiveness of our model on multi-step logical deduction and scientific question-answering, showing that it outperforms baselines on final answer accuracy, and generates humanly interpretable reasoning traces whose validity can be checked by the user. | The method carries out a beam search through the space of reasoning traces to improve reasoning quality, and generates humanly interpretable reasoning traces whose validity can be checked by the user. | _August 2022_
# Faithful Reasoning Using Large Language Models
**Antonia Creswell[1]** **and Murray Shanahan[1]**
1DeepMind
**Although contemporary large language models (LMs) demonstrate impressive question-answering capa-**
**bilities, their answers are typically the product of a single call to the model. This entails an unwelcome**
**degree of opacity and compromises performance, especially on problems that are inherently multi-step.**
**To address these limitations, we show how LMs can be made to perform faithful multi-step reasoning via**
**a process whose causal structure mirrors the underlying logical structure of the problem. Our approach**
**works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs,**
**one for selection and one for inference, to produce a valid reasoning trace. Our method carries out a**
**beam search through the space of reasoning traces to improve reasoning quality. We demonstrate the**
**effectiveness of our model on multi-step logical deduction and scientific question-answering, showing**
**that it outperforms baselines on final answer accuracy, and generates humanly interpretable reasoning**
**traces whose validity can be checked by the user.**
_Keywords: Reasoning, Causality, Large Lanauge Models_
### 1. Introduction
Among the many tasks that contemporary large
language models (LMs) can perform (Alayrac
et al., 2022; Nakano et al., 2021; Zeng et al.,
2022), question-answering is potentially one of
the most useful (Rae et al., 2021). However, the
proficiency of these models typically goes hand-inhand with an unacceptable level of opacity. The
assumptions behind an answer and the intermediate steps of reasoning that justify it – insofar as
these exist – are hidden from the user. This prevents the user from verifying an answer, makes
it difficult to debug a model when it gets an answer wrong, and undermines overall trust in the
model’s responses.
By contrast, a system that reasons faithfully is
one whose underlying computations mirror standard definitions of logical validity. Such a system
can supply the user with an interpretable reasoning trace, which allows them to understand how
the model reached its final answer. Exposing a
model’s assumptions and reasoning steps (Figure
1) in this way enables the user to spot mistakes
the model may have made, and empowers them
to decide for themselves whether the model’s conclusions are justified.
This provision is especially important given that
LMs are trained on human data collected from
the internet, which makes them vulnerable to
picking up and perpetuating bias (Bender et al.,
2021; Betz et al., 2021; Weidinger et al., 2021).
Presented with a context of relevant knowledge
and a question, an LM may base its answer on
information encoded in its weights rather than
prioritising the information present in the context
(Dasgupta et al., 2022). Without an interpretable
reasoning trace, we cannot know how a model
has reached its answer. Did the model rely on
its priors, which may be biased, or did it obtain
an answer by reasoning correctly with relevant
knowledge?
In this paper we develop a forward-chaining
model that reasons faithfully in the sense defined
above (and more formally in Section 2). The
backbone of our system, denoted SI, comprises
two fine-tuned LMs, one for selection and one
for inference. The interleaved operation of these
two components has a causal structure (Figure
2) that mirrors the definition of logical validity.
This guarantees that the model’s answers follow
logically from the given context under certain
assumptions.
_Corresponding author(s): [email protected]_
-----
Faithful Reasoning Using Large Language Models
Figure 1 **Example input and output from our Faithful Reasoning model.**
|
Two further fine-tuned language models complete our architecture. First, the halter is used to
terminate the reasoning process and return an answer in the required format. If the trace does not
terminate within a specified number of steps then
the answer is considered to be ‘Unknown’, allowing us to filter model answers and increase answer
precision. Second, a learned value function, which
assesses the quality of the current reasoning step
is deployed to guide a beam search over reasoning traces to enhance their quality and further
boost overall performance.
We evaluate our model on two datasets,
Proof Writer (Tafjord et al., 2021) and a questionanswering version of EntailmentBank (Dalvi et al.,
2021). We shown that our model outperforms
baseline models on final answer accuracy and
that our proposed halter and search methods also
lead to compounding boosts in performance (Tables 2 and 1). We show that in most cases SI
produces higher quality reasoning traces than
baseline models (Figures 9 and 8). It is less likely
to “hallucinate” facts (Table 5), is better able to
utilise the context (Table 3) and is more likely to
use its trace to answer questions (Table 4). Finally, our model can accurately predict when it
knows the answer (Figure 6).
### 2. Defining a Valid Reasoning Trace
In this Section, we formally define the concept
of valid forward reasoning in the context of our
framework, adhering closely to textbook definitions from formal logic (e.g. Hamilton (1988)).
**Definition 1. A reasoning step is a pair** _𝑠, 𝑖_ _,_
⟨ ⟩
_where 𝑠_ _(the selection) is a set of statements and 𝑖_
_(the inference) is a statement._
**Definition 2. A reasoning trace is a pair** _,_
⟨C T⟩
_where_ _(the context) is a set of statements and_
C T
_is a sequence of reasoning steps._
**Definition 3. A reasoning trace** _,_ _, where_ =
⟨C T⟩ T
⟨𝑠0, 𝑖0⟩, ⟨𝑠1, 𝑖1⟩, . . . ⟨𝑠𝑛, 𝑖𝑛⟩, is connected iff for every
_reasoning step_ _𝑠𝑘, 𝑖𝑘_ _, for every statement 𝑞_ _in the_
⟨ ⟩
_set 𝑠𝑘_ _either 𝑞_ _or 𝑞_ = 𝑖 _𝑗_ _for some 𝑗< 𝑘._
∈C
**Definition 4. A reasoning trace** _,_ _, where_ =
⟨C T⟩ T
_𝑟0, 𝑟1, . . ., 𝑟𝑛, is valid if it is connected and each_
_reasoning step 𝑟𝑘_ = _𝑠, 𝑖_ _is correct (in the sense_
⟨ ⟩
_that 𝑖_ _logically follows from 𝑠)._
In the next section, we introduce the components of our architecture and show how it satisfies
the requirements of faithful reasoning, under certain assumptions.
### 3. Components of a Faithful Reason- ing Model
We begin by introducing Selection-Inference (SI),
the step-wise forward reasoning backbone whose
causal structure (see Figure 2) satisfies the requirements for producing valid reasoning traces.
We then describe a component for halting, which
-----
Faithful Reasoning Using Large Language Models
Figure 2 **Comparing dependencies between inputs and outputs for SI and related models.**
|
Inputs - blue circles, LM outputs - purple circles. Order of the letters indicates the order in which the
values are predicted. Arrows indicate the dependencies between inputs – the context, C, and question,
Q – intermediate outputs – the selection, S, and inference, I, and the final answer, A. SI is the only
model where the answer does not have a direct dependency on the question. Note, EntailmentWriter
takes the hypothesis and context as input, where the hypothesis depends on the question and answer.
**3.1. Selection-Inference: Valid Forward Rea-**
**soning**
Given a question and a context consisting of a
number of statements sufficient to answer the
question, we would like our model to produce
a sequence of deductive reasoning steps that answers the question (Figure 1). To achieve this,
the SI backbone splits each reasoning step in two
(Defn. 1). First, given the question, the Selection
model chooses a set of statements from the context (the selection). Second, the Inference model
predicts an entailment by computing a statement
that follows from the selection (the inference).
The inference is then added to the context, and
that concludes a single step of reasoning. Multiple iterations of SI are carried out to produce a
reasoning trace (Defn. 2). The final inference is
used to answer the question.
**_3.1.1. Selection_**
To ensure that the reasoning trace is connected
(Defn. 3), the Selection model is obliged to select
elements only from the context, and is unable to
‘hallucinate’ facts. Similar to Tafjord et al. (2021)
and Dalvi et al. (2021), we achieve this by training an LM to refer to statements in the context by
their sentence labels, for example, ‘sent 3’. These
are used to compose sentences of the form “X. We
know that Y and ... and Z.”, where X, Y, and Z
are sentence labels (Figure 4). These sentences
are passed directly to the inference model.
Figure 3 **Faithful Reasoning architecture. See**
|
Section 3 for details of each component.
looks at the output of a Selection-Inference step
and determines if there is sufficient information to
answer the question. When there is sufficient information, the model predicts the answer in such
a way that it cannot rely on knowledge embedded
in its weights, but must depend on the reasoning trace. Finally, we introduce a value function,
which is used to perform a step-level beam search
on the reasoning traces to find the best candidate
for answering the question. A schematic of our
model is shown in Figure 3. We now describe
each of these components in more detail. Note
that (in contrast to Dalvi et al. (2021)) each component in our model is trained in isolation, and
at no point do we optimise our pipeline for final
answer accuracy.
-----
Faithful Reasoning Using Large Language Models
Figure 4 **The Selection model. The role of the Selection model is to take the context and question**
|
and select a number of statements from the context to feed to the inference model. It is crucial that the
Selection model is not able to ‘hallucinate‘ facts. To achieve this we fine-tune a LM to predict sentence
labels, as show in (i). We then extract the only the sentence labels (ii) and compose a sentence (iii).
The statements from the context are then substituted back in (iv) resulting in a sentence composed
of statements from the context.
**_3.1.2. Inference_**
To encourage it to produce correct reasoning steps,
the Inference model is trained to predict an entailment given only the selection. By not allowing
the Inference model access to the question, we
prevent it from “cheating” (directly predicting the
answer from the question). While we cannot guarantee that every reasoning step is correct, in the
sense that the inference logically follows from the
selection (Defn. 4), our implementation makes
this more likely. Under the assumption that the
Inference model produces logically correct inferences, our model is guaranteed to produce valid
reasoning traces.
**3.2. Halting: When to Stop Reasoning?**
SI allows us to produce multi-step reasoning
traces, but it does not tell us when to stop the
reasoning process. Furthermore, while we may
want to use the final inference as the answer, it
may not be in a desirable format. For example, if
we are asked whether a statement ‘𝑃 _𝑋_ ’ is true or
( )
false, our final inference my be ‘𝑃 _𝑋_ ’ or ‘𝑛𝑜𝑡𝑃 _𝑋_ ’
( ) ( )
where 𝑃 is a predicate and 𝑋 a constant. Alternatively, we may want to answer multiple-choice
questions, which require one answer to be output
from a given set of possibilities.
In light of this, we deploy a two-stage Halter
(Figure 5), which uses an LM fine-tuned to predict
whether the question can be answered given the
current inference and the question. If the question cannot be answered, ‘Unknown’ is returned.
Otherwise, the Halter computes an answer, using the same LM, given the final inference and
minimal additional information. It is important
that the model is obliged to use the final inference, rather than depend on knowledge embed
-----
Faithful Reasoning Using Large Language Models
ded in its weights. For example, if we are answering a multiple-choice question, we may provide
the choices alongside the final inference, and use
the model to output the choice that most closely
matches that inference.
To determine if the system is ready to answer
the question, we provide the Halter with a sentence of the following form: ‘Question:{question}
_Given {inference}._ _Do you know the answer?’._
The output of the Halter LM is then either ‘Yes’
or ‘No’. If the output is ‘Yes’, the Halter LM is
then prompted again to answer the question with
a prompt of the following form: ‘Given {infer_ence}. Which of the following most closely matches:_
_{choices}? Answer:’. The output is one of the_
choices.
The Halter is applied after each step of SI to the
resulting inference. If the output of the Halter is
_‘Unknown’ then we proceed to another iteration_
of SI. If the output of the Halter is an answer,
then the process is terminated and the answer is
returned. If, after a pre-specified number of SI
iterations, the system has not halted, it returns
the answer ‘Unknown’ (Alg. 2). An additional
benefit of this is that it allows the model to say
that it cannot answer the question, rather than
making up an answer. We see a notable increase
in performance when we remove questions that
the model “thinks” it cannot answer (Figure 6).
This has significant implications for trust, safety
and the deployment of systems in the real world,
where precision (rather than recall) is a priority.
**3.3. Search: Finding the Best Trace**
The selection module is non-deterministic, in the
sense that it samples from multiple candidate
statements, and this induces a tree of potential
reasoning traces. We use beam search to explore
this tree in order to find high quality traces. To
enable this, we introduce a value function which
computes the value of adding a reasoning step
to the current trace. The value function is a language model, LMvalue, fine-tuned on examples of
_partial reasoning traces that culminate in a “cor-_
rect” or “incorrect” next step. A step is considered
“correct” if it is both logically valid and is on the
ground truth (shortest) reasoning path. A step is
otherwise considered “incorrect”.
Assuming that the sum of probabilities for “correct” and “incorrect” is close to one, we can
use log 𝑝value(“correct”|reasoning trace) to score
reasoning traces as they are being constructed,
where 𝑙𝑜𝑔𝑝value denotes the distribution over tokens learned by the language model, LMvalue.
We use the value function to guide a beam
search. Starting from a single empty trace we
use SI to produce 𝑃 candidate steps. We evaluate
each of these steps using the value function and
keep the top 𝐵<= 𝑃. We use SI again to generate
_𝑃_ candidate next steps for each of the 𝐵 traces,
resulting in 𝐵 _𝑃_ traces. These are evaluated
×
using the value function and the best 𝐵 traces are
kept. We continue this process until all the traces
have halted.
### 4. Experimental Setup and Evaluation of Components
In this section, we detail how each component
in our faithful reasoning model is trained, and
evaluate each component is isolation, where possible. We use two challenging reasoning datasets,
Proof Writer (Tafjord et al., 2021) and a modified
– more challenging – question-answering version
of EntailmentBank (Dalvi et al., 2021). We use a
7B parameter Chinchilla language model in each
of our components (Hoffmann et al., 2022).
**4.1. Datasets**
We fine-tune language models on examples of
ground truth reasoning traces. Two datasets
that provide reasoning traces are EntailmentBank (Dalvi et al., 2021) and Proof Writer (PW)
(Tafjord et al., 2021) (See Section B.1.1 for details). Proof Writer is a dataset of logical reasoning problems that ask a question whose answer is
True or False given a context and provide step-bystep reasoning traces. Problems require 1, 2, 3 or
5 steps of reasoning. EntailmentBank is derived
from the ARC (Clark et al., 2018) dataset of grade
_school science questions. Dalvi et al. (2021) pro-_
vide a dataset of context, hypothesis, entailment
⟨
tree triples. Dalvi et al. (2021) propose three
⟩
tasks, Task 1 where the context consists of facts
-----
Faithful Reasoning Using Large Language Models
Figure 5 **The two-stage Halter. First the model determines if a question is answerable given the**
|
current inference. If it is, the model combines minimal additional information (that could not be used
on its own to answer the question) and predicts the answer.
from WorldTreeV2 (Xie et al., 2020) needed to
answer the question, and Task 2 that additionally
includes distractors. EntailmentBank is not a QA
dataset, rather the task requires predicting the
entailment tree given the hypothesis and context.
We reformulate the EntailmentBank dataset (taking additional information from the original ARC
tasks) into an EntailmentBankQA (EB) dataset by
creating a dataset of context, question, choices,
answer and a proof derived from the entailment
tree. Our task is to predict the answer and proof
given the question, context and choices. This task
is more similar to the ARC task, however, here we
provide the context and predict a reasoning trace
that leads to the answer.
**4.2. Selection-Inference**
The selection model is trained on individual steps
of reasoning; given the context and any previous inferences the model is trained to predict the
sentence labels which refer to statements in the
context. The inference model is trained to predict
an entailment given a number of statements from
the context. Each reasoning step in each training example in the original dataset produces one
training data point for selection and one training
point for inference. Examples of input, target
⟨ ⟩
pairs used to train the LMs are shown in Figures
11 and 12. By training the model to select statements by labels we prevent the model from being
able to make up facts that are not present in the
context (Tables 6 and Table 5). Tables 7 and 8
show the inference accuracy on the test set.
**4.3. Halter**
We use the ground truth reasoning traces from
each dataset to produce training examples for
the Halter LM. The halter has two functions, (1)
learn when there is sufficient information to answer the question given the current inference (and
the question) and (2) answer the question given
the current inference and the choices. An example of how data is generated is shown in Figure
13. Each step of reasoning in each problem can
be converted into a data point for training. The
input has the form ‘Question: {question}. Given
_{inference}. Do you know the answer?’. For inter-_
mediate reasoning steps the target is ‘ No.’. For
final reasoning steps the target is ‘ Yes.’. From
these examples, the model can learn whether an
inference contains sufficient information to answer the question. We obtain an additional data
point for each problem, which is used to train
the model to answer a question. The inputs are
of the form ‘Given {inference}. Which of these
_most closely matches {choices}?’. The target is the_
ground truth answer given in the dataset.
We train two halters, one on the PW dataset
and the another on the EB dataset. For the PW
dataset we use a simplified single step prediction
because the question does not contain sufficient
information to solve the problem[1]. Specifically,
for PW we construct a training dataset where the
input has the form ‘Given {inference}. {question}’.
For each intermediate inference the target is ‘
1Note that for every "Show P(X)" there is a "Show not
P(X)" in the PW dataset, therefore if the model tried to
answer using only the question the model would achieve
only 50% accuracy.
-----
Faithful Reasoning Using Large Language Models
_Unknown’ while the final inference has the target_
_‘ True’ or ‘ False’._
To evaluate each halter independently of the
**Proof Only baseline or SI model, it is applied**
it to the ground truth proofs from the test split.
Tables 1 and 2 show results for PW and EB respectively. We see that the PW halter performs almost
perfectly while the EB halter achieves 88.8% accuracy.
The Halter endows our model with the desirable property of predicting when it does not know
_the answer. Figure 6 shows that our halter model_
can reliably predict when the answer is known.
When we filter out the problems where the model
does not know the answer, we obtain nearly perfect accuracy on the PW dataset for all depths
and 87.5% & 83.7% accuracy on Task 1 and 2
of EB dataset respectively. This has significant
implications for the deployment of such models
in scenarios where precision matters.
**4.4. Search**
The Value LM is trained to predict whether the
current step of a reasoning trace is ‘ correct’ or ‘ in_correct’. Again, we use the ground truth reasoning_
traces to construct examples of correct and incorrect partial reasoning traces. Constructing the correct examples is simple; we take a ground truth
trace with 𝑁 steps and construct the following
input for all 𝑛 1, 2, ..., 𝑁, ‘Context:{context}
∈[ ]
_Question:{question} Reason:{reason[1:n]} The_
_above reasoning steps are’. The target is ‘ correct’_
for all of these examples. To create the negative
examples we take each positive example and replace one of the correct supporting statements
with a different, randomly chosen statement from
the context and use our Inference LM to predict
the entailment. These training examples have the
target ‘ incorrect’. Examples for both Proof Writer
and EntailmentBank are shown in Figures 15 and
14.
### 5. Experiments and Results
We present results on both Proof Writer (PW)
(Tafjord et al., 2021) and EntailmentBankQA
(EB). We show that our model achieves 88.1%
and 78.1% final answer accuracy on PW and EB
respectively significantly outperforming baseline
models (Table 1 and 2). We also perform an ablation to demonstrate the key role of search in
our model (Table 1 and 2). Compared to baseline models, we show that our model often has
higher reasoning trace accuracy; this is most evident on the more challenging tasks, for example
PW depth-5 and EB Task 2 (Figure 8 and 9). Finally, we evaluate reasoning trace validity (Section 5.4) showing that baseline model are less
likely to leverage the context when answering
questions (Table 4 and 3) and are more likely
to “hallucinate” statements than SI (Table 6 and
5). All results in this paper were obtained using
7B parameter Chinchilla language model models
(Hoffmann et al., 2022).
**5.1. Baselines**
We consider three baseline models. A Proof +
**Answer baseline where the LM is trained to pre-**
dict the whole proof followed by the answer. A
**Proof Only baseline where the model is trained**
to predict only the proof. We use the Proof Only
baseline to ablate the SI model by pairing it with
our halter and search methods (see Tables 2 and
1). Finally, we include EntailmentWriter + An**swer. This is the entailment model of Dalvi et al.**
(2021), which is fine-tuned to predict an entailment tree alone, extended for question-answering
by training the model to predict the answer after
the final conclusion.
While EntailmentWriter + Answer and Proof
**+ Answer tend to be very good at predicting the**
intermediate inferences (See Figures 17a and 9b)
they tend to be less good at selecting the correct
statements (see Figures 8a and 9a) and overall
they perform less well on final answer accuracy
(see Table 1 and 2). This suggests that the models
are predicting the correct intermediate outputs,
without selecting the correct supporting statements and that the models are unable to use the
reasoning trace to answer the question. We also
see that baseline models, with the exception of EntailmentWriter, often make-up facts when reasoning (see Table 5), suggesting that their traces are
not connected and therefore are not valid (Defn.
3). Finally, baseline models leverage information
-----
Faithful Reasoning Using Large Language Models
(a) Proof Writer (b) EntailmentBankQA
Figure 6 **Our model accurately predicts when it ‘knows’ the answer. The ‘known only’ accuracy**
|
is computed after filtering out the answers that are ‘Unknown’. The ‘all’ accuracy is computed on all
problems. This property is beneficial for applications that require high precision.
Figure 7 **The value function. Given the context, question and a partial reasoning trace the model**
|
predicts the log probability that the current step is correct.
in the context less well than our model, see Table 3, and Table 4 suggests that on Proof Writer,
SI is the only model to consistently leverage the
reasoning trace to answer questions.
On inspection of EntailmentWriter (Dalvi et al.,
2021) outputs on the Proof Writer dataset we see
that the model often ‘cheats’, where the final inference helps to answer the question, but does not
follow from the previously selected statements.
See Section E.1. Our inference model does not
have access to the question and therefore does
not have the ability to cheat in this way.
**5.2. Final Answer Accuracy**
Tables 1 and 2 show final answer accuracy on the
Proof Writer (PW) and EntailmentBankQA (EB)
datasets respectively. Each table shows a comparison to baselines as well as an ablation; comparing both SI + Halter and the Proof Only +
**Halter baseline model with and without search.**
We see that SI outperforms EntailmentWriter
**+ Answer and Proof + Answer baseline mod-**
els on all PW and EB tasks. We also show that
search improves both baseline and SI performance, providing the most significant improvement for problems that require more reasoning
steps (PW, depth-5) and on problems with distractors in the context (EB, Task 2)
On the EB dataset we see that SI model + Hal**ter + Search yields similar performance to Proof**
**Only + Halter + Search while also providing**
faithful reasoning traces, which the Proof Only
models do not. In fact, Table 5 shows that the
**Proof Only models are prone to hallucinating**
facts in up to 40% of problems, while SI has made
up facts to only 1% of problems[2]. In the next section we look at reasoning trace accuracy.
2This is likely a failure of the Selection model to produce
an output with the correct syntax and could be filtered for.
-----
Faithful Reasoning Using Large Language Models
Experiment depth-1 depth-2 depth-3 depth-5 Overall
Entailment Writer (Dalvi et al., 2021) + Answer 50.4% 55.3% 52.2% 56.0% 53.5%
Proof + Answer 70.9% 65.0% 65.5% 60.4% 65.4%
Ground truth proof + Halter 99.9% 100% 100% 100% 100%
Proof Only + Halter 97.0% 93.1% 84.8% 44.6% 79.9%
Proof Only + Halter + EB Search 99.2% 96.2% 91.4% 54.9% 85.0%
Proof Only + Halter + PW Search 98.7% 96.0% 90.3% 56.8% 85.4%
SI model + Halter 98.3% 94.1% 82.4% 38.4% 78.3%
SI model + Halter + EB Search **99.4%** 98.0% 91.7% 61.7% **88.0%**
SI model + Halter + PW Search **99.4%** **98.1%** **92.0%** **63.4%** **88.1%**
Table 1 **Proof Writer ablation and comparison to baselines. Note that the baseline model does**
|
not produce faithful reasoning traces and has access to the question when answering. By contrast, in
SI the reasoning is faithful and the answer depends on the reasoning trace. We show results using
search with a value function trained on Proof Writer, PW Search, and with a value function trained on
EntailmentBank, EB Search.
Model Task 1 Task 2
Ground truth proof + Halter 88.8% 88.8%
Proof + Answer 64.6% 7.8%
EntailmentWriter* + Answer 50.0% 35.0%
Proof Only + Halter 78.5% 60.3%
Proof Only + Halter + Search 82.9% **76.2%**
SI model + Halter 72.4% 55.9%
SI model + Halter + Search **83.2%** 72.9%
Table 2 **EntailmentBankQA ablation and com-**
|
**parison to baselines.** Note that the baseline
models are not causal. We use 7B parameter LMs
for all models. *(Dalvi et al., 2021)
**5.3. Evaluating Reasoning Trace Accuracy**
Here we evaluate the reasoning trace accuracy
of each model on the PW and EB datasets, see
Figures 8, 17 and 9.
Evaluating reasoning trace accuracy on PW is
straightforward since we are able to use exact
string match to check whether two strings are
the same. We show the Jaccard similarity between predicted and ground truth leaves (i.e the
selection, Figure 8a), intermediate outputs (i.e.
the inferences, Figure 17a) and steps (i.e. selection and inference, Figure 8b). Results show that
SI had the highest Jaccard similarity for leaves
and full traces while Proof + Answer and En**tailment Writer + Answer have highest Jaccard**
similarity for intermediate outputs 17a. This sug
gests that these models are correctly predicting
the intermediate outputs, but not via the correct
reasoning. Note that this evaluation does not
consider the ordering of the proof steps which
may be inflating the perceived performance of
the baseline models since the baseline models are
able to cheat by predicting later reasoning steps
without computing earlier reasoning steps.
Overall, on the EB dataset, we see that SI
outperforms EntailmentWriter + Answer and
**Proof + Answer baselines on the more challeng-**
ing task, Task 2, which has distractors in the context. Figure 9 shows the Jaccard similarity between predicted and ground-truth leaves (i.e. the
selection) and the as well as the rouge scores between predicted and target intermediate outputs
on the EB dataset (additional results in Figure
18).
Note that high baseline performance on the
intermediate outputs (Figures 17 and 9) also suggests that the baseline models have been trained
well and means that their poor final answer accuracy cannot be attributed to poor training but
rather to the baseline models’ inability to use the
reasoning trace to answer the question.
**5.4. Trace Validity**
While requirements for Defn. 1-3 are satisfied by
the causal structure of our underling model (Figure 2). Requirements of correctness, for Defn. 4,
-----
Faithful Reasoning Using Large Language Models
(a) Jaccard similarity between the pre**dicted and ground-truth selection, re-**
**ferred to as leaves, used to reason. We see**
that the SI models perform better than Baseline models.
(b) Jaccard similarity between the pre**dicted and ground-truth reasoning steps.**
We see that the SI models perform better
than Baseline models on the more challenging Task 2.
Figure 8 **Evaluating proof steps for Proof Writer. We compute the above values only on problems**
|
where the model predicts that the answer is not "Unknown". Additional analysis in Figure 17.
are less strongly enforced. Never the less we show
bellow that, unlike baseline models, our model
is not able to cheat and therefore, the correctness
assumption is more likely to hold. First, however,
we demonstrate that while SI satisfies the requirement of being connected, other baseline models
fail to do so.
**_5.4.1. SI produces connected traces_**
For a reasoning trace to be connected it must not
hallucinate facts (Defn. 3). Tables 5 and 6 show
that while some baseline models fail to satisfy
this requirement and often hallucinate facts. For
example, the Proof + Answer baseline makes up
facts to solve 60% of EntailmentBankQA problems. On the other hand, SI makes up facts <
1% of the time, suggesting that >99% of traces
produced by SI are connected reasoning traces.
**_5.4.2. SI produces correct inferences_**
Following Defn. 4 for a trace to be valid it must
be connected (as above) and the steps must be
_correct; the inference must follow from the selec-_
tion. Tables 7 shows that when fed with a valid
selection the inference model reliably produces
the correct inference. It is harder to evaluate inference accuracy on EntailmentBankQA, however
Table 8 suggests that the inference model is accurate, with a RougeL score if 0.69.
**_5.4.3. SI uses its reasoning trace to answer the_**
**_question_**
Unlike baseline models, SI’s causal structure (see
Figure 2) forces it to use the reasoning trace to
answer the question. On the other hand, some
baseline models are able to ‘cheat’, answering
questions without reasoning properly over the
context. In other words, they depend more on
the knowledge embedded in their weights than
on the context provided and the reasoning trace
constructed. To investigate this, we evaluate performance of a model that is given an incorrect
context (a context different from the one needed
to solve the problem) and compare this to performance when the model is given the correct
context. If a model’s answer depends on careful
reasoning over the context, then it should be un_able to answer the question when provided with_
a random context.
On the EntailmentBankQA dataset, we use a
_random context sampled from another problem in_
the dataset. Table 3 shows that both the Proof +
**Answer and EntailmentWriter + Answer mod-**
els are still able to answer 30% and 23% of questions respectively, while SI + Halter is only able
to answer 9%. We also see that while almost half
of the final accuracy could be accounted for by
‘cheating’ or chance in the baseline models, less
that 12.5% of SI + Halter final accuracy could
be attributed to ‘cheating’ or chance.
10
-----
Faithful Reasoning Using Large Language Models
Depths 1-5
Model incomplete Δ
↓ ↑
context
SI + Halter **29.5%** **48.8%**
Proof + Answer 61.2% 4.3%
EW* + Answer 53.4% 0.1%
Table 4 **Proof Writer: Relative performance in-**
|
**crease, Δ, when using the complete context as**
**opposed to a incomplete (rules only) context.**
In the Proof Writer dataset information needed to
solve the problem may be leaked, in the baseline
models, by the rules themselves, without need
to do valid reasoning. We expect models that actively make use of the reasoning trace – rather
than ‘cheating’ using short-cuts – to have poor
performance when using the incomplete context
and to have a larger performance increases, Δ.
(*EW=EntailmentWriter (Dalvi et al., 2021)) SI
**+ Halter performance is less than 50% because**
in 69.7% of cases the model correctly predicts
that it cannot answer the question. The Δ results
suggests that SI + Halter is the only model that
reliably uses the reasoning trace to answer questions.
**6.1. Language Models Are Not Enough**
Recent work on applying language models to reasoning problems has largely concentrated on improving final answer accuracy rather than producing valid, human interpretable reasoning traces
that lead to the answers. For example, various
methods of prompting (Wei et al., 2022) and iterative fine-tuning (Zelikman et al., 2022) have
been used to encourage models to produce reasoning traces, and while this has led to improvement
of final answer accuracy these traces do not support our understanding of how the answer was
reached.
Kojima et al. (2022) split the reasoning in two
parts, first producing a reasoning trace and then
predicting the answer given the question and the
reason. Similar to our own model Zhou et al.
(2022) go one-step further and split each reasoning step in two: first asking an intermediate
question and second, answering that question.
While the authors suggest that their approach promotes compositional generalisation, unlike our
11
On the Proof Writer dataset, we use an in_complete context which consists only of the rules_
needed to solve the problems but not the facts,
making it impossible to construct the correct a
valid trace to solve the problem. Table 4 shows
model performance and the difference in performance, Δ, between models that use the complete
and incomplete context. The Δ results suggest
that SI + Halter is the only model that reliably
makes use of the reasoning trace, while the other
models rely on taking short cuts. For example,
**Proof + Answer may be taking short cuts, by**
looking for rules whose head predicate matches
the predicate in the question.
Task 1
Model random Δ
↓ ↑
context
SI + Halter **9.4%** **63.0%**
Proof + Answer 30.0% 34.6%
EW* + Answer 23.0% 27.0%
Table 3 **EntailmentBank:** **Relative perfor-**
|
**mance increase, Δ, when using the correct**
**context as opposed to a random one. We ex-**
pect models that actively make use of the context
to have poor performance when using the random
_context and a larger performance increases, Δ,_
when using the correct context compare to when
using the incorrect one. (*EW=EntailmentWriter
(Dalvi et al., 2021))
### 6. Related Work
While contemporary language models (LMs) are
good at many natural language tasks, they often
struggle with logical reasoning (Betz et al., 2021;
Creswell et al., 2022; Dasgupta et al., 2022; Rae
et al., 2021; Zhang et al., 2022). In this section
we draw attention to the exciting progress being
made towards reasoning using LMs. We highlight
several works that use language models to produce reasoning traces (Bostrom et al., 2022; Dalvi
et al., 2021; Kojima et al., 2022; Saha et al., 2020;
Tafjord et al., 2021; Wei et al., 2022; Zelikman
et al., 2022), and assess the reasoning validity of
each approach. Finally, we discuss two additional
areas of related work, the use of search and the
problem of when to stop reasoning.
-----
Faithful Reasoning Using Large Language Models
(a) Jaccard similarity between the ground
**truth leaves (e.g.** **selection) and those**
**used by the model. We see that SI outper-**
forms all of the baseline models on the more
challenging task, Task 2.
(b) Rouge score on the intermediate out**puts (or inferences) from each step (ig-**
**noring order). The baseline models that do**
not use search or the halter perform poorly
on Task 2.
Figure 9 **Evaluating reasoning steps for EntailmentBankQA. We compute the above values only**
|
on problems where the model predicts that the answer is not "Unknown". Note, none of these metrics
account for order of the reasoning steps.
Model Task 1 Task 2
Proof + Answer 10% 60%
EntailmentWriter + Answer 3% **0%**
Proof Only + Halter 15% 23%
Proof Only + Halter + Search 18% 40%
SI + Halter **1%** **0%**
SI + Halter + Search **1%** **0%**
Table 5 **EntailmentBankQA: Proportion of**
|
**problems on which models made-up facts that**
**were not in the context. We see that only SI**
and EntailmentWriter are able to avoid making
up facts.
approach the answering part of the model has full
access to the question and therefore the model
does not have to rely on the reasoning trace to answer the question. Moreover, unlike our work, the
models of Kojima et al. (2022); Wei et al. (2022);
Zelikman et al. (2022); Zhou et al. (2022) are
not restricted to reasoning over knowledge in the
context, but rather have the ability to hallucinate
possibly incorrect “knowledge” to support the answer leading to reasoning traces which are not
valid and cannot be trusted.
**6.2. Reasoning with Language Models**
The EntailmentBank dataset proposed by Dalvi
et al. (2021) has led to several works focused
on deriving reasoning traces to backup an answer or hypothesis (Bostrom et al., 2022; Dalvi
et al., 2022; Jhamtani and Clark, 2020; Ribeiro
et al., 2022). In our work, we focus on answering
questions and providing faithful reasoning traces,
rather than post-hoc explanations.
With a similar motivation to our own, Gupta
et al. (2022) and Nakano et al. (2021) show
promising results extracting evidence from a table or the web, respectively, and using this to
answer a question or solve a natural language
inference (NLI) problem. However, while Gupta
et al. (2022) and Nakano et al. (2021) show the
evidence used, they do not show how that information was combined to answer the question.
In our work, we produce a valid reasoning trace
that shows how multiple pieces of knowledge are
combined, over several iterations, to answer a
question.
Other works have focused on using reasoning to
show whether a statement is True or False (Betz
12
-----
Faithful Reasoning Using Large Language Models
Figure 10 **Comparison between Faithful Reasoning and other related works.**
|
et al., 2021; Tafjord et al., 2021). In Proof Writer,
Tafjord et al. (2021) train an LM to enumerate
implications (and corresponding reasoning steps)
given a hypothesis. A valid reasoning trace can
be constructed from these outputs. However, this
approach is limited to answering questions whose
answer is True, False or Unknown, and a reasoning trace must be constructed post-hoc.
Finally, while several works have informally
introduced the notion of faithful reasoning
(Bostrom et al., 2022; Gupta et al., 2022; Kumar and Talukdar, 2020), we have related this
more precisely to the definition of valid reasoning
in logic.
**6.3. Using Search for Reasoning Problems**
The notion of valid and invalid reasoning traces
have also been explored in the context of search.
Jhamtani and Clark (2020) develop datasets of
valid and invalid reasoning traces for grade school
science questions. These can be used to train models to detect valid reasoning traces. However, it
can be expensive to collect both valid and invalid
reasoning traces hence they collect only shallow
traces and their traces do not include intermediate inferences. Instead, we show how, given
a valid reasoning trace, we can generate many
invalid reasoning traces that can be used to finetune a value function and used to guide search.
Also, rather than learning a verifier that evaluates
a whole trace (Cobbe et al., 2021; Jhamtani and
Clark, 2020; Nye et al., 2022), we train a model
on partial reasoning traces, resulting in a model
more similar to a value function which assesses
the “value” of the current reasoning step, which
can be used for step-level search.
Bostrom et al. (2022) also use step-level search
to determine whether a hypothesis is entailed
by a set of statements. While we perform a
beam search, using a learned value function, to
find high-quality reasoning traces, Bostrom et al.
(2022) depend on exhaustive search to evaluate
all possible pairs of statements to use for selection.
Unlike Bostrom et al. (2022), our selection step is
not limited to selecting just two statements. This
allows us to more efficiently solve Proof Writer
tasks whose rules may be conditioned on multiple statements.
**6.4. The Problem of When to Stop Reasoning**
The problem of when to “stop” rarely features in
the deep learning literature because our models
typically answer problems in a single step. However, there are some exceptions. A simple example is text synthesis with large language models
where the model has to determine when to stop
producing tokens. This is often handled by a special ‘End Of Sequence’ token (Graves, 2013). Other
examples in the deep learning literature draw random variables from a parameterised distribution
13
-----
Faithful Reasoning Using Large Language Models
to predict when to stop reasoning (Banino et al.,
2021; Graves, 2016).
Related work by Kadavath et al. (2022) also
investigates when LMs “know” the answer. Their
model proposes a number of candidates, and predicts whether each candidate is the answer to
the question or not. Additionally, Bostrom et al.
(2022) tackle the less challenging problem of determining whether an inference matches a goal
state.
In summary, current work focuses on True/False/NLI tasks (Bostrom et al., 2022; Dalvi et al.,
2021; Tafjord et al., 2021) while our work tackles question-answering. This is not a trivial difference. In question-answering, there is less information with which to construct the reasoning
trace, since the “goal” is not known, and learning
when to terminate is also more challenging. Moreover, current work leverages reasoning traces to
boost performance – rather than to aid explainability or build trust – allowing for hallucination
of “knowledge” during reasoning (Kojima et al.,
2022; Wei et al., 2022). Furthermore, some existing approaches still allow the opportunity for
“cheating” Dalvi et al. (2021); Wei et al. (2022) by
providing the answering part of the model with
direct access to the question[3]. Finally, unlike most
other models (Dalvi et al., 2021; Wei et al., 2022),
the causal structure of our model (see Figure 10)
mirrors the requirements for validity, see Table
10. Other approaches that do satisfy validity have
their own limitations, as detailed above.
### 7. Limitations
The causal structure of our model mirrors the
requirements for producing a valid trace (Defn.
4). Requirements for a connected reasoning trace
(Defn. 3) are guaranteed by design (Section
3.1.1). Unavoidably, given our use of LMs, we
cannot guarantee that all reasoning steps will be
logically correct (Defn. 4). However, our architecture is designed to encourage logical correctness
by preventing models from ‘cheating’. For exam
3Specifically in these cases, the question itself contains sufficient information to supply the answer, unlike
in Proof Writer where the question is also not sufficient for
answering correctly.
ple, if the Selection model selects two unrelated
statements, then the Inference model may draw a
nonsensical conclusion. We also mitigate this by
introducing a learned value function (Section 3.3)
that filters out poor reasoning traces, although
this still cannot guarantee the correctness of every step. Examples of both correct and incorrect
reasoning traces, along with their value (according to the value function), are shown in Section
D.
In this paper we have focused on developing
models that answer questions using valid reasoning. For now we have assumed access to a context,
over which to reason. However, while there are
some settings where such a context may be provided, in most real world settings this is unlikely.
In this paper we have chosen to focus on the challenging problem of multi-step reasoning within a
_given context. However, in future work we hope_
to incorporate retrieval to populate the context,
and there is already interesting research in this
direction (Dalvi et al., 2021; Ribeiro et al., 2022;
Xie et al., 2020).
### 8. Discussion
Language models are being applied, with great
success, to many different problems Alayrac et al.
(2022); Nakano et al. (2021); Nye et al. (2022);
Rae et al. (2021); Zeng et al. (2022). However,
they largely remain black boxes; we do not know
how the models produce their responses. One
solution to this is to develop models that can produce faithful reasoning traces. We characterise
faithful reasoning in terms of logical validity (Section 2), and propose Selection-Inference, a model
that mirrors the structure of this definition, and
is guaranteed to produce valid reasoning traces
under the assumption that individual steps are
correct (Defn. 4). By fine-tuning an Inference
model specifically for this task and preventing it
from “cheating”, we increase the likelihood that
this assumption holds (Tables 7and 8). Finally, to
find high-quality reasoning traces, we introduce a
value function, and use it to guide a beam search
through the tree of potential traces induced by
the non-determinism of selection.
The resulting model achieves higher final an
14
-----
Faithful Reasoning Using Large Language Models
swer accuracy than baseline models on both
Proof Writer Tafjord et al. (2021) and EntailmentBankQA Dalvi et al. (2021) tasks. We see that
both Proof Only and SI benefit from search (Tables 1 and 2). When compared to baseline models,
our model is less likely to hallucinate facts while
reasoning (Table 6 and 5). We see that the SI +
**Halter model is far more likely than baseline mod-**
els to pay attention to the context (Table 3) and
to leverage the reasoning trace (Table 4). Overall, we see that SI + Halter (+ Search) models
achieves superior reasoning trace accuracy, especially on the more challenging tasks (Figures 8
and 9).
Our approach exemplifies a trend towards algo_rithmic prompting, a form of automated prompt_
engineering in which querying a language model
becomes a computational primitive. The responses of the language model can be manipulated to construct new prompts that are then
used to make further queries. Model queries
and prompt construction are composed into algorithms with the usual computational constructs:
sequence, choice, and iteration. Algorithmic
prompting can be used to elicit more sophisticated and nuanced behaviour from a language
model than would otherwise be possible. For example, as our work shows, this approach can be
used to develop models capable of faithful reasoning, without compromising performance. In
future work we aim to leverage advancements
in retrieval to populate the context, rather than
relying on the context being provided in the question.
### References
J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr,
Y. Hasson, K. Lenc, A. Mensch, K. Millican,
M. Reynolds, et al. Flamingo: a visual language
model for few-shot learning. arXiv preprint
_arXiv:2204.14198, 2022._
A. Banino, J. Balaguer, and C. Blundell. Pondernet: Learning to ponder. _arXiv preprint_
_arXiv:2107.05407, 2021._
E. M. Bender, T. Gebru, A. McMillan-Major, and
S. Shmitchell. On the dangers of stochastic
parrots: Can language models be too big?
In Proceedings of the 2021 ACM Conference
_on Fairness, Accountability, and Transparency,_
FAccT ’21, page 610–623, New York, NY,
USA, 2021. Association for Computing Machinery. ISBN 9781450383097. doi: 10.1145/
[3442188.3445922. URL https://doi.org/](https://doi.org/10.1145/3442188.3445922)
```
10.1145/3442188.3445922.
```
G. Betz, C. Voigt, and K. Richardson. Critical
thinking for language models. In Proceed_ings of the 14th International Conference on_
_Computational Semantics (IWCS), pages 63–_
75, Groningen, The Netherlands (online), June
2021. Association for Computational Linguistics. [URL https://aclanthology.org/](https://aclanthology.org/2021.iwcs-1.7)
```
2021.iwcs-1.7.
```
K. Bostrom, Z. Sprague, S. Chaudhuri, and G. Durrett. Natural language deduction through
search over statement compositions. _arXiv_
_preprint arXiv:2201.06028, 2022._
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think
you have solved question answering? try arc,
the ai2 reasoning challenge. _arXiv preprint_
_arXiv:1803.05457, 2018._
K. Cobbe, V. Kosaraju, M. Bavarian, J. Hilton,
R. Nakano, C. Hesse, and J. Schulman. Training
verifiers to solve math word problems. arXiv
_preprint arXiv:2110.14168, 2021._
A. Creswell, M. Shanahan, and I. Higgins.
Selection-inference: Exploiting large language
models for interpretable logical reasoning.
_arXiv preprint arXiv:2205.09712, 2022._
B. Dalvi, P. Jansen, O. Tafjord, Z. Xie, H. Smith,
L. Pipatanangkura, and P. Clark. Explaining
answers with entailment trees. In Proceedings
_of the 2021 Conference on Empirical Methods in_
_Natural Language Processing, pages 7358–7370,_
Online and Punta Cana, Dominican Republic,
Nov. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.
585. [URL https://aclanthology.org/](https://aclanthology.org/2021.emnlp-main.585)
```
2021.emnlp-main.585.
```
B. Dalvi, O. Tafjord, and P. Clark. Towards
teachable reasoning systems. arXiv preprint
_arXiv:2204.13074, 2022._
15
-----
Faithful Reasoning Using Large Language Models
I. Dasgupta, A. K. Lampinen, S. C. Chan,
A. Creswell, D. Kumaran, J. L. McClelland, and
F. Hill. Language models show human-like
content effects on reasoning. arXiv preprint
_arXiv:2207.07051, 2022._
A. Graves. Generating sequences with recurrent neural networks. _arXiv preprint_
_arXiv:1308.0850, 2013._
A. Graves. Adaptive computation time for
recurrent neural networks. _arXiv preprint_
_arXiv:1603.08983, 2016._
V. Gupta, S. Zhang, A. Vempala, Y. He, T. Choji,
and V. Srikumar. Right for the right reason:
Evidence extraction for trustworthy tabular
reasoning. In Proceedings of the 60th Annual
_Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), Dublin,_
Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.
[acl-long.231. URL https://aclanthology.](https://aclanthology.org/2022.acl-long.231)
```
org/2022.acl-long.231.
```
A. Hamilton. Logic for Mathematicians. Cambridge University Press, 1988.
J. Hoffmann, S. Borgeaud, A. Mensch,
E. Buchatskaya, T. Cai, E. Rutherford, D. d. L.
Casas, L. A. Hendricks, J. Welbl, A. Clark, et al.
Training compute-optimal large language
models. _arXiv preprint arXiv:2203.15556,_
2022.
H. Jhamtani and P. Clark. Learning to explain:
Datasets and models for identifying valid reasoning chains in multihop question-answering.
In Proceedings of the 2020 Conference on Em_pirical Methods in Natural Language Process-_
_ing (EMNLP), pages 137–150, Online, Nov._
2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.
10. [URL https://aclanthology.org/](https://aclanthology.org/2020.emnlp-main.10)
```
2020.emnlp-main.10.
```
S. Kadavath, T. Conerly, A. Askell, T. Henighan,
D. Drain, E. Perez, N. Schiefer, Z. H. Dodds,
N. DasSarma, E. Tran-Johnson, et al. Language
models (mostly) know what they know. arXiv
_preprint arXiv:2207.05221, 2022._
T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot
reasoners. arXiv preprint arXiv:2205.11916,
2022.
S. Kumar and P. Talukdar. Nile: Natural language
inference with faithful natural language explanations. In Proceedings of the 58th Annual
_Meeting of the Association for Computational_
_Linguistics, pages 8730–8742, 2020._
R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang,
C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. Webgpt: Browser-assisted questionanswering with human feedback. arXiv preprint
_arXiv:2112.09332, 2021._
M. Nye, A. J. Andreassen, G. Gur-Ari,
H. Michalewski, J. Austin, D. Bieber, D. Dohan,
A. Lewkowycz, M. Bosma, D. Luan, et al.
Show your work: Scratchpads for intermediate
computation with language models. In Deep
_Learning for Code Workshop, 2022._
J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson,
R. Ring, S. Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446,
2021.
D. Ribeiro, S. Wang, X. Ma, R. Dong, X. Wei,
H. Zhu, X. Chen, Z. Huang, P. Xu, A. Arnold,
et al. Entailment tree explanations via iterative
retrieval-generation reasoner. arXiv preprint
_arXiv:2205.09224, 2022._
S. Saha, S. Ghosh, S. Srivastava, and M. Bansal.
Prover: Proof generation for interpretable reasoning over rules. In EMNLP (1), 2020.
T. Sellam, D. Das, and A. Parikh. Bleurt: Learning robust metrics for text generation. In Pro_ceedings of the 58th Annual Meeting of the As-_
_sociation for Computational Linguistics, pages_
7881–7892, 2020.
O. Tafjord, B. Dalvi, and P. Clark. Proofwriter:
Generating implications, proofs, and abductive
statements over natural language. In Findings
_of the Association for Computational Linguistics:_
_ACL-IJCNLP 2021, pages 3621–3634, 2021._
16
-----
Faithful Reasoning Using Large Language Models
J. Wei, X. Wang, D. Schuurmans, M. Bosma,
E. Chi, Q. Le, and D. Zhou. Chain of thought
prompting elicits reasoning in large language
models, 2022.
L. Weidinger, J. Mellor, M. Rauh, C. Griffin,
J. Uesato, P. Huang, M. Cheng, M. Glaese,
B. Balle, A. Kasirzadeh, Z. Kenton, S. Brown,
W. Hawkins, T. Stepleton, C. Biles, A. Birhane,
J. Haas, L. Rimell, L. A. Hendricks, W. S. Isaac,
S. Legassick, G. Irving, and I. Gabriel. Ethical and social risks of harm from language
models. CoRR, abs/2112.04359, 2021. URL
```
https://arxiv.org/abs/2112.04359.
```
Z. Xie, S. Thiem, J. Martin, E. Wainwright, S. Marmorstein, and P. Jansen. Worldtree v2: A corpus of science-domain structured explanations
and inference patterns supporting multi-hop inference. In Proceedings of the 12th Language Re_sources and Evaluation Conference, pages 5456–_
5473, 2020.
E. Zelikman, Y. Wu, and N. D. Goodman. Star:
Bootstrapping reasoning with reasoning. arXiv
_preprint arXiv:2203.14465, 2022._
A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, J. Lee, V. Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint
_arXiv:2204.00598, 2022._
H. Zhang, L. H. Li, T. Meng, K.-W. Chang,
and G. V. d. Broeck. On the paradox of
learning to reason from data. arXiv preprint
_arXiv:2205.11502, 2022._
D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales,
X. Wang, D. Schuurmans, O. Bousquet, Q. Le,
and E. Chi. Least-to-most prompting enables
complex reasoning in large language models.
_arXiv preprint arXiv:2205.10625, 2022._
### Acknowledgements
The authors would like to thank Angeliki Lazaridou, Charles Blundell and Christopher Summerfield for feedback on our paper as well as Jonathan
Uesato, Jordi Grau-Moya, Ramana Kumar and
Irina Higgins for insightful discussions.
17
-----
Faithful Reasoning Using Large Language Models
### Supplementary Materials
A. Formal definition of the problem
Formally, suppose we have a problem of the form
_𝑞,_ 0, where 0 is a context, consisting of a set
( C ) C
of statements which are sufficient to predict the
the correct answer to the question, 𝑞.
The role of the Selection model, LMSelection is
to sample a selection, 𝑠𝑘, given the question, 𝑞
and the current context, _𝑘, see Equation 1._
C
_𝑠𝑘_ = LMSelection(𝑞, C𝑘) (1)
The role of the Inference model, LMInference is
to sample an inference, 𝑖𝑘, given the selection, 𝑠𝑘,
see Equation 2.
_𝑖𝑘_ = LMInference(𝑠𝑘) (2)
After each inference the context is updated as
follows, C𝑘−1 ∪ _𝑖𝑘−1, accumulating inferences from_
previous steps of reasoning.
The Halter LM, LMhalt, is applied to each inference, 𝑖𝑡, in two ways. First, to choose whether
the model should stop reasoning and the second
to answer the question when the model is ‘ready’.
This is illustrated in Alg. 1 and Alg. 2.
### B. Training Details
**B.1. Datasets**
**_B.1.1. Proof Writer_**
The Proof Writer dataset (Tafjord et al., 2021)
contains both a Closes and Open World Assumption version (CWA and OWA respectively). We
use a subset of the OWA dataset. This is because
for the CWA dataset everything that cannot be
proven is considered False. This means that problems whose answer is False, do not have reasoning
traces. On the other hand the OWA dataset contains proofs for problems whose answers are True
and False. Those without proofs are "Unknown".
Since we need proofs for training and evaluation,
we use the problems from the OWA dataset that
**Algorithm 1: The SI() function.**
**Input: LMSelection: Selection LM.**
**Input: LMInference: Inference LM.**
**Input: halt(): Halt function (Alg. 2).**
**Input: 𝑞: Question.**
**Input: C0: Initial context.**
**Input: 𝑐: Choices.**
**Input: 𝐾** [′]: Max. reasoning steps.
**1 𝑎** _‘Unknown’; Initial answer is unknown._
←
**2 𝑡** 0; Step counter.
←
**3 while 𝑎** **_is ‘Unknown’ do_**
**4** _𝑠𝑘_ ← LMSelection(𝑞, C𝑘);
**5** _𝑖𝑘_ ← LMInference(𝑖);
**6** _𝐶𝑘+1 ←C𝑘_ ∪ _𝑖𝑘;_
**7** _𝑎_ **halt** _𝑞, 𝑖𝑘, 𝑐_ ;
← ( )
**8** _𝑘_ _𝑘_ 1;
← +
**9** **if 𝑘> 𝐾** [′] **then**
**10** **return 𝑎**
**11 𝑎** ← LMhalt(𝑖, 𝑐);
**12 return 𝑎;**
have accompanying proofs (i.e. those whose answer is not Unknown).
**B.2. Selection-Inference**
Figures 11 and 12 show examples of training samples use to fine-tune the Selection and Inference
LMs.
**B.3. Halter**
Figure 13 shows how training data is constructed
for training the halting model.
**B.4. Search**
Figures 14 and 15 show examples of data points
used to train the value function. The targets for
the value function are either ‘ correct’ or ‘ incor_rect’._
18
-----
Faithful Reasoning Using Large Language Models
**Algorithm 2: The halt() function. Note**
that we use the same language model
LMhalt to both determine whether the
model is able to answer the question and
to answer the question. The key difference
is the prompt, shown in Section 3.2.
**Input: LMhalt: Halting LM.**
**Input: 𝑞: Question.**
**Input: 𝑖: Current inference.**
**Input: 𝑐: Choices.**
**1 𝑎** ← LMhalt(𝑞, 𝑖);
**2 if 𝑎** **_is ‘Unknown’ then_**
**3** **return 𝑎;**
**4 else**
**5** _𝑎_ ← LMhalt(𝑖, 𝑐);
**6** **return 𝑎;**
(a) Example of **input, target** **pairs used to train**
⟨ ⟩
**the Selection LLM.**
(b) Example of **input, target** **pairs used to train**
⟨ ⟩
**the Inference LLM.**
Figure 11 **Examples of Proof Writer training**
|
**pairs for Selection and Inference LLMs**
### C. Additional Results
**C.1. Halter**
Figure 16 shows qualitative results from the Halter model trained on EntailmentBankQA.
**C.2. Reasoning Trace Accuracy**
Figure 17 show additional evaluation of reasoning
traces on the Proof Writer dataset.
(a) Example of **input, target** **pairs used to train**
⟨ ⟩
**the Selection LLM.**
(b) Example of **input, target** **pairs used to train**
⟨ ⟩
**the Inference LLM.**
Figure 12 **Examples of EntailmentBankQA**
|
**training pairs for Selection and Inference**
**LLMs**
In Table 5 we saw that baseline models, with
the exception of EntailmentWriter, were more
likely to hallucinate facts while reasoning on the
EntailmentBank dataset than SI. Interestingly, Table 6 shows that Proof + Answer and Proof Only
baseline models have learned not to make up facts
while reasoning on the Proof Writer dataset. Note
that both EntailmentWriter and SI (ours) are de_signed not to make up facts._
Figure 18 shows the Rouge 1 scores between
the predicted and ground truth, ordered, intermediate inferences. We see that EntailmentWriter
is very good at single step inference on Task 1
problems, but performance quickly declines for
problems requiring multiple steps of reasoning.
In general SI models and models using halting
and search outperform the baseline models.
Tables 7 and 8 show the accuracy of the Inference LM when fed valid selections.
19
-----
Faithful Reasoning Using Large Language Models
Figure 13 **Example of how data is generated for the halter. Above are examples for four training**
|
data points. The first three, Do you know the answer? examples, are used to train the halting model
to learn when to halt. The final datum is used to train the halter to select an answer from the available
choices.
Model depth
depht-1 depth-2 depth-3 depth-5
Proof + Answer 0% 0% 1% 1%
EntailmentWriter + Answer 0% 0% 0% 0%
Proof Only + Halter 4% 1% 1% 0%
Proof Only + Halter + Search 0% 0% 0% 0%
SI + Halter 0% 0% 0% 0%
SI + Halter + Search 0% 0% 0% 0%
Table 6 **Proof Writer: Proportion of problems on which models made-up facts that were not in**
|
**the context. We see that the Proof + Answer and Proof Only baseline models have learned not to**
make up facts, while EntailmentWriter and SI are designed not to make up facts.
Task Inference Accuracy
depth-1 100%
depth-2 100%
depth-3 100%
depth-5 99.9%
Table 7 **Proof Writer inference accuracy. The**
|
inference model achieves almost perfect performance. We use exact string match in lower case
to decide if two statement are the same.
### D. Selection-Inference Model Outputs
**D.1. Proof Writer: SI + Halter + Search**
Below we show reasoning traces from the SI +
**Halter + Search model with the top 10 value**
function scores. No additional filtering is performed. For ease of reading we have combined
each selection and inference into a single line of
text rather than showing them separately. Examples that score high often involve repeated steps,
this is because the Proof Writer proof dataset often includes repeated steps. Invalid reasoning is
shown in red.
**_Example 1 (value: -6.7e-06)_**
**Context:**
If something likes the rabbit and it sees the bald
eagle then the bald eagle needs the rabbit.
If something is blue then it sees the rabbit.
20
-----
Faithful Reasoning Using Large Language Models
Metric Inference Accuracy
Rouge1 0.69
Rouge2 0.55
RougeL 0.69
BLEURT > 0.28 64%
Table 8 **EntailmentBankQA inference accu-**
|
**racy. We report Rouge scores as well as accu-**
racy using a BLEURT (Sellam et al., 2020) with
a threshold of 0.28 determine accuracy (as done
by Dalvi et al. (2021).
If something needs the rabbit and it likes the
bald eagle then the bald eagle is blue.
If something likes the mouse then it is round.
If something is rough and it does not see the
mouse then it does not need the bald eagle.
If something sees the mouse then it needs the
rabbit.
the rabbit likes the mouse.
the bald eagle likes the rabbit.
the rabbit likes the bald eagle.
the rabbit sees the bald eagle.
the rabbit is blue.
the rabbit needs the bald eagle.
the bald eagle needs the mouse.
the rabbit needs the mouse.
the mouse needs the bald eagle.
the rabbit is not green.
the bald eagle likes the mouse.
the rabbit sees the mouse.
the bald eagle is round.
the bald eagle does not see the mouse.
the rabbit is red.
the rabbit is rough.
**Question: Does it imply that the statement "The**
bald eagle does not see the rabbit" is True?
**Proof:**
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something needs the rabbit and it likes the
bald eagle then the bald eagle is blue. We know
that the rabbit needs the rabbit and the rabbit
likes the bald eagle. Therefore, the bald eagle is
blue.
If something is blue then it sees the rabbit. We
know that the bald eagle is blue. Therefore, the
bald eagle sees the rabbit.
**Answer: False**
**Target Proof:**
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something needs the rabbit and it likes the
bald eagle then the bald eagle is blue. We know
that the rabbit needs the rabbit and the rabbit
likes the bald eagle. Therefore, the bald eagle is
blue.
If something is blue then it sees the rabbit. We
know that the bald eagle is blue. Therefore, the
bald eagle sees the rabbit.
**Target Answer: False**
**_Example 2 (value: -8.6e-06)_**
**Context:**
If someone needs the bald eagle and the bald
eagle needs the squirrel then the bald eagle likes
the cow.
If someone needs the squirrel then they are
green.
If someone needs the cow and they need the dog
then the cow is green.
If someone likes the squirrel and they like the
dog then the dog is nice.
If someone sees the cow then they need the dog.
If someone is cold then they see the cow.
If someone needs the dog then the dog sees the
cow.
If someone likes the dog and the dog is blue then
the dog sees the bald eagle.
the dog is nice.
the cow is nice.
the dog likes the squirrel.
the squirrel likes the dog.
the dog likes the cow.
the squirrel sees the dog.
the bald eagle is cold.
the bald eagle sees the dog.
the squirrel sees the bald eagle.
21
-----
Faithful Reasoning Using Large Language Models
the dog sees the bald eagle.
the squirrel needs the cow.
the squirrel is cold.
**Question: Does it imply that the statement "The**
cow is not green" is True?
**Proof:**
If someone is cold then they see the cow. We
know that the squirrel is cold. Therefore, the
squirrel sees the cow.
If someone is cold then they see the cow. We
know that the bald eagle is cold. Therefore, the
bald eagle sees the cow.
If someone sees the cow then they need the
dog. We know that the squirrel sees the cow.
Therefore, the squirrel needs the dog.
If someone needs the cow and they need the dog
then the cow is green. We know that the squirrel
needs the cow and the squirrel needs the dog.
Therefore, the cow is green.
**Answer: False**
**Target Proof:**
If someone is cold then they see the cow. We
know that the squirrel is cold. Therefore, the
squirrel sees the cow.
If someone sees the cow then they need the
dog. We know that the squirrel sees the cow.
Therefore, the squirrel needs the dog.
If someone needs the cow and they need the dog
then the cow is green. We know that the squirrel
needs the cow and the squirrel needs the dog.
Therefore, the cow is green.
**Target Answer: False**
**_Example 3 (value: -9.5e-06)_**
**Context:**
If something is red and it likes the tiger then it
needs the dog.
If the cat likes the tiger and the cat needs the dog
then the dog is not cold.
If something eats the dog and the dog needs the
bald eagle then the dog is nice.
If something eats the cat and it needs the dog
then the cat is cold.
If the tiger needs the dog and the dog is green
then the dog needs the tiger.
If something eats the cat then the cat needs the
bald eagle.
All cold things are nice.
If the bald eagle needs the dog and the dog eats
the bald eagle then the dog is not cold.
If something needs the bald eagle then it eats the
cat.
the tiger needs the bald eagle.
the bald eagle is nice.
the bald eagle is round.
the cat likes the bald eagle.
the bald eagle does not eat the cat.
the cat likes the tiger.
the cat needs the tiger.
the dog likes the bald eagle.
the tiger eats the dog.
the cat eats the tiger.
the bald eagle is green.
the bald eagle likes the tiger.
the cat is red.
the dog is not round.
**Question: Does it imply that the statement "The**
cat is nice" is True?
**Proof:**
If something is red and it likes the tiger then it
needs the dog. We know that the cat is red and
the cat likes the tiger. Therefore, the cat needs
the dog.
If something needs the bald eagle then it eats the
cat. We know that the tiger needs the bald eagle.
Therefore, the tiger eats the cat.
If something eats the cat then the cat needs the
bald eagle. We know that the tiger eats the cat.
Therefore, the cat needs the bald eagle.
If something eats the cat and it needs the dog
then the cat is cold. We know that the tiger eats
the cat and the cat needs the dog. Therefore, the
cat is cold.
All cold things are nice. We know that the cat is
cold. Therefore, the cat is nice.
**Answer: True**
**Target Proof:**
If something needs the bald eagle then it eats the
cat. We know that the tiger needs the bald eagle.
Therefore, the tiger eats the cat.
If something eats the cat then the cat needs the
bald eagle. We know that the tiger eats the cat.
Therefore, the cat needs the bald eagle.
If something needs the bald eagle then it eats the
cat. We know that the cat needs the bald eagle.
Therefore, the cat eats the cat.
22
-----
Faithful Reasoning Using Large Language Models
If something is red and it likes the tiger then it
needs the dog. We know that the cat is red and
the cat likes the tiger. Therefore, the cat needs
the dog.
If something eats the cat and it needs the dog
then the cat is cold. We know that the cat eats
the cat and the cat needs the dog. Therefore, the
cat is cold.
All cold things are nice. We know that the cat is
cold. Therefore, the cat is nice.
**Target Answer: True**
**_Example 4 (value: -1.0e-05)_**
**Context:**
If something is rough and it does not see the
mouse then it does not need the bald eagle.
If something needs the rabbit and it likes the
bald eagle then the bald eagle is blue.
If something sees the mouse then it needs the
rabbit.
If something likes the rabbit and it sees the bald
eagle then the bald eagle needs the rabbit.
If something is blue then it sees the rabbit.
If something likes the mouse then it is round.
the rabbit needs the bald eagle.
the rabbit likes the bald eagle.
the bald eagle does not see the mouse.
the bald eagle needs the mouse.
the bald eagle likes the mouse.
the mouse needs the bald eagle.
the bald eagle likes the rabbit.
the rabbit is red.
the rabbit is rough.
the rabbit is not green.
the rabbit is blue.
the rabbit likes the mouse.
the rabbit sees the mouse.
the bald eagle is round.
the rabbit sees the bald eagle.
the rabbit needs the mouse.
**Question: Does it imply that the statement "The**
bald eagle sees the rabbit" is True?
**Proof:**
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something needs the rabbit and it likes the
bald eagle then the bald eagle is blue. We know
that the rabbit needs the rabbit and the rabbit
likes the bald eagle. Therefore, the bald eagle is
blue.
If something is blue then it sees the rabbit. We
know that the bald eagle is blue. Therefore, the
bald eagle sees the rabbit.
**Answer: True**
**Target Proof:**
If something sees the mouse then it needs the
rabbit. We know that the rabbit sees the mouse.
Therefore, the rabbit needs the rabbit.
If something needs the rabbit and it likes the
bald eagle then the bald eagle is blue. We know
that the rabbit needs the rabbit and the rabbit
likes the bald eagle. Therefore, the bald eagle is
blue.
If something is blue then it sees the rabbit. We
know that the bald eagle is blue. Therefore, the
bald eagle sees the rabbit.
**Target Answer: True**
**_Example 5 (value: -1.0e-05)_**
**Context:**
If the mouse is young and the mouse likes the cat
then the mouse chases the cat.
If something likes the mouse and the mouse likes
the tiger then the tiger likes the mouse.
If the tiger is cold then the tiger sees the mouse.
If something sees the mouse then it is young.
If something chases the cat and it likes the cat
then the cat sees the mouse.
If something is young and it chases the mouse
then it likes the cat.
If something chases the tiger and the tiger likes
the cat then the cat likes the mouse.
If something sees the cat and it chases the cat
then it likes the cat.
the tiger is nice.
the tiger sees the cat.
the mouse chases the tiger.
the tiger is red.
the mouse is young.
the mouse is round.
the tiger likes the mouse.
23
-----
Faithful Reasoning Using Large Language Models
the mouse is cold.
the mouse is red.
the mouse likes the cat.
the cat chases the tiger.
the tiger chases the mouse.
**Question:**
Does it imply that the statement "The cat is not
young" is True?
**Proof:**
If the mouse is young and the mouse likes the
cat then the mouse chases the cat. We know that
the mouse is young and the mouse likes the cat.
Therefore, the mouse chases the cat.
If the mouse is young and the mouse likes the
cat then the mouse chases the cat. We know that
the mouse is young and the mouse likes the cat.
Therefore, the mouse chases the cat.
If something chases the cat and it likes the cat
then the cat sees the mouse. We know that the
mouse chases the cat and the mouse likes the cat.
Therefore, the cat sees the mouse.
If something sees the mouse then it is young. We
know that the cat sees the mouse. Therefore, the
cat is young.
**Answer: False**
**Target Proof:**
If the mouse is young and the mouse likes the
cat then the mouse chases the cat. We know that
the mouse is young and the mouse likes the cat.
Therefore, the mouse chases the cat.
If something chases the cat and it likes the cat
then the cat sees the mouse. We know that the
mouse chases the cat and the mouse likes the cat.
Therefore, the cat sees the mouse.
If something sees the mouse then it is young. We
know that the cat sees the mouse. Therefore, the
cat is young.
**Target Answer: False**
**_Example 6 (value: -1.0e-05)_**
**Context:**
If something eats the cow then it is green.
If the bald eagle eats the mouse and the mouse is
nice then the mouse is rough.
If something needs the bald eagle then it is blue.
If something chases the bald eagle and the bald
eagle is rough then the bald eagle eats the cow.
If something is green and it needs the mouse
then the mouse eats the cow.
If something chases the bald eagle then the bald
eagle is rough.
If something chases the dog then the dog eats
the cow.
If something eats the dog and the dog chases the
mouse then the dog needs the bald eagle.
If something chases the cow and it is nice then
the cow chases the mouse.
the dog chases the cow.
the dog eats the mouse.
the cow chases the bald eagle.
the dog is green.
the cow is blue.
the dog chases the bald eagle.
the mouse needs the bald eagle.
the bald eagle needs the mouse.
the bald eagle is kind.
the dog chases the mouse.
**Question:**
Does it imply that the statement "The mouse is
green" is True?
**Proof:**
If something is green and it needs the mouse
then the mouse eats the cow. We know that the
dog is green and the bald eagle needs the mouse.
Therefore, the mouse eats the cow.
If something eats the cow then it is green. We
know that the mouse eats the cow. Therefore,
the mouse is green.
**Answer: True**
**Target Proof:**
If something chases the bald eagle then the bald
eagle is rough. We know that the cow chases the
bald eagle. Therefore, the bald eagle is rough.
If something chases the bald eagle and the bald
eagle is rough then the bald eagle eats the cow.
We know that the cow chases the bald eagle and
the bald eagle is rough. Therefore, the bald eagle
eats the cow.
If something eats the cow then it is green. We
know that the bald eagle eats the cow. Therefore,
the bald eagle is green.
If something is green and it needs the mouse
then the mouse eats the cow. We know that the
bald eagle is green and the bald eagle needs the
mouse. Therefore, the mouse eats the cow.
If something eats the cow then it is green. We
24
-----
Faithful Reasoning Using Large Language Models
know that the mouse eats the cow. Therefore,
the mouse is green.
**Target Answer: True**
**_Example 7 (value: -1.1e-05)_**
**Context:**
If someone eats the bald eagle then the bald
eagle is not kind.
the bear sees the cat.
the cat eats the bald eagle.
the lion sees the cat.
the bald eagle sees the bear.
the lion likes the bear.
the cat likes the bald eagle.
the bear does not eat the bald eagle.
**Question:**
Does it imply that the statement "The bald eagle
is not kind" is True?
**Proof:**
If someone eats the bald eagle then the bald
eagle is not kind. We know that the cat eats the
bald eagle. Therefore, the bald eagle is not kind.
**Answer: True**
**Target Proof:**
If someone eats the bald eagle then the bald
eagle is not kind. We know that the cat eats the
bald eagle. Therefore, the bald eagle is not kind.
**Target Answer: True**
**_Example 8 (value: -1.1e-05)_**
The question in this example asks the opposite
question to that in Example 7, with the same
context. Notice how the reasoning traces are the
same and have the same value. The final answers
are correct for each problem.
**Context:**
If someone eats the bald eagle then the bald
eagle is not kind.
the bald eagle sees the bear.
the cat eats the bald eagle.
the lion sees the cat.
the cat likes the bald eagle.
the bear sees the cat.
the lion likes the bear.
the bear does not eat the bald eagle.
**Question:**
Does it imply that the statement "The bald eagle
is kind" is True?
**Proof:**
If someone eats the bald eagle then the bald
eagle is not kind. We know that the cat eats the
bald eagle. Therefore, the bald eagle is not kind.
**Answer: False**
**Target Proof:**
If someone eats the bald eagle then the bald
eagle is not kind. We know that the cat eats the
bald eagle. Therefore, the bald eagle is not kind.
**Target Answer: False**
**_Example 9 (value: -1.2e-05)_**
**Context:**
If someone eats the squirrel and they like the
rabbit then they are green.
If someone sees the rabbit then they like the
mouse.
If someone likes the squirrel then they are kind.
If someone likes the mouse and they like the
rabbit then the mouse is red.
If someone likes the squirrel then the squirrel
likes the rabbit.
If the squirrel likes the mouse and the mouse eats
the squirrel then the mouse likes the squirrel.
If someone eats the mouse and they eat the
squirrel then the mouse is green.
If someone is red then they eat the mouse.
the rabbit is red.
the mouse eats the rabbit.
the squirrel sees the mouse.
the mouse eats the squirrel.
the rabbit likes the mouse.
the squirrel is nice.
the squirrel likes the mouse.
the squirrel sees the rabbit.
the mouse likes the rabbit.
the rabbit sees the squirrel.
the squirrel eats the rabbit.
the rabbit eats the squirrel.
**Question:**
Does it imply that the statement "The mouse is
not red" is True?
**Proof: If someone sees the rabbit then they like**
the mouse. We know that the squirrel sees the
25
-----
Faithful Reasoning Using Large Language Models
rabbit. Therefore, the squirrel likes the mouse.
If someone likes the mouse and they like the
rabbit then the mouse is red. We know that the
squirrel likes the mouse and the mouse likes the
rabbit. Therefore, the mouse is red.
**Answer: False**
**Target Proof:**
If the squirrel likes the mouse and the mouse eats
the squirrel then the mouse likes the squirrel. We
know that the squirrel likes the mouse and the
mouse eats the squirrel. Therefore, the mouse
likes the squirrel.
If someone likes the squirrel then the squirrel
likes the rabbit. We know that the mouse likes
the squirrel. Therefore, the squirrel likes the
rabbit.
If someone likes the mouse and they like the
rabbit then the mouse is red. We know that the
squirrel likes the mouse and the squirrel likes the
rabbit. Therefore, the mouse is red.
Target Answer: False
**_Example 10 (value: -1.2e-05)_**
**Context: If someone likes the mouse and the**
mouse sees the cow then they like the dog.
If someone is green and they chase the lion then
the lion chases the mouse.
If the mouse does not like the lion then the
mouse does not chase the lion.
If the dog is young then the dog is round.
If someone likes the lion then the lion likes the
cow.
If someone is green then they like the cow.
If someone likes the cow and they are rough then
the cow does not like the lion.
If someone likes the cow then they chase the cow.
the lion likes the cow.
the cow sees the lion.
the dog sees the mouse.
the mouse chases the cow.
the dog is green.
the mouse does not chase the dog.
the mouse is rough.
**Question:**
Does it imply that the statement "The dog does
not like the cow" is True?
**Proof:**
If someone is green then they like the cow. We
know that the dog is green. Therefore, the dog
likes the cow.
**Answer: False**
**Target Proof:**
If someone is green then they like the cow. We
know that the dog is green. Therefore, the dog
likes the cow.
**Target Answer: False**
Below are examples of the worst five reasoning
traces from SI + Halter + Search according to
the value function. Errors are highlighted in red.
**_Example 1 (value: -10.9)_**
**Context:**
If something is kind then it likes the cow.
If something likes the cow then the cow is kind.
the cow is big.
the mouse eats the bear.
the tiger is kind.
the bear visits the tiger.
**Question:**
Does it imply that the statement "The cow likes
the cow" is True?
**Proof:**
If something is kind then it likes the cow. We
know that the tiger is kind. Therefore, the tiger
likes the cow.
If something likes the cow then the cow is kind.
We know that the tiger likes the cow. Therefore,
the cow is kind.
If something is kind then it likes the cow. We
know that the cow is kind. Therefore, the cow
likes the cow.
**Answer: True**
**Target Proof:** If something is kind then it
likes the cow. We know that the tiger is kind.
Therefore, the tiger likes the cow.
If something likes the cow then the cow is kind.
We know that the tiger likes the cow. Therefore,
the cow is kind.
If something is kind then it likes the cow. We
know that the cow is kind. Therefore, the cow
likes the cow.
**Target Answer: True**
26
-----
Faithful Reasoning Using Large Language Models
**_Example 2 (value: -10.2)_**
**Context:**
If something needs the squirrel then the squirrel
is cold.
If something needs the lion then the lion likes
the cat.
If the lion needs the squirrel and the squirrel
needs the lion then the lion eats the squirrel.
If something likes the squirrel and the squirrel
likes the cow then the cow is rough.
If something is cold then it needs the squirrel.
If something needs the squirrel and it eats the
cat then the cat likes the squirrel.
the cow is cold.
the cat eats the cow.
the lion eats the cow.
the squirrel eats the lion.
**Question:**
Does it imply that the statement "The squirrel
does not need the squirrel" is True?
**Proof:**
If something is cold then it needs the squirrel.
We know that the cow is cold. Therefore, the cow
needs the squirrel.
If something needs the squirrel then the squirrel
is cold. We know that the cow needs the squirrel.
Therefore, the squirrel is cold.
If something is cold then it needs the squirrel.
We know that the squirrel is cold. Therefore, the
squirrel needs the squirrel.
**Answer: False**
**Target Proof:**
If something is cold then it needs the squirrel.
We know that the cow is cold. Therefore, the cow
needs the squirrel.
If something needs the squirrel then the squirrel
is cold. We know that the cow needs the squirrel.
Therefore, the squirrel is cold.
If something is cold then it needs the squirrel.
We know that the squirrel is cold. Therefore, the
squirrel needs the squirrel.
**Target Answer: False**
**_Example 3 (value: -10.2)_**
**Context:**
If someone is kind then they are rough.
All cold, quiet people are smart.
If someone is cold then they are smart.
If someone is blue then they are quiet.
If someone is red and kind then they are smart.
If someone is quiet then they are blue.
All blue, smart people are red.
Blue, rough people are red.
If Gary is kind and Gary is rough then Gary is
quiet.
Fiona is smart.
Charlie is kind.
Fiona is blue.
Bob is smart.
Fiona is rough.
Gary is kind.
Charlie is not smart.
**Question:**
Does it imply that the statement "Gary is smart"
is True?
**Proof:**
If someone is kind then they are rough. We know
that Gary is kind. Therefore, Gary is rough.
If someone is red and kind then they are smart.
We know that Gary is kind and Gary is kind.
Therefore, Gary is smart.
**Answer: True**
**Target Proof:**
If someone is kind then they are rough. We know
that Gary is kind. Therefore, Gary is rough.
If Gary is kind and Gary is rough then Gary is
quiet. We know that Gary is kind and Gary is
rough. Therefore, Gary is quiet.
If someone is quiet then they are blue. We know
that Gary is quiet. Therefore, Gary is blue.
If someone is kind then they are rough. We know
that Gary is kind. Therefore, Gary is rough.
Blue, rough people are red. We know that Gary
is blue and Gary is rough. Therefore, Gary is red.
If someone is red and kind then they are smart.
We know that Gary is red and Gary is kind.
Therefore, Gary is smart.
**Target Answer: True**
**_Example 4 (value: -10.1)_**
**Context:**
If something likes the cow then the cow is kind.
If something is kind then it likes the cow.
the cow is big.
27
-----
Faithful Reasoning Using Large Language Models
the tiger is kind.
the mouse eats the bear.
the bear visits the tiger.
**Question:**
Does it imply that the statement "The cow does
not like the cow" is True?
**Proof:**
If something is kind then it likes the cow. We
know that the tiger is kind. Therefore, the tiger
likes the cow.
If something likes the cow then the cow is kind.
We know that the tiger likes the cow. Therefore,
the cow is kind.
If something is kind then it likes the cow. We
know that the cow is kind. Therefore, the cow
likes the cow.
**Answer: False**
**Target Proof:**
If something is kind then it likes the cow. We
know that the tiger is kind. Therefore, the tiger
likes the cow.
If something likes the cow then the cow is kind.
We know that the tiger likes the cow. Therefore,
the cow is kind.
If something is kind then it likes the cow. We
know that the cow is kind. Therefore, the cow
likes the cow.
**Target Answer: False**
**_Example 5 (value: -10.0)_**
**Context:**
If something is kind and not young then it eats
the mouse.
If something eats the mouse and the mouse eats
the tiger then the mouse does not see the rabbit.
If something needs the rabbit and the rabbit eats
the tiger then the tiger sees the rabbit.
If something is blue then it eats the mouse.
If something eats the mouse then the mouse is
blue.
If something sees the tiger then it eats the rabbit.
If the mouse is young and the mouse eats the
bald eagle then the mouse needs the rabbit.
If something sees the rabbit then the rabbit is not
blue.
the tiger is green.
the bald eagle sees the tiger.
the tiger is blue.
the bald eagle is not blue.
the rabbit is round.
the rabbit is kind.
the mouse is not kind.
the bald eagle eats the rabbit.
the tiger sees the rabbit.
the bald eagle sees the mouse.
**Question:**
Does it imply that the statement "The mouse
does not eat the mouse" is True?
**Proof:**
If something is blue then it eats the mouse. We
know that the tiger is blue. Therefore, the tiger
eats the mouse.
If something eats the mouse then the mouse is
blue. We know that the tiger eats the mouse.
Therefore, the mouse is blue.
If something is blue then it eats the mouse. We
know that the mouse is blue. Therefore, the
mouse eats the mouse.
**Answer: False**
**Target Proof:**
If something is blue then it eats the mouse. We
know that the tiger is blue. Therefore, the tiger
eats the mouse.
If something eats the mouse then the mouse is
blue. We know that the tiger eats the mouse.
Therefore, the mouse is blue.
If something is blue then it eats the mouse. We
know that the mouse is blue. Therefore, the
mouse eats the mouse.
**Target Answer: False**
**D.2. EntailmentBankQA Task 1 - depth 1: SI**
**+ Halter + Search**
Below are the top five examples of reasoning
traces on EntailmentBankQA problems that require a single step of reasoning, ranked by their
value according to our learned value function and
are otherwise not cherry picked.
**_Example 1 (value: -0.026)_**
**Context:**
a fly is a kind of insect
an insect has six legs
28
-----
Faithful Reasoning Using Large Language Models
**Question:**
An animal has six legs. What is it most likely to
be? a spider OR a fly OR a lizard OR a centipede.
**Proof:**
a fly is a kind of insect. We know that an insect
has six legs. Therefore, a fly has six legs.
**Answer: a fly**
**Target Proof:**
a fly is a kind of insect. We know that an insect
has six legs. Therefore, a fly has six legs.
**Target Answer: a fly**
**_Example 2 (value: -0.033)_**
**Context:**
solid is a kind of state of matter
an ice cube is a kind of solid
**Question:**
Which word best describes the physical state of
an ice cube? gas OR solid OR liquid OR plasma.
**Proof:**
solid is a kind of state of matter. We know that
an ice cube is a kind of solid. Therefore, an ice
cube is in solid state.
**Answer: solid**
**Target Proof:**
solid is a kind of state of matter. We know that
an ice cube is a kind of solid. Therefore, an ice
cube is solid in its physical state.
**Target Answer: solid**
**_Example 3 (value: -0.045)_**
**Context:**
the surface of the moon contains mountains
the surface of the earth contains mountains
**Question:**
Which features can be found on the surface of
both Earth and the Moon? plants OR oceans OR
animals OR mountains.
**Proof:**
the surface of the moon contains mountains.
We know that the surface of the earth contains
mountains. Therefore, both the surface of the
moon and the surface of the earth contain
mountains.
**Answer: mountains**
**Target Proof:**
the surface of the moon contains mountains.
We know that the surface of the earth contains
mountains. Therefore, both the surface of the
moon and the surface of the earth contain
mountains.
**Target Answer: mountains**
**_Example 4 (value: -0.049)_**
**Context:**
steel is a kind of solid metal
sound travels fastest through solid
**Question:**
Through which material would sound travel the
fastest? cork OR water OR air OR steel.
**Proof:**
steel is a kind of solid metal. We know that
sound travels fastest through solid. Therefore,
sound will travel fastest in steel.
**Answer: steel**
**Target Proof:**
steel is a kind of solid metal. We know that
sound travels fastest through solid. Therefore,
sound travels fastest in steel.
**Target Answer: steel**
**_Example 5 (value: -0.052)_**
**Context:**
drinking straws are usually made of plastic
plastic is flexible
**Question:**
Which of these is MOST flexible? Broom handle
OR Wooden ruler OR Drinking straw OR Sewing
needle.
**Proof:**
drinking straws are usually made of plastic. We
know that plastic is flexible. Therefore, drinking
straws are flexible.
**Answer: Drinking straw**
**Target Proof:**
drinking straws are usually made of plastic.
We know that plastic is flexible. Therefore, a
drinking straw is flexible.
**Target Answer: Drinking straw**
29
-----
Faithful Reasoning Using Large Language Models
**D.3. EntailmentBankQA Task 1 - depth 2: SI**
**+ Halter + Search**
Below are the top five examples of reasoning
traces on EntailmentBankQA problems that require two steps of reasoning, ranked by their
value according to our learned value function
and are otherwise not cherry picked.
**_Example 1 (value: -0.022)_**
**Context:**
a runway is a kind of pathway for airplanes
airports have runways for airplanes
as the number of pathways increases, the traffic
congestion in that area usually decreases
**Question:**
Which of the following would be most effective in
reducing air traffic congestion at a busy airport?
providing performance feedback to pilots OR
providing flight information to passengers OR
increasing the number of aircraft at the airport
OR increasing the number of runways at the
airport.
**Proof:**
a runway is a kind of pathway for airplanes. We
know that airports have runways for airplanes.
Therefore, an airport runway is a kind of pathway
for airplanes.
an airport runway is a kind of pathway for
airplanes. We know that as the number of
pathways increases, the traffic congestion in
that area usually decreases. Therefore, as the
number of runways at a airport increases, the
traffic congestion in that area usually decreases.
**Answer: increasing the number of runways at**
the airport
**Target Proof:**
a runway is a kind of pathway for airplanes. We
know that as the number of pathways increases
, the traffic congestion in that area usually
decreases. Therefore, as the number of runways
increases, the traffic congestion in that area
usually decreases.
as the number of runways increases, the traffic
congestion in that area usually decreases. We
know that airports have runways for airplanes.
Therefore, as the number of runways increases,
the traffic congestion at the airport usually
decreases.
**Target Answer:** increasing the number of
runways at the airport
**_Example 2 (value: -0.030)_**
**Context:**
tidal energy is a renewable resource
wave energy is a renewable resource
tidal energy means energy from ocean tides
**Question:**
Wave energy from the ocean can be harnessed
to power generators to make electricity. Energy
from ocean tides can also be used to make
electricity. How would you categorize these
two sources of energy? Both are renewable.
OR Both are nonrenewable. OR Wave energy
is nonrenewable. Tidal energy is renewable.
OR Wave energy is renewable. Tidal energy is
nonrenewable..
**Proof:**
tidal energy is a renewable resource. We know
that wave energy is a renewable resource.
Therefore, wave energy and tidal energy are both
renewable resources.
Answer: Wave energy is renewable
**Target Proof:**
tidal energy is a renewable resource. We know
that tidal energy means energy from ocean
tides. Therefore, energy from ocean tides is a
renewable resource.
energy from ocean tides is a renewable resource.
We know that wave energy is a renewable
resource. Therefore, wave energy and energy
from ocean tides are both renewable resources.
Target Answer: Both are renewable.
**_Example 3 (value: -0.031_**
**Context: light comes to the focal point after**
passing through a convex lens
a convex lens causes light to refract and concentrate to magnify distant objects
refraction is when light bends
**Question:**
Which object is best used to demonstrate the
bending of light as it passes through a substance?
convex lens OR flat mirror OR opaque rod OR
concave mirror.
30
-----
Faithful Reasoning Using Large Language Models
**Proof:**
light comes to the focal point after passing
through a convex lens. We know that a convex
lens causes light to refract and concentrate to
magnify distant objects. Therefore, the lens
causes light to refract and concentrate on the
focal point.
the lens causes light to refract and concentrate
on the focal point. We know that refraction is
when light bends. Therefore, the lens causes the
light to bend towards the focal point.
**Answer: convex lens**
**Target Proof:**
light comes to the focal point after passing
through a convex lens. We know that a convex
lens causes light to refract and concentrate to
magnify distant objects. Therefore, a convex
lens will cause the light to refract when the light
passes through the convex lens.
a convex lens will cause the light to refract
when the light passes through the convex lens.
We know that refraction is when light bends.
Therefore, a convex lens will cause the light to
bend when the light passes through the lens.
**Target Answer: convex lens**
**_Example 4 (value: -0.036)_**
**Context:**
friction occurs when two object ’s surfaces move
against each other
a hand is a kind of object
friction causes the temperature of an object to
increase
a student is rubbing his hands together
**Question:**
A student stands outside on a cold winter day. His
hands become cold and he rubs them together to
make them warmer. Which statement explains
why rubbing his hands together makes them
warmer? This action produces thermal energy
through friction. OR This action conducts thermal
energy away from the body. OR This action
captures thermal energy from the environment.
OR This action reduces the amount of thermal
energy transferred to the air..
**Proof:**
friction occurs when two object ’s surfaces move
against each other. We know that a hand is a
kind of object and a student is rubbing his hands
together. Therefore, friction will occur between
the student’s hands.
friction will occur between the student’s hands.
We know that friction causes the temperature
of an object to increase. Therefore, friction will
cause the temperature of the student’s hands to
increase.
**Answer: This action produces thermal energy**
through friction
**Target Proof:**
friction occurs when two object ’s surfaces move
against each other. We know that a hand is a
kind of object and a student is rubbing his hands
together. Therefore, friction occurs when the
student is rubbing his hands together.
friction occurs when the student is rubbing his
hands together. We know that friction causes the
temperature of an object to increase. Therefore,
friction causes the temperature of student’s
hands to increase.
**Target Answer: This action produces thermal**
energy through friction.
**_Example 5 (value: -0.0378)_**
**Context: receiving the recessive gene for a trait**
from each parent causes the offspring to express
that recessive trait
light eyes are a recessive trait for human
jessica is a human with light eyes
**Question:**
If Jessica has light eyes (bb) and both of her
parents have dark eyes (Bb), which statement
is true? Jessica inherited both genes from her
father. OR Jessica inherited both genes from her
mother. OR Jessica inherited one recessive form
of the gene from each parents from each parent.
OR Jessica inherited one dominant from of the
gene from each parent..
**Proof: receiving the recessive gene for a trait**
from each parent causes the offspring to express
that recessive trait. We know that light eyes are a
recessive trait for human and jessica is a human
with light eyes. Therefore, jessica received the
recessive gene for light eyes from each parent.
**Answer: Jessica inherited one recessive form of**
31
-----
Faithful Reasoning Using Large Language Models
the gene from each parent
**Target Proof:**
light eyes are a recessive trait for human. We
know that jessica is a human with light eyes.
Therefore, jessica has a recessive trait of light
eyes.
jessica has a recessive trait of light eyes. We
know that receiving the recessive gene for a trait
from each parent causes the offspring to express
that recessive trait. Therefore, jessica received
the recessive gene from each of her parent.
**Target Answer: Jessica inherited one recessive**
form of the gene from each parents from each
parent.
**D.4. EntailmentBankQA Task 1 - depth 3: SI**
**+ Halter + Search**
Below are the top five examples of reasoning
traces on EntailmentBankQA problems that require three steps of reasoning, ranked by their
value according to our learned value function and
are otherwise not cherry picked.
**_Example 1 (value: -0.0080)_**
**Context:**
if something receives sunlight, then it will
increase in temperature
if something increases in temperature, then that
something will be warm
the air around the equator receives direct
sunlight
the air mass formed close to the ocean is humid
in moisture
**Question:**
Which type of air mass forms over the ocean near
the equator? moist and warm OR moist and cool
OR dry and warm OR dry and cool.
**Proof:**
the air around the equator receives direct
sunlight. We know that the air mass formed close
to the ocean is humid in moisture. Therefore, the
air mass formed close to the ocean would receive
direct sunlight.
the air mass formed close to the ocean would
receive direct sunlight. We know that if something receives sunlight, then it will increase in
temperature. Therefore, the air mass formed
close to the ocean would increase in temperature.
the air mass formed close to the ocean would
increase in temperature. We know that if
something increases in temperature, then that
something will be warm. Therefore, the air mass
formed close to the ocean would be warm.
**Answer: moist and warm**
**Target Proof:**
if something receives sunlight, then it will
increase in temperature. We know that the air
around the equator receives direct sunlight.
Therefore, the air around the equator will
increase in temperature.
the air around the equator will increase in temperature. We know that if something increases in
temperature, then that something will be warm.
Therefore, the air is warm around the equator.
the air is warm around the equator. We know
that the air mass formed close to the ocean is
humid in moisture. Therefore, air masses formed
over the ocean near the equator are moist /
warm.
**Target Answer: moist and warm**
**_Example 2 (value: -0.014)_**
**Context: sliding is a kind of movement**
friction occurs when two objects move against
each other
a rock is a kind of object
a frozen pond is a kind of object
a rock is slided across the frozen pond
friction causes the speed of an object to decrease
**Question:**
Kelly slides a flat rock across the smooth ice
of a frozen pond. The rock slows down after
several seconds. What causes the rock to
slow down? The thickness of the ice OR The
temperature of the air above the ice OR The force
of friction between the ice and the rock OR The
gravitational force between the ice and the rock.
**Proof:**
sliding is a kind of movement. We know that a
rock is slided across the frozen pond and a rock
is a kind of object. Therefore, the rock is slid
across the frozen pond.
the rock is slid across the frozen pond. We know
32
-----
Faithful Reasoning Using Large Language Models
that friction occurs when two objects move
against each other and a rock is a kind of object
and a frozen pond is a kind of object. Therefore,
friction occurs when the rock is slid across the
frozen pond.
friction occurs when the rock is slid across the
frozen pond. We know that friction causes the
speed of an object to decrease and a rock is a
kind of object. Therefore, friction will cause
the speed of the rock to decrease when it is slid
across the frozen pond.
**Answer:** The force of friction between the ice
and the rock
**Target Proof:**
sliding is a kind of movement. We know that a
rock is slided across the frozen pond. Therefore,
a rock is moving on the frozen pond.
a rock is moving on the frozen pond. We know
that friction occurs when two objects move
against each other and a rock is a kind of object
and a frozen pond is a kind of object. Therefore,
friction will occur between the rock and the
frozen pond.
friction will occur between the rock and the
frozen pond. We know that a rock is a kind of
object and friction causes the speed of an object
to decrease. Therefore, friction will cause the
speed of the rock to decrease.
**Target Answer: The force of friction between**
the ice and the rock
**_Example 3 (value: -0.027)_**
**Context:**
the parent cats have white fur
white fur is white in color
inheriting is when an inherited characteristic is
passed from parent to offspring
a cat is a kind of animal
the color of / coloration of fur is an inherited
characteristic
animals produce offspring
**Question:**
Which trait would a cat most likely inherit from
its parents? having a scar OR chasing a toy OR
having white fur OR knowing its name.
**Proof:**
inheriting is when an inherited characteristic is
passed from parent to offspring. We know that
the color of / coloration of fur is an inherited
characteristic. Therefore, offspring can inherit
the color of fur from parents.
offspring can inherit the color of fur from parents.
We know that the parent cats have white fur
and white fur is white in color. Therefore, the
offspring of the cats will inherit white fur from
their parents.
**Answer: having white fur**
**Target Proof:**
inheriting is when an inherited characteristic is
passed from parent to offspring. We know that
the color of / coloration of fur is an inherited
characteristic. Therefore, the offspring will
inherit the color of the fur of its parent.
the offspring will inherit the color of the fur of
its parent. We know that a cat is a kind of animal
and animals produce offspring. Therefore, a cat
will inherit the color of the fur of its parents.
a cat will inherit the color of the fur of its parents.
We know that the parent cats have white fur and
white fur is white in color. Therefore, the cat will
inherit the white colored fur from its parents.
**Target Answer: having white fur**
**_Example 4 (value: -0.027)_**
**Context:**
carbon is a kind of element
a compound is made of two or more elements
chemically combined
calcium carbonate is made of calcium, carbon,
and oxygen
calcium is a kind of element
oxygen is a kind of element
**Question:**
Calcium carbonate (CaCO3) is considered a
compound because it is made up of two or more
molecules. OR elements. OR gases. OR solids..
**Proof:**
carbon is a kind of element. We know that
calcium is a kind of element and oxygen is a kind
of element. Therefore, calcium, oxygen, and
carbon are all elements.
calcium, oxygen, and carbon are all elements.
We know that calcium carbonate is made of
calcium, carbon, and oxygen. Therefore, calcium
33
-----
Faithful Reasoning Using Large Language Models
carbonate is made of elements.
**Answer: elements**
**Target Proof:**
carbon is a kind of element. We know that
calcium is a kind of element and oxygen is a
kind of element. Therefore, calcium, carbon, and
oxygen are three elements.
calcium, carbon, and oxygen are three elements.
We know that calcium carbonate is made of
calcium, carbon, and oxygen. Therefore, calcium
carbonate is made of three elements.
calcium carbonate is made of three elements.
We know that a compound is made of two or
more elements chemically combined. Therefore,
calcium carbonate is a kind of compound.
**Target Answer: elements.**
**_Example 5 (value: -0.031)_**
**Context:**
trees takes in oxygen through roots close to the
surface of the ground
earthworms create tunnels in soil
the looseness of soil increases the amount of
oxygen in that soil
tunnels in soil loosen that soil
**Question:**
Trees need oxygen. Roots close to the surface of
the ground take in the oxygen the tree needs.
Which organisms help trees get oxygen? woodpeckers making holes in the tree OR earthworms
making holes in the ground near the tree OR
mushrooms growing at the base of the tree OR
squirrels eating walnuts on the ground near the
tree.
**Proof:**
tunnels in soil loosen that soil. We know that the
looseness of soil increases the amount of oxygen
in that soil. Therefore, tunnels in soil increase
the amount of oxygen in the soil.
tunnels in soil increase the amount of oxygen
in the soil. We know that trees takes in oxygen
through roots close to the surface of the ground.
Therefore, tunnels in soil increase the amount of
oxygen that can be taken in through the roots of
trees.
tunnels in soil increase the amount of oxygen
that can be taken in through the roots of trees.
We know that earthworms create tunnels in
soil. Therefore, earthworms help trees get more
oxygen through their roots by creating tunnels
in the soil.
**Answer: earthworms making holes in the ground**
near the tree
**Target Proof:**
earthworms create tunnels in soil. We know
that tunnels in soil loosen that soil. Therefore,
earthworm can help loosen the soil by creating
tunnels.
earthworm can help loosen the soil by creating
tunnels. We know that the looseness of soil
increases the amount of oxygen in that soil.
Therefore, earthworm creating tunnes can
increase the amount of oxygen in the soil.
earthworm creating tunnes can increase the
amount of oxygen in the soil. We know that
trees takes in oxygen through roots close to the
surface of the ground. Therefore, more oxygen is
available for tree roots.
**Target Answer: earthworms making holes in**
the ground near the tree
**D.5. EntailmentBankQA Task 1 - depth 5: SI**
**+ Halter + Search**
Below are the top five examples of reasoning
traces on EntailmentBankQA problems that require a five steps of reasoning, ranked by their
value according to our learned value function and
are otherwise not cherry picked.
**_Example 1 (value: -0.033)_**
**Context:**
woodpeckers live in forests
animals adapt to their environment
a squirrel is a kind of animal
a woodpecker is a kind of bird
squirrels live in forests
a deer lives in a forest
a deer is a kind of animal
a forest is a kind of environment
**Question:**
In which group of animals below are all of
the members well adapted to living in a forest
ecosystem? field mouse, rabbit, pheasant OR
34
-----
Faithful Reasoning Using Large Language Models
gull, tree frog, grasshopper OR coyote, antelope,
bison OR squirrel, deer, woodpecker.
**Proof:**
a squirrel is a kind of animal. We know that a
deer is a kind of animal. Therefore, a deer and a
squirrel are kinds of animals.
a deer and a squirrel are kinds of animals. We
know that animals adapt to their environment.
Therefore, a deer and a squirrel adapt to their
environment.
**Answer: squirrel, deer, woodpecker**
**Target Proof:**
a squirrel is a kind of animal. We know that
squirrels live in forests. Therefore, squirrels are
animals live in forests.
a deer lives in a forest. We know that a deer is a
kind of animal. Therefore, deer is an animal lives
in a forest.
woodpeckers live in forests. We know that
a woodpecker is a kind of bird. Therefore,
woodpeckers are animals live in forests.
squirrels are animals live in forests. We know
that deer is an animal lives in a forest and
woodpeckers are animals live in forests.
Therefore, squirrels and deer and woodpeckers
are animals that live in forests.
squirrels and deer and woodpeckers are animals
that live in forests. We know that animals adapt
to their environment and a forest is a kind of
environment. Therefore, squirrels and deer and
woodpeckers are adapted to living in forests.
**Target Answer: squirrel, deer, woodpecker**
**_Example 2 (value: -0.039)_**
**Context:**
toxic chemicals are dumped into a pond
poisonous things can cause harm to organisms
toxic chemicals are poisonous
harming an organism can cause that organism to
die
a fish is a kind of organism
a pond is a kind of body of water
a fish lives in water
**Question:**
Dumping toxic chemicals into a pond would most
likely result in an increase in oxygen levels in the
pond. OR plants near the pond growing more
quickly. OR the toxic chemicals having no effect
on the pond. OR fish in the pond being harmed
or dying off..
**Proof:**
a fish is a kind of organism. We know that
harming an organism can cause that organism to
die. Therefore, harming a fish can cause that fish
to die.
harming a fish can cause that fish to die. We
know that poisonous things can cause harm to
organisms. Therefore, poisonous things can
cause fish to die.
harming a fish can cause that fish to die. We
know that poisonous things can cause fish to die.
Therefore, poisonous things can harm fish.
poisonous things can cause fish to die. We know
that toxic chemicals are poisonous. Therefore,
toxic chemicals can cause fish to die.
**Answer: fish in the pond being harmed or dying**
off
**Target Proof:**
toxic chemicals are dumped into a pond. We
know that toxic chemicals are poisonous. Therefore, the pond is poisonous because of toxic
chemicals.
poisonous things can cause harm to organisms.
We know that harming an organism can cause
that organism to die. Therefore, poisonous things
can cause organisms to die.
a pond is a kind of body of water. We know that
a fish lives in water. Therefore, fish live in ponds.
fish live in ponds. We know that a fish is a kind of
organism. Therefore, a fish is a kind of organism
that lives in ponds.
the pond is poisonous because of toxic chemicals.
We know that poisonous things can cause
organisms to die and a fish is a kind of organism
that lives in ponds. Therefore, the pond with
toxic chemicals can cause fish in the pond to die.
**Target Answer: fish in the pond being harmed**
or dying off.
**_Example 3 (value: -0.049)_**
In this example, the model makes a mistake on
the first step of reasoning but is able to recover
and does not use the incorrect inference in later
steps.
35
-----
Faithful Reasoning Using Large Language Models
**Context:**
pulling a plow is a kind of chore
a horse is a kind of animal
an instinctive behavior is a kind of inherited
characteristic
inherited characteristics are the opposite of
learned characteristics
doing chores is a learned characteristic
a learned characteristic is a kind of characteristic
of an animal
**Question:**
Which of these is not an instinctive behavior?
a bird building a nest OR a turtle burying its
eggs OR a bear hibernating in winter OR a horse
pulling a plow.
**Proof:**
doing chores is a learned characteristic. We know
that inherited characteristics are the opposite of
learned characteristics. Therefore, doing chores
**is an inherited characteristic.**
a horse is a kind of animal. We know that a
learned characteristic is a kind of characteristic
of an animal. Therefore, a learned characteristic
of a horse is a kind of learned characteristic of an
animal.
pulling a plow is a kind of chore. We know
that doing chores is a learned characteristic.
Therefore, pulling a plow is a kind of learned
characteristic.
a learned characteristic of a horse is a kind of
learned characteristic of an animal. We know
that inherited characteristics are the opposite
of learned characteristics. Therefore, a horse
cannot inherit a learned characteristic.
pulling a plow is a kind of learned characteristic.
We know that a horse cannot inherit a learned
characteristic. Therefore, a horse cannot inherit
the characteristic of pulling a plow.
**Answer: a horse pulling a plow**
**Target Proof:**
pulling a plow is a kind of chore. We know that
doing chores is a learned characteristic. Therefore, pulling a plow is a learned characteristic.
pulling a plow is a learned characteristic. We
know that a learned characteristic is a kind of
characteristic of an animal. Therefore, an animal
pulling a plow is a kind of learned characteristic.
an animal pulling a plow is a kind of learned
characteristic. We know that a horse is a kind of
animal. Therefore, a horse pulling a plow is a
learned characteristic.
an instinctive behavior is a kind of inherited
characteristic. We know that inherited characteristics are the opposite of learned characteristics.
Therefore, instinctive behaviors are opposite to
learned characteristics.
a horse pulling a plow is a learned characteristic.
We know that instinctive behaviors are opposite
to learned characteristics. Therefore, a horse
pulling a plow is not an instinctive behavior.
**Target Answer: a horse pulling a plow**
**_Example 4 (value: -0.052)_**
**Context:**
disease-causing microbes have a negative impact
on the body
decreasing something negative has a positive
impact on a thing
destroying something decreases the amount of
that something
a source of something produces that something
protecting a living thing has a positive impact on
that living thing ’s survival / health
specialized cells are a source of chemicals that
destroy
disease-causing microbes
**Question:**
Specialized cells protect the human body from
disease-causing microbes by producing chemicals
that destroy the microbes OR dissolving wastes
from the microbes OR creating recessive genes in
the microbes OR supplying oxygen and nutrients
to the microbes.
**Proof:**
disease-causing microbes have a negative impact
on the body. We know that decreasing something
negative has a positive impact on a thing.
Therefore, decreasing disease-causing microbes
has a positive impact on the body.
decreasing disease-causing microbes has a
positive impact on the body. We know that
specialized cells are a source of chemicals that
destroy disease-causing microbes. Therefore, as
the number of specialized cells in the body increases, the number of disease-causing microbes
being destroyed will increase.
36
-----
Faithful Reasoning Using Large Language Models
**Answer: creating recessive genes in the microbes**
**Target Proof:**
a source of something produces that something.
We know that specialized cells are a source of
chemicals that destroy disease-causing microbes.
Therefore, specialized cells produce chemicals
that destroy disease-causing microbes.
decreasing something negative has a positive
impact on a thing. We know that destroying
something decreases the amount of that something. Therefore, destroying something negative
has a positive impact on a thing.
destroying something negative has a positive
impact on a thing. We know that disease-causing
microbes have a negative impact on the body.
Therefore, destroying disease-causing microbes
has a positive impact on the body.
destroying disease-causing microbes has a
positive impact on the body. We know that
protecting a living thing has a positive impact on
that living thing ’s survival / health. Therefore,
destroying disease-causing microbes is a kind of
method for protecting the body.
specialized cells produce chemicals that destroy
disease-causing microbes. We know that destroying disease-causing microbes is a kind of method
for protecting the body. Therefore, specialized
cells protect the body be producing chemicals
that destroy disease-causing microbes.
**Target** **Answer:** producing chemicals that
destroy the microbes
**_Example 5 (value: -0.057)_**
**Context:**
building materials are used to build buildings
wood boards can be used to build houses
a house is a kind of building
wood is a renewable resource
wood boards are made of wood
a renewable resource is a kind of natural resource
**Question:**
The building of houses in Maryland uses many
different materials. Which building material is
made of a renewable natural resource? copper
wire OR steel beams OR wood boards OR plastic
siding.
**Proof:**
wood boards can be used to build houses.
We know that wood is a renewable resource.
Therefore, wood boards are a renewable resource
that can be used to build houses.
**Answer: wood boards**
**Target Proof:**
building materials are used to build buildings.
We know that a house is a kind of building.
Therefore, building materials can be used to
build houses.
building materials can be used to build houses.
We know that wood boards can be used to build
houses. Therefore, wood boards are a kind of
building material.
wood is a renewable resource. We know that a
renewable resource is a kind of natural resource.
Therefore, wood is a kind of renewable natural
resource.
wood is a kind of renewable natural resource.
We know that wood boards are made of wood.
Therefore, wood boards are made of a renewable
natural resource.
wood boards are a kind of building material. We
know that wood boards are made of a renewable
natural resource. Therefore, wood boards are
a kind of building material that is made of a
renewable natural resource.
**Target Answer: wood boards**
**D.6. EntailmentBankQA Task 2 - depth 1: SI**
**+ Halter + Search**
Below are the top three examples of reasoning
traces on EntailmentBankQA problems that require a single step of reasoning, ranked by their
value according to our learned value function and
are otherwise not cherry picked.
**_Example 1 (value: -0.018)_**
**Context: oil is a kind of energy resource**
gas is a kind of substance
gasoline is a source of chemical energy
chemical energy is a kind of potential energy
an engine is a part of an automobile
a fuel is used to make energy
ethanol is a kind of substance
as the use of a resource decreases, the length of
37
-----
Faithful Reasoning Using Large Language Models
time that resource
being available will increases
to give off means to be the source of
potential energy means stored energy
operating an automobile usually requires fossil
fuels
amount is a property of something and includes
ordered values of none / least / little / some /
half / much / many / most / all
gasoline is a kind of chemical power source
to run on means to use as an energy source
using the fuel requires burning the fuel
an energy source is a kind of resource
using a gasoline engine requires burning gasoline
best means most positive
if the amount of electricity converted from the
potential energy of an object is less than the
amount of potential energy in the object, then
the potential energy is converted into other
energy other than electricity
potential energy is a measure of the energy of an
object due to
position relative to other objects
oil is a kind of substance / matter
if an object / a substance makes something then
that object / that substance is a source of that
thing
heat means heat energy
an internal combustion engine is used to move
the automobile
if something requires something else then that
something requires a source of that something
else
**Question:**
Which of the following is the best example of
potential energy? a fallen rock OR a burning
match OR a liter of gasoline OR a red-hot piece
of iron.
**Proof:**
gasoline is a kind of chemical power source.
We know that using a gasoline engine requires
burning gasoline. Therefore, using a gasoline
engine requires burning chemical power source.
using a gasoline engine requires burning chemical power source. We know that gasoline is
a source of chemical energy. Therefore, using
a gasoline engine requires burning chemical
energy.
using a gasoline engine requires burning chem
ical energy. We know that chemical energy is
a kind of potential energy. Therefore, using
a gasoline engine requires burning potential
energy.
Answer: a liter of gasoline
**Target Proof:**
gasoline is a source of chemical energy. We know
that chemical energy is a kind of potential energy.
Therefore, gasoline is a source of potential energy.
**Target Answer: a liter of gasoline**
**_Example 2 (value: -0.023)_**
**Context:**
if heat is transferred to an object, then the object
will get hot
heat energy is synonymous with thermal energy
a source of heat can be used to heat something
greater means higher / more in value
heat energy can be used to heat up an object
if something causes an increase in something else
then that something produces that something
else
if a body part was burned then that body part
was exposed to a lot of heat energy
as the temperature of a chemical reaction
increases, the rate of the chemical reaction will
increase
adding heat means increasing temperature
fanning increases the oxygen near the fire
heat means heat energy
warm up means increase temperature
if something emits something else then that
something increases the amount of that something else
amount is a property of something and includes
ordered values of none / least / little / some /
half / much / many / most / all
fire transfers heat through waves
if a thermal conductor / an object is exposed to a
source of heat then that conductor / that object
may become hot / warm
heat means the transfer of thermal energy
heating means adding heat
if heat is absorbed from a source then that heat
source will cool
increase means more
as the amount of oxygen exposed to a fire
38
-----
Faithful Reasoning Using Large Language Models
increases, the fire will burn more
heat is a kind of energy
burning converts chemical energy into thermal
energy
combustion means fire converts chemical energy
into light energy and heat energy
thermal energy is a kind of energy
**Question:**
Fanning can make a wood fire burn hotter
because the fanning makes the food hot enough
to burn OR adds more oxygen needed for burning
OR increases the amount of wood there is to burn
OR provides the energy needed to keep the fire
going.
**Proof:**
fanning increases the oxygen near the fire. We
know that as the amount of oxygen exposed to a
fire increases, the fire will burn more. Therefore,
as the amount of oxygen exposed to a fire
increases, the fire will burn more quickly.
as the amount of oxygen exposed to a fire
increases, the fire will burn more. We know
that fanning increases the oxygen near the fire.
Therefore, fanning the fire increases the amount
of burning woodchips.
as the amount of oxygen exposed to a fire
increases, the fire will burn more quickly. We
know that fanning the fire increases the amount
of burning woodchips. Therefore, fanning the
fire increases the amount of oxygen exposed to
the fire.
as the amount of oxygen exposed to a fire
increases, the fire will burn more quickly. We
know that fanning the fire increases the amount
of burning woodchips. Therefore, fanning the
fire increases the amount of oxygen exposed to
the fire.
fanning the fire increases the amount of oxygen
exposed to the fire. We know that as the amount
of oxygen exposed to a fire increases, the fire
will burn more quickly. Therefore, fanning the
fire will cause the fire to burn more quickly.
**Answer: provides the energy needed to keep the**
fire going
**Target Proof:**
fanning increases the oxygen near the fire. We
know that as the amount of oxygen exposed to a
fire increases, the fire will burn more. Therefore,
fanning can cause the fire to burn hotter.
**Target Answer: adds more oxygen needed for**
burning
**_Example 3 (value: -0.023)_**
**Context:**
melting point is a property of a substance /
material
ice is cold in temperature
ice crystals means ice
definite is the opposite of variable
solid is a kind of state of matter
melting is when solids are heated above their
melting point
an ice cube is a kind of solid
matter in the solid phase has definite shape
melting point means temperature at which a
solid melts / above which a solid melts
state of matter means physical state
state means form
water is in the liquid state, called liquid water,
for temperatures between 273 / 32 / 0 and 373
/ 212 / 100 k / f / c
to be found in means to be contained in
freezing causes a solid to form
melting is a kind of phase change
state of matter at room temperature is a kind of
physical property
a physical property is a kind of property
temperature / heat energy is a property of
objects / weather and includes ordered values of
cold / cool / warm / hot
liquid has a a lower melting point than solid
ice is colder in temperature than water
temperature is a measure of heat energy
matter in the solid phase has definite volume
to be made of something means to be formed of
that something
matter in the liquid phase has definite volume
the properties of something can be used to
identify / used to describe that something
**Question:**
Which word best describes the physical state of
an ice cube? gas OR solid OR liquid OR plasma.
**Proof:**
an ice cube is a kind of solid. We know that solid
is a kind of state of matter. Therefore, an ice
cube is in solid state.
39
-----
Faithful Reasoning Using Large Language Models
**Answer: solid**
**Target Proof:**
solid is a kind of state of matter. We know that
an ice cube is a kind of solid. Therefore, an ice
cube is solid in its physical state.
**Target Answer: solid**
**_Example 4 (value: -0.038)_**
**Context:**
oxygenated means having oxygen
mitochondrion provide energy to cells using
oxygen and glucose
where something comes from is a source of that
something
sugar is a kind of food
providing something means making / delivering
something
atp is formed by the breakdown of molecules of
glucose
if something has the ability to do something else
, then that something can do that something else
cells are a part of the body
if something performs a task then that task is
that something ’s function
to provide something means to be the source of
that something
cell division is a kind of process
cell activity means cell process
purpose means role
oxygen can be found in air
to provide means to supply
atp supplies chemical energy for organisms to
move
both means two
to have a function is similar to to be responsible
for
mitosis produces two cells with the same number
of chromosomes as the parent cell
all cells take in energy
the function of something is what that something
is used to do
amount is a property of something and includes
ordered values of none / least / little / some /
half / much / many / most / all
including means containing
energy transformation means one kind of energy
changes into another kind of energy
a mitochondrion is a kind of cellular organelle
**Question:**
Which cellular organelle uses oxygen and glucose
to provide energy to the cell? mitochondrion OR
nucleus OR ribosome OR vacuole.
**Proof:**
mitochondrion provide energy to cells using
oxygen and glucose. We know that a mitochondrion is a kind of cellular organelle. Therefore,
mitochondrion is the cellular organelle that
provides energy to cells using oxygen and
glucose.
**Answer: mitochondrion**
**Target Proof:**
mitochondrion provide energy to cells using
oxygen and glucose. We know that a mitochondrion is a kind of cellular organelle. Therefore,
mitochondrion is the cellular organelle that uses
oxygen and glucose to provide energy to cells.
**Target Answer: mitochondrion**
**D.7. EntailmentBankQA Task 2 - depth 1: SI**
**+ Halter**
A random selection of five reasoning traces on
Task 2 using SI + Halter, these include ‘Unknown‘
examples where the model did not terminate its
reasoning.
**_Example 1_**
**Context:**
electric currents produce magnetic fields
knowing the properties of something means
knowing information about that something
luster is a property of an object / a material and
includes ordered values of dull / shiny
more likely means increased likelihood
if something is a part of something else then that
something else contains that something
nitratine is always magnetic
iron is a kind of element
if something is a part of something then that something can be found in that something
cadmium is always magnetic
the metal is the core of electromagnet
iron in contact with water and oxygen will rust
the properties of something can be used to iden
40
-----
Faithful Reasoning Using Large Language Models
tify / used to describe that something
tungsten is a kind of metal
steel / carbon steel is always ferromagnetic / magnetic
iron is always magnetic
iron nails are made of iron
classifying means grouping objects / materials by
their properties
similar means in common
a electromagnet is formed by attaching an iron
nail wrapped in a copper wire to a circuit
electric currents produce magnetic fields
a magnetic field is a kind of force
nickel is always ferromagnetic / magnetic
cobalt is always ferromagnetic / magnetic
magnetic force means magnetism
the properties of something are used for describing that something
**Question:**
Which object most likely has magnetic properties? iron nail OR plastic clip OR rubber eraser
OR wooden spoon.
**Proof:**
iron is always magnetic. We know that iron nails
are made of iron. Therefore, iron nails are magnetic.
**Answer:**
iron nail Target Proof:
iron is always magnetic. We know that iron nails
are made of iron. Therefore, iron nails are magnetic.
**Target Answer: iron nails are magnetic**
**_Example 2_**
**Context:**
celestial bodies / celestial objects are found in
space
if a condition is necessary for a process, then that
condition make the process possible
our sun is located at the center of our solar system
gravity causes orbits
to keep something occurring means to cause that
something to occur
exerting force on an object means a force acts on
an object
motion / movement means moving / to move
isaac newton discovered the theory of gravity
take place means happen
planets in the solar system orbit the sun
gravity means gravitational pull / gravitational
energy / gravitational force / gravitational attraction
can be means able to be
both means two
a force is a kind of push or pull on an object
planets are found in space
requiring something means needing that something
revolving around something means orbiting that
something
mars is a kind of planet
if something required by something else is in a
place then that something else may be in that
place
to cause means to have an effect
the sun is the strongest source of gravity in the
solar system
to depend on / to rely on / to need means to
require
to be used for something means to be required
by that something
orbit is a kind of characteristic
due to means caused by
**Question:**
The force necessary to keep planets in orbit
around the Sun is gravity. OR friction. OR magnetism. OR nuclear..
**Proof:**
planets in the solar system orbit the sun. We
know that gravity causes orbits. Therefore, gravity causes the planets in the solar system to orbit
the sun.
**Answer: gravity**
**Target Proof:**
planets in the solar system orbit the sun. We
know that gravity causes orbits. Therefore, gravity causes planets in the solar system to orbit the
sun.
**Target Answer: gravity causes planets in the so-**
lar system to orbit the sun
**_Example 3_**
**Context:**
human eyes can only see light in the range of
visible light
41
-----
Faithful Reasoning Using Large Language Models
classifying is when one sorts something by kind /
type
visible light is a part of the electromagnetic spectrum
both means two
if an object reflects a light of a certain color then
the object appears to be that color
category means kind
made up of means contains / made of
a type is synonymous with a kind
electromagnetic energy travels as electromagnetic
waves
knowing the properties of something means
knowing information about that something
spectrum means a range of colors
all waves transfer / transmit energy from one
place to another
the properties of something are used for describing that something
radiation means radiant energy
light means light energy
electromagnetic radiation means electromagnetic
energy
light rays means light
light is a kind of nonliving thing
the properties of something can be used to identify / used to describe that something
uv light is a kind of light
the color of an object can be discovered by looking at that object
amount is a property of something and includes
ordered values of none / least / little / some /
half / much / many / most / all
visible light is made of lights of different colors
including violet, blue, green, yellow, orange, red
light
if something is a part of something else then that
something else contains that something
color is a property of an object / a material
**Question:**
The electromagnetic spectrum is the complete
range of electromagnetic radiation ranging from
radio waves to gamma rays. Which type of radiation in this spectrum is associated with observable
colors such as red, green, and blue? gamma rays
OR X-rays OR visible light OR radio waves.
**Proof:**
visible light is made of lights of different colors
including violet, blue, green, yellow, orange, red
light. We know that visible light is a part of the
electromagnetic spectrum. Therefore, visible light
is a part of the electromagnetic spectrum that includes violet blue green yellow orange and red
light.
**Answer: visible light**
**Target Proof:**
visible light is made of lights of different colors
including violet, blue, green, yellow, orange, red
light. We know that visible light is a part of the
electromagnetic spectrum. Therefore, visible light
is the part of electromagnetic spectrum that has
red, green, and blue light.
**Target Answer: visible light is the part of elec-**
tromagnetic spectrum that has red, green, and
blue light
**_Example 4_**
**Context:**
to have a function is similar to to be responsible
for
to enable is to help
providing support is a kind of function
a function is a kind of characteristic
a trunk is a part of a tree for supporting the tree
if an organism can do something, then that organism is able to do that something
bark is a protective covering around the trunk of
/ branches of a tree
purpose means role
role means function
if something has the ability to do something else
, then that something can do that something else
to be used for something means to be required
by that something
to be used for something means to help to do that
something
a function is an activity
the function of something is what that something
is used to do
a component of something means a part of that
something
essential means required
ability is a property of things and includes ordered values of able / unable / can / cannot
to anchor means to provide support
role is similar to importance
a cell wall provides support / protection / shape
42
-----
Faithful Reasoning Using Large Language Models
/ structure for a plant cell
something is used for that something ’s purpose
if something requires something else then that
something else is important to that something
helping means something can be done / completed more easily / better
being used for is similar to providing
function is similar to purpose
**Question:**
The main function of a tree’s trunk is to provide
air OR fruit OR sunlight OR support.
**Proof:**
a cell wall provides support / protection / shape
/ structure for a plant cell. We know that a trunk
is a part of a tree for supporting the tree. Therefore, the cell wall is the part of a plant cell that
supports the plant.
the cell wall is the part of a plant cell that supports the plant. We know that bark is a protective
covering around the trunk of / branches of a tree.
Therefore, the cell wall is the part of a plant cell
that protects the plant.
the cell wall is the part of a plant cell that supports the plant. We know that bark is a protective
covering around the trunk of / branches of a tree.
Therefore, the cell wall is the part of a plant cell
that protects the plant.
the cell wall is the part of a plant cell that supports the plant. We know that bark is a protective
covering around the trunk of / branches of a tree.
Therefore, the cell wall is the part of a plant cell
that protects the plant.
the cell wall is the part of a plant cell that supports the plant. We know that bark is a protective
covering around the trunk of / branches of a tree.
Therefore, the cell wall is the part of a plant cell
that protects the plant.
the cell wall is the part of a plant cell that supports the plant. We know that bark is a protective
covering around the trunk of / branches of a tree.
Therefore, the cell wall is the part of a plant cell
that protects the plant.
**Answer: Unknown**
**Target Proof:**
providing support is a kind of function. We know
that a trunk is a part of a tree for supporting the
tree. Therefore, a function of a tree’s trunk is to
provide support.
**Target Answer:**
a function of a tree’s trunk is to provide support
**_Example 5_**
**Context:**
metal is an electrical / electrical energy conductor
can means having the ability to
different materials usually have different properties
material means substance
to demonstrate means to exhibit
characteristic means property
iron is a kind of metal
both means two
ability is a property of things and includes ordered values of able / unable / can / cannot
motion / movement means moving / to move
if an object is made of something then the parts
of that object are made of that something
similar means in common
a demonstration is similar to an example
if something has the ability to do something else
, then that something can do that something else
what something is made of is a property of the
organization or that something
metal is flexible
best means most positive
the properties of something can be used to identify / used to describe that something
property is similar to measurement
a wire is usually made of metal
a wire is a part of an electrical circuit for conducting electricity
a property is a kind of information
metal is a kind of substance
the properties of something are used for describing that something
an insulator is the opposite of a conductor
**Question:**
Flexibility is a physical property of some matter.
Which of these materials best demonstrates the
property of flexibility? a mirror OR a pencil OR a
metal wire OR a telephone pole.
**Proof:**
metal is flexible. We know that a wire is usually
made of metal. Therefore, metal is flexible and a
wire is made of metal.
**Answer: a metal wire**
**Target Proof:**
43
-----
Faithful Reasoning Using Large Language Models
metal is flexible. We know that a wire is usually
made of metal.
Therefore, a metal wire is flexible.
**Target Answer: a metal wire is flexible**
### E. Baseline Model Outputs
**E.1. EntailmentWriter on Proof Writer**
Below are example EntailmentWriter + Answer
proofs on the Proof Writer dataset. We have replaced the sentence labels with the corresponding
statements from the context and have not shown
the context for brevity. We see that in each case
the final conclusion answers the question, but
using the incorrect reasoning. We see that the
model often makes up inferences that do not follow, but do help the model solve the problem.
Unlike SI, whose inference step does not have access to the question, EntailmentWriter has direct
access to the question and therefore has the incentive to make up inferences that help to answer
the questions even if they do not follow logically.
**_Example 1_**
**Question: Does it imply that the statement "The**
dog is big" is True?
**Reason: If something eats the bear then the bear**
needs the lion. We know that the dog eats the
bear. Therefore, the bear needs the lion.
If something needs the lion then the lion eats
the bear. We know that the bear needs the lion.
Therefore, the lion eats the bear.
If the lion eats the bear then the bear does not
eat the dog. We know that the lion eats the bear.
Therefore, the bear does not eat the dog.
If something is red and it does not eat the dog
then the dog sees the rabbit. We know that the
rabbit needs the dog and the bear does not eat
the dog. Therefore, the dog sees the rabbit.
If something sees the rabbit then it is not big. We
know that the dog sees the rabbit. Therefore,
the dog is not big.
**_Example 2_**
**Question: Does it imply that the statement "The**
dog does not chase the dog" is True?
**Reason: If something chases the mouse then it**
is big. We know that the cow chases the mouse.
Therefore, the cow chases the cow.
If something chases the cow and it is big then it
does not chase the dog. We know that the cow
chases the mouse and the cow chases the cow.
Therefore, the cow needs the dog.
If something chases the cow then it needs the
dog. We know that the cow chases the mouse.
Therefore, the dog is nice.
If something needs the dog and it is nice then
the dog is nice. We know that the dog is nice
and the cow is nice. Therefore, the dog chases
**the cow.**
If something is nice then it chases the cow. We
know that the dog chases the cow. Therefore,
the dog does not chase the dog.
**_Example 3_**
**Question: Does it imply that the statement "The**
cat sees the cat" is True?
**Reason: All nice things are young. We know that**
The cow is nice. Therefore, the cat is nice. If
something is young and nice then it sees the cat.
We know that the cat is nice and the cow is nice.
Therefore, the cat is young. If something sees
the cat and the cat needs the cow then cat is nice.
We know that the cat is young and the cat needs
the cow. Therefore, the cat sees the cat.
44
-----
Faithful Reasoning Using Large Language Models
Figure 14 **Examples of correct (top) and incorrect (bottom) training data samples used to**
|
**train the Value LM. Targets are shown in red. The underlined statement in the last line of the**
incorrect reasoning trace is the one that is substituted in. The Inference LM is used to compute the
inference. It is very easy to see here that the second statement is incorrect because it contains a rule
rather than a fact.
45
-----
Faithful Reasoning Using Large Language Models
Figure 15 **Examples of correct (top) and incorrect (bottom) training data samples used to**
|
**train the Value LM. Targets are shown in red. The underlined statement in the last line of the correct**
reasoning trace is replaced with a random, incorrect statement from the context. The Inference LM is
used to compute the inference.
Figure 16 **Qualitative results on EntailmentBankQA showing halter outputs on the Entailment-**
|
**BankQA dataset.**
46
-----
Faithful Reasoning Using Large Language Models
(b) Comparing the number of reasoning steps in
**the GT proof to those in the predicted proof.**
(d) Exact string match between the ground truth
**proof and the predicted proof.**
(a) Jaccard similarity between the GT intermedi**ate steps and the predicted intermediate steps.**
(c) Intermediate inference accuracy where order
**of the inferences matters.**
Figure 17 **Evaluating reasoning traces on Proof Writer. For exact string match we remove all**
|
removing non-alphabetic characters and compare characters in lower case.
47
-----
Faithful Reasoning Using Large Language Models
Figure 18 **EntailmentBankQA: Rouge Score**
|
**between ground truth and predicted interme-**
**diate inferences in order.**
48
-----
| [
"Murray, Shanahan",
"Antonia, Creswell"
] | 2022-08-30T00:00:00 | null | false | 104 | 6 | null | http://arxiv.org/abs/2208.14271 | https://arxiv.org/abs/2208.14271 | https://www.semanticscholar.org/paper/f0a0e8b6e84207f50db4d24cc4016e40601214ef |
LILA: A Unified Benchmark for Mathematical Reasoning | Mathematical reasoning skills are essential for general-purpose intelligentsystems to perform tasks from grocery shopping to climate modeling. Towards evaluating and improving AI systems in this domain, we proposeLILA, a unified mathematical reasoning benchmark consisting of 23 diversetasks along four dimensions:(i) mathematical abilities e.g., arithmetic, calculus (ii) language format e.g., question-answering, fill-in-the-blanks (iii) language diversity e.g., no language, simple language (iv) external knowledge e.g., commonsense, physics. We construct our benchmark by extending 20 datasets benchmark by collecting task instructions and solutions in the form of Python programs,thereby obtaining explainable solutions in addition to the correct answer. We additionally introduce two evaluation datasets to measure out-of-distribution performance and robustness to language perturbation. Finally, we introduce BHASKARA,a general-purpose mathematical reasoning model trained on LILA. Importantly, we find that multi-tasking leads to significant improvements (average relative improvement of 21.83% F1 score vs. single-task models),while the best performing model only obtains 60.40%,indicating the room for improvement in general mathematical reasoning and understanding. | It is found that multi-tasking leads to significant improvements (average relative improvement of 21.83% F1 score vs. single-task models), indicating the room for improvement in general mathematical reasoning and understanding. | ## L[¯]ila: A Unified Benchmark for Mathematical Reasoning
**Swaroop Mishra[∗†]** **Matthew Finlayson[∗‡]**
Arizona State University The Allen Institute for AI
**Pan Lu[†]** **Leonard Tang** **Sean Welleck**
UCLA Harvard University The Allen Institute for AI
**Chitta Baral** **Tanmay Rajpurohit**
Arizona State University Georgia Institute of Technology
**Oyvind Tafjord** **Ashish Sabharwal**
The Allen Institute for AI The Allen Institute for AI
**Peter Clark** **Ashwin Kalyan[‡]**
The Allen Institute for AI The Allen Institute for AI
**Abstract**
Mathematical reasoning skills are essential for general-purpose intelligent systems to perform tasks from grocery shopping to climate modeling.
Towards evaluating and improving AI systems in this domain, we propose
L¯ila, a unified mathematical reasoning benchmark consisting of 23 diverse
tasks along four dimensions: (i) mathematical abilities e.g., arithmetic,
calculus (ii) language format e.g., question-answering, fill-in-the-blanks
(iii) language diversity e.g., no language, simple language (iv) external
knowledge e.g., commonsense, physics. We construct our benchmark
by extending 20 datasets benchmark by collecting task instructions and
solutions in the form of Python programs, thereby obtaining explainable
solutions in addition to the correct answer. We additionally introduce
two evaluation datasets to measure out-of-distribution performance and
robustness to language perturbation. Finally, we introduce Bhaskara¯,
a general-purpose mathematical reasoning model trained on L¯ila. Importantly, we find that multi-tasking leads to significant improvements
(average relative improvement of 21.83% F1 score vs. single-task models),
while the best performing model only obtains 60.40%, indicating the room
for improvement in general mathematical reasoning and understanding.[1]
∗Equal first authors.
†Work done while at the Allen Institute for AI.
‡Corresponding authors: [email protected], [email protected].
[1Our dataset: https://github.com/allenai/Lila.](https://github.com/allenai/Lila) [Our model: https://huggingface.co/](https://huggingface.co/allenai/bhaskara)
[allenai/bhaskara.](https://huggingface.co/allenai/bhaskara)
-----
**0DWKDELOLW\EDVLFPDWK**
**/DQJXDJHFRPSOH[LW\VLPSOHODQJXDJH**
**)RUPDWJHQHUDWLYHTXHVWLRQDQVZHULQJ**
**.QRZOHGJHQRH[WHUQDONQRZOHGJH**
**,QVWUXFWLRQ<RXDUHJLYHQDTXHVWLRQWKDWLQYROYHVWKH**
FDOFXODWLRQRIQXPEHUV<RXQHHGWRSHUIRUPHLWKHUDQ
DGGLWLRQRUVXEWUDFWLRQRSHUDWLRQRQWKHQXPEHUV*HQHUDWH
\RXUDQVZHUWRWKHJLYHQTXHVWLRQ
**4XHVWLRQ6DUDSLFNHGSHDUVDQG6DOO\SLFNHGSHDUV**
IURPWKHSHDUWUHH+RZPDQ\SHDUVZHUHSLFNHGLQWRWDO"
**3URJUDP**
?@ANJGPODJIST
<INR@MÓSÏT
M@OPMI<INR@M
KMDIONJGPODJI¿À¼¼ ºOJO<GK@<MNDNOC@NPHJA
K@<MNRDOC<M<<I?<GGT
**3URJUDP**
SÓ¿À
TÓ¼¼
<INR@MÓSÏTºOJO<GK@<MNDNOC@NPHJAK@<MNRDOC
<M<<I?<GGT
KMDIO<INR@M
**$QVZHU**
Figure 1: A data example with two Python programs in L¯ila. One program
annotation uses function construct whereas the other one is a plain script without
function. The instruction for each task and categories across four dimensions
are annotated for developing L¯ila.
### 1 Introduction
Mathematical reasoning is required in all aspects of life, from buying ingredients
for a recipe to controlling the world economy. Given the fundamental nature
of mathematical reasoning, a number of works propose datasets to evaluate
specific mathematical reasoning abilities of AI agents, e.g., Kushman et al. (2014)
(algebra word problems), Mishra et al. (2022c) (arithmetic reasoning), Saxton
et al. (2019) (templated math reasoning spanning algebra, calculus, probability,
etc.) Since evaluating high-capacity models on narrowly scoped mathematical
reasoning datasets risks overestimating the reasoning abilities of these AI systems,
creating the need for a unified benchmark for systematic evaluation over diverse
topics and problem styles.
To this end, we introduce L¯ila[2], a unified mathematical reasoning benchmark that consists of 23 mathematical reasoning tasks. L¯ila is constructed by
extending 20 existing datasets spanning a wide range of topics in mathematics,
varying degrees of linguistic complexity, and diverse question formats and background knowledge requirements. Importantly, L¯ila extends all of these datasets
to include a solution program as opposed to only an answer, and instruction
2Named after L¯ilavati, a 12[th] century mathematical treatise on arithmetic that covers topics
like arithmetic and geometric progressions, indeterminate equations and combinations. It is
also widely known for the extensive number of math word problems. The author, Bh¯askara is
known for fundamental and original contributions to calculus, physics, number theory, algebra,
and astronomy (Colebrooke, 1817; Sarkar, 1918; Kolachana et al., 2019)
-----
annotations to enable instruction-based learning (Sanh et al., 2021; Wei et al.,
2021; Mishra et al., 2022b).
In order to accurately assess the mathematical reasoning ability of models,
evaluating the chain of reasoning that leads to the correct solution is equally
important (if not more important) to evaluating the final answer or expression.
We therefore collect Python programs that serve as reasoning chains for each
question in the benchmark. We achieve this by automatically converting domainspecific language (DSL) annotations into Python programs and by manually
collecting expert annotations when no DSL annotations are available. By
incorporating program annotations, L¯ila unifies various mathematical reasoning
datasets under a single problem formulation i.e., given an input problem in
natural language, generate a Python program that upon execution returns the
desired answer. This formulation allows neural approaches to focus on the highlevel aspects of mathematical problem solving (e.g., identifying potential solution
strategies, decomposing the problem into simpler sub-problems), while leveraging
external solvers (e.g., Python builtins, Sympy) to perform precise operations
like adding huge numbers or simplifying expressions. Figure 1 illustrates a
sample from our L¯ila benchmark that illustrates the question, answer, program,
instructions, and category tags.
In addition to evaluating high-level problem solving, we also facilitate two
other key ways to make a fair assessment of models on mathematical reasoning tasks. In line with Bras et al. (2020), Ribeiro et al. (2020) and Welleck
et al. (2022), we evaluate generalization e.g., alternate formulations of a problem
(“2+2=?” vs. “What is two plus two?”) using an out-of-distribution evaluation
set (L¯ila-OOD) containing datasets requiring the same underlying mathematical reasoning skills, but were collected independently of the training datasets.
Further, we collect a robustness split L¯ila-Robust, that introduces linguistic
perturbations (e.g., active vs. passive voice) via crowd-sourcing. The evaluation scheme is a combination of the performance on all three sets: L¯ila-Test,
L¯ila-OOD and L¯ila-Robust.
**Contributions**
1. We present L¯ila, a holistic benchmark for mathematical reasoning. L¯ila
extends 20 existing datasets with solutions in the form of Python programs
and instruction annotations, and categorizes questions into 23 tasks based on
their language complexity, question format and need for external knowledge.
Our benchmark measures performance on out-of-distribution examples and
robustness to language perturbations in addition to standard test-set.
2. We introduce Bhaskara¯, a multi-task model fine-tuned on our dataset. Our
best-performing model achieves comparable performance to a 66× larger
model pre-trained on both code and language.
3. We provide an analysis of our models’ performance and find that (1) multitasking improves considerably over task-specific learning both in in-distribution
and out-of-distribution evaluation (2) program synthesis substantially outperforms answer prediction, (3) few-shot prompting with codex has the strongest
-----
performance. We also identify areas for improvement for future work, e.g.,
data gaps in L¯ila categories.
### 2 Related Work
**Mathematical Reasoning Datasets.** Our work builds on an existing body
of mathematical reasoning literature. Early work in this areas focuses on smallscale datasets testing addition-subtraction (Hosseini et al., 2014), templated
questions with equations as parameters (Kushman et al., 2014) and other forms of
arithmetic reasoning (Koncel-Kedziorski et al., 2015; Roy and Roth, 2016; Upadhyay et al., 2016; Roy and Roth, 2017, 2018; Ling et al., 2017). Later datasets
increase in complexity and scale, incorporating reading comprehension Dua et al.
(2019b), algebra (Saxton et al., 2019), and multi-modal contexts (Lu et al., 2021a,
2022). Still other numerical reasoning datasets focus on diversity (Miao et al.,
2020a) with multiple categories of numerical reasoning tasks (e.g., Amini et al.,
2019). Most recently, new datasets have focused on increasing difficulty, e.g.,
olympiad problems (Hendrycks et al., 2021b) and adversarial problems (Patel
et al., 2021), as well as increasing the knowledge requirements to solve tasks,
with a growing focus on commonsense reasoning (Zhou et al., 2019; Zhang et al.;
Lu et al., 2021b; Mishra et al., 2022c).
A separate line of work in mathematical reasoning includes datasets testing
mathematical theorem proving (e.g., Li et al., 2021; Wu et al., 2021; Welleck
et al., 2021; Zheng et al., 2021; Han et al., 2021). We do not, however, consider
theorem proving in our work, choosing instead to focus on numerical reasoning.
**Task Hierarchy and Multi-tasking in Numerical Reasoning.** We take
inspiration from the success of multi-task learning in NLP (Weston et al., 2015),
including benchmarks (e.g., Wang et al., 2018, 2019; Dua et al., 2019a) and
multitasking models (e.g., McCann et al., 2018; Khashabi et al., 2020; Lourie
et al., 2021; Aghajanyan et al., 2021). NumGLUE (Mishra et al., 2022c) has
been proposed as a multi-tasking numerical reasoning benchmark that contains
8 different tasks. L¯ila expands NumGLUE to provide wider coverage of mathematical abilities, along with evaluation that captures out-of-domain, robustness,
and instruction-following performance. Our introduction of mathematical reasoning categories and the evaluation setup is inspired by task hierarchies in
other domains such as vision (Zamir et al., 2018) and NLP (Rogers et al., 2021)
which appear in large scale benchmarks (e.g., Srivastava et al., 2022; Wang et al.,
2022).
### 3 L¯ila
L¯ila is composed of 23 tasks across 4 dimensions, curated from 44 sub-datasets
across 20 dataset sources. Here we discuss the construction and composition of
the benchmark and provide descriptive statistics of the datasets.
-----
Category Tasks
Math ability Basic math, multiplication/division, number theory, algebra, geometry, counting and statistics, calculus, linear algebra, advanced
math
Language No language, simple language, complex language
Knowledge No background knowledge, commonsense, math, science, computer
science, real world knowledge
Format Fill-in-the-blank, generative question answering, multiple-choice,
natural language inference, reading comprehension
Table 1: Categories and their associated tasks.
**3.1** **Dataset Construction**
**Data Sources.** L¯ila incorporates 20 existing datasets from the mathematical
reasoning literature (Table 19 gives a detailed list), where inputs are natural
language or templated text and outputs are numerical or expressions, e.g., we
exclude theorem proving (Welleck et al., 2021; Han et al., 2021), where the
output is not a number or expression. We leave the incorporation of formats
like theorem proving to future work.
**Unified format.** We normalize all datasets to a unified format with the
following fields:
1. The source dataset. Category tags for each of the four dimensions (math
ability, language complexity, format, and external knowledge; see §3.2).
2. The question, in English.
3. The answer to the question, as a string containing a number, expression, list,
or other data format. A set of Python strings that print the answer.
4. A task-level instruction in natural language.
We also retain meta-data from the original dataset.
**Automatic program annotation.** Most of the annotations in the source
datasets do not contain output in the form of a Python program. We automatically annotate most datasets by generating Python programs using the
annotations (answer, explanation, etc.) provided in the source datasets. Where
possible, we generate multiple Python programs for a single question. This is to
account for variation in the program space such as the choice of data structure,
language construct, variable name, and programming style (e.g., declarative vs
procedural). For example, Figure 1 gives multiple Python programs solving the
same question; in this case one program directly calculates the answer, whereas
the other defines a function to solve the problem more generally.
Some datasets contain program annotations that can be captured by a domainspecifc language (DSL) in which case we write rules to convert them into Python
programs, e.g., volume(sphere,3) to the Python expression 4/3*math.pi*3**3.
In some cases where a DSL annotation is not provided, we use pattern matching
-----
to convert highly templated datasets like the AMPS dataset (Hendrycks et al.,
2021b) to our unified format. In other cases, instead of converting the existing
dataset, we modify the data generation code to reproduce the dataset with
program annotations. For the DeepMind mathematics dataset (Saxton et al.,
2019), this allows us to create diverse, compositional math problems with program
annotations using a sophisticated grammar.
**Expert program annotation.** For many datasets, it is not possible to obtain
Python program annotations via automated methods described above; either the
original dataset contains only the final answer or contains solutions expressed in
free-form natural language. For such datasets, we obtain annotations from experts
who are proficient in basic programming and high-school level mathematics. See
Appendix B.1 for details.
**Instruction annotation.** Given the effectiveness of instruction learning (Mishra
et al., 2022b; Wei et al., 2021; Mishra et al., 2022a; Sanh et al., 2021) for effective
generalization, we collect instruction annotation for each task. Each instruction
contains a definition that clearly defines the task and provides guidelines, a
_prompt that provides a short and straight forward instruction, and examples_
that facilitate learning by demonstration (Brown et al., 2020). Figure 1 shows
an example instruction for the basic math task (§3.2).
**3.2** **Categories and Tasks**
We create 4 views [3]or categories of L¯ila along the dimensions of mathematical
area, language complexity, external knowledge, and question format. Altogether,
these views classify the data into 23 tasks (Table 1). By creating multiple views
of the benchmark, we are able to systematically characterize the strengths and
weaknesses of existing models at a granular level.
The first category, math ability, partitions the datasets into common pedagogical subjects: arithmetic, algebra, geometry, calculus, etc.
Our second category, language complexity, separates math problems by the
complexity of the language used to represent them. This ranges from formal
representations only (e.g., 1+1=?) to natural language (e.g., “Mariella has 3
pears. . . ”).
We next partition datasets based on the type of background knowledge,
required to solve the problem. For instance, commonsense questions like “How
many legs to 3 people have?” or science questions like “Will water boil at 200
degrees Celsius?” require different sets of knowledge to answer.
Lastly, we categorize based on question format, putting e.g., multiple choice
questions under one task and natural language inference under another. Examples
of each task and the datasets included are in Appendix B.
3Note that it is not a partition of the benchmark as each dimensions divides the constituent
examples in different ways
-----
**3.3** **L¯ila-OOD**
In order to measure if the model has truly learned the underlying mathematical
reasoning skill, we evaluate both in-distribution (IID, i.e., standard train-test
splits) and out-of-distribution (OOD) performance for each task, i.e., we evaluate
on examples requiring the same underlying mathematical reasoning skill but
from a different dataset. To construct L¯ila-OOD, we follow Bras et al. (2020)
and Hendrycks et al. (2020) by randomly assigning the datasets for each task into
IID and an OOD sets, using the IID set for training and standard evaluation and
the OOD set to evaluate generalization. We do not include tasks in L¯ila-OOD
for tasks containing only one dataset.
**3.4** **L¯ila-Robust**
In light of recent work demonstrating the brittleness of language models at
solving math problems (Patel et al., 2021), we create a high-quality evaluation
dataset, L¯ila-Robust, to evaluate performance on mathematical reasoning tasks
when linguistic perturbations are introduced. Specifically, we define and apply
a set of carefully chosen augmentation templates, summarized in Table 16, on
each task, yielding a set of challenging problems that are consistent answer-wise
but stylistically different question-wise. Overall, we define a total of 9 templates
for such question perturbations: 3 from Patel et al. (2021) and 6 of our own.
From each constituent dataset, we sample 20 questions and obtain perturbed
question annotations via Amazon Mechanical Turk (AMT). Refer to Appendix
B.1 for additional details on the construction of L¯ila-Robust.
**3.5** **Statistics**
Table 2 shows key statistics of our proposed benchmark, L¯ila. L¯ila contains
_≈_ 134K examples with significant diversity across question, answer, program
and instruction length (see detailed statistics in Appendix C). Figure 2 shows
the diversity of questions in L¯ila. Note that we downsample (via random
selection) some datasets like AMPS (Hendrycks et al., 2021b) which contains
numerous templated questions that can get over-representated in the distribution
of examples across categories in L¯ila.
### 4 Experiments
In this section, we introduce our modeling contributions for the L¯ila benchmark
and discuss the overall experimental setup.
**Data partition and evaluation.** For the IID setup, we randomly partition
the data in each task into training (70%), development (10%) and test (20%)
sets. Additionally, we also evaluate on L¯ila-OOD and L¯ila-Robust settings;
thus, the final evaluation scheme is a combination of the performance on all
three evaluation setups
-----
**Statistic** **Number**
# Total tasks 23
# Total datasets 44
# Total instructions 44
# Total questions 133,815
# Total programs 358,769
Unique questions 132,239
Unique programs 325,597
Unique answers 271,264
Average length of instructions 31.18
Average length of questions 47.72
Average length of programs 47.85
Table 2: Key statistics of L¯ila.
**Fine-tuning.** We fine-tune a series of GPT-Neo-2.7B causal language models (Black et al., 2021)) on L¯ila. We choose GPT-Neo because it was pre-trained
on both natural language and code (Gao et al., 2020), as opposed to solely on
natural language. To assess the capabilities of GPT-Neo on various aspects of
the dataset, we fine-tune single-task models on each of the 23 tasks in L¯ila. We
also evaluate the benefit of transfer learning by fine-tuning a single multi-task
GPT-Neo baseline on all the tasks simultaneously. We call our multitask model
Bh¯askara.
**Prompting.** We also use few-shot prompting to evaluate GPT-3 and Codex[4] (Brown
et al., 2020; Chen et al., 2021). For the IID setting, we prompt the model with a
random input-output examples from the same dataset as the input. In the OOD
setting, we take examples from other datasets (Table 12-15) within the same
task. We repeat this evaluation with increasing numbers of examples (up to the
token size of models) to study the effect on performance[5].
**Evaluation.** We evaluate our models under two regimes—directly outputting
the answer i.e., program induction and outputting a Python program that is then
executed to obtain the final answer i.e., program synthesis. In the case of our
fine-tuned models, we train them to output both the final answer and the Python
program conditioned on the input question. To evaluate our models under direct
question answering, we use F1-score[6] to compare the model output and the
gold answer. To evaluate program synthesis, we execute the model’s output
within a Python interpreter and compare the program output with the output
of the gold program, again using F1. We evaluate based on the program output,
4text-davinci-002, code-davinci-002
5Henceforth we refer to the max example model unless otherwise specified.
6This is a soft version of exact match accuracy assigning partial credit when common words
are present in the output and gold answer.
-----
Figure 2: Question n-gram distribution in L¯ila.
more
dayspercenthours
times
polyhedron's years
following the many minutes
did did
the does
is$ WhichSimplifyTheConvertMultiplyWhenSolveEstimate How longmuch moneydoesdid
the Compute often doesdiddo
What
afterofthedid happenedpercentwastime is $all Find the seconddistancesmallest
the number
\real eigenvectorsunionquotientjacobiandifferenceleastsumgreatest
rather than the program itself, to account for diversity in solving techniques and
programming styles.
### 5 Results and Analysis
A summary of all key results on our L¯ila benchmark are shown in Table 3. In
this section, we will discuss the performance of fine-tuned 2.7B GPT-Neo models
(§5.1), performance of models along the 4 categories of tasks (§5.2) and finally,
the few-shot performance of much larger (∼175B parameters) models (§5.3).
**5.1** **Results: Fine-tuned Models**
**Multitasking improves IID performance, robustness, and OOD gener-**
**alization.** The multi-tasking model (Bhaskara¯ ) substantially improves upon
the single task models (Neo). Bhaskara¯ achieves better average in-domain
performance than the 23 individual per-task models (0.480 vs. 0.394 average
score), suggesting that it leverages cross-task structure not present in a single
task’s training set.
We also find that our multi-task model is robust to the linguistic perturbations we test in L¯ila-Robust. We did not find any degradation in performance
when testing on perturbed IID test examples. Additionally, multi-task training
substantially improves out-of-domain generalization (0.448 vs. 0.238). The gap
between IID and OOD performance is much smaller for Bhaskara¯ than for the
single task models (Table 3), and in one case (format) Bhaskara¯ ’s OOD performance on held-out tasks is better than its IID performance (Table 4). L¯ila’s
multi-task structure opens interesting future directions related to developing
improved multitasking techniques, and further understanding its benefits.
-----
_→_ **Supervision/Size** Few-shot, 175B Few-shot, 175B Fine-tuned, 2.7B Fine-tuned, 2.7B Fine-tuned, 2.7B Fine-tuned, 2.7B
**GPT-3** **Codex** **Neo-A** **Neo-P** **Bh¯askara-A** **Bh¯askara-P**
**Task** **Category**
_↓_ IID OOD IID OOD IID OOD IID OOD IID OOD IID OOD
|IID OOD|IID OOD|IID OOD|IID OOD|IID OOD|IID OOD|
|---|---|---|---|---|---|
|1 Basic math 0.766 0.818 2 Muldiv 0.479 0.665 3 Number theory 0.240 0.154 4 Algebra 0.338 0.130 5 Geometry 0.283 0.120 6 Statistics 0.183 0.210 7 Calculus 0.231 0.208 8 Linear algebra 0.127 - 9 Advanced math 0.150 -|0.791 0.762 0.691 0.790 0.472 0.344 0.603 0.511 0.000 0.250 0.650 0.200 0.930 0.884 0.692 - 0.472 -|0.533 0.523 0.136 0.089 0.108 0.095 0.164 0.031 0.288 0.025 0.107 0.008 0.138 0.119 0.229 - 0.012 -|0.611 0.555 0.388 0.194 0.328 0.107 0.348 0.051 0.077 0.021 0.839 0.034 0.486 0.334 0.809 - 0.100 -|0.693 0.657 0.155 0.083 0.129 0.190 0.203 0.054 0.297 0.105 0.115 0.179 0.102 0.167 0.240 - 0.019 -|0.790 0.787 0.448 0.395 0.358 0.293 0.473 0.007 0.079 0.250 0.947 0.164 0.495 0.805 0.808 - 0.160 -|
|10 No language 0.213 0.162 11 Simple language 0.486 0.561 12 Complex language 0.356 0.413|0.853 0.770 0.568 0.610 0.456 0.583|0.143 0.083 0.269 0.243 0.147 0.113|0.698 0.330 0.363 0.292 0.216 0.106|0.140 0.138 0.332 0.269 0.215 0.259|0.703 0.850 0.433 0.384 0.288 0.557|
|13 Fill in the blank 0.710 0.620 14 Generative QA 0.305 0.385 15 MCQ 0.801 0.870 16 NLI 0.500 - 17 RC 0.460 -|0.790 0.660 0.566 0.632 0.771 0.870 0.710 - 0.615 -|0.086 0.193 0.142 0.135 0.636 0.818 0.221 - 0.135 -|0.304 0.193 0.376 0.199 0.652 0.818 0.212 - 0.295 -|0.059 0.519 0.178 0.160 0.752 0.888 0.566 - 0.132 -|0.262 0.519 0.476 0.235 0.817 0.888 0.893 - 0.264 -|
|18 No external k. 0.437 0.485 19 Commonsense 0.788 0.698 20 Math formulas 0.259 0.162 21 Science formulas 0.305 0.120 22 Computer science k. 0.262 0.128 23 Real-world k. 0.150 -|0.638 0.660 0.752 0.815 0.661 0.544 0.315 0.250 0.425 0.408 0.472 -|0.138 0.110 0.613 0.364 0.137 0.074 0.158 0.025 0.151 0.137 0.012 -|0.387 0.159 0.624 0.356 0.454 0.382 0.239 0.021 0.147 0.134 0.100 -|0.167 0.199 0.735 0.470 0.170 0.077 0.157 0.105 0.232 0.304 0.019 -|0.400 0.465 0.778 0.526 0.599 0.404 0.181 0.250 0.220 0.278 0.160 -|
|Average score 0.384 0.384|0.604 0.586|0.204 0.177|0.394 0.238|0.252 0.268|0.480 0.448|
Table 3: Evaluations of different baselines across 23 tasks in L¯ila. On most
tasks, Codex outperforms all baselines while Bh¯askara-P outperforms all
fine-tuned baselines. A model usually performs worse on the OOD data set. The
**bold score refers to the best score among models with the same supervision**
method; the underlined score refers to the best score among all models. GPT-3
and Codex performance is computed on 100 uniformly distributed examples
owing to their cost and usage limit. Fine-tuned model performance is calculated
on the full test set.
Lastly, we do not find any benefit to fine-tuning with instructions. Our best
instruction tuned model achieves 0.133 F1, whereas the worst non-instructiontuned multitask model achieves 0.290.
**Program synthesis substantially outperforms answer prediction.** Synthesizing the program and evaluating it to get an answer substantially outperforms directly predicting the answer. For instance, multi-task program synthesis
(Bhaskara¯ -P) has an average score of 0.480 while multi-task answer prediction
(Bhaskara¯ -A) scores 0.252. This means models are often able to generate
a program that evaluates to the correct answer, even when the model cannot
directly compute the answer.
Program synthesis improves over answer prediction in all math categories
except Geometry, with the largest improvements in Statistics and Linear
Algebra; see Table 5 for examples. We even see benefits of program synthesis
in NLI, a classification-based task. L¯ila’s unified problem format decouples
synthesis from computation, while opening directions for further study on either
aspect.
10
-----
Neo-A Neo-P
**Dimension**
IID OOD IID OOD
Math ability 0.191 0.129 **0.445** **0.188**
Language 0.189 0.147 **0.429** **0.246**
Format 0.246 0.382 **0.372** **0.404**
Knowledge 0.206 0.143 **0.331** **0.213**
Average 0.208 0.200 **0.394** **0.263**
Table 4: Multi-task models are able to generalize to unseen tasks in some
categories. Program output (Neo-P) always outperforms number output (NeoA).
|Data|Answer (% F1) Program (% F1)|Col3|
|---|---|---|
||Neo Multi ∆ Neo Multi ∆||
|100% 40% 20%|28.4 32.3 +4.0 20.0 21.1 +1.2 15.8 18.4 +2.6|80.0 82.4 +2.5 75.2 70.3 -4.9 66.3 67.1 +0.8|
Table 5: Here we show the results of fine-tuning both GPT-Neo-2.7B (Neo)
and Bhaskara¯ (Multi) on 100%, 40%, and 20% of the held-out data from
L¯ila-OOD. The Multi almost always outperforms Neo (the ∆ column shows
the margin).
**Models leverage symbolic execution and libraries.** The gap between
program synthesis and answer prediction suggests that the neural language
model offloads computations to the symbolic Python runtime that are otherwise
difficult to compute directly. We identify two common cases. First, the model
leverages standard Python as a calculator. For instance, this pattern is common
in the basic_math and mul_div categories, which involve evaluating arithmetic
expressions; Table 4 shows examples. Second, the model is able to call external
libraries that perform sophisticated computations. For instance, in statistics
the model uses scipy.stats.entropy or np.linalg.det in linear algebra while
solving problems (Table 5).
**Models occasionally generate non-executable code.** Roughly 10% of
Bhaskara¯ ’s IID programs fail to execute. 86% of these are SyntaxErrors,
which often occur because decoding terminates before finishing the program or
the model generates a program of the form ‘2+3=5’, which is invalid Python.
The remaining 14% of execution failures are less trivial, including NameErrors
(7%) and TypeErrors (1%) (see Table 6).
**Bh¯askara is a good starting point for further fine-tuning** Table 5
shows that our Bhaskara¯ model is a better starting point for downstream
fine-tuning than the vanilla pre-trained GPT-Neo-2.7B. When comparing fine
11
-----
tuning for direct question answering with T5-3B, we see an almost 8% absolute
improvement in F1 (30.1% to 37.6%). These findings establish Bhaskara¯ as a
strong starting point for further fine-tuning on new tasks. For this reason, we
release our multi-task model for public use under the name Bhaskara¯, with
the hope that it will be useful for future research into math reasoning models.
**5.2** **Results: Category-wise Analysis**
In this section we discuss the trends among the tasks within each category. For
brevity, we primarily consider Bhaskara¯, the GPT-Neo multi-task model in
the program-synthesis setting.
**Math ability.** Among the tasks in the math category, Bhaskara¯ excels
in basic math, linear algebra, and in-domain statistics. On these tasks, it
performs equal or better to Codex. On the other hand, Bhaskara¯ struggles
in advanced math and geometry, with mediocre performance in multiplicationdivision, number theory, and calculus. Codex shows analogous trends, except
for performing very well on calculus (0.930)[7].
**Language complexity .** Models generally show lower performance on program synthesis as language complexity increases. Bhaskara¯ gets mean F1 over
0.5 only for datasets with the least linguistic complexity where it achieves an F1
of 0.7.
**Question format.** Among the format tasks in the dataset, Bhaskara¯ does
exceptionally well on multiple-choice and natural-language inference, getting
performance close to 0.9 on the latter, and outperforming Codex on both. On
the other hand, the model performs close to 0.25 for reading comprehension and
fill-in-the-blank, though with 0.5 F1 on out-of-domain fill-in-the-blank.
**Background knowledge.** Bhaskara¯ performs above 0.5 F1 only for problems requiring commonsense and math formulas and fails to do similarly on
problems requiring other forms of external knowledge like physics, computer
science, or real-world knowledge.
**5.3** **Results: Few-shot Prompting**
Finally, we study the few-shot performance of much larger models (≈175B), to
better understand the performance of the smaller trained models (≈2.7B) and
to provide a benchmark for evaluating other large language models. Overall, we
find that few-shot prompted models generally outperform their much smaller
but fine-tuned counterparts.
12
-----
0.6
0.5
0.4
0.3
0.2
0.1
max
|Col1|Model GPT3|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
||Codex|||||
|||||||
|||||||
|||||||
|||||||
|||||||
Model
GPT3
Codex
Number of few-shot examples
Figure 3: Average F1 scores of GPT-3 and Codex with different numbers of
few-shot examples in L¯ila.
Zero-shot Few-shot (3)
Dimension
w/o Inst w/ Inst w/o Inst w/ Inst
Math ability 0.120 **0.123** **0.311** 0.306
Language 0.124 **0.131** **0.352** 0.350
Format 0.241 **0.257** **0.555** 0.540
Knowledge 0.108 **0.112** **0.367** 0.363
Average 0.148 **0.156** **0.396** 0.390
Table 6: The IID scores for GPT-3 models with and without instruction prompting (Inst). Instruction helps slightly in zero-shot setting, but not in few-shot
setting.
**Instructions and more examples improve performance.** We find that
the number of few-shot examples greatly impacts prompt models’ performance.
Figure 3 shows that GPT-3 answer prediction beats Codex program synthesis
in zero- to one-shot settings, but Codex overtakes with more examples. Table 6 shows that prompting with instructions improves performance only in the
zero-shot setting, meaning that in the limited contexts of the prompt models,
examples are more important than instructions for mathematical reasoning.
This is consistent with the findings of Puri et al. (2022) on instruction-example
equivalence.
**Few-shot GPT-3 answer prediction underperforms Bh¯askara.** While
prompt-based models outperform our fine-tuned models in general when comparing within direct-answering and program-synthesis, when comparing Bhaskara¯
program-synthesis to GPT-3 direct answering we find that the much smaller
Bh¯askara consistently outperforms GPT-3.
7Note that the training set for Codex is not known.
13
-----
**Few-shot Codex performance is relatively strong.** Relative to the 2.7B
trained models, Codex demonstrates strong few-shot IID and OOD performance.
Some notable exceptions to this pattern are the statistics, linear algebra, multiplechoice question answering, and NLI tasks. Generally, OOD few-shot performs
much better than OOD for the fine-tuned models.
**Few-shot Codex fails on some tasks.** Despite strong performance relative
to Bhaskara¯, Codex obtains less that 0.5 F1 on several tasks, with especially
poor performance on geometry, number theory, advanced math, complex language, computer science problems, science formulas, and real world knowledge.
### 6 Conclusion
In this work, we introduce L¯ila, a unified mathematical reasoning benchmark for
a holistic evaluation of AI agents. L¯ila consists of 23 tasks across 4 dimensions
(i) mathematical abilities, (ii) language format, (iii) language complexity, (iv)
external knowledge. It builds on 20 existing mathematical reasoning datasets to
collect instructions and Python programs. Further, it also supports measuring
out-of-distribution performance and robustness to language perturbations via
L¯ila-OOD and L¯ila-Robust respectively. We also introduce Bhaskara¯, a
2.7B-parameter fine-tuned multi-task model. We find that multi-tasking improves
over single-task performance by 21.83% F1 score on average, and that our model
is a strong starting point for further fine-tuning on new math reasoning tasks.
The best performing model we evaluate achieves only 60.40% F1 indicating the
potential for improvement on the proposed benchmark.
**6.1** **Limitations**
One drawback of our unified format is the difficulty of evaluating models. In
our work we use F1 for lack of a better alternative. F1 likely over-estimates
performance, e.g., given the gold answer “2 apples”, the predicted answers “2”
and “apples” receive the same score, though the former is better.
L¯ila contains 23 tasks which are created from 20 datasets and 44 subdatasets. There is scope to add more mathematical reasoning datasets (e.g.,
theorem proving.) The flexible unified format of L¯ila allows for future extensions.
Additionally, our categorization provides a way to identify areas for extension.
For instance, we only have 1 dataset for linear algebra, which happens to not
use natural language, and takes the form of generative QA. Our benchmark
will benefit from future linear algebra additions, perhaps with word problems
formatted as fill-in-the-blank questions.
14
-----
### References
Gilles Adda, Benoît Sagot, Karën Fort, and Joseph Mariani. 2011. Crowdsourcing
for language resource development: Critical analysis of amazon mechanical
turk overpowering use. In 5th Language and Technology Conference.
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations
with pre-finetuning. arXiv preprint arXiv:2101.11038.
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and
Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem
solving with operation-based formalisms. arXiv preprint arXiv:1905.13319.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk
Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc
Le, et al. 2021. Program synthesis with large language models. arXiv preprint
_arXiv:2108.07732._
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021.
[Gpt-neo: Large scale autoregressive language modeling with mesh-tensorflow.](https://doi.org/10.5281/zenodo.5297715)
Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers,
Matthew E Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters
of dataset biases. arXiv preprint arXiv:2002.04108.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry,
Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger,
Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu,
Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott
Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are
few-shot learners. In Advances in Neural Information Processing Systems,
volume 33, pages 1877–1901. Curran Associates, Inc.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde
de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas
Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained
on code.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168.
Henry T Colebrooke. 1817. Arithmetic and mensuration of brahmegupta and
bhaskara.
15
-----
Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, and Matt Gardner. 2019a. Orb: An open reading benchmark for comprehensive evaluation of
machine reading comprehension. arXiv preprint arXiv:1912.12598.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh,
and Matt Gardner. 2019b. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference
_of the North American Chapter of the Association for Computational Lin-_
_guistics: Human Language Technologies, Volume 1 (Long and Short Papers),_
pages 2368–2378.
Karën Fort, Gilles Adda, and Kevin Bretonnel Cohen. 2011. Amazon mechanical
turk: Gold mine or coal mine? Computational Linguistics, pages 413–420.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles
Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser,
and Connor Leahy. 2020. The Pile: An 800gb dataset of diverse text for
language modeling. arXiv preprint arXiv:2101.00027.
Jesse Michael Han, Jason M. Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas
Polu. 2021. Proof artifact co-training for theorem proving with language
models. ArXiv, abs/2102.06203.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora,
Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al.
2021a. Measuring coding challenge competence with apps. arXiv preprint
_arXiv:2105.09938._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart,
Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical
problem solving with the math dataset. arXiv preprint arXiv:2103.03874.
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan,
and Dawn Song. 2020. Pretrained transformers improve out-of-distribution
robustness. In Proceedings of the 58th Annual Meeting of the Association for
_Computational Linguistics, pages 2744–2751._
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In In Conference on Empirical Methods in Natural Language Processing
_(EMNLP)._
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma.
2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the 54th Annual Meeting of the
_Association for Computational Linguistics (Volume 1: Long Papers), pages_
887–896.
16
-----
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark,
and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with
a single qa system. arXiv preprint arXiv:2005.00700.
Aditya Kolachana, K Mahesh, and K Ramasubramanian. 2019. Use of calculus
in hindu mathematics. In Studies in Indian Mathematics and Astronomy,
pages 345–355. Springer.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni,
and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations.
_Transactions of the Association for Computational Linguistics, 3:585–597._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh
Hajishirzi. 2016. Mawps: A math word problem repository. In Proceedings of
_the 2016 Conference of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Technologies, pages 1152–1157._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014.
Learning to automatically solve algebra word problems. In Proceedings of
_the 52nd Annual Meeting of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 271–281._
[Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. 2021. Isarstep: a](https://openreview.net/forum?id=Pzj6fzU6wkj)
[benchmark for high-level mathematical reasoning. In International Conference](https://openreview.net/forum?id=Pzj6fzU6wkj)
_on Learning Representations._
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have
four legs?! numersense: Probing numerical commonsense knowledge of pretrained language models. In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), pages 6862–6868._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program
induction by rationale generation: Learning to solve and explain algebraic
word problems. arXiv preprint arXiv:1705.04146.
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021.
Unicorn on rainbow: A universal commonsense reasoning model on a new
multitask benchmark. In Proceedings of the AAAI Conference on Artificial
_Intelligence, volume 35, pages 13480–13488._
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang,
and Song-Chun Zhu. 2021a. Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. In The 59th Annual Meeting of
_the Association for Computational Linguistics (ACL)._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay
Rajpurohit, Peter Clark, and Ashwin Kalyan. 2022. Dynamic prompt learning
via policy gradient for semi-structured mathematical reasoning. arXiv preprint
_arXiv:2209.14610._
17
-----
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu,
Xiaodan Liang, and Song-Chun Zhu. 2021b. Iconqa: A new benchmark for
abstract diagram understanding and visual language reasoning. In The 35th
_Conference on Neural Information Processing Systems Track on Datasets and_
_Benchmarks (NeurIPS 2021)._
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018.
The natural language decathlon: Multitask learning as question answering.
_arXiv preprint arXiv:1806.08730._
[Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020a. A diverse corpus for](https://doi.org/10.18653/v1/2020.acl-main.92)
[evaluating and developing English math word problem solvers. In Proceedings](https://doi.org/10.18653/v1/2020.acl-main.92)
_of the 58th Annual Meeting of the Association for Computational Linguistics,_
pages 975–984, Online. Association for Computational Linguistics.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020b. A diverse corpus for
evaluating and developing english math word problem solvers. In Proceedings
_of the 58th Annual Meeting of the Association for Computational Linguistics,_
pages 975–984.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh
[Hajishirzi. 2022a. Reframing instructional prompts to GPTk’s language. In](https://doi.org/10.18653/v1/2022.findings-acl.50)
_Findings of the Association for Computational Linguistics: ACL 2022, pages_
589–612, Dublin, Ireland. Association for Computational Linguistics.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022b.
Cross-task generalization via natural language crowdsourcing instructions. In
_Proceedings of the 60th Annual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 3470–3487._
Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter
Clark, Chitta Baral, and Ashwin Kalyan. 2022c. Numglue: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of
_the 60th Annual Meeting of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 3505–3523._
[Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models](https://doi.org/10.18653/v1/2021.naacl-main.168)
[really able to solve simple math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Ravsehaj Singh Puri, Swaroop Mishra, Mihir Parmar, and Chitta Baral. 2022.
How many data samples is an additional instruction worth? arXiv preprint
_arXiv:2203.09161._
Abhilasha Ravichander, Aakanksha Naik, Carolyn Rose, and Eduard Hovy. 2019.
Equate: A benchmark evaluation framework for quantitative reasoning in
natural language inference. arXiv preprint arXiv:1901.03735.
18
-----
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh.
2020. Beyond accuracy: Behavioral testing of nlp models with checklist. In
_Proceedings of the 58th Annual Meeting of the Association for Computational_
_Linguistics, pages 4902–4912._
Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2021. Qa dataset explosion:
A taxonomy of nlp resources for question answering and reading comprehension.
_arXiv preprint arXiv:2107.12708._
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In
_Proceedings of the 2015 Conference on Empirical Methods in Natural Language_
_Processing, pages 1743–1752._
Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems.
_arXiv preprint arXiv:1608.01413._
Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application
to arithmetic word problem solving. In Thirty-First AAAI Conference on
_Artificial Intelligence._
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word
problem solving. Transactions of the Association for Computational Linguistics,
6:159–172.
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities
in natural language. _Transactions of the Association for Computational_
_Linguistics, 3:1–13._
Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika,
Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja,
et al. 2021. Multitask prompted training enables zero-shot task generalization.
_arXiv preprint arXiv:2110.08207._
Benoy Kumar Sarkar. 1918. Hindu Achievements in Exact Science: A Study in
_the History of Scientific Development. Longmans, Green and Company._
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019.
Analysing mathematical reasoning abilities of neural models. arXiv preprint
_arXiv:1904.01557._
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb,
Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta,
Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint
_arXiv:2206.04615._
Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal.
2019. Quarel: A dataset and models for answering questions about qualitative
relationships. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 33, pages 7063–7071.
19
-----
Shyam Upadhyay and Ming-Wei Chang. 2015. Draw: A challenging and diverse
algebra word problem set. Technical report, Citeseer.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016.
Learning from explicit and implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on Empirical Methods in Natural
_Language Processing, pages 297–306._
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian
Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A
stickier benchmark for general-purpose language understanding systems. In
_Advances in Neural Information Processing Systems, pages 3261–3275._
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and
Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform
for natural language understanding. arXiv preprint arXiv:1804.07461.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan
Dhanasekaran, Atharva Naik, David Stap, et al. 2022. Benchmarking generalization via in-context instructions on 1,600+ language tasks. arXiv preprint
_arXiv:2204.07705._
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian
Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language
models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi,
[and Kyunghyun Cho. 2021. Naturalproofs: Mathematical theorem proving in](https://openreview.net/forum?id=Jvxa8adr3iY)
[natural language. In Thirty-fifth Conference on Neural Information Processing](https://openreview.net/forum?id=Jvxa8adr3iY)
_Systems Datasets and Benchmarks Track (Round 1)._
[Sean Welleck, Peter West, Jize Cao, and Yejin Choi. 2022. Symbolic brittleness](https://arxiv.org/pdf/2109.13986.pdf)
[in sequence models: on systematic generalization in symbolic mathematics. In](https://arxiv.org/pdf/2109.13986.pdf)
_AAAI._
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart
van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards aicomplete question answering: A set of prerequisite toy tasks. arXiv preprint
_arXiv:1502.05698._
[Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Baker Grosse. 2021. {INT}:](https://openreview.net/forum?id=O6LPudowNQm)
[An inequality benchmark for evaluating generalization in theorem proving. In](https://openreview.net/forum?id=O6LPudowNQm)
_International Conference on Learning Representations._
Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham
[Neubig. 2018. Learning to mine aligned code and natural language pairs from](https://doi.org/https://doi.org/10.1145/3196398.3196408)
[stack overflow. In International Conference on Mining Software Repositories,](https://doi.org/https://doi.org/10.1145/3196398.3196408)
MSR, pages 476–486. ACM.
20
-----
Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik,
and Silvio Savarese. 2018. Taskonomy: Disentangling task transfer learning. In
_Proceedings of the IEEE conference on computer vision and pattern recognition,_
pages 3712–3722.
Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth.
Do language embeddings capture scales?
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. 2021. Minif2f: a crosssystem benchmark for formal olympiad-level mathematics. arXiv preprint
_arXiv:2109.00110._
Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. “going on a vacation” takes longer than “going for a walk”: A study of temporal commonsense
understanding. In Proceedings of the 2019 Conference on Empirical Methods
_in Natural Language Processing and the 9th International Joint Conference_
_on Natural Language Processing (EMNLP-IJCNLP), pages 3363–3369._
21
-----
**Task: Basic Math**
**Problem: Before December, cus-**
tomers buy 1346 ear muffs from
the mall. During December, they
buy 6444, and there are none. In
all, how many ear muffs do the customers buy?
**Predicted Answer: 1346.0 **
**Generated Program:**
answer = 1346.0 + 6444.0
**print** (answer)
# Result ==> 7790.0
**Gold Answer: 7790.0 **
**Task: Muldiv**
**Problem: Tickets to the school**
play cost 6 for students and 8 for
adults. If 20 students and 12 adults
bought tickets, how many dollars’
worth of tickets were sold?
**Predicted Answer: 48 **
**Generated Program:**
a= 20 *6
b= 12 *8
c=a+b
answer =c
**print** (answer)
# Result ==> 216.0
**Gold Answer: 216 **
Figure 4: Examples with Bh¯askara on Basic Math and Muldiv.
### A Qualitative Examples
Figures 4 and 5 give examples of input-output behavior of Bhaskara¯ . Figure 6
gives an example of a non-compiling output program.
### B Dataset Collection
Tables 12-15 give examples and datasets from each task for each category.
Category Examples Datasets
Math Table 8 Table 12
Language Table 9 Table 13
Format Table 10 Table 14
Knowledge Table 11 Table 15
Table 7: Examples and datasets meta-table.
**B.1** **Expert annotation**
In the worker qualification process, we ask each worker to annotate 30 questions.
We manually verify each annotation and qualify those whose Python annotations
are satisfactory. We also provide feedback such as "write simpler programs, use
representative variable names instead of just letters, add comments wherever
possible" to annotators after the worker qualification process. We instruct
22
-----
**Problem: Determine the KL Di-** **Problem: Find the determinant**
039, [0, 5, 0],
**Task: Linear Algebra**
**Problem: Find the determinant**
of the matrix
0 2 3
_, 0.19_ _−_ _−_
0 5 0 _._
1 3 2
05, 0.072.
**Predicted Answer: −8 **
**Generated Program:**
**import numpy as np**
a = np. array ([
[0, -2, -3],
[0, 5, 0],
[1, 3, 2]])
**print** (np.linalg. det (a))
# Result ==> 15.0
**Gold Answer: 15 **
**Task: Statistics**
**Problem: Determine the KL Di-**
vergence between the empirical distributions
0.149, 0.039, 0.099, 0.09, 0.203, 0.077, 0.
and
0.158, 0.256, 0.042, 0.008, 0.064, 0.051, 0
**Predicted Answer: 0.36 **
**Generated Program:**
**from scipy.stats import**
entropy
distribution1 = [ 0.149, 0.
039,
0.099, 0.09, 0.203,
0.077, 0.039, 0.19 ]
distribution2 = [ 0.158, 0.
256,
0.042, 0.008, 0.064,
0.051, 0.05, 0.072 ]
**print** (entropy(distribution1,
qk =
distribution2)
)
# Result ==> 0.5122 ...
**Gold Answer: 0.5122... **
Figure 5: Examples with Bh¯askara on Statistics and Linear Algebra.
annotators to use a minimal set of Python libraries, and we ask them to record
the Python libraries they use in a common document. We find that the annotators
could get the task done just by using the sympy and the datetime libraries. We
also ask annotators to report any bugs in answer annotation, which they report
for a small number of questions; we subsequently fix those.
We give 10 sample question annotations to annotators as illustrative examples
which vary in structure, length, format, underlying reasoning skill, etc. We pay
20 dollars per hour up to 20 hours per week as compensation for the data
annotation work.
**L¯ila-Robust** To create the L¯ila-Robust dataset, we first define a set of 9
templates, consisting of 3 variation styles defined in SVAMP (Patel et al., 2021)
as well as 6 novel templates of our own. We refer to the SVAMP templates as
SVAMP-COO, SVAMP-COP, and SVAMP-IU, which correspond to changing the
order of objects, changing the order of phrases, and adding irrelevant, unhelpful
information to the problem statement, respectively. Our novel templates are
23
-----
**Model: Codex**
**Task: Statistics**
**Problem: Simplify the expression**
(9x[2] +3x +7)+(3x[2] +7x[5] +2). Express your answer as a polynomial
with terms arranged in decreasing
order of degree.
**Generated Program:**
**from sympy import Poly**
p = Poly(9 - (x ** 2) + 3 - x + 7
+
3 - (x ** 2) + 7 - (x ** 5)
+ 2)
answer = p.as_expr ()
**print** (answer)
# ==> NameError (x is not
defined)
**Gold Answer: 7x[5]** +12x[2] +3x+9
**Model: Bh¯askara**
**Task: Muldiv**
**Problem: Jenny collects cans and**
bottles to take down to the recycling center. Each bottle weighs
6 ounces and each can weighs 2
ounces. Jenny can carry a total of
100 ounces. She collects 20 cans
and as many bottles as she can
carry. If she gets paid 10 cents per
bottle and 3 cents per can, how
much money does she make (in
cents)?
**Generated Program:**
a= 20 *6
b=a*2
c=b*3
d=c* 10
e=d*3
f=e*3
g=f+g
answer =g
**print** (answer)
# ==> NameError (g is not
defined)
**Gold Answer: 216**
Figure 6: NameErrors in Codex and Bh¯askara.
named ROBUST-IR, ROBUST-AP, ROBUST-ADJ, ROBUST-Q, ROBUST-RQ,
and ROBUST-RM. ROBUST-IR refers to adding information that is unhelpful
for solving the question but may be related to the context of the problem.
ROBUST-AP refers to increasing problem verbosity by turning active speech to
passive speech. ROBUST-ADJ refers to increasing problem verbosity by adding
adjectives or adverbs. ROBUST-Q indicates turning a problem statement into a
question, in the style of a conversation with a student. ROBUST-RQ indicates
removing question words in a problem and turning it into a statement; it is
roughly the inverse of ROBUST-Q. Finally, ROBUST-RM refers to the removal
of mathematics terms that are implicitly defined. Examples of each template
are found in Table 16.
For our crowdsourcing pipeline, we provide each Amazon Mechanical Turk
worker with 10 questions split from 20 questions sampled from each dataset. We
run a separate job for each of our 9 templates. In particular, each HIT contains
the 10 split questions from the original datasets, alongside the problem solution.
Workers are asked to submit an augmentation for each question according to
the style of the template assigned to each job. Thus, we run 9 separate jobs
24
-----
**Task** **Question category** **Example**
Basic math: addition, subtraction, fact **Original: If Jimbo is 484 feet away from a beetle and quarter of 827 feet away**
Task 1 based QA etc. from a grasshopper, which insect will seem bigger to him?? "Option 1": beetle,
"Option 2" :grasshopper Answer: Option 2
Muldiv: multiplication, division along **Question: Mrs. Hilt bought 2 pizzas. Each pizza had 8 slices. So, she had __**
Task 2
with addition, subtraction etc. total slices of pizza. Answer: 16
Number theory: prime, power, negation, **Question: How many numbers are divisible by both 2 and 3 up to 300?**
Task 3
modulus and other operators etc. **Answer: 50**
Algebra: equations, functions, polyno- **Question: The sum of the three smallest of four consecutive integers is 30 more**
Task 4
mials, series etc. than the largest integer. What are the four consecutive integers ? Answer:
15.0
Geometry: triangles, polygons, 3D **Question: A hall is 6 meters long and 6 meters wide. If the sum of the areas**
Task 5 structures etc. of the floor and the ceiling is equal to the sum of the areas of four walls, what
is the volume of the hall (in cubic meters)? Answer: 108
Statistics: binomial, divergence, mean, **Question: There are 11 boys and 10 girls in a class. If three students are**
Task 6
median, mode, variance etc. selected at random, in how many ways that 3 girl and 2 boys are selected?
**Answer: 6600**
Calculus: differentiation, integration, **Question: Let g(y) = 9*y**4 + 25*y**2 + 6. Let s(d) = 1 - d**4. Let x(t) =**
Task 7
gradient, series expansion etc. -g(t) + 6*s(t). What is the third derivative of x(f) wrt f? Answer: -360*f
Linear algebra: vectors, dot products, **Question: Problem: Convert the following matrix to reduced row echelon form:**
Task 8 Eigen vectors, matrices etc. _−75_ _−−102_ _−210_ _−−74_ . Answer: 10 01 _−209[13]10_ _−698040[13]_
Advanced math: heuristics required **Question: Let f** (x) = 2[x]. Find _f_ (f (f (f (1)))). Answer: 256
Task 9 along with probability, statistics, or al
p
gebra, Olympiad level problems
Table 8: Example of each task in the math ability category of the L¯ila benchmark.
to obtain augmentations for all templates across all datasets. To familiarize
workers with the intended style of each template, we provide 3 demonstrative
augmentations within the instructions of each HIT, as summarized in Table
16. We restrict our crowdsourcing pipeline to workers that had above a 98%
acceptance rate with over 1000 completed HITs. We provide workers with an
upper bound of 1 hour to complete each HIT but specify in the instructions that
each HIT should feasible be completed in 10 minutes. Based on minimum wage
policies and under the assumption that workers follow the 10-minute completion
guideline, we accordingly compensate $3 per HIT. Finally, to ensure dataset
quality of generations via the Amazon Mechanical Turk Fort et al. (2011); Adda
et al. (2011), we manually assess the worker augmentations produced for each
template.
### C Dataset Statistics
Figure 8 gives relatives sizes of tasks within each category. Figure 9 illustrates
the unigram frequencies in L¯ila, where larger words indicate higher frequency.
Table 17 gives comprehensive statistics on each task. Table 19 cites each
component dataset of L¯ila.
### D Additional Results
Table 18 gives the unaggregated performance of each model on each dataset in
L¯ila (some datasets are split across tasks).
25
-----
**Task** **Question category** **Example**
Task 10 No language Compute the median of 4√2, −6, 3e, 3, −6, − _√[14]π_ _, 6. Answer: 3_
Simple language **Question: Joan had 9 blue balloons, but Sally popped 5 of them. Jessica has**
Task 11
2 blue balloons. They have __ blue balloons now. Answer: 6
Complex language: involving co-reference
resolution etc., multi-sentence language, adversarial language: containing tricky words
etc., often created adversarially
**Question: Passage: According to the 2011 National Household Survey, 89.3%**
of Markhams residents are Canadian citizens, and about 14.5% of residents are
recent immigrants (from 2001 to 2011). The racial make up of Markham is;
East Asian (39.7%), White Canadian (27.5%), South Asian Canadian (19.1%),
Southeast Asian (3.9%), Black Canadians (3.2%), West Asian & Arab Canadians
(3.2%), Latin American Canadian (0.5%), Aboriginal peoples in Canada (0.2%),
and 1.9% of the population is multiracial while the rest of the population (0.7%)
is of another group. Markham has the highest visible minority population of
any major Canadian city (over 100,000 residents) at 72.3%, and is one of eight
major cities with no majority racial group. Question: How many percent of
people were not white? Answer: 72.5
Task 12
Table 9: Example of each task in the language complexity category of the L¯ila
benchmark.
**Task** **Question category** **Example**
Task 13 Fill in the blank **Question: Delphinium has _ florets or they are full of holes. Answer: no**
Task 14 Generative question answering **Question: Calculate the remainder when 160 is divided by 125. Answer: 35**
Multiple choice question answering (MCQ) **Question: The fish glided with a speed of 8 m/s through the water and 5 m/s**
Task 15 through the jello because the __ is smoother? "Option 1": jello, "Option 2":
water. Answer: Option 2
Natural language inference (NLI) **Question: "statement 1": Alyssa picked 42.0 pears from the pear tree and**
Task 16 Nancy sold 17.0 of the pears, "statement 2" :25.0 pears were left, "options: "
Entailment or contradiction? Answer: Entailment
Reading comprehension (RC) **Question: Passage: A late game rally by Washington led them to the Eagles’**
26 yard line. A shot to the end zone by Robert Griffin III would be intercepted
by Brandon Boykin, clinching an Eagles win. The Eagles would move to 6-5.
This is the Eagles first win at Lincoln Financial Field since Week 4 of the 2012
season, because prior to this game, the Eagles had never won a game in their
home stadium in 414 days since that same week, snapping a 10-game losing
streak at home with this win. Question: How many more wins than losses did
the Eagles have after this game? Answer: 1
Task 17
Table 10: Example of each task in the question formatcategory of the L¯ila
benchmark.
**Task** **Question category** **Example**
No external knowledge: only mathemati- **Question: If there are 7 bottle caps in a box and Linda puts 7 more bottle**
Task 18
cal commonsense knowledge required caps inside, how many bottle caps are in the box? Answer: 14
Commonsense: temporal commonsense
knowledge (e.g., people usually play basketball for a few hours and not days),
numerical commonsense knowledge (e.g.
birds has 2 legs)
**Question: Outside temple, there is a shop which charges 12 dollars for each**
object. Please note that one shoe is counted as an object. Same is true for socks
and mobiles. Paisley went to temple with both parents. All of them kept their
shoes, socks and mobiles in the shop. How much they have to pay? Answer:
180
Task 19
Math formulas: algebra, geometry, prob- **Question: Simplify -3*(sqrt(1700) - (sqrt(1700) + (3 + sqrt(1700))*-6)) + -3.**
Task 20
ability etc. **Answer: -180*sqrt(17) - 57**
Science formulas: physics, chemistry etc. **Question: Find the number of moles of H2O formed on combining 2 moles of**
Task 21
NaOH and 2 moles of HCl. Answer: 2
Computer science knowledge: data struc- **Question: Apply functions ‘mean’ and ‘std’ to each column in dataframe ‘df’**
Task 22
ture algorithms like merge sort etc. **Answer: df.groupby(lambda idx: 0).agg([’mean’, ’std’])**
Real-world knowledge: COVID mod- **Question: Our physics club has 20 members, among which we have 3 officers:**
elling, climate modelling etc. President, Vice President, and Treasurer. However, one member, Alex, hates
another member, Bob. How many ways can we fill the offices if Alex refuses to
serve as an officer if Bob is also an officer? (No person is allowed to hold more
than one office.) Answer: 6732
Task 23
Table 11: Example of each task in the background knowledgecategory of the L¯ila
benchmark.
26
-----
**Task** **Math category** **IID** **OOD**
addsub.json MCTaco_event_duration_structured.json
Numersense_structured.json NumGLUE_Task3.json
MCTaco_stationarity_structured.json
MCTaco_frequency_structured.json
MCTaco_event_typical_time_structured.json
MCTaco_event_ordering_structured.json
NumGLUE_Task7.json
singleop.json svamp_structured.json
multiarith.json NumGLUE_Task4.json
asdiv.json
GSM8k_structured.json
NumGLUE_Task1.json
NumGLUE_Task2.json
deepmind_mathematics_muldiv.json
mathqa_physics.json mbpp_structured.json
APPS_structured.json mathqa_other.json
mathqa_gain.json
amps_number_theory.json
mathqa_general.json
conala_structured.json
NumGLUE_Task5.json
deepmind_mathematics_numbertheory.json
singleq.json draw_structured.json
simuleq.json dolphin_structured.json
amps_algebra.json
NumGLUE_Task8.json
deepmind_mathematics_algebra.json
Task 1 Basic math
Task 2 Muldiv
Task 8 Number theory
Task 4 Algebra
Task 5 Geometry amps_geometry.json mathqa_geometry.json
Task 6 Statistics amps_counting_and_stats.json mathqa_probability.json
amps_calculus.json deepmind_mathematics_calculus.json
Task 7 Calculus deepmind_mathematics_basicmath.json
Task 8 Linear algebra amps_linear_algebra.json
Task 9 Advanced math MATH_crowdsourced.json
Table 12: Raw datasets used to create different tasks in L¯ila across different
math categories.
**Question: A gardener is going to plant 2 red rosebushes and 2 white rosebushes. If**
the gardener is to select each of the bushes at random, one at a time, and plant them
in a row, what is the probability that the 2 rosebushes in the middle of the row will
be the red rosebushes?
**Options: {A:1/12, B:1/6, C:1/5, D:1/3, E:1/2}**
**Answer: B**
**Explanation: We are asked to find the probability of one particular pattern: wrrw.**
Total # of ways a gardener can plant these four bushes is the # of permutations of 4
letters wwrr, out of which 2 w’ s and 2 r’ s are identical, so 4 ! / 2 ! 2 ! = 6 ; so p =
1 / 6. Answer: B.
**Program: import scipy**
n0 = 2.0
n1 = 2.0
n2 = 2.0
t0 = n0 + n0
t1 = scipy.special.comb(t0, n0)
answer = 1.0 / t1
Figure 7: An example of instruction annotation.
27
-----
**ID** **Language** **cate-** **IID** **OOD**
**gory**
amps_number_theory.json amps_algebra.json
amps_counting_and_stats.json deepmind_mathematics_calculus.json
amps_calculus.json
amps_linear_algebra.json
deepmind_mathematics_muldiv.json
deepmind_mathematics_numbertheory.json
deepmind_mathematics_algebra.json
deepmind_mathematics_basicmath.json
addsub.json MCTaco_frequency_structured.json
Numersense_structured.json NumGLUE_Task1.json
MCTaco_stationarity_structured.json mathqa_general.json
MCTaco_event_typical_time_structured.json NumGLUE_Task4.json
MCTaco_event_ordering_structured.json
MCTaco_event_duration_structured.json
singleop.json
multiarith.json
asdiv.json
GSM8k_structured.json
APPS_structured.json
mathqa_gain.json
mathqa_other.json
singleq.json
simuleq.json
NumGLUE_Task8.json
draw_structured.json
dolphin_structured.json
mathqa_probability.json
mathqa_physics.json mbpp_structured.json
APPS_structured.json mathqa_other.json
mathqa_gain.json
amps_number_theory.json
mathqa_general.json
conala_structured.json
NumGLUE_Task5.json
deepmind_mathematics_numbertheory.json
Task 10 No language
Task 11 Simple language
Task 12 Complex language
Table 13: Raw datasets used to create different tasks in L¯ila across different
language categories.
28
-----
**ID** **Format category** **IID** **OOD**
Task 13 Fill in the blank NumGLUE_Task4.json Numersense_structured.json
amps_number_theory.json svamp_structured.json
amps_counting_and_stats.json mathqa_geometry.json
amps_linear_algebra.json amps_calculus.json
amps_algebra.json singleq.json
deepmind_mathematics_calculus.json NumGLUE_Task2.json
addsub.json mbpp_structured.json
singleop.json deepmind_mathematics_numbertheory.json
multiarith.json
asdiv.json
GSM8k_structured.json
APPS_structured.json
mathqa_gain.json
mathqa_other.json
simuleq.json
NumGLUE_Task8.json
draw_structured.json
dolphin_structured.json
mathqa_probability.json
MCTaco_frequency_structured.json
NumGLUE_Task1.json
mathqa_general.json
mathqa_physics.json
conala_structured.json
amps_geometry.json
MATH_crowdsourced.json
deepmind_mathematics_calculus.json
deepmind_mathematics_muldiv.json
deepmind_mathematics_algebra.json
deepmind_mathematics_basicmath.json
NumGLUE_Task3.json MCTaco_event_typical_time_structured.json
MCTaco_stationarity_structured.json
MCTaco_event_ordering_structured.json
MCTaco_event_duration_structured.json
Task 14 Generative QA
Task 15 MCQ
Task 16 NLI NumGLUE_Task5.json
Task 17 RC mathqa_physics.json mbpp_structured.json
Table 14: Raw datasets used to create different tasks in L¯ila across different
format categories.
29
-----
**ID** **Knowledge** **cate-** **IID** **OOD**
**gory**
addsub.json NumGLUE_Task4.json
singleop.json GSM8k_structured.json
multiarith.json svamp_structured.json
asdiv.json NumGLUE_Task7.json
simuleq.json
NumGLUE_Task8.json
draw_structured.json
dolphin_structured.json
NumGLUE_Task5.json
deepmind_mathematics_muldiv.json
Numersense_structured.json NumGLUE_Task1.json
MCTaco_frequency_structured.json MCTaco_event_ordering_structured.json
NumGLUE_Task3.json
MCTaco_stationarity_structured.json
MCTaco_event_duration_structured.json
MCTaco_event_typical_time_structured.json
amps_number_theory.json amps_counting_and_stats.json
amps_linear_algebra.json mathqa_general.json
amps_algebra.json amps_calculus.json
deepmind_mathematics_calculus.json
mathqa_probability.json
singleq.json
mathqa_gain.json
mathqa_other.json
deepmind_mathematics_algebra.json
deepmind_mathematics_basicmath.json
deepmind_mathematics_calculus.json
deepmind_mathematics_numbertheory.json
Task 18 No external knowledge
Task 19 Commonsense
Task 20 Math formulas
amps_geometry.json
Task 21 Science formulas NumGLUE_Task2.json
mathqa_physics.json
Computer science APPS_structured.json mathqa_geometry.json
Task 22
knowledge conala_structured.json
Task 23 Real-world knowledge MATH_crowdsourced.json mbpp_structured.json
Table 15: Raw datasets used to create different tasks in L¯ila across different
knowledge categories.
30
-----
**Template Name** **Variation** **Example**
SVAMP-COO Change the order of objects **Question: Allen bought 20 stamps at the post office in 37 cents and 20 cents**
denominations . If the total cost of the stamps was $ 7.06, how many 37 cents
stamps did Allen buy ?
**Variation: Allen bought 20 stamps at the post office in 20 cents and 37 cents**
denominations . If the total cost of the stamps was $ 7.06, how many 37 cents
stamps did Allen buy ?
SVAMP-COP Change the order of phrases **Question: One pipe can fill a tank in 5 hours and another pipe can fill the**
same tank in 4 hours . A drainpipe can empty the full content of the tank in 20
hours . With all the three pipes open, how long will it take to fill the tank ?
**Variation: A drainpipe can empty the full content of a tank in 20 hours . One**
pipe can fill the tank in 4 hours and another pipe can fill the same tank in 5
hours . With all the three pipes open, how long will it take to fill the tank
with all the three pipes open ?
SVAMP-IU Add irrelevant, unhelpful information **Question: the area of an isosceles trapezoid with sides of length 5 and bases**
of length 7 and 13 is ?
**Variation: monkeys and apes are both primates, which means they’re both**
part of the human family tree . the area of an isosceles trapezoid with sides of
length 5 and bases of length 7 and 13 is ?
ROBUST-IR Add unhelpful, but contextually related **Question: Tom is 15 years younger than alice . Ten years ago, Alice was 4**
information times as old as Tom was then . How old is each now ?
**Variation: Tom is 15 years younger than alice . Ten years ago, Alice was 4**
times as old as Tom was then . Alice really likes pinapple pizza. How old is
each now ?
ROBUST-AP Turn active into passive speech to in- **Question: Hay’s Linens sells hand towels in sets of 17 and bath towels in**
crease problem verbosity sets of 6. If the store sold the same number of each this morning, what is the
smallest number of each type of towel that the store must have sold?
**Variation: Hand towels are sold by Hay’s Linens in sets of 17 and bath towels**
are sold in sets of 6. If the same number of each were sold by the store this
morning, what is the smallest number of each type of towel that the store must
have sold?
ROBUST-ADJ Add adjectives and adverbs to increase **Question: ThereTea leaves exposed to oxygen for up to _ hours become black**
problem verbosity tea.
**Variation: Black tea leaves continuously exposed to oxygen for up to _ hours**
become a very rich black tea.
ROBUST-Q Turn a task statement into a question **Question: Product of -7 and -1469.125.**
**Variation: What is the product of -7 and -1469.125?**
ROBUST-RQ Turn a question into a task statement **Question: Problem: If the product of 5 and a number is increased by 4, the**
result is 19. What is the number?
**Variation: Increasing the product of 5 and a number by 4 results is 19. Find**
the number.
ROBUST-RM Remove explicitly mathematical terms **Problem: Find the arclength of the function f** (x) = 2[√]x on the interval x = 2
that are implicitly defined to x = 8
**Variation: Find the arclength of f** (x) = 2[√]x on [2, 8]
Table 16: Example for each template provided to MTurk workers to produce
L¯ila-Robust
**ID** **Category** **Questions** **Unique questions** **Question length** **Programs** **Unique programs** **Program length**
Task 1 Basic math 31,052 31,032 43.1 31,052 7,066 13.3
Task 2 Muldiv 16,021 15,936 26.9 16,021 15,279 8.2
Task 3 Number theory 44,760 44,183 41.3 269,232 261,865 33.2
Task 4 Algebra 15,882 15,615 19.3 16,364 15,986 12.7
Task 5 Geometry 3,190 3,149 36.1 3,190 3,035 28.7
Task 6 Counting and statistics 6,423 6,384 39.7 6,423 6,335 31.5
Task 7 Calculus 4,493 4,202 21.2 4,493 4,170 40.6
Task 8 Linear algebra 11,248 11,204 32.4 11,248 11,204 23.0
Task 9 Advanced math 746 746 21.2 746 745 27.3
Task 10 No language 41,191 40,551 21.2 42,466 41,794 40.6
Task 11 Simple language 66,505 66,172 26.9 290,184 258,839 8.2
Task 12 Complex language 26,119 25,728 36.1 26,119 25,052 28.7
Task 13 Fill in the blank 11,634 11,615 11.0 11,634 997 3.0
Task 14 Generative QA 102,493 101,239 14.7 327,447 314,652 16.0
Task 15 MCQ 9,989 9,989 28.3 9,989 470 3.0
Task 16 NLI 6,326 6,325 50.8 6,326 6,243 25.8
Task 17 RC 3,642 3,552 182.5 3,642 3,592 10.4
Task 18 No external knowledge 28,115 27,964 50.8 28,115 27,117 25.8
Task 19 Commonsense 24,677 24,658 30.9 24,677 823 3.0
Task 20 Math formulas 57,841 56,947 19.1 59,116 57,019 25.5
Task 21 Science formulas 10,505 10,319 36.1 10,505 9,764 28.7
Task 22 Complex knowledge 12,200 12,086 14.5 235,879 230,486 24.2
Task 23 Real-world knowledge 746 746 21.2 746 745 27.3
Table 17: Main statistics of L¯ila across the total of 23 tasks.
31
-----
(c) Format categories.
(d) Knowledge categories.
8.4%0.6% Basic math
3.4% 23.2% Muldiv 19.5%
4.8% Number theory 30.8%
Algebra No language
2.4% 12.0% Geometry Simple language
Statistics Complex language
11.9% Calculus
Linear algebra
33.4% Advanced math 49.7%
(a) Math ability categories. (b) Language categories.
4.7%2.7% 8.7% 9.1%[0.6%] 21.0%
7.4% Fill in the blank 7.8% No external knowledge
Commonsense
Generative QA
Math formulas
MCQ
Science formulas
NLI 18.4% Complex knowledge
RC
Real-world knowledge
43.1%
76.4%
Figure 8: Task diversity in L¯ila across math, language, format, and knowledge
categories.
Figure 9: The word cloud distribution of annotated programs in the L¯ila dataset.
32
-----
**ID** **Dataset** **GPT-3** **Neo-A** **Neo-P** **Codex**
1 addsub 0.910 0.116 0.797 **0.950**
2 amps_algebra 0.116 0.100 **0.902** 0.655
3 amps_calculus 0.192 0.168 **0.922** 0.860
4 amps_counting_and_stats 0.183 0.117 **0.958** 0.650
5 amps_geometry **0.283** 0.263 0.074 0.000
6 amps_linear_algebra 0.127 0.235 **0.815** 0.692
7 amps_number_theory 0.273 0.026 0.875 **1.000**
8 APPS_structured 0.167 0.154 0.134 **0.459**
9 asdiv **0.737** 0.166 0.092 0.022
10 conala_structured 0.356 0.329 0.329 **0.391**
11 deepmind_mathematics_algebra 0.202 0.258 0.847 **0.910**
12 deepmind_mathematics_basicmath 0.270 0.125 0.614 **1.000**
13 deepmind_mathematics_calculus 0.208 0.026 0.152 **0.884**
14 deepmind_mathematics_muldiv 0.160 0.034 0.909 **1.000**
15 deepmind_mathematics_numbertheory 0.296 0.462 0.538 **0.710**
16 dolphin_t2_final 0.170 0.027 0.006 **0.812**
17 draw_structured 0.090 0.034 0.005 **0.210**
18 GSM8k_structured 0.110 0.060 0.139 **0.350**
19 MATH_crowdsourced 0.150 0.013 0.074 **0.472**
20 mathqa_gain 0.134 0.054 **0.339** 0.270
21 mathqa_general 0.110 0.073 **0.193** 0.120
22 mathqa_geometry 0.120 0.002 0.000 **0.250**
23 mathqa_other 0.180 0.043 0.011 **0.280**
24 mathqa_physics 0.120 0.087 **0.429** 0.210
25 mathqa_probability **0.210** 0.003 0.000 0.200
26 mbpp_structured 0.128 0.175 0.164 **0.408**
27 MCTaco_event_duration_structured **0.800** 0.773 0.773 0.710
28 MCTaco_event_ordering_structured 0.860 0.831 0.831 **0.890**
29 MCTaco_event_typical_time_structured 0.870 **0.881** **0.881** 0.870
30 MCTaco_frequency_structured **0.890** 0.862 0.862 0.790
31 MCTaco_stationarity_structured 0.710 **0.758** **0.758** 0.670
32 multiarith 0.360 0.143 0.921 **0.990**
33 Numersense_structured 0.620 0.495 0.495 **0.660**
34 NumGLUE_Type_1 0.535 0.108 0.083 **0.740**
35 NumGLUE_Type_2 0.512 0.285 0.646 **0.735**
36 NumGLUE_Type_3 **0.835** 0.004 0.001 0.815
37 NumGLUE_Type_4 0.710 0.076 0.208 **0.790**
38 NumGLUE_Type_5 0.460 0.200 0.305 **0.615**
39 NumGLUE_Type_7 0.500 0.516 **0.854** 0.710
40 NumGLUE_Type_8 0.420 0.082 0.257 **0.610**
41 simuleq 0.120 0.074 0.010 **0.170**
42 singleop 0.940 0.347 0.611 **1.000**
43 singleq **0.830** 0.143 0.474 0.670
44 svamp_structured 0.620 0.085 0.060 **0.790**
Average F1 score 0.400 0.223 0.440 **0.613**
Table 18: Evaluation results of baselines across different single datasets. On
most datasets, Codex performs best. Model names: GPT-3: the few-shot
175B GPT-3 model; GPT-Neo-A: the fine-tuned 2.7B GPT-3 model where the
prediction output is an answer; GPT-Neo-P: the fine-tuned 2.7B GPT-3 model
33
where the prediction output is a program; Codex: the few-shot Codex model
where the prediction output is a program.
-----
**ID** **Dataset** **References**
1 addsub Hosseini et al. (2014)
2 amps Hendrycks et al. (2021b)
3 APPS Hendrycks et al. (2021a)
4 asdiv Miao et al. (2020b)
5 conala Yin et al. (2018)
6 mathematics Saxton et al. (2019)
7 dolphin Huang et al. (2016)
8 draw Upadhyay and Chang (2015)
9 GSM8k Cobbe et al. (2021)
10 MATH Hendrycks et al. (2021b)
11 mathqa Amini et al. (2019)
12 mbpp Austin et al. (2021)
13 MCTaco Zhou et al. (2019)
14 multiarith Roy and Roth (2015)
15 Numersense Lin et al. (2020)
16 NumGLUE Mishra et al. (2022c); Dua et al. (2019b); Ravichander et al. (2019); Kushman et al. (2014); Tafjord
et al. (2019); Roy and Roth (2018, 2017); KoncelKedziorski et al. (2016, 2015)
17 simuleq Kushman et al. (2014)
18 singleop Roy et al. (2015)
19 singleq Koncel-Kedziorski et al. (2015)
20 svamp Patel et al. (2021)
Table 19: List of source datasets and corresponding references used in constructing L¯ila.
34
-----
| [
"Sean, Welleck",
"Pan, Lu",
"Swaroop, Mishra",
"Matthew, Finlayson",
"Leonard, Tang",
"Tanmay, Rajpurohit",
"Chitta, Baral",
"Oyvind, Tafjord",
"Peter, Clark",
"Ashish, Sabharwal"
] | 2022-01-01T00:00:00 | EMNLP 2022 Main | true | 103 | 18 | null | https://arxiv.org/abs/2210.17517 | https://arxiv.org/abs/2210.17517 | https://www.semanticscholar.org/paper/52fb239ea5cea1e9a2636f8f7922c8ede3e50ba7 |
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems | Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions. Motivated by the intuition about how human generates the equations given the problem texts, this paper presents a neural approach to automatically solve math word problems by operating symbols according to their semantic meanings in texts. This paper views the process of generating equation as a bridge between the semantic world and the symbolic world, where the proposed neural math solver is based on an encoder-decoder framework. In the proposed model, the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a dataset Math23K, and our model significantly outperforms both the state-of-the-art single model and the best non-retrieval-based model over about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic worlds from math word problems. | The proposed neural math solver is based on an encoder-decoder framework, where the encoder is designed to understand the semantics of problems, and the decoder focuses on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. | # Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
**Ting-Rui Chiang** **Yun-Nung Chen**
National Taiwan University, Taipei, Taiwan
[email protected] [email protected]
**Abstract**
Solving math word problems is a challenging task that requires accurate natural language
understanding to bridge natural language texts
and math expressions. Motivated by the intuition about how human generates the equations
given the problem texts, this paper presents a
neural approach to automatically solve math
word problems by operating symbols according to their semantic meanings in texts. This
paper views the process of generating equations as a bridge between the semantic world
and the symbolic world, where the proposed
neural math solver is based on an encoderdecoder framework. In the proposed model,
the encoder is designed to understand the semantics of problems, and the decoder focuses
on tracking semantic meanings of the generated symbols and then deciding which symbol to generate next. The preliminary experiments are conducted in a benchmark dataset
Math23K, and our model significantly outperforms both the state-of-the-art single model
and the best non-retrieval-based model over
about 10% accuracy, demonstrating the effectiveness of bridging the symbolic and semantic
worlds from math word problems.[1]
**1** **Introduction**
Automatically solving math word problems has
been an interesting research topic and also been
viewed as a way of evaluating machines’ ability (Mandal and Naskar, 2019). For human, writing down an equation that solves a math word
problem requires the ability of reading comprehension, reasoning, and sometimes real world understanding. Specifically, to solve a math word
problem, we first need to know the goal of
the given problem, then understand the semantic
1The source code is available at https://github.
com/MiuLab/E2EMathSolver.
meaning of each numerical number in the problem, perform reasoning based on the comprehension in the previous step, and finally decide what
to write in the equation.
Most prior work about solving math word problems relied on hand-crafted features, which required more human knowledge. Because those
features are often in the lexical level, it is not
clear whether machines really understand the math
problems. Also, most prior work evaluated their
approaches on relatively small datasets, and the
capability of generalization is concerned.
This paper considers the reasoning procedure
when writing down the associated equation given a
problem. Figure 1 illustrates the problem solving
process. The illustration shows that human actually assigns the semantic meaning to each number
when manipulating symbols, including operands
(numbers) and operators (+ −×÷). Also, we believe that the semantic meaning of operands can
help us decide which operator to use. For example,
the summation of “price of one pen” and “number
_of pens Tom bought” is meaningless; therefore the_
addition would not be chosen.
Following the observation above, this paper
proposes a novel encoder decoder model, where
the encoder extracts semantic meanings of numbers in the problem, and the decoder is equipped
with a stack that facilitates tracking the semantic
meanings of operands. The contributions of this
paper are 4-fold:
_• This paper is the first work that models se-_
mantic meanings of operands and operators
for math word problems.
_• This paper proposes an end-to-end neural_
math solver with a novel decoding process
that utilizes the stack to generate associated
equations.
2656
-----
Figure 1: The solving process of the math word problem “Each notebok takes $0.5 and each pen takes $1. Tom has
_$10. How many notebook can he buy after buying 5 pens?” and the associated equation is x = (10 −_ 1 × 5) ÷ 0.5.
The associated equation is x = (10 − 1 × 5) ÷ 0.5.
_• This paper achieves the state-of-the-art per-_
formance on the large benchmark dataset
Math23K.
_• This paper is capable of providing interpreta-_
tion and reasoning for the math word problem
solving procedure.
**2** **Related Work**
There is a lot of prior work that utilized handcrafted features, such as POS tags, paths in the dependency trees, keywords, etc., to allow the model
to focus on the quantities in the problems (Kushman et al., 2014; Hosseini et al., 2014; Roy et al.,
2015; Roy and Roth, 2015; Koncel-Kedziorski
et al., 2015; Roy et al., 2016; Upadhyay et al.,
2016; Upadhyay and Chang, 2017; Roy and Roth,
2018; Wang et al., 2018). Recently, Mehta et al.;
Wang et al.; Ling et al. attempted at learning models without predefined features. Following the recent trend, the proposed end-to-end model in this
paper does not need any hand-crafted features.
Kushman et al. first extracted templates about
math expressions from the training answers, and
then trained models to select templates and map
quantities in the problem to the slots in the template. Such two-stage approach has been tried
and achieved good results (Upadhyay and Chang,
2017). The prior work highly relied on human knowledge, where they parsed problems into
equations by choosing the expression tree with the
highest score calculated by an operator classifier,
working on a hand-crafted “trigger list” containing quantities and noun phrases in the problem, or
utilizing features extracted from text spans (Roy
et al., 2015, 2016; Koncel-Kedziorski et al., 2015).
Shi et al. defined a Dolphin language to connect
math word problems and logical forms, and generated rules to parse math word problems. Upadhyay et al. parsed math word problems without
explicit equation annotations. Roy and Roth clas
sified math word problems into 4 types and used
rules to decide the operators accordingly. Wang
et al. trained the parser using reinforcement learning with hand-crafted features. Hosseini et al.
modeled the problem text as transition of world
states, and the equation is generated as the world
states changing. Our work uses a similar intuition,
but hand-crafted features are not required and our
model can be trained in an end-to-end manner.
Some end-to-end approaches have been proposed,
such as generating equations directly via a seq2seq
model (Wang et al., 2017). Ling et al. tried to
generate solutions along with its rationals with a
seq2seq-like model for better interpretability.
This paper belongs to the end-to-end category,
but different from the previous work; we are the
first approach that generates equations with stack
actions, which facilitate us to simulate the way
how human solves problems. Furthermore, the
proposed approach is the first model that is more
interpretable and provides reasoning steps without
the need of rational annotations.
**3** **End-to-End Neural Math Solver**
Our approach composes of two parts, an encoder
and a decoder, where the process of solving math
word problems is viewed as transforming multiple text spans from the problems into the target
information the problems ask for. In the example shown in Figure 1, all numbers in the problem
are attached with the associated semantics. Motivated by the observation, we design an encoder to
extract the semantic representation of each number in the problem text. Considering that human
usually manipulates those numbers and operators
(such as addition, subtraction, etc.) based on their
semantics for problem solving, a decoder is designed to construct the equation, where the semantics is aligned with the representations extracted
by the encoder. The idea of the proposed model
2657
-----
|Operation Selector|Operand Selector|
|---|---|
OP Return
_Each notebook takes $0.5 and each pen_
_takes_ _$1._ _Tom_ _has_ _$10._ _How_ _many_
_notebooks can he buy after buying 5 pens?_ Apply OP Semantic Transformer
Operation Selector Operand Selector
Attention Attention
Stack Stack
Tom has $ 10 5 pens ?
**Encoder** **Decoder**
Figure 2: The encoder-decoder model architecture of the proposed neural solver machine.
is to imitate the human reasoning process for solving math word problems. The model architecture
is illustrated in Figure 2.
**3.1** **Encoder**
The encoder aims to extract the semantic representation of each constant needed for solving problems. However, the needed constants may come
from either the given problem texts or domain
knowledge, so we detail these two procedures as
follows.
**3.1.1** **Constant Representation Extraction**
For each math word problem, we are given a passage consisting of words {wt[P] _[}]t[m]=1[, whose word]_
embeddings are {e[P]t _[}]t[m]=1[. The problem text in-]_
cludes some numbers, which we refer as constants.
The positions of constants in the problem text are
denoted as {pi}i[n]=1[. In order to capture the seman-]
tic representation of each constant by considering
its contexts, a bidirectional long short-term memory (BLSTM) is adopted as the encoder (Hochreiter and Schmidhuber, 1997):
_h[E]t_ _[, c]t[E]_ [= BLSTM(][h]t[E] 1[, c][E]t 1[, e][P]t [)][,] (1)
_−_ _−_
and then for the i-th constant in the problem, its
semantic representation e[c]i [is modeled by the cor-]
responding BLSTM output vector:
_e[c]i_ [=][ h]p[E]i[.] (2)
**3.1.2** **External Constant Leveraging**
External constants, including 1 and π, are leveraged, because they are required to solve a math
word problem, but not mentioned in the problem text. Due to their absence from the problem
text, we cannot extract their semantic meanings by
BLSTM in (2). Instead, we model their semantic
representation e[π], e[1] as parts of the model parameters. They are randomly initialized and are learned
during model training.
**3.2** **Decoder**
The decoder aims at constructing the equation that
can solve the given problem. We generate the
equation by applying stack actions on a stack to
mimic the way how human understands an equation. Human knows the semantic meaning of
each term in the equation, even compositing of
operands and operators like the term ”(10−1×5)”
in Figure 1. Then what operator to apply on a
pair operands can be chosen based on their semantic meanings accordingly. Hence we design our
model to generate the equation in a postfix manner: a operator is chosen base on the semantic representations of two operands the operator is going
to apply to. Note that the operands a operator can
apply to can be any results generated previously.
That is the reason why we use “stack” as our data
structure in order to keep track of the operands
a operator is going to apply to. The stack contains both symbolic and semantic representations
of operands, denoted as
_S = [(vl[S]t_ _[, e]l[S]t[)][,][ (][v]l[S]t_ 1[, e][S]lt 1[)][,][ · · ·][,][ (][v]1[S][, e]1[S][)]][,][ (3)]
_−_ _−_
where v[S] of each pair is the symbolic part, such
as x + 1, while e[S] is the semantic representation,
which is a vector. The components in the decoder
are shown in the right part of Figure 2, each of
which is detailed below.
**3.3** **Decoding State Features**
At each decoding step, decisions are made based
on features of the current state. At each step, fea
2658
-----
|Push 10 Push 5 Pu 𝑥 Encoder & 0.5 Push 𝒙 Generated 1 5 Var. 10 10 10 𝑥 𝑥 𝑥 5 SymPy 0.5 10 −1 × 5 ÷ 0.5 10 −1 × 5 10 −1 × 5 𝑥= 10 −1 × 5 ÷ 0.5 𝑥 𝑥 𝑥|𝑥 0.5 1 10 5|Col3|Push 10 Push 5 Pu Push 𝒙 5 10 10 𝑥 𝑥 𝑥|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
|||||||10 𝑥|||
|||10 −1 × 5 ÷ 0.5 𝑥|||0.5 10 −1 × 5 𝑥|||10 −1 × 5 𝑥|
||||||0.5||||
|||10 −1 × 5 ÷ 0.5|||10 −1 × 5|||10 −1 × 5|
|𝑥= 10 −1 × 5 ÷ 0.5||𝑥|||𝑥|||𝑥|
**Push 10** **Push 5** **Push 1**
𝑥
0.5 **Push 𝒙** 1
5 5
1
10 10 10
10
𝑥 𝑥 𝑥 𝑥
5
**Apply ×**
0.5 1 × 5
10 −1 × 5 ÷ 0.5 10 −1 × 5 10 −1 × 5 10
𝑥= 10 −1 × 5 ÷ 0.5 𝑥 𝑥 𝑥 𝑥
**Apply ÷** **Push 0.5** **Apply −**
**Apply =**
Figure 3: Illustration of the inference process. The purple round blocks denote the transformed semantics, while
the green ones are generated by the variable generator.
tures r[sa] and r[opd] are extracted to select a stack
action (section 3.3.2) and an operand to push (section 3.3.3). Specifically, the features are the gated
concatenation of following vectors:
_• h[D]t_ [is the output of an LSTM, which encodes]
the history of applied actions:
_h[D]t_ _[, c]t[D]_ [= LSTM(][h]t[D] 1[, c][D]t 1[,][ res][t][−][1][)][,] (4)
_−_ _−_
where rest 1 is the result from the previ_−_
ous stack action similar to the seq2seq model
(Sutskever et al., 2014). For example, if the
previous stack action ot 1 is “push”, then
_−_
rest 1 is the semantic representation pushed
_−_
into the stack. If the previous stack action
_ot_ 1 is to apply an operator, then rest 1 is
_−_ _⋄_ _−_
the semantic representation generated by f .
_⋄_
_• st is the stack status. It is crucial because_
some operators are only applicable to certain
combinations of operand semantics, which is
similar to the type system in programming
languages. For example, operating multiplication is applicable to the combination of
“quantity of an item” and “price of an item”,
while operating addition is not. Considering that all math operators supported here
(+, −, ×, ÷) are binary operators, the semantic representations of the stack’s top 2 elements at the time t − 1 are considered:
_st = [e[S]lt[;][ e]l[S]t[]][.]_ (5)
_• qt incorporates problem information in the_
decision. It is believed that the attention
mechanism (Luong et al., 2015) can effectively capture dependency for longer distance. Thus, the attention mechanism over
the encoding problem h[E]1 _[, h]2[E][,][ · · ·][ is adopted:]_
_qt = Attention(h[D]t_ _[,][ {][h]i[E][}]i[m]=1[)][,]_ (6)
where the attention function in this paper is
defined as a function with learnable parameters w, W, b:
Attention(u, {vi}i[m]=1[) =]
_αihi,_ (7)
_i=1_
X
exp(si)
_αi =_ _m_ (8)
_l=1_ [exp(][s][i][)] _[,]_
_si = wP[T]_ tanh(W _[T]_ [u; vi] + b). (9)
In order to model the dynamic features for different decoding steps, features in rt[sa] is gated as
follows:
_rt[sa]_ = [gt,[sa]1 _[·][ h]t[D][;][ g]t,[sa]2_ _[·][ s][t][;][ g]t,[sa]3_ _[·][ q][t][]][,]_ (10)
_gt[sa]_ = σ(W _[sa]_ _· [h[D]t_ [;][ s][t][;][ q][t][])][,] (11)
where σ is a sigmoid function and W _[sa]_ is a
learned gating parameter. _rt[opd]_ is defined similarly, but with a different learned gating parameter
_W_ _[opd]._
**3.3.1** **Stack Action Selector**
The stack action selector is to select an stack action at each decoding step (section 3.3.2) until the
unknowns are solved. The probability of choosing
action a at the decoding step t is calculated with
a network NN constituted of one hidden layer and
ReLU as the activation function:
_P_ (Yt |{yi}[t]i=1[−][1][,][ {][w][i][}]i[m]=1[)] (12)
= StackActionSelector(rt[sa][)]
= softmax(NN(rt[sa][))][,]
where rt[sa] is decoding state features as defined in
section 3.3.
2659
-----
**3.3.2** **Stack Actions**
The available stack actions are listed below:
_• Variable generation: The semantic repre-_
sentation of an unknown variable x is generated dynamically as the first action in the decoding process. Note that this procedure provides the flexibility of solving problems with
more than one unknown variables. The decoder module can decide how many unknown
variables are required to solve the problem,
and the semantic representation of the unknown variable is generated with an attention
mechanism:
_e[x]_ = Attention(h[D]t _[,][ {][h]i[E][}]i[m]=1[)][.]_ (13)
_• Push: This stack action pushes the operand_
chosen by the operand selector (section
3.3.3). Both the symbolic representation v
_∗_
and semantic representation e of the chosen
_∗_
operand would be pushed to the stack S in
(3). Then the stack state becomes
_S = [(v[S]_ _lt_ _[, e]l[S]t[)][,][ · · ·][,][ (][v]1[S][, e]1[S][)]][.]_
_∗_ _[, e][S]∗_ [)][,][ (][v][S]
(14)
_• Operator ⋄_ **application (⋄∈{+, −, ×, ÷}):**
One stack action pops two elements from the
top of the stack, which contains two pairs,
(vi, ei) and (vj, ej), and then the associated
symbolic operator, vk = vi ⋄ _vj, is recorded._
Also, a semantic transformation function f
_⋄_
for that operator is invoked, which generates
the semantic representation of vk by transforming semantic representations of vi and vj
to ek = f (ei, ej). Therefore, after an opera_⋄_
tor is applied to the stack specified in (3), the
stack state becomes
_S =[(vl[S]t_ _lt−1[, f][⋄][(][e][S]lt[, e]l[S]t−1[))][,]_ (15)
_[⋄]_ _[v][S]_
(vl[S]t 2[, e][S]lt 2[)][,][ · · ·][,][ (][v]1[S][, e]1[S][)]][.]
_−_ _−_
_• Equal application: When the equal appli-_
cation is chosen, it implies that an equation
is completed. This stack action pops 2 tuples from the stack, (vi, ei), (vj, ej), and then
_vi = vj is recorded._ If one of them is
an unknown variable, the problem is solved.
Therefore, after an OP is applied to the stack
specified in (3), the stack state becomes
_S = [(vl[S]t_ 2[, e][S]lt 2[)][,][ · · ·][,][ (][v]1[S][, e]1[S][)]][.] (16)
_−_ _−_
**3.3.3** **Operand Selector**
When the stack action selector has decided to
push an operand, the operand selector aims at
choosing which operand to push. The operand
candidates e include constants provided in the
problem text whose semantic representations are
_e[c]1[, e][c]2[,][ · · ·][, e]n[c]_ [, unknown variable whose semantic]
representation is e[x], and two external constants 1
and π whose semantic representations are e[1], e[π]:
_e = [e[c]1[, e]2[c][,][ · · ·][, e]n[c]_ _[, e][1][, e][π][, e][x][]][.]_ (17)
An operand has both symbolic and semantic representations, but the selection focuses on its semantic meaning; this procedure is the same as what
human does when solving math word problems.
Inspired by addressing mechanisms of neural
Turing machine (NTM) (Graves et al., 2014), the
probability of choosing the i-th operand candidate
is the attention weights of rt over the semantic representations of the operand candidates as in (8):
_P_ (Zt | {yi}[t]i=1[−][1][,][ {][w][i][}]i[m]=1[)] (18)
= OperandSelector(rt[opd])
= AttentionWeight(rt[opd], _ei_ _i=1_
_{_ _}[m]_ _[∪{][e][1][, e][π][, e][x][}][)][,]_
and rt[opd] is defined in section 3.3.
**3.3.4** **Semantic Transformer**
A semantic transformer is proposed to generate the
semantic representation of a new symbol resulted
from applying an operator, which provides the capability of interpretation and reasoning for the target task. The semantic transformer for an operator
_⋄∈{+, −, ×, ÷} transforms semantic represen-_
tations of two operands e1, e2 into
_f⋄(e1, e2) = tanh(U⋄ReLU(W⋄[e1; e2]+b⋄)+c⋄),_
(19)
where W _, U_ _, b_ _, c_ are model parameters. Se_⋄_ _⋄_ _⋄_ _⋄_
mantic transformers for different operators have
different parameters in order to model different
transformations.
**3.4** **Training**
Both stack action selection and operand selection
can be trained in a fully supervised way by giving
problems and associated ground truth equations.
Because our model generates the equation with
stack actions, the equation is first transformed into
its postfix representation. Let the postfix representation of the target equation be y1, · · · yt, · · ·, yT,
2660
-----
**Algorithm 1 Training and Inference**
**function SOLVEPROBLEM(problem text)**
_v ←_ ExtractConstants(problem text)
_▷v is a list of constants in the problem._
_h[E], h[D]0_ _[, c]0[D][, E][ ←]_ [Encoder(problem text)]
_S ←_ Stack()
ret, loss, t, equations ← padding, 0, 1, {}
**while not solvable(equations) do**
_hst[D]t_ _[←]S.[LSTM(]get top2()[h]t[D]−1[, c]t−1[,][ ret)]_
_←_
_h[E]_ Attention(h[D]t 1[, h][E][)]
_←_ _−_
_prtsa ←_ [hStackActionSelector([D]t _[, s]t[, h][E][]]_ _rt)_
_popd ←_ OperandSelector(rt)
**if training ←** **then**
_▷_ Target equation y is available when training.
where yt can be either an operator (+, −, ×, ÷, =)
or a target operand. Then for each time step t, the
loss can be computed as
_L1(push op) + L2(yt)_ _yt is an operand_
_L1(yt)_ otherwise
_L(yt) =_
where L1 is the stack action selection loss and L2
is the operand selection loss defined as
_L1(yt) = −_ log P (Yt = yt | {oi}i[t]=1[−][1][,][ {][w][i][}]i[m]=1[)][,]
_L2(yt) = −_ log P (Zt = yt | rt).
_Yt_ _yt_
**if y ←t is operand then**
loss loss + L1(push) + L2(yt)
_←_
**else**
loss loss + L1(yt)
_←_
**end if**
**else**
**ifYt Y ←t = pushStackActionSelector( then** _rt[sa][)]_
_Zt_ OperandSelector(rt[opd])
_←_
**end if**
**end if**
**if Yt = gen var then**
_e[x]_ _←_ Attention(h[D]t _[, h][E][)]_
ret ← _e[x]_
**else if Yt = push then**
_S.push(vZt_ _, eZt_ )
ret _eZt_
_←_
**else if(va Y, et ∈{a), (+vb, −, eb, ×) =, ÷} S. thenpop(), S.pop()**
_S.push(vaYtvb, fYt_ (ea, eb))
ret _fYt_ (ea, eb)
_←_
**else if Yt = equal then**
(va, ea), (vb, eb) = S.pop(), S.pop()
equations = equations ∪ ”va = vb”
ret ← _S.top()_
**end if**
**end while**
**return solve(equations)**
**end function**
problems respectively. The reasons about not evaluating on these two datasets are 1) Dolphin18k
contains some unlabeled math word problems and
some incorrect labels, and 2) AQuA contains rational for solving the problems, but the equations
in the rational are not formal (e.g. mixed with
texts, using x to represent ×, etc.) and inconsistent. Therefore, the following experiments are
performed and analyzed using Math23K, the only
large scaled, good-quality dataset.
**4.2** **Results**
The results are shown in Table 1. The retrievalbased methods compare problems in test data with
problems in training data, and choose the most
The objective of our training process is to minimize the total loss for the whole equation,
_T_
_t=1_ _[L][(][y][t][)][.]_
P
**3.5** **Inference**
When performing inference, at each time step
_t, the stack action with the highest probability_
_P_ (Yt|{y˜i}i[t]=1[−][1][,][ {][w][i][}]i[m]=1[)][ is chosen. If the chosen]
stack action is “push”, the operand with the highest probability P (Zt|{Y[˜]i}i[t]=1[−][1][,][ {][w][i][}]i[m]=1[)][ is chosen.]
When the stack has less than 2 elements, the probability of applying operator +, −, ×, ÷, = would
be masked out to prevent illegal stack actions, so
all generated equations must be legal math expressions. The decoder decodes until the unknown variable can be solved. After the equations
are generated, a Python package SymPy (Meurer
et al., 2017) is used to solve the unknown variable.
The inference procedure example is illustrated in
Figure 3. The detailed algorithm can be found in
Algorithm 1.
**4** **Experiments**
To evaluate the performance of the proposed
model, we conduct the experiments on the benchmark dataset and analyze the learned semantics.
**4.1** **Settings**
The experiments are benchmarked on the dataset
Math23k (Wang et al., 2017), which contains 23,162 math problems with annotated equations. Each problem can be solved by a singleunknown-variable equation and only uses operators +, −, ×, ÷. Also, except π and 1, quantities
in the equation can be found in the problem text.
There are also other large scale datasets like Dolphin18K (Shi et al., 2015) and AQuA (Ling et al.,
2017), containing 18,460 and 100,000 math word
2661
-----
**Model** **Accuracy**
Jaccard 47.2%
Retrieval
Cosine 23.8%
BLSTM 57.9%
Classification
Self-Attention 56.8%
Seq2Seq w/ SNI 58.1%
Generation Proposed Word-Based 65.3%
Proposed Char-Based **65.8%**
Hybrid Retrieval + Seq2Seq 64.7%
Table 1: 5-fold cross validation results on Math23K.
similar one’s template to solve the problem (Kushman et al., 2014; Upadhyay and Chang, 2017).
The classification-based models choose equation
templates by a classifier trained on the training
data. Their performance are reported in Robaidek
et al.. The seq2seq and hybrid models are from
Wang et al., where the former directly maps natural language into symbols in equations, and the
latter one ensembles prediction from a seq2seq
model and a retrieval-based model. The ensemble
is the previous state-of-the-art results of Math23K.
Our proposed end-to-end model belongs to the
generation category, and the single model performance achieved by our proposed model is new
state-of-the-art (> 65%) and even better than the
hybrid model result (64.7%). In addition, we are
the first to report character-based performance on
this dataset, and the character-based results are
slightly better than the word-based ones. Among
the single model performance, our models obtain about more than 7% accuracy improvement
compared to the previous best one (Wang et al.,
2017). The performance of our character-based
model also shows that our model is capable of
learning the relatively accurate semantic representations without word boundaries and achieves better performance.
**4.3** **Ablation Test**
To better understand the performance contributed
by each proposed component, we perform a series
of ablation tests by removing components one by
one and then checking the performance by 5-fold
cross validation. Table 2 shows the ablation results.
**Char-Based v.s.** **Word-Based** As reported
above, using word-based model instead of
character-based model only causes 0.5% performance drop. To fairly compare with prior word
**Model** **Accuracy**
Char-Based 65.8%
Word-Based 65.3%
Word-Based - Gate 64.1%
Word-Based - Gate - Attention 62.5%
Word-Based - Gate - Attention - Stack 60.1%
Word-Based - Semantic Transformer 64.1%
Word-Based - Semantic Representation 61.7%
Table 2: 5-fold cross validation results of ablation tests.
based models, the following ablation tests are performed on the word-based approach.
**Word-Based - Gate** It uses rt instead of rt[sa] [and]
_rt[opr]_ as the input of both StackActionSelector and
OperandSelector.
**Word-Based - Gate - Attention** Considering
that the prior generation-based model (seq2seq)
did not use any attention mechanism, we compare
the models with and without the attention mechanism. Removing attention means excluding qt−1
in (11), so the input of both operator and operand
selector becomes rt = [h[D]t [;][ s][t][]][. The result implies]
that our model is not better than previous models
solely because of the attention.
**Word-Based - Gate - Attention - Stack** To
check the effectiveness of the stack status (st in
(11)), the experiments of removing the stack status from the input of both operator and operand
selectors (rt = h[D]t [) are conducted. The results]
well justify our idea of choosing operators based
on semantic meanings of operands.
**Word-Based - Semantic Transformer** To validate the effectiveness of the idea that views an
operator as a semantic transformer, we modify
the semantic transformer function of the operator
_⋄_ into f⋄(e1, e2) = e⋄, where e⋄ is a learnable
parameter and is different for different operators.
Therefore, e acts like the embedding of the op_⋄_
erator ⋄, and the decoding process is more similar to a general seq2seq model. The results show
that the semantic transformer in the original model
encodes not only the last operator applied on the
operands but other information that helps the selectors.
**Word-Based - Semantic Representation** To
explicitly evaluate the effectiveness of operands’
semantic representations, we rewrite semantic representation of the i-th operand in the problem texts
2662
-----
|.02 .15 .02 .03 .26|.14 .01 .02 .00 .02|.31 .00 .00 .00 .00|.00 .01 .00 .01 .00|.00 .00 .00|
|---|---|---|---|---|
||||||
|.01 .00 .00 .00 .01|.09 .42 .21 .02 .05|.13 .00 .00 .00 .00|.00 .01 .01 .02 .00|.01 .00 .00|
||||||
|.00 .00 .00 .00 .00|.00 .00 .00 .00 .01|.01 .00 .00 .00 .00|.08 .65 .21 .01 .01|.01 .00 .00|
||||||
Figure 4: The self-attention map visualization of operands’ semantic expressions for the problem “There are 58
58.0 .02 .15 .02 .03 .26 .14 .01 .02 .00 .02 .31 .00 .00 .00 .00 .00 .01 .00 .01 .00 .00 .00 .00
6.0 .01 .00 .00 .00 .01 .09 .42 .21 .02 .05 .13 .00 .00 .00 .00 .00 .01 .01 .02 .00 .01 .00 .00
9.0 .00 .00 .00 .00 .00 .00 .00 .00 .00 .01 .01 .00 .00 .00 .00 .08 .65 .21 .01 .01 .01 .00 .00
58.0quantifier banana 个 香蕉 ,every (basket) <unk>每 quantifier 6.0 个 take off ,how many 拿掉 多少quantifier 个 ,then 就can 可以exactly 正好 fill 装 quantifier 9.0 baskets 个 篮子 了<unk> .
_bananas. Each basket can contain 6 bananas. How many bananas are needed to be token off such that exactly 9_
_baskets are filled?”._
from (2) to e[c]i [=][ b]i[c][, where][ b][c]i [is a parameter.]
Thus for every problem, the representation of the
_i-th operand is identical, even though their mean-_
ings in different problems may be different. This
modification assumes that no semantic information is captured by b[c]i [, which can merely represent]
a symbolic placeholder in an equation. Because
the semantic transformer is to transform the semantic representations, applying this component
is meaningless. Here the semantic transformer is
also replaced with f⋄(e1, e2) = e⋄ as the setting
of the previous ablation test. The results show that
the model without using semantic representations
of operands causes a significant accuracy drop of
3.5%. The main contribution of this paper about
modeling semantic meanings of symbols is validated and well demonstrated here.
**5** **Qualitative Analysis**
To further analyze whether the proposed model
can provide interpretation and reasoning, we visualize the learned semantic representations of constants to check where the important cues are,
**5.1** **Constant Embedding Analysis**
To better understand the information encoded in
the semantic representations of constants in the
problem, a self-attention is performed when their
semantic representations are extracted by the encoder. Namely, we rewrite (2) as
_e[c]i_ [= Attention(][h]p[E]i[,][ {][h]t[E][}]t[m]=1[.] (20)
Then we check the trained self-attention map (α in
the attention function) on the validation dataset.
For some problems, the self-attention that generates semantic representations of constants in the
problem concentrates on the number’s quantifier
or unit, and sometimes it also focuses on informative verbs, such as “gain”, “get”, “fill”, etc., in
the sentence. For example, Figure 4 shows the attention weights for an example math word problem, where lighter colors indicate higher weights.
The numbers “58” and “6” focus more on the
quantifier-related words (e.g. “every” and “how
_many”), while “9” pays higher attention to the verb_
“fill”. The results are consistent with those handcraft features for solving math word problems proposed by the prior research (Hosseini et al., 2014;
Roy and Roth, 2015; Roy et al., 2015). Hence, we
demonstrate that the automatically learned semantic representations indeed capture critical information that facilitates solving math word problems
without providing human-crafted knowledge.
**5.2** **Decoding Process Visualization**
We visualize the attention map (qt in (6)) to see
how the attention helps the decoding process. An
example is shown in the top of Figure 5, where
most attention focuses on the end of the sentence.
Unlike the machine translation task, the attention
shows the word-level alignment between source
and target languages, solving math word problems
requires high-level understanding due to the task
complexity.
To further analyze the effectiveness of the proposed gating mechanisms for stack action and
operand selection, the activation of gates g[sa], g[opd]
at each step of the decoding process is shown in
the bottom of Figure 5. It shows that most of time,
the gate activation is high, demonstrating that the
proposed gating mechanisms play an important
role during decoding. We also observe a common phenomenon that the activation g2[sa][, which]
controls how much attention the stack action selector puts on the stack state when deciding an
stack action, is usually low until the last “operator application” stack action. For example, in the
example of Figure 5, g2[sa] [is less than][ 0][.][20][ till the]
last argument selection stack action, and activates
when deciding the division operator application
(÷) and the equal application (=). It may result from the higher-level semantics of the operand
(6.75−2.75) on the stack when selecting the stack
action division operator application (÷). In terms
2663
-----
**Problem & Results**
`红花有60朵,黄花比红花多1/6朵,黄花有多少朵.` (There are 60 red flowers. Yellow flowers
are more than red ones by 1/6. How many yellow flowers are there?)
_Generated Equation: 60 +_ 6[1]
_Correct Answer: 70_
`火车` 48 小时行驶 5920 千米,汽车 25 小时行驶 2250 千米,汽车平均每小时比火车每小时慢
`多少` `千米` `?` (The train travels 5920 kilometers in hours, and the car travels 2250 kilometers in 25
hours. How many kilometers per hour is the car slower than the train?)
_Generated Equation: 2250 ÷ 25 −_ 5920 ÷ 48
_Correct Answer: 33_ [1]3
`小红前面` 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7
people behind. How many persons are there in total?)
_Generated Equation: 5 + 7_
_Correct Answer: 13_
Table 3: Randomly sampled incorrect predictions.
gv x 6.75 2.75 - 5.0 / = noop
|.03 .01 .10 .09 .25 .28 .01 .00 .11 .14 .06 .04 .03 .09 .04 .03 .01 .02 .00 .00 .00 .00 .01 .01 .00 .01 .35 .30 .00 .00|.01 .01|.00 .00|.01 .08|.0|1|
|---|---|---|---|---|---|
||.08 .02 .22 .07|.00 .01 .01 .03|.02 .09 .10 .11|.0 .0|2 6|
||.00 .00 .07 .02|.00 .00 .00 .01|.00 .05 .05 .08|.0 .0|0 2|
||.02 .02 .14 .18|.00 .01 .19 .13|.07 .08 .07 .05|.0 .4|1 8|
||.04 .18 .02 .14|.01 .04 .02 .03|.12 .11 .02 .06|.0 .0|5 8|
||.00 .03 .00 .00|.00 .01 .00 .00|.01 .01 .00 .02|.0 .0|1 0|
||.01 .01 .00 .01|.00 .00 .00 .01|.00 .03 .02 .06|.0 .0|1 1|
||.38 .30 .00 .01|.77 .72 .00 .01|.50 .13 .01 .04|.2 .0|3 1|
|Col1|.97 1.0|1.0 1.0|1.0 .99|.61 .73|.12|
|---|---|---|---|---|---|
||.18 .06|||||
|||.02 .06|.02 .09|.26 .20|.83|
||.99 1.0|||||
|||1.0 1.0|.97 .96|.96 1.0|1.0|
|||||||
|Col1|.74 .98|.38 .66|.32 .50|.90 .45|.34|
|---|---|---|---|---|---|
||.69 .61|||||
|||.48 .63|.74 .83|.83 .93|.70|
||.77 .99|||||
|||1.0 .98|.90 .78|.62 .04|.06|
|||||||
.03 .01 .01 .01 .00 .00 .01 .08 .01
.10 .09 .08 .02 .00 .01 .02 .09 .02
.25 .28 .22 .07 .01 .03 .10 .11 .06
.01 .00 .00 .00 .00 .00 .00 .05 .00
.11 .14 .07 .02 .00 .01 .05 .08 .02
.06 .04 .02 .02 .00 .01 .07 .08 .01
.03 .09 .14 .18 .19 .13 .07 .05 .48
.04 .03 .04 .18 .01 .04 .12 .11 .05
.01 .02 .02 .14 .02 .03 .02 .06 .08
.00 .00 .00 .03 .00 .01 .01 .01 .01
.00 .00 .00 .00 .00 .00 .00 .02 .00
.01 .01 .01 .01 .00 .00 .00 .03 .01
.00 .01 .00 .01 .00 .01 .02 .06 .01
.35 .30 .38 .30 .77 .72 .50 .13 .23
.00 .00 .00 .01 .00 .01 .01 .04 .01
.97 1.0 1.0 1.0 1.0 .99 .61 .73 .12
.18 .06 .02 .06 .02 .09 .26 .20 .83
.99 1.0 1.0 1.0 .97 .96 .96 1.0 1.0
.74 .98 .38 .66 .32 .50 .90 .45 .34
.69 .61 .48 .63 .74 .83 .83 .93 .70
.77 .99 1.0 .98 .90 .78 .62 .04 .06
gv x 6.75 2.75 - 5.0 / = noop
of the activation of g[opd], we find that three features
are important in most cases, demonstrating the effectiveness of the proposed mechanisms.
**5.3** **Error Analysis**
We randomly sample some results predicted incorrectly by our model shown in Table 3. In the first
example, the error is due to the language ambiguity, and such ambiguity cannot be resolved without
considering the exact value of the number. From
the second example, although our model identifies the problem as a comparison problem successfully, it handles the order of the operands incorrectly. For the third problem, it cannot be solved
by using only the surface meaning but requires
some common sense. Therefore, above phenomena show the difficulty of solving math word problems and the large room for improvement.
**6** **Conclusion**
We propose an end-to-end neural math solver using an encoder-decoder framework that incorporates semantic representations of numbers in order to generate mathematical symbols for solving
math word problems. The experiments show that
the proposed model achieves the state-of-the-art
performance on the benchmark dataset, and empirically demonstrate the effectiveness of each component in the model. In sum, the proposed neural math solver is designed based on how human
performs reasoning when writing equations, providing better interpretation without the need of labeled rationals.
6.75
substrates 减去
unknown 某
number 数
's 的
5.0
times 倍
is 得
2.75
, ,
ask 求
unknown 某
number 数
<unk>
, .
Figure 5: Word attention and gate activation (g[sa] and
_g[opd]) visualization when generating stack actions for_
the problem “6.75 deducting 5 times of an unknown
_number is 2.75. What is the unknown number?”, where_
the associated equation is x = (6.75 − 2.75) ÷ 5. Note
that g[opd] is meaningful only when the t-th stack action
is push op.
2664
-----
**References**
Alex Graves, Greg Wayne, and Ivo Danihelka.
2014. Neural turing machines. _arXiv preprint_
_arXiv:1410.5401._
Sepp Hochreiter and J¨urgen Schmidhuber. 1997.
Long short-term memory. _Neural Computation,_
9(8):1735–1780.
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Process-_
_ing, pages 523–533._
Armand Joulin, Edouard Grave, Piotr Bojanowski,
Matthijs Douze, H´erve J´egou, and Tomas Mikolov.
2016. Fasttext.zip: Compressing text classification
models. arXiv preprint arXiv:1612.03651.
Diederik P. Kingma and Jimmy Ba. 2014. Adam:
A method for stochastic optimization. _CoRR,_
abs/1412.6980.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. TACL, 3:585–597.
Nate Kushman, Luke Zettlemoyer, Regina Barzilay,
and Yoav Artzi. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics, ACL 2014, pages 271–281._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguis-_
_tics, ACL 2017, pages 158–167._
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based
neural machine translation. In Proceedings of the
_2015 Conference on Empirical Methods in Natural_
_Language Processing, pages 1412–1421._
Sourav Mandal and Sudip Kumar Naskar. 2019. Solving arithmetic mathematical word problems: A review and recent advancements. In Information Tech_nology and Applied Mathematics, pages 95–114._
Springer.
Purvanshi Mehta, Pruthwik Mishra, Vinayak Athavale,
Manish Shrivastava, and Dipti Misra Sharma. 2017.
Deep neural network based system for solving arithmetic word problems. In Proceedings of the IJC_NLP 2017, pages 65–68._
Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondˇrej Cert´ˇ ık, Sergey B. Kirpichev,
Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rathnayake,
Sean Vig, Brian E. Granger, Richard P. Muller,
Francesco Bonazzi, Harsh Gupta, Shivam Vats,
Fredrik Johansson, Fabian Pedregosa, Matthew J.
Curry, Andy R. Terrel, Stˇ[ˇ] ep´an Rouˇcka, Ashutosh
Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz. 2017. Sympy: symbolic computing in python. _PeerJ Computer Sci-_
_ence, 3:e103._
Benjamin Robaidek, Rik Koncel-Kedziorski, and
Hannaneh Hajishirzi. 2018. Data-driven methods for solving algebra word problems. _CoRR,_
abs/1804.10718.
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In Proceedings of the
_2015 Conference on Empirical Methods in Natural_
_Language Processing, EMNLP 2015, Lisbon, Portu-_
_gal, September 17-21, 2015, pages 1743–1752._
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. TACL,
6:159–172.
Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016.
Equation parsing : Mapping sentences to grounded
equations. In Proceedings of the 2016 Conference
_on Empirical Methods in Natural Language Pro-_
_cessing, EMNLP 2016, Austin, Texas, USA, Novem-_
_ber 1-4, 2016, pages 1088–1097._
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. TACL,
3:1–13.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Process-_
_ing, EMNLP 2015, pages 1132–1142._
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks. In Advances in Neural Information Process_ing Systems 27: Annual Conference on Neural Infor-_
_mation Processing Systems 2014, pages 3104–3112._
Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and
dataset for algebra word problems. In Proceed_ings of the 15th Conference of the European Chap-_
_ter of the Association for Computational Linguistics,_
pages 494–504.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang,
and Wen-tau Yih. 2016. Learning from explicit and
implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on
_Empirical Methods in Natural Language Process-_
_ing, pages 297–306._
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018. MathDQN: Solving arithmetic word problems via deep
2665
-----
reinforcement learning. In Proceedings of the
_Thirty-Second AAAI Conference on Artificial Intel-_
_ligence._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854.
**A** **Algorithm Detail**
The training and inference procedures are shown
in Algortihm 1.
**B** **Hyperparameter Setup**
The model is trained with the optimizer adam
(Kingma and Ba, 2014), and the learning rate is set
to 0.001. Pretrained embeddings using FastText
(Joulin et al., 2016) are adopted. The hidden state
size of LSTM used in the encoder and decoder is
256. The dimension of hidden layers in attention,
semantic transformer and operand/stack action selector is 256. The dropout rate is set as 0.1 before
inputting the decoder LSTM, before the stack action selector and after the hidden layer of the stack
action selector and attention. The reported accuracy is the result of 5-fold cross-validation, same
as Wang et al. for fair comparison.
**C** **Error Analysis between Seq2Seq**
We implement the seq2seq model as proposed by
Wang et al. and compare the performance difference between our proposed model and the baseline seq2seq model. Table 4 shows the generated
results seq2seq predicts correctly but our model
predicts incorrectly. Table 5 show the results our
model can predict correctly but seq2seq cannot.
2666
-----
**Problem & Results**
`小红前面` 5 人,后面 7 人,一共有多少人? (There are 5 people in front of Little Red and 7
people behind. How many persons are there in total?)
_Proposed Model: 5 + 7_
_Seq2Seq Model: 5 + 7 + 1_
`两个数相差28,如果被减数减少3,减数增加5,那么它们的差=?` (The difference between
two numbers is 28. If the minuend is reduced by 3, and the subtrahend is increased by 5, then their
difference=?)
_Proposed Model: (28 −_ 3) ÷ 5
_Seq2Seq Model: 28 −_ (3 + 5)
```
机床厂第一车间有55人,第二车间有45人,每人每天平均生产261个零件,这两个 车间每
```
`天共生产多少个零件?` (There are 55 people in the first workshop of the machine tool factory and
45 people in the second workshop. Each person produces 261 small components per day in average.
How many components do the two workshops produce every day in total?)
_Proposed Model: (55 + 45) ÷ 261_
_Seq2Seq Model: (55 + 45) × 261_
`箭鱼游动时的速度是28米/秒,8秒可以游多少米?` (The swordfish swims at speed 28 meters/sec.
How many meters can it swim in 8 seconds?)
_Proposed Model: 28 ÷ 8_
_Seq2Seq Model: 28 × 8_
```
水果店有梨子387千克,卖出205千克后,又运来一批,现在水果店共有梨子945千克.水果
```
`店又运来梨子多少千克?` (The fruit shop has 387 kilograms of pears . After selling 205 kilograms,
some pears arrive. Now the fruit shop has 945 kilograms of pears in total. How many kilograms of
pears does the fruit shop get?)
_Proposed Model: 945 × (387 −_ 205)
_Seq2Seq Model: 945 −_ (387 − 205)
```
王老师买排球用了40元,买篮球用的钱数是排球的3倍.王老师买球一共用了多少元?
```
(Teacher Wang spent 40 dollars buying volleyballs and 3 times of money for basketballs. How many
dollars did Teacher Wang spend for the balls?)
_Proposed Model: 40 ÷ 3 + 40_
_Seq2Seq Model: 40 + 40 × 3_
```
筑路队修筑一条长1200米的公路,甲队单独修40天可以完成任务,乙队单独修30天可以完成
```
`任务.甲队每天修的比乙队少多少米?` (The road construction team built a road with a length of
1200 meters. Team A can complete the task in 40 days alone, and team B can complete the task in
30 days alone. How many meters does team A construct more than team B every day?)
_Proposed Model: 1200 ÷ 40 −_ 1200 ÷ 30
_Seq2Seq Model: 1200 ÷ 30 −_ 1200 ÷ 40
```
一共1800本,我们六年级分得2/9,分给五年级的本数相当于六年级的4/5,五年级分得多少
```
`本?` (There are 1800 books in total. We sixth grade get 2/9. The number of books given to the fifth
grade is equal to 4/5 of the number to the sixth grade. How many books does the fifth grade get?)
_Proposed Model: 1800_ 9[2] 5
_Seq2Seq Model: 1800_ _×[2]9_ _[÷]5[ 4]_
```
有一批布料,如果只 ×做上[×]衣[ 4]可以做10件,如果只做裤子可以做15条,那么这批布料可以做几
```
`套这样的衣服?` (There is a batch of fabrics. If all is used for making shirts, 10 pieces can be made,
and 15 pieces if used to make pants only. Then how many suits of such clothes can be made with this
batch of fabric?)
_Proposed Model: 10 × 1 ÷ 15_
_Seq2Seq Model: 1 ÷ (1 ÷ 10 + 1 ÷ 15)_
```
贝贝的钱买一本5.9元笔记本差0.6元,他买一本4.8元的,剩下的钱正好买一只圆珠笔,这只
```
`圆珠笔多少钱?` (Beibei needs 0.6 dollars more to buy a notebook of 5.9 dollars. If he buys one of
4.8 dollars, the remaining money allows her to buy exactly one ball pen. How much is the ball pen?)
_Proposed Model: 5.9 + 0.6 −_ 4.8
_Seq2Seq Model: 5.9 −_ 0.6 − 4.8 2667
-----
**Problem & Results**
```
医院里经常要给病人输入葡萄糖水,这种葡萄糖水是把葡萄糖和水按1:19配制的,根据
```
`这些信息,你能知道什么?` (In hospital, it is often necessary to give glucose injection to patient.
This glucose water is prepared by mixing glucose and water at 1:19. Based on this information, what
do you know?)
_Proposed Model: 1 ÷ (1 + 19.0)_
_Seq2Seq Model: 1 × (1 + 19.0)_
```
一根长2.45米的木桩打入河底,现在测得木桩水上部分长0.75米,水中长1.05米,求这根
```
`桩打在泥中的长度=多少米?` (A wooden pile of 2.45 meters long is hammered into the bottom
of a river. Now the part above water is measured as 0.75 meters long, and the part in the water is
measured as 1.05 meters long. How long is the part of the pile in the mud?)
_Proposed Model: 2.45 −_ 0.75 − 1.05
_Seq2Seq Model: 2.45 + 0.75 + 1.05_
`李强6月份的生活费为255元,比计划节省了15%,节省了多少元.` (Li Qiang’s living expenses
in June were 255 dollars, 15% savings over the plan. How much did he save?)
_Proposed Model: (255.0 ÷ (1 −_ 0.15)) × 0.15
_Seq2Seq Model: 0.15 = 6.0/(1 −_ 255.0) − 6.0
`小芳在计算一个数除以10时,将除号看成了乘号,结果得3.2,正确的结果应该=.` (When
Xiaofang calculates a number divided by 10, the division sign is mistakenly treated as a multiplication sign, and the result is 3.2. The correct result should be = .)
_Proposed Model: 3 ÷ 10 ÷ 10_
_Seq2Seq Model: 3.2 ÷ (1 + 10)_
24 + 91 的 2/13,所得的和再除 19/20,商 = ? (2/13 of 91 + 24, and the sum is divided by 19/20,
quotient = ?)
_Proposed ModelSeq2Seq Model1/3 + 0.25 = ?:(1/3 + 0.25 = ?):[19]20[19]20[÷][÷][ (24][ (24 + 91][ ×][ 91][ −][ ×]13[ 2][2]13[)][)]_
_Proposed Model:_ [1]3 [+ 0][.][25]
_Seq2Seq Model:_ [1]3
```
商店运来鸡蛋和鸭[×]蛋[ 0][.]各[25]7箱.鸡蛋每箱重26千克,鸭蛋每箱重31千克,商店一共运来的鸡蛋
```
`和鸭蛋共多少千克?` (The store shipped 7 boxes of eggs and duck eggs respectively. Eggs weigh
26 kilograms per box, duck eggs weigh 31 kilograms per box. How many kilograms of eggs and
duck eggs are shipped from the store in total?)
_Proposed Model: 26 × 7 + 31 × 7_
_Seq2Seq Model: 26 × 7 + 31_
3.8 - 2.54 + 1.46 = ? (3.8 - 2.54 + 1.46 =)
_Proposed Model: 3.8 −_ 2.54 + 1.46
_Seq2Seq Model: 3.8 + 2.54 + 1.46_
```
有一池水,第一天放出200吨,第二天比第一天多放20%,第3天放了整池水的36%,正好全
```
`部放完.这池水共` `有多少吨?` (There was a pool of water, which released 200 tons of water in
the first day, 20% more in the second day than the first day, and 36% of the whole pool on the third
day. Then the water is gone. How many tons of water did this pool have?)
_Proposed Model: (200.0 + 200.0 × (1 + 0.2)) ÷ (1 −_ 0.36)
_Seq2Seq Model: (200.0 + 0.2) × 3.0 + 0.2 × (1 −_ 0.36)
16 的 5/12 比一个数的 7 倍多 2 , `这个数` = ? (5/12 of 16 is more than 7 times of a number by 2.
What is the number=?)
_Proposed Model: (16_ 12[5]
_Seq2Seq Model: (16 × ×12[5]_ [+ 7)][−] [2)][ ÷][ ÷][ 2][ 7]
Table 5: Examples that Seq2Seq predicts incorrectly while our proposed model predicts correctly.
2668
-----
| [
"Ting-Rui, Chiang",
"Yun-Nung, Chen"
] | 2019-01-01T00:00:00 | NAACL 2019 Main | true | 100 | 17 | null | http://aclweb.org/anthology/N19-1272 | null | https://www.semanticscholar.org/paper/f67ede490780f939103d4cc4e9c6866b83ee59b3 |
A Survey of Deep Learning for Mathematical Reasoning | Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems in language has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain. | This survey paper reviews the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade, and evaluates existing benchmarks and methods and discusses future research directions in this domain. | ## A Survey of Deep Learning for Mathematical Reasoning
**Pan Lu[1], Liang Qiu[1], Wenhao Yu[2], Sean Welleck[3][∗], Kai-Wei Chang[1][∗]**
1UCLA, 2University of Notre Dame, 3University of Washington
[https://github.com/lupantech/dl4math](https://github.com/lupantech/dl4math)
60
50
40
30
20
10
**Abstract**
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable
in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems
capable of solving math problems and proving theorems has garnered significant interest
in the fields of machine learning and natural
language processing. For example, mathematics serves as a testbed for aspects of reasoning
that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models
have opened up new benchmarks and opportunities to use deep learning for mathematical
reasoning. In this survey paper, we review the
key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss
future research directions in this domain.
2013 2014 2015 2016 2017 2018 2019 2020 2021 2022
Year
Figure 1: Estimated counts of annually published papers on deep learning for mathematical reasoning. This
field has been experiencing rapid growth since 2018.
research in the fields of machine learning and natural language processing (NLP), dating back to the
1960s (Feigenbaum et al., 1963; Bobrow, 1964). In
recent years, there has been a surge of interest in
this area, as demonstrated in Figure 1.
Deep learning has shown great success in various natural language processing tasks, such as question answering and machine translation (Sutskever
et al., 2014; Devlin et al., 2018). Similarly, researchers have developed various neural network
approaches for mathematical reasoning, which
have been demonstrated to be effective in tackling
complex tasks like math word problem solving,
theorem proving, and geometry problem solving.
For instance, deep learning-based math word problem solvers have adopted a sequence-to-sequence
framework with attention mechanisms to generate mathematical expressions as intermediate steps
(Wang et al., 2018a; Chiang and Chen, 2019). In addition, via large-scale corpora and the Transformer
model (Vaswani et al., 2017), pre-trained language
models have yielded promising results on a variety
of mathematical tasks. Recently, large language
models (LLMs) like GPT-3 (Brown et al., 2020)
have demonstrated impressive capabilities in complex reasoning and in-context learning, further advancing the field of mathematical reasoning.
**1** **Introduction**
“The study of mathematics, like the Nile, begins in
_minuteness but ends in magnificence.”_
— Charles Caleb Colton, English writer
Mathematical reasoning is a key aspect of human intelligence that enables us to comprehend and
make decisions based on numerical data and language. It is applicable in various fields, including
science, engineering, finance, and everyday life,
and encompasses a range of abilities, from basic
skills such as pattern recognition and numerical
operations to more advanced skills like problemsolving, logical reasoning, and abstract thinking.
The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has been a long-standing focus of
_∗denotes co-senior authors._
-----
The recent progress in research on mathematical
reasoning has been impressive and encouraging. In
this survey paper, we review the advances in deep
learning for mathematical reasoning. We discuss
various tasks and datasets (Section 2), and examine the advancements of neural networks (Section
3) and pre-trained language models (Section 4) in
mathematical domains. We also explore the rapid
progress of in-context learning with large language
models (Section 5) for mathematical reasoning. We
further analyze existing benchmarks and find that
there is less focus on multi-modal and low-resource
settings (Subsection 6.1). Evidence-based studies
suggest that current numeracy representations are
insufficient and deep learning methods are inconsistent for mathematical reasoning (Subsection 6.2).
Following this, we then suggest that it would be
beneficial to improve current work in terms of generalization and robustness, trustworthy reasoning,
learning from feedback, and multi-modal mathematical reasoning (Section 7).
**2** **Tasks and Datasets**
In this section, we will examine the various tasks
and datasets currently available for the study of
mathematical reasoning using deep learning methods. A summary of the commonly used datasets in
this field can be found in Table 2.
**2.1** **Math Word Problem Solving**
Developing algorithms to solve math word problems (MWPs) automatically has been an interest
of NLP researchers for decades (Feigenbaum et al.,
1963; Bobrow, 1964). A math word problem (also
termed an algebraic or arithmetic word problem)
describes a brief narrative that involves characters,
entities, and quantities. The mathematical relationship of an MWP can be modeled with a set of
equations whose solution reveals the final answer
to the question. A typical example is shown in Table 1. A question involves the four basic arithmetic
operations of addition, subtraction, multiplication,
and division with single or multiple operation steps.
The challenge of MWPs for NLP systems lies in the
need for language comprehension, semantic parsing, and multiple mathematical reasoning skills.
Existing MWP datasets cover grade school problems, which are crawled from online learning websites (Koncel-Kedziorski et al., 2015), collected
from textbooks, or manually annotated by human
workers (Patel et al., 2021). Early math word prob
**Question: Bod has 2 apples and David has 5 apples.**
How many apples do they have in total?
**Rationale: x = 2 + 5**
**Solution: 7**
Table 1: A typical math word problem.
lem datasets are relatively small or limited to a
small number of operation steps (Hosseini et al.,
2014; Kushman et al., 2014; Roy et al., 2015).
Some recently curated datasets aim to increase
problem diversity and difficulty levels. For example, Ape210K (Zhao et al., 2020) consists of
210k elementary math word problems, which is the
largest publicly available. The problems in GSM8K
(Cobbe et al., 2021) can involve up to 8 steps to
solve. SVAMP (Patel et al., 2021) is a benchmark
that tests the robustness of deep learning models to
math word problems with simple variations. More
recently built datasets involve modalities beyond
text. For example, IconQA (Lu et al., 2021b) provides an abstract diagram as a visual context, while
TabMWP (Lu et al., 2022b) provides a tabular context for each problem.
Most MWP datasets provide annotated equations
as a rationale for the solution (e.g., Table 1). To
improve the performance and interpretability of
the learned solvers, MathQA (Tafjord et al., 2019)
is annotated with precise operation programs, and
MathQA-Python (Austin et al., 2021) is provided
with specific Python programs instead. Another
line of datasets annotates the problems with multistep natural language solutions that are regarded
as more human-readable (Ling et al., 2017; Cobbe
et al., 2021; Lu et al., 2022b). Lila (Mishra et al.,
2022a) annotates many of the previously mentioned
MWP datasets with Python program rationales.
**2.2** **Theorem Proving**
Automating theorem proving is a long-standing
challenge in AI (Newell et al., 1957; Feigenbaum
et al., 1963). The problem is to demonstrate the
truth of a mathematical claim (a theorem) through
a sequence of logical arguments (a proof). Theorem proving tests various skills, such as choosing
effective multi-step strategies, using background
knowledge, and performing symbolic manipulations (e.g., arithmetic or derivations).
Recently, there has been increased interest in
using language models for theorem proving in formal interactive theorem provers (ITP) (e.g., Polu
-----
and Sutskever (2020); Han et al. (2022); Polu et al.
(2022); Jiang et al. (2022a,b); Lample et al. (2022)).
Example ITPs include Lean (Moura et al., 2015),
Isabelle (Paulson, 1994), Coq (Barras et al., 1999),
and Metamath (Megill and Wheeler, 2019). To
prove a theorem in an ITP, the theorem is stated in
the ITP’s programming language, then simplified
by generating “proof steps” until it is reduced to
known facts. The result is a sequence of steps that
constitutes a verified proof.
Data sources for neural theorem proving in ITPs
include interactive learning environments that interface with ITPs, and datasets derived from proofs
in ITP libraries. For example, CoqGym (Yang and
Deng, 2019) provides an interactive environment
and 71K human-written proofs for the Coq ITP. For
Isabelle, PISA (Jiang et al., 2021) enables interaction and provides a dataset of 183k proofs mined
from the Isabelle standard library and Archive of
Formal Proofs. For Lean, LeanStep (Han et al.,
2022) provides a dataset of proof-steps from Lean’s
mathematical library along with auxiliary tasks,
while Lean-Gym (Polu et al., 2022) provides an interactive REPL. The miniF2F (Zheng et al., 2022)
benchmark aims to provide a shared benchmark
across ITPs, consisting of 488 problem statements
sourced from mathematical competitions.
Other resources provide proxy environments or
tasks. For example, INT (Wu et al., 2021c) provide a synthetic proving environment to measure
six different types of generalization. Li et al. construct IsarStep using the Isabelle Archive of Formal
Proofs, and propose a task of filling in a missing intermediate proposition. Early applications of deep
learning for formal theorem proving focus on selecting relevant premises (Alemi et al., 2016).
_Informal theorem proving presents an alternative_
medium for theorem proving, in which statements
and proofs are written in the mixture of natural language and symbols used in “standard” mathematics
(e.g., in LATE[X), and are checked for correctness by]
humans. Early work focuses on selecting relevant
premises (Ferreira and Freitas, 2020b,a). Welleck
et al. (2021) develop NaturalProofs, a large-scale
dataset of 32k informal mathematical theorems,
definitions, and proofs, and provide a benchmark
for premise selection via retrieval and generation
tasks. Welleck et al. (2022a) adapt NaturalProofs
for full proof generation, and provide a human evaluation protocol and proxy automatic metrics.
An emerging area of research aims to combine
_C_
**Question: In triangle ABC, AD = 3 and**
_BD = 14. Find CD._
**Choices: (A) 6.0 (B) 6.5 (C) 7.0 (D) 8.5**
**Answer: (B) 6.5**
_A_ _D_ _B_
Figure 2: An example of geometry problem in the Geometry3K (Lu et al., 2021a) dataset.
elements of informal and formal theorem proving.
For example, Wu et al. (2022b) explore translating informal statements into formal statements,
while Jiang et al. (2022b) release a new version
of the miniF2F benchmark augmented with informal statements and proofs, which we refer to as
_miniF2F+informal. Jiang et al. (2022b) explore_
translating provided (or generated) informal proofs
into formal proofs.
**2.3** **Geometry Problem Solving**
Automated geometry problem solving (GPS) is also
a long-standing AI task in mathematical reasoning
research (Gelernter et al., 1960; Wen-Tsun, 1986;
Chou et al., 1996; Ye et al., 2008) and has attracted
much attention in recent years. Different from a
math word problem, a geometry problem consists
of a textual description in natural language and a
geometric diagram. As shown in Figure 2, the multimodal inputs describe the entities, attributes, and
relationships of geometric elements, and the goal
is to find the numeric solution to an unknown variable. GPS is a challenging task for deep learning
methods due to the complex skills it requires. It
involves the ability to parse multimodal information, perform symbolic abstraction, utilize theorem
knowledge, and conduct quantitative reasoning.
Some early datasets are proposed to facilitate
research in this domain (Seo et al., 2015; Alvin
et al., 2017; Sachan et al., 2017; Sachan and Xing,
2017). However, these datasets are relatively small
or not publicly available, which limits the development of deep learning methods. In response to
this limitation, Lu et al. create the Geometry3K
dataset, which consists of 3,002 multi-choice geometry problems with unified logic form annotations
for the multimodal inputs. More recently, largerscale datasets such as GeoQA (Chen et al., 2021a),
GeoQA+ (Cao and Xiao, 2022), and UniGeo (Chen
et al., 2022a) have been introduced and are annotated with programs that can be learned by neural
solvers and executed to obtain the final answers.
-----
**2.4** **Math Question Answering**
Numerical reasoning is a core ability within human
intelligence and plays an important role in many
NLP tasks. Aside from theorem proving and gradelevel math word problem solving, there is a wide
range of question answering (QA) benchmarks
that center around mathematical reasoning. In this
work, we refer to these tasks as math question answering (MathQA). A large number of datasets
have been presented recently. For example, QuaRel
(Tafjord et al., 2019) is a dataset of diverse story
questions that involve 19 different types of quantities. McTaco (Zhou et al., 2019) studies temporal commonsense problems, while Fermi (Kalyan
et al., 2021) studies Fermi problems whose answers
can only be approximately estimated.
Recent studies have shown that state-of-the-art
mathematical reasoning systems might suffer from
brittleness in reasoning, in that the models rely on
spurious signals and plug-and-chug calculations in
the specific dataset to achieve “satisfactory” performance (Hendrycks et al., 2021; Mishra et al.,
2022b). To address this issue, new benchmarks
are proposed from various aspects. The Mathematics dataset (Saxton et al., 2020) consists of many
different types of mathematics problems, covering arithmetic, algebra, probability, and calculus.
The dataset allows for measuring the algebraic generalization ability of a model. Similarly, MATH
(Hendrycks et al., 2021) consists of challenging
competition mathematics to measure the problemsolving ability of models in complex scenarios.
Some work incorporates tabular contexts in the
question inputs. For example, FinQA (Chen et al.,
2021c), TAT-QA (Zhu et al., 2021), and MultiHiertt
(Zhao et al., 2022) collect questions that require
both table understanding and numeric reasoning
to answer. Others, instead, present large-scale
unified benchmarks for mathematical reasoning.
NumGLUE (Mishra et al., 2022b) is a multi-task
benchmark with the goal of evaluating the performance of models on eight different tasks. Mishra
et al. 2022a push this direction further and presents
Lila, which consists of 23 mathematical reasoning
tasks, spanning a wide range of mathematics topics, linguistic complexity, question formats, and
background knowledge requirements.
**2.5** **Other Quantitative Problems**
Numbers are an integral part of our daily lives, and
we humans reason with numbers in a variety of
tasks, such as understanding news, reports, elections, and markets. This has led many in the community to question whether AI systems can effectively perform quantitative reasoning in everyday
scenarios. To this end, various benchmarks have
been developed to evaluate the capabilities of AI
systems in this area.
Diagrams, such as figures, charts, and plots, are
essential media that convey large amounts of information in a concise way. FigureQA (Kahou et al.,
2017), DVQA (Kafle et al., 2018), MNS (Zhang
et al., 2020c), PGDP5K (Hao et al., 2022), and
GeoRE (Yu et al., 2021a), are released to investigate models’ abilities to reason about quantitative
relationships among entities grounded in diagrams.
NumerSense (Lin et al., 2020), instead, examines
whether and to what extent existing pre-trained language models can induce numerical commonsense
knowledge. EQUATE (Ravichander et al., 2019)
formalizes aspects of quantitative reasoning in a
natural language inference framework. Quantitative reasoning can appear frequently in specific
domains like finance, science, and programming.
For instance, the ConvFinQA (Chen et al., 2022c)
targets numerical reasoning over financial reports
in a conversational question answering format. ScienceQA (Lu et al., 2022a) involves numerical reasoning in scientific domains, while P3 (Schuster
et al., 2021) studies the function inference ability
of deep learning models to find a valid input which
makes the given program return True.
**3** **Neural Networks for Mathematical**
**Reasoning**
**3.1** **Seq2Seq Networks for Math**
Sequence-to-sequence (Seq2Seq) (Sutskever et al.,
2014) neural networks have been successfully applied to mathematical reasoning tasks, such as math
word problem solving (Wang et al., 2017), theorem
proving (Yang and Deng, 2019), geometry problem solving (Robaidek et al., 2018), and math question answering (Tafjord et al., 2019). A Seq2Seq
model uses an encoder-decoder architecture and
usually formalizes mathematical reasoning as a sequence generation task. The basic idea behind this
approach is to map an input sequence (e.g. a mathematical problem) to an output sequence (e.g. an
equation, program, and proof). Common encoders
and decoders include Long Short Term Memory
network (LSTM) (Hochreiter and Schmidhuber,
1997), Gated Recurrent Unit (GRU) (Cho et al.,
-----
**Dataset** **Task** **Size** **Input** **Output** **Rationale** **Domain**
**Verb395 (2014)** MWP 395 Question Number Equation Math
**Alg514 (2014)** MWP 514 Question Number Equation Math
**IL (2015)** MWP - Question Number Equation Math
**SingleEQ (2015)** MWP 508 Question Number Equation Math
**DRAW (2015)** MWP 1,000 Question Number Equation Math
**Dolphin1878 (2015)** MWP 1,878 Question Number Equation Math
**Dolphin18K (2016)** MWP 18,460 Question Number Equation Math
**MAWPS (2016)** MWP 3,320 Question Number Equation Math
**AllArith (2017)** MWP 831 Question Number Equation Math
**DRAW-1K (2017)** MWP 1,000 Question Number Equation Math
**Math23K (2017)** MWP 23,162 Question Number Equation Math
**AQuA (2017)** MWP 100,000 Question Option Natural language Math
**Aggregate (2018)** MWP 1,492 Question Number Equation Math
**MathQA (2019)** MWP 37,297 Question Number Program Math
**ASDiv (2020)** MWP 2,305 Question Number Equation Math
**HMWP (2020)** MWP 5,470 Question Number Equation Math
**Ape210K (2020)** MWP 210,488 Question Number Equation Math
**SVAMP (2021)** MWP 1,000 Question Number Equation Math
**GSM8K (2021)** MWP 8,792 Question Number Natural language Math
**IconQA (2021b)** MWP 107,439 Figure+Question Option+Text span Math
**MathQA-Python (2021)** MWP 23,914 Question Number Python program Math
**ArMATH (2022)** MWP 6,000 Question Number Equation Math
**TabMWP (2022b)** MWP 38,431 Table+Question Option+Number Natural language Math
**MML (2015)** TP 57,882 Statement Proof steps Math
**HolStep (2017)** TP 2,209,076 Statement Proof steps Math
**Gamepad (2019)** TP - Statement Proof steps Math
**CoqGym (2019)** TP 71,000 Statement Proof steps Math
**HOList (2019)** TP 29,462 Statement Proof steps Math
**IsarStep (2021)** TP 860,000 Statement Proof steps Math
**PISA (2021)** TP 183,000 Statement Proof steps Math
**INT (2021c)** TP - Statement Proof steps Math
**NaturalProofs (2021)** TP 32,000 Statement Proof steps Math
**NaturalProofs-Gen (2022a)** TP 14,500 Statement Proof steps Math
**miniF2F (2022)** TP 488 Statement Proof steps Math
**miniF2F+informal (2022b)** TP 488 Statement Proof steps Math
**LeanStep (2022)** TP 21,606,000 Statement Proof steps Math
**GEOS (2015)** GPS 186 Figure+Question Option Geometry
**GeoShader (2017)** GPS 102 Figure+Question Number Geometry
**GEOS++ (2017)** GPS 1,406 Figure+Question Number Geometry
**GEOS-OS (2017)** GPS 2,235 Figure+Question Option Demonstration Geometry
**Geometry3K (2021a)** GPS 3,002 Figure+Question Option Logical form Geometry
**GeoQA (2021a)** GPS 4,998 Figure+Question Option Program Geometry
**GeoQA+ (2022)** GPS 12,054 Figure+Question Option Program Geometry
**UniGeo (2022a)** GPS/TP 14,541 Figure+Question Option Program Geometry
**Quarel (2019)** MathQA 2,771 Question Option Logical form Math
**McTaco (2019)** MathQA 13,225 Text+Question Option Time
**DROP (2019)** MathQA 96,567 Passage+Question Number+Text span Math
**Mathematics (2020)** MathQA 2,010,000 Question Free-form Number Math
**FinQA (2021c)** MathQA 8,281 Text+Table+Q Number Program Finance
**Fermi (2021)** MathQA 11,000 Question Number Program+Fact Math
**MATH (2021)** MathQA 12,500 Question Number Natural language Math
**TAT-QA (2021)** MathQA 16,552 Text+Table+Q Number+Text span Finance
**AMPS (2021)** MathQA 5,000,000 Question - LATE[X] Math
**MultiHiertt (2022)** MathQA 10,440 Text+Table+Q Number+Text span Expression Finance
**NumGLUE (2022b)** MathQA 101,835 Text+Question Number+Text span Math
**Lila (2022a)** MathQA 134,000 Text+Question Free-form Python program Math
**FigureQA (2017)** VQA 1,000,000+ Figure+Question Binary Math
**DVQA (2018)** VQA 3,487,194 Figure+Question Text span Number+Text span Math
**DREAM (2019)** ConvQA 10,197 Dialog+Question Option Math
**EQUATE (2019)** NLI - Premise+Hypothesis Binary Math
**NumerSense (2020)** Filling 13,600 Masked question Word Math
**MNS (2020c)** IQ Test - Figure Number Math
**P3 (2021)** Puzzle 397 Text Program Math
**NOAHQA (2021)** ConvQA 21,347 Dialog+Question Text span Reasoning graph Math
**ConvFinQA (2022c)** ConvQA 3,892 Report+Dialog+Q Number Expression Math
**PGDP5K (2022)** Parsing 5,000 Figure+Question Number Geometry
**GeoRE (2022a)** Parsing 12,901 Figure+Question Number Geometry
**ScienceQA (2022a)** VQA 21,208 Context+Question Option Natural language Science
Table 2: A summarization of mathematical reasoning datasets.
2014), and their bidirectional variants: BiLSTM
and BiGRU. DNS (Wang et al., 2017) is the first
work that uses a Seq2Seq model to transform sentences in word problems into mathematical equa
-----
**Paper** **Task** **Problem** **Network** **Encod** **Decod** **ATT Description**
**DNS (Wang et al., 2017)** MWP Generation Seq2Seq GRU LSTM The first deep MWP solver
**AnsRat (Ling et al., 2017)** MWP Generation Seq2Seq LSTM LSTM Trained with staged back-propagation
**Math-EN (Wang et al., 2018a)** MWP Generation Seq2Seq BiLSTM LSTM A standard Seq2Seq model with attention
**CASS (Huang et al., 2018)** MWP Generation Seq2Seq BiGRU BiGRU Copy and alignment with RL
**S-Aligned (Chiang and Chen, 2019)** MWP Generation Seq2Seq BiLSTM LSTM Operating symbols
**T-RNN (Wang et al., 2019)** MWP Generation Seq2Seq BiLSTM BiLSTM Predicting a tree-structure math template
**GROUP-ATT (Li et al., 2019)** MWP Generation Seq2Seq BiLSTM LSTM Group attention
**SMART (Hong et al., 2021b)** MWP Generation Seq2Seq - - Explicitly incorporating values
**SelfAtt (Robaidek et al., 2018)** GPS Classification Seq2Seq BiLSTM - Multi-hop self-attention
**QuaSP+ (Tafjord et al., 2019)** MathQA Generation Seq2Seq BiLSTM LSTM Adopting attributed grammar
**AST-Dec (Liu et al., 2019a)** MWP Generation Seq2Tree BiLSTM Tree Using prefix order decoding
**GTS (Xie and Sun, 2019)** MWP Generation Seq2Tree BiGRU Tree A goal-driven tree-structured approach
**KA-S2T (Wu et al., 2020)** MWP Generation Seq2Tree BiLSTM Tree A knowledge-aware method
**TSN-MD (Zhang et al., 2020a)** MWP Generation Seq2Tree BiGRU Tree A teacher-student network
**T-LSTM (Zaporojets et al., 2021)** MWP Generation Seq2Tree BiLSTM Tree A child-sum tree-LSTM model
**NT-LSTM (Zaporojets et al., 2021)** MWP Generation Seq2Tree BiLSTM Tree An N-ary tree-LSTM model
**NS-Solver (Qin et al., 2021)** MWP Generation Seq2Tree BiGRU Tree A neural-symbolic solver with programs
**NumS2T (Wu et al., 2021b)** MWP Generation Seq2Tree BiLSTM Tree Explicitly incorporating values
**HMS (Lin et al., 2021)** MWP Generation Seq2Tree GRU Tree A word-clause-problem encoder
**LBF (Hong et al., 2021a)** MWP Generation Seq2Tree BiGRU Tree A learning-by-fixing (LBF) framework
**Seq2DAG (Cao et al., 2021)** MWP Generation Seq2Graph GRU Graph A direct acyclic graph (DAG) structure
**Graph2Tree (Zhang et al., 2020b)** MWP Generation Graph2Tree Graph Tree Generating better solution expressions
**Multi-E/D (Shen and Jin, 2020)** MWP Generation Graph2Tree Graph Tree A graph encoder and a tree-bad decoder
**Graph2Tree (Li et al., 2020b)** MWP Generation Graph2Tree Graph Tree A graph-to-tree neural network
**EEH-G2T (Wu et al., 2021a)** MWP Generation Graph2Tree Graph Tree A hierarchical graph-to-tree model
**ASTactic (Yang and Deng, 2019)** TP Generation Tree2Seq TreeLSTM GRU Generating tactics as programs
**MathDQN (Wang et al., 2018b)** MWP Search DQN - - RL with a deep Q-network (DQN)
**DDT (Meng and Rumshisky, 2019)** MWP Generation Transformer Trm Trm A Transformer-based model
**DeepMath (Irving et al., 2016)** TP Classification CNN CNN - The first deep large scale theorem prover
**Holophrasm (Whalen, 2016)** TP Classification BiGRU BiGRU - A neural prover for higher-order logic
**CNNTP (Loos et al., 2017)** TP Classification CNN CNN - A CNN-based theorem prover
**WaveNetTP (Loos et al., 2017)** TP Classification WaveNet WaveNet - A WaveNet-based theorem prover
**DeepHOL (Bansal et al., 2019)** TP Generation WaveNet WaveNet - A neural theorem prover with RL
**NGS (Chen et al., 2021a)** GPS Generation VQA LSTM* LSTM The first deep geometry solver
**PGDPNet (Zhang et al., 2022a)** Parsing Generation GNN - - A neural diagram parser with GNN
Table 3: A summarization of deep neural network models for mathematical reasoning. Encod: encoder, Decod:
decoder, ATT: Attention. LSTM*: ResNet + LSTM, Trm: Transformer
tions. A large amount of work has shown the performance advantage of Seq2Seq models over previous
statistical learning approaches (Ling et al., 2017;
Wang et al., 2018a; Huang et al., 2018; Chiang and
Chen, 2019; Wang et al., 2019; Li et al., 2019).
**3.2** **Graph-based Networks for Math**
Seq2Seq approaches show their advantages of generating mathematical expressions and not relying
on hand-crafted features. Mathematical expressions could be transformed into a tree-based structure, e.g., an abstract syntax tree (AST) and a graphbased structure, which describes structured information in the expressions. However, this important
information is not explicitly modeled by Seq2Seq
methods. To solve this issue, graph-based neural networks are developed to explicitly model the
structure in expressions.
Sequence-to-tree (Seq2Tree) models explicitly
model the tree structure when encoding the output
sequences (Liu et al., 2019a; Xie and Sun, 2019;
Wu et al., 2020; Zhang et al., 2020a; Zaporojets
et al., 2021; Qin et al., 2021; Wu et al., 2021b; Lin
et al., 2021; Hong et al., 2021a). For example, (Liu
et al., 2019a) devise a Seq2Tree model to better use
information from an equation’s AST. Seq2DAG
(Cao et al., 2021), instead, applies a sequence-tograph (Seq2Graph) framework when generating the
equations since the graph decoder is able to extract
complex relationships among multiple variables.
The graph-based information can also be embedded
when encoding the input mathematical sequences
(Zhang et al., 2020b; Shen and Jin, 2020; Li et al.,
2020b; Wu et al., 2021a). For example, ASTactic
(Yang and Deng, 2019) applies TreeLSTM (Tai
et al., 2015) on ASTs to represent the input goal
and premises for theorem proving.
**3.3** **Attention-based Networks for Math**
The attention mechanism has been successfully applied to natural language processing (Bahdanau
et al., 2014) and computer vision problems (Xu
-----
et al., 2015; Woo et al., 2018), taking into account
the hidden vectors of the inputs during the decoding
processing. Recently, researchers have been exploring its usefulness in mathematical reasoning tasks,
as it can be used to identify the most important
relationships between mathematical concepts. For
instance, MATH-EN (Wang et al., 2018a) is a math
word problem solver which benefits from longdistance dependency information learned by selfattention. Attention-based methods have also been
applied to other mathematical reasoning tasks such
as geometry problems solving (Robaidek et al.,
2018; Chen et al., 2021a) and theorem proving
(Yang and Deng, 2019). Various attention mechanisms have been studied to extract better representations, such as Group-ATT (Li et al., 2019)
which uses different multi-head attention to extract
various types of MWP features, and graph attention which is applied to extract knowledge-aware
information in (Wu et al., 2020).
**3.4** **Other Neural Networks for Math**
Deep learning approaches to mathematical reasoning tasks can also make use of other neural
networks, such as convolutional neural networks
(CNN) and multimodal networks. Some work encodes the input text using a convolutional neural
network architecture, giving the model the ability
to capture long-term relationships between symbols in the input (Gehring et al., 2017; Wang et al.,
2018a,a; Robaidek et al., 2018; Irving et al., 2016;
Loos et al., 2017). For example, the first application of deep neural networks for theorem proving
is proposed in (Irving et al., 2016), which relies
on convolutional networks for premise selection in
large theories.
Multimodal mathematical reasoning tasks, such
as geometry problem solving and diagram-based
mathematical reasoning, are formalized as visual
question answer (VQA) problems (Kafle et al.,
2018; Chen et al., 2021a; Lu et al., 2021b). In this
domain, visual inputs are encoded using ResNet
(He et al., 2016) or Faster-RCNN (Ren et al., 2015),
while textual representations are obtained via GRU
or LTSM. Subsequently, the joint representation is
learned using multimodal fusion models, such as
BAN (Kim et al., 2018), FiLM (Perez et al., 2018),
and DAFA (Gao et al., 2019).
Other deep neural network structures can also be
used in mathematical reasoning. A Graph Neural
Network (GNN) is employed for geometry prob
lem parsing in Zhang et al. (2022a), taking advantage of its success in spatial reasoning. WaveNet
has been applied to theorem proving (Loos et al.,
2017; Bansal et al., 2019), due to its ability to address longitudinal time-series data. Furthermore,
Transformers are found to outperform GRU in generating mathematical equations in DDT (Meng and
Rumshisky, 2019). Finally, MathDQN (Wang et al.,
2018b) is the first work to explore reinforcement
learning for math word problem solving, taking
advantage of its strong search capabilities.
**4** **Pre-trained Language Models for**
**Mathematical Reasoning**
Pre-trained language models (e.g., Devlin et al.
(2018); Radford et al. (2020); Brown et al. (2020))
have demonstrated remarkable performance gains
on a wide range of NLP tasks (Qiu et al., 2020). By
pre-training on a large corpus of text, the models
learn valuable world knowledge (Guu et al., 2020),
which could be applied to downstream tasks such
as question answering (Khashabi et al., 2020), text
classification (Minaee et al., 2021), and dialogue
generation (Zhang et al., 2019; Qiu et al., 2022a,b).
Similar ideas can be applied to math-related problems, and previous work has shown promising performance of pre-trained language models in answering math word problems (Kim et al., 2020; Shen
et al., 2021; Yu et al., 2021b; Cobbe et al., 2021;
Li et al., 2022b; Jie et al., 2022; Ni et al., 2022), assisting with theorem proving (Polu and Sutskever,
2020; Han et al., 2022; Wu et al., 2022b; Jiang et al.,
2022b; Welleck et al., 2022a), as well as other mathematical tasks (Lu et al., 2021a; Chen et al., 2022a;
Cao and Xiao, 2022; Clark et al., 2020; Chen et al.,
2021c; Zhu et al., 2021; Hendrycks et al., 2021;
Zhao et al., 2022; Nye et al., 2021; Charton, 2021).
However, though large language models excel in
modeling natural language, there are several challenges to using them for mathematical reasoning.
First, pre-trained language models are not specifically trained on mathematical data. This likely
contributes to them being less proficient in mathrelated tasks compared to natural language tasks.
There is also less mathematical or scientific data
available for large-scale pre-training compared to
text data. Second, the size of pre-trained models
continues to grow, making it expensive to train the
entire model from scratch for specific downstream
tasks. Additionally, downstream tasks may deal
with different input formats or modalities, such as
-----
**Paper** **Backbone** **Size** **Corpus** **Pre-training task**
**GPT-f (Polu and Sutskever, 2020)** Transformer 774M Math Causal language modeling
**LISA (Jiang et al., 2021)** Transformer 163M Math Causal language modeling
**MATH-PLM (Hendrycks et al., 2021)** GPT-2 1.5B Math Causal language modeling
**MWP-BERT (Liang et al., 2022b)** RoBERTa 123M Math 8 numeracy augmented tasks
**TaPEx (Liu et al., 2022b)** BART 406M SQL Query result generation
**HTPS (Lample et al., 2022)** Transformer 600M Math Masked Seq2Seq modeling
**Thor (Jiang et al., 2022a)** Transformer 700M Github, arXiv Causal language modeling
**PACT (Han et al., 2022)** Transformer 837M Math Masked/Causal language modeling
**Minerva (Lewkowycz et al., 2022)** PaLM 540B Science & Math Causal language modeling
**GenBERT (Geva et al., 2020)** BERT 110M Number, Text Masked/Causal language modeling
**NF-NSM (Feng et al., 2021)** RoBERTa 110M Number Number prediction
**LIME (Wu et al., 2021d)** Transformer 11B Math Causal language modeling
**Set (Wu et al., 2022c)** T5 60M Math Unique token generation
Table 4: Comparison of model backbone, size, pre-training corpus, and pre-training tasks of language models for
mathematical reasoning.
structured tables (Zhao et al., 2022; Chen et al.,
2021c; Zhu et al., 2021) or diagrams (Lu et al.,
2021a; Chen et al., 2022a; Lu et al., 2021b). To
address these challenges, researchers have to adjust
pre-trained models by finetuning them on downstream tasks or adapting the neural architectures.
Lastly, though pre-trained language models can encode substantial amounts of linguistic information,
it may be difficult for models to learn numerical
representation or high-level reasoning skills just
from the language modeling objective (Lin et al.,
2020; Kalyan et al., 2021). Taking this into consideration, there are recent studies investigating
the injection of mathematical-related skills with a
curriculum starting from basics (Geva et al., 2020;
Feng et al., 2021; Wu et al., 2021d).
**4.1** **Self-Supervised Learning for Math**
Self-supervised learning is a machine learning approach in which an algorithm learns to perform
a task without being explicitly provided with labeled training data. An example of self-supervised
learning is next-token prediction, which allows a
language model to learn the relationships between
words and understand the meaning of the text from
large-scale unlabeled data. Table 4 provides a list of
language models pre-trained with self-supervised
tasks for mathematical reasoning.
**Model scale. There is a clear trend that pre-trained**
language models have become increasingly larger
in the past few years (Devlin et al., 2018; Lewis
et al., 2020; Raffel et al., 2020; Radford et al., 2020;
Brown et al., 2020). A recent study (Liang et al.,
2022a) shows that model scale within a model family reliably predicts model accuracy. The study
also mentions an interesting thresholding effect:
“all models that win head-to-head model comparisons for accuracy at a rate well above chance are
at least 50B parameters”. A similar size-growing
trend can be observed in the field of mathematical
reasoning with pre-trained language models. For
example, MWP-BERT (Liang et al., 2022b) uses
a backbone of BERT (110M) (Devlin et al., 2018)
and RoBERTa (123M) (Liu et al., 2019b) for Math
Word Problems. TaPEx (Liu et al., 2022b) pre-train
their model based on BARTlarge, which has 460M
parameters. Most recently, Minerva (Lewkowycz
et al., 2022) based on the PaLM (Chowdhery et al.,
2022) pre-trained language model has a variable
size with up to 540B parameters.
**Pre-training corpus.** There are generally two
types of pre-training corpus for mathematical language models. (i) Curated datasets from openly
accessible sources. For example, Hendrycks et al.
(2021) present the first large-scale mathematics
pre-training dataset with step-by-step solutions
in natural language and LATE[X, called the Auxil-]
iary Mathematics Problems and Solutions (AMPS).
AMPS consists of Khan Academy and Mathematica data. Minerva (Lewkowycz et al., 2022) collects a high-quality dataset containing scientific and
mathematical data, which contains 38.5B tokens
from webpages filtered for mathematical content
and from papers submitted to the arXiv preprint
server. Thor (Jiang et al., 2022a) pre-trains a language model on the GitHub + arXiv subsets of The
Pile (Gao et al., 2020). (ii) Synthetic datasets based
on templates or interaction with engines. Recent
work (Wu et al., 2021d; Krishna et al., 2021; Ri
and Tsuruoka, 2022; Anderson and Farrell, 2022;
-----
Wu et al., 2022c) shows that pre-training on data
that is fully synthetically generated—synthetic pretraining can actually provide substantial gains. Representative work includes TaPEx (Liu et al., 2022b),
which obtains a pre-training corpus by automatically synthesizing executable SQL queries and their
execution outputs. LISA (Jiang et al., 2021) extracts lemmas and theorems by interacting with the
Isabelle standard library and the Archive of Formal
Proofs. GenBERT (Geva et al., 2020) generats numerical and textual pre-training datasets based on
manually crafted and extracted templates.
**Pre-training tasks. General pre-training language**
models have two typical self-supervised learning
tasks: (i) Masked Language Modeling (MLM),
where it randomly masks a portion of words in
each sequence to predict the outcome; (ii) Causal
Language Modeling (CLM), where the model is
trained to predict the next token in a sequence of
tokens. Following the same paradigm, researchers
pre-train language models with MLM and CLM
tasks on mathematical/scientific corpora for downstream tasks (Polu and Sutskever, 2020; Jiang et al.,
2021; Hendrycks et al., 2021; Han et al., 2022;
Lewkowycz et al., 2022; Jiang et al., 2022a).
There is also recent work that designs customized tasks to inject mathematical reasoning
capabilities into language models. For instance,
Liang et al. (2022b) pre-train language models with
a suite of 8 numeracy-augmented tasks with consideration of reasoning logic and numerical properties. LIME (Wu et al., 2021d) proposes synthetic
pre-training tasks to learn three reasoning primitives: deduction, induction, and abduction before
learning more complex reasoning skills, which also
be regarded as a form of curriculum learning. A
follow-up work (Wu et al., 2022c) finds that pretraining on a simple and generic synthetic task of
predicting unique tokens in its original order (Set)
achieves similar performance as LIME. Geva et al.
(2020) train their language models on a numerical
data generation task followed by a text data generation task. The first task teaches models numerical operations and the second task teaches models
to comprehend how numerical operations are expressed in text.
Besides knowledge injection, there are also studies about probing whether pre-trained language
models have captured numerical commonsense
knowledge, i.e., commonsense knowledge that
provides an understanding of the numeric rela
**Paper** **Backbone** **Task**
**EPT (2020)** ALBERT MWP
**GenerateRank (2021)** BART MWP
**RPKHS (2021b)** RoBERTa MWP
**PatchTRM (2021b)** ResNet+BERT MWP
**GSM8K-PLM (2021)** GPT-3 MWP
**BERT-TD+CL (2022b)** BERT MWP
**DeductReasoner (2022)** RoBERTa MWP
**Self-Sampling (2022)** GPT-Neo MWP
**Bhaskara (2022a)** GPT-Neo MWP
**miniF2F-PLM (2022)** GPT-f TP
**NaturalProver (2022a)** GPT-3 TP
**Inter-GPS (2021a)** BART GPS
**UniGeo (2022a)** VL-T5 GPS
**DPE-NGS (2022)** RoBERTa GPS
**Aristo (2020)** RoBERTa MathQA
**FinQANet (2021c)** RoBERTa MathQA
**TagOp (2021)** RoBERTa MathQA
**MATH-PLM (2021)** GPT-3 MathQA
**MT2Net (2022)** RoBERTa MathQA
**Scratchpad (2021)** Transformer Mixed
**LAMT (2021)** Transformer Mixed
Table 5: Finetuned pre-trained language models for
downstream mathematical reasoning tasks.
tion between entities (Lin et al., 2020). Zhang
et al. (2020d) evaluate the language model embeddings via scalar probing and Berg-Kirkpatrick and
Spokoyny (2020) carry out a large-scale empirical investigation of masked number prediction and
numerical anomaly detection in text.
**4.2** **Task-specific Fine-tuning for Math**
Task-specific fine-tuning is a technique to improve
the performance of a pre-trained language model
on a specific task. This is also a common practice when there is not enough data for training the
large models from scratch. As shown in Table 5,
existing work fine-tunes pre-trained language models on a variety of downstream tasks, such as Math
Word Problems (Kim et al., 2020; Shen et al., 2021;
Yu et al., 2021b; Lu et al., 2021b; Cobbe et al.,
2021; Li et al., 2022b; Jie et al., 2022; Ni et al.,
2022; Mishra et al., 2022a; Welleck et al., 2022b),
MathQA over financial tabular data (Zhao et al.,
2022; Chen et al., 2021c; Zhu et al., 2021), Geometry (Lu et al., 2021a; Chen et al., 2022a; Cao and
Xiao, 2022), Linear Algebra (Charton, 2021), and
informal theorem proving (Welleck et al., 2022a).
Apart from fine-tuning the model parameters, much
work also uses pre-trained language models as encoders and ensemble them with other modules for
downstream tasks, e.g., IconQA (Lu et al., 2021b)
proposes to combine the ResNet (He et al., 2016)
-----
and BERT for diagram recognition and text understanding, respectively.
**5** **In-context Learning for Mathematical**
**Reasoning**
Large language models (LLMs), such as GPT3 (Brown et al., 2020), have recently revolutionized
the field of natural language processing (NLP), especially on account of their powerful few-shot incontext learning capabilities (Brown et al., 2020).
In-context Learning (ICL) enables LLMs to perform target tasks by providing some task examples
as conditions at inference time, without updating
model parameters (Radford et al., 2020; Brown
et al., 2020). ICL allows users to quickly build
models for new use cases without worrying about
fine-tuning and storing a large amount of new parameters for each task, so it is widely used in fewshot settings nowadays (Min et al., 2022).
An in-context example typically contains an
input-output pair with some prompt words, e.g.,
_Please select the largest number from the list. In-_
_put: [2, 4, 1, 5, 8]. Output: 8, and few-shot works_
by giving multiple examples, and then a final input example, where the model is expected to predict the output. However, such standard few-shot
promptings, in which the LLM is given in-context
examples of input–output pairs in front of test-time
examples, have not yet proved sufficient to achieve
high performance on challenging tasks such as
mathematical reasoning (Rae et al., 2021).
Chain-of-thought prompting (CoT) (Wei et al.,
2022), leverages intermediate natural language rationales as prompts to enable LLMs to first generate
_reasoning chains and then predict an answer for_
an input question. For example, a CoT prompt for
solving the math word problem could be
**Question: Roger has 5 tennis balls. He**
buys 2 more cans of tennis balls. Each
can has 3 tennis balls. Then, how many
tennis balls does Roger have now?
**Answer: Roger started with 5 balls. 2**
_cans of 3 tennis balls each are 6 tennis_
_balls. 5 + 6 = 11. The answer is 11._
Apart from Kojima et al. (2022) showing that
LLMs are decent zero-shot reasoners when given
the “Let’s think step by step!" prompt, most of
the recent work has focused on how to improve
chain-of-thought reasoning under the few-shot setting. This work is mainly divided into two parts,
(i) selecting better in-context examples (Fu et al.,
2022; Lu et al., 2022b; Zhang et al., 2022b) and (ii)
creating better reasoning chains (Zhou et al., 2022;
Wang et al., 2022; Li et al., 2022a).
**5.1** **In-context Example Selection**
Early chain-of-thought work randomly or heuristically selects in-context examples. However, recent studies have shown that this type of few-shot
learning can be highly unstable across different selections of in-context examples (Rubin et al., 2022;
Liu et al., 2022a). Therefore, which in-context reasoning examples make the most effective prompts
is still an unknown problem in the literature.
To address the limitation, recent work has investigated various methods to optimize the incontext examples selection process (Rubin et al.,
2022; Zhang et al., 2022b; Lu et al., 2022b; Yu
et al., 2022; Fu et al., 2022). For example, Rubin
et al. (2022) attempt to address this issue by retrieving semantically similar examples. However,
this approach has been shown to work poorly on
mathematical reasoning problems (Zhang et al.,
2022b), and it is sometimes hard to measure the
similarity if structured information (e.g., tables)
is contained (Lu et al., 2022b). In addition, Fu
et al. (2022) propose complexity-based prompting,
which chooses examples with complex reasoning
chains, i.e., chains with more reasoning steps, as
the prompt. Lu et al. (2022b) propose a method
for selecting in-context examples via reinforcement
learning (RL). Specifically, an agent learns to find
optimal in-context examples from a candidate pool,
with the goal of maximizing the prediction rewards
on given training examples when interacting with
the GPT-3 environment. In addition, Zhang et al.
(2022b) find diversifying demonstration questions
could also improve model performance. They propose a two-step approach to construct in-context
demonstrations: first, partitioning questions of a
given dataset into a few clusters; second, selecting a representative question from each cluster and
generating its reasoning chain using a zero-shot
chain-of-thought with simple heuristics.
**5.2** **High-quality Reasoning Chains**
Early chain of thought work (e.g., Wei et al. (2022))
mainly relies on a single human-annotated reasoning chain as a prompt. However, manually creating
reasoning chains has two disadvantages. First, as
tasks become more complex, current models may
not be sufficient to learn to perform all necessary
-----
**Engine** **ICL** **Rationale** **Rationale**
**Models** **Post method**
**(best performed)** **source** **type** **source**
Few-shot-CoT (Wei et al., 2022) PaLM (540B) Random Language Hand-crafted -
Self-Consistency-CoT (Wang et al., 2022) Codex (175B) Random Language Hand-crafted Self-consistency
Least-to-most CoT (Zhou et al., 2022) Codex (175B) Random Language Hand-crafted -
Retrieval-CoT (Zhang et al., 2022b) GPT-3 (175B) Retrival Language Auto-generated -
PromptPG-CoT (Lu et al., 2022b) GPT-3 (175B) RL Language Hand-crafted -
Auto-CoT (Zhang et al., 2022b) Codex (175B) Clustering Language Auto-generated -
Complexity-CoT (Fu et al., 2022) GPT-3 (175B) Complexity Language Hand-crafted Self-consistency
Few-shot-PoT (Chen et al., 2022b) GPT-3 (175B) Random Code Hand-crafted -
Table 6: In-context learning with large language models for mathematical reasoning. For GPT-3, all papers use the
text-davinci-002 version; for Codex, all papers use the code-davinci-002. RL is short for reinforcement learning.
reasoning steps and cannot easily generalize to different tasks. Second, a single decoding process
is vulnerable to incorrect inference steps, leading
to an incorrect prediction as the final answer. To
address this limitation, recent studies mainly focus on two aspects, (i) hand-crafting more complex
demonstrations, which we refer to as process-based
_approaches (Zhou et al., 2022; Chen et al., 2022b),_
(ii) leveraging ensemble-like methods, which we
refer to as outcome-based approaches (Wang et al.,
2022; Li et al., 2022a).
**Process-based approaches aim to improve the**
chain-of-thought reasoning quality, especially for
complex reasoning tasks. In least-to-most prompting (Zhou et al., 2022), the problem-solving process is implemented through two-stage prompting:
(i) reducing a complex problem into a list of subproblems; (ii) solving these sub-problems sequentially, so that solving a given sub-problem is facilitated by the answers to previously solved subproblems. Similarly, Khot et al. (2022) leverage
diverse decomposition structures and use different prompts to answer each sub-question. Apart
from these multi-step reasoning methods, Chen
et al. (2022b); Gao et al. (2022) propose programof-thoughts (PoT), an alternative solution that uses
large language models to express the reasoning
process as a program. The computation is then relegated to an external computer, which executes the
generated programs to derive the answer.
**Outcome-based approaches acknowledge the**
potential incorrectness of an individual reasoning path, and instead use multiple reasoning
paths (Wang et al., 2022; Li et al., 2022a). Selfconsistency (Wang et al., 2022) generates a set of
reasoning paths by sampling from the language
model, and marginalizes out the reasoning paths
by choosing the most common answer. In addition
to using sampling with a single prompt to produce
multiple reasoning paths, Li et al. (2022a) propose
to introduce diverse prompts through “self teaching”, as a complementary solution to produce a
higher degree of diversity.
**6** **Discussion**
**6.1** **Analysis of Benchmarks**
**Multi-modal setting. Most existing benchmarks**
for mathematical reasoning have targeted the
textual-only modality. However, visual elements
can provide a rich source of quantitative information, making multi-modal datasets beneficial for
reasoning over quantitative relations in natural images (Lu et al., 2022a), abstract diagrams (Lu et al.,
2021b), figures (Kahou et al., 2017), and charts
(Kafle et al., 2018). Tables, which are commonly
found in daily documents and contain hierarchically structured information, have also been the
focus of tasks that require quantitative reasoning
over textual and tabular context (Chen et al., 2021c;
Zhu et al., 2021; Zhao et al., 2022; Lu et al., 2022b).
In addition, recent datasets have been developed for
mathematical reasoning grounded on conversations
(Sun et al., 2019; Zhang et al., 2021; Chen et al.,
2022c), as well as reports (Chen et al., 2022c).
**Low-resource setting.** Despite the creation of
various datasets, mathematical reasoning in lowresource settings remains largely under-explored.
Pioneering research has developed mathematical
reasoning benchmarks for financial (Chen et al.,
2021c; Zhu et al., 2021; Zhao et al., 2022) and
scientific domains (Lu et al., 2022a). Additionally, there have been attempts to build non-English
datasets for Chinese (Wang et al., 2017; Qin et al.,
2020; Yu et al., 2021a) and Arabic (Alghamdi et al.,
2022) for mathematical reasoning.
**Rationale annotations. Complex reasoning usu-**
-----
**Problems** **GPT-3 (text-davinci-002)**
John had 8 balls and he gave 3 to Mary. John has 5 balls.
How many balls does John have now?
**T5** **UnifiedQA** **GPT-3** **GPT-3**
(Large) (Large) (davinci-002) (davinci-003)
3 balls + 5 balls = 5 balls 8 balls 8 balls
23 balls + 145 balls = 58 balls 168 balls
23 balls + 1,855 balls = 2,878 balls 2,988 balls
Table 7: Language models struggle with large numbers.
ally involves multiple steps to arrive at the final
answer. To bridge this gap, datasets annotated with
intermediate rationales such as logic forms (Tafjord
et al., 2019; Lu et al., 2021a), programs (Amini
et al., 2019; Chen et al., 2021c,a; Cao and Xiao,
2022; Chen et al., 2022a), and reasoning graphs
(Zhang et al., 2021) have been proposed to train
models for complex reasoning tasks. Python programs are used as reasoning annotations in (Austin
et al., 2021; Mishra et al., 2022a) due to their enhanced accessibility and readability. To imitate the
reasoning process of a human, a more recent trend
is to annotate solutions in natural language (Ling
et al., 2017; Cobbe et al., 2021; Lu et al., 2022b;
Hendrycks et al., 2021; Lu et al., 2022a).
**6.2** **Analysis of Deep Learning Methods**
**Is the current representation of numeracy suffi-**
**cient? While neural networks and language mod-**
els have achieved impressive results, their ability
to represent and comprehend numbers is still not
ideal. The standard practice for deep learning techniques is to treat numbers in the same way as words.
Early neural network methods create a vocabulary
that maps input words and numbers to token IDs,
resulting in less frequent numbers being collapsed
into an “UNK” token. On the other hand, pre-trained
language models (such as BERT) and newer large
language models (such as GPT-3) use subword tokenization techniques (Wu et al., 2016; Sennrich
et al., 2016) to split numbers into atomic tokens.
Recent studies have shown that these tokenization approaches are suboptimal (Wallace et al.,
2019; Lin et al., 2020; Zhang et al., 2020e; Thawani
et al., 2022). Two numbers on the same or close
number line could have surface forms with no
shared common tokens. For example, a number
like 1598 is tokenized as “15” and “98” in GPT-3,
while another format like 1, 598 is split as three
different tokens: “1”, “,”, and “598”. This lack of
consistent representation can make it difficult for
deep learning models to effectively process numbers, especially when compared to pure text. The
insufficient representations of numbers can lead to
John had 3 apples. John had 8 balls
and he gave 3 to Mary. How many
balls does Mary have now?
Mary has 5 balls.
John had 8 balls and he gave 3 to Mary. John has more balls.
Who has more balls now?
John had 8 balls and he gave 3 to Mary.
Does John have more balls now?
No, John has 5 balls now.
John had 8 balls and he gave 4 to Mary. No, John has 4 balls now.
Does John have more balls now?
John had 8 balls and he gave 4 to Mary.
Who has more balls now?
John has more balls.
Table 8: Examples where large language models are
not consistent for mathematical reasoning.
out-of-distribution (OOD) problems. Table 7 provides examples of where language models tend to
struggle with large numbers. Although increasing
model scales could help, even the state-of-the-art
large language model GPT-3 performs poorly when
reasoning over large numbers. Some recent work
suggests that using scientific notation (Zhang et al.,
2020e) and digit-level decomposition (Geva et al.,
2020) may be helpful in improving numeracy representation, but this remains an open problem in
the field.
**Are deep learning methods consistent for math-**
**ematical reasoning?** Recent developments in
deep learning have led to impressive results on various mathematical reasoning tasks. The zero-shotCoT Minerva 540B achieves a score of 75.0% on
the MMLU-STEM benchmark (Hendrycks et al.,
2020a), which assesses multitask reasoning ability in the fields of science, technology, engineering, and mathematics (STEM) at both high school
and college levels. Similarly, few-shot-CoT GPT-3
175B achieves a high accuracy of 93.0% on the
MultiArith task. However, the question remains as
to whether these methods are sufficiently advanced
to tackle more complex problems.
There is strong evidence that deep learning methods for mathematical reasoning are not robust and
susceptible to adversarial attacks (Lin et al., 2020;
Patel et al., 2021; Mishra et al., 2022b,a; Welleck
et al., 2022c). The SVAMP (Patel et al., 2021)
dataset is a collection of one-unknown arithmetic
word problems up to grade 4, with slight word variations from previous datasets. It is surprising that
current state-of-the-art (SOTA) methods perform
poorly on this dataset, with Graph2Tree achieving
only a 43.8% accuracy and zero-shot-CoT GPT-3
-----
(175B) only reaching 63.7%, which is just above
an “F” grade. Table 8 also shows the inconsistent
performance of the zero-shot GPT-3 model in scenarios with slightly different descriptions, while
human performance remains unchanged. This indicates a lack of consistency in the mathematical
reasoning ability of SOTA large language models.
**7** **Future Work**
**7.1** **Generalization and Robustness**
Despite impressive progress, neural models commonly display generalization and robustness failures on reasoning tasks. For example, above we discussed difficulties in generalizing to larger numbers
(Table 7) or remaining robust to nearby problems
(Table 8), while others identify failures in generalizing to longer problems than those observed in
training (e.g., Anil et al. (2022)). One direction is
to explore new inference-time (Jung et al., 2022;
Mitchell et al., 2022) or fine-tuning (Anil et al.,
2022) strategies.
Another aspect of generalization relates to the
role of memorization. For example, is the ability to
produce a complex solution dependent on seeing
many similar solutions during training, or even on
memorizing the solution? Term frequency in the
pretraining corpus is known to impact accuracy in
simple arithmetic tasks (Razeghi et al., 2022) or
factual question answering (Kandpal et al., 2022).
On the other hand, Lewkowycz et al. (2022) did not
find evidence of memorization in complex outputs,
yet their training set and model are not available
for inspection. Gaining a full understanding of
these factors for complex problems and outputs
(e.g., multi-step solutions or proofs) requires more
analysis, as well as accessible datasets and models.
**7.2** **Trustworthy Reasoning**
Although recent advances in language models
demonstrate the powerful capabilities of mathematical reasoning, users cannot always trust the given
answer predicted by the model, because language
models can generate ungrounded answers that users
must choose either to blindly accept or to verify
themselves (Nakano et al., 2021). Even with recent
prompting strategies that provide rationales before
making predictions (Wei et al., 2022), language
models still often hallucinate statements, produce
flawed reasoning, and output wrong answers.
Therefore, methods that enable more trustworthy
reasoning are urgently needed. Some potential di
rections for this include: (i) using language models
to provide evidence, such as theorems, to support
the reasoning process; (ii) incorporating a mechanism that makes a judgment when the model is
unsure of the answer; and (iii) using a model itself
or another module to detect and locate mistakes in
a model’s reasoning.
**7.3** **Learning from Feedback**
Another important direction to further improve language models for math is to let the model learn
from feedback. Such a process makes the continual
improvement of models’ output quality and safety
possible. An example is using reinforcement learning from human feedback (RLHF) (Ouyang et al.,
2022) to align language models with instructions.
The idea is to let humans rank the generated outputs of language models and use the learned reward
function to finetune the language model with policy
gradient (Ouyang et al., 2022; Glaese et al., 2022;
Qiu et al., 2022a). Existing work about online
learning (LeCun et al., 1998; Gimpel et al., 2010;
Liang and Klein, 2009) and incorporating humans
in the loop (Li et al., 2016; Wu et al., 2022a) is
also related to this research direction. In the context of mathematical reasoning, feedback does not
necessarily come from humans directly. The outcome of a theorem-proof engine (Jiang et al., 2021;
Wu et al., 2021d, 2022c) or the execution result
of model-generated scripts can also be used as the
reward source (Polu and Sutskever, 2020).
**7.4** **Multi-modal Mathematical Reasoning**
In recent years, there has been growing interest
in multi-modal mathematical reasoning, which involves using multiple sources of information, such
as text, tables, natural images, and diagrams, to
solve mathematical problems (Kahou et al., 2017;
Kafle et al., 2018; Lu et al., 2021b, 2022b). Despite this growing interest, there are still many challenges and opportunities for further research in this
field. Currently available datasets in this domain
tend to be small (Zhao et al., 2022), generated from
templates (Kahou et al., 2017), or focus on specific
topics (Lu et al., 2021a; Chen et al., 2022a). One
line of current research involves applying VQAbased frameworks to analyze figures and plots, but
this approach can result in significant semantic gaps
due to the fact that most VQA models are trained
on natural images. Similar issues can arise when
converting tables and natural images into text descriptions, as important information can be lost dur
-----
ing this process. One potential direction for future
work is to enhance the ability of multi-modal mathematical reasoning systems to tackle more complex
and realistic problems. This may involve creating unified models for interpreting and integrating
different modalities, as well as developing better
evaluation benchmarks to assess the performance
of these systems.
**8** **Conclusion**
In this paper, we present a comprehensive survey
of deep learning for mathematical reasoning. We
review the various tasks and datasets that have been
used, and discuss the various approaches that have
been taken, including early neural networks, later
pre-trained language models, and recent large language models. We also identify several gaps in
the existing datasets and methods, including limited focus on low-resource settings, insufficient numeracy representations, and inconsistent reasoning
abilities. Finally, we outline directions for future
research and highlight the potential for further exploration in this field. Our goal with this paper is
to provide a comprehensive and useful resource
for readers interested in the development of deep
learning for mathematical reasoning. To aid in
this effort, we have created a reading list that will
be continually updated in a GitHub repository at
[https://github.com/lupantech/dl4math.](https://github.com/lupantech/dl4math)
**References**
Alexander A. Alemi, François Chollet, Niklas Een,
Geoffrey Irving, Christian Szegedy, and Josef Ur[ban. 2016. DeepMath - Deep sequence models for](http://arxiv.org/abs/1606.04442)
[premise selection. In Advances in Neural Informa-](http://arxiv.org/abs/1606.04442)
_tion Processing Systems._
Reem Alghamdi, Zhenwen Liang, and Xiangliang
Zhang. 2022. Armath: a dataset for solving arabic
math word problems. In Proceedings of the Thir_teenth Language Resources and Evaluation Confer-_
_ence, pages 351–362._
Chris Alvin, Sumit Gulwani, Rupak Majumdar, and
Supratik Mukhopadhyay. 2017. Synthesis of solutions for shaded area geometry problems. In The
_Thirtieth International Flairs Conference._
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math
word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies (NAACL), pages 2357–2367._
Connor Anderson and Ryan Farrell. 2022. Improving fractal pre-training. In Proceedings of the
_IEEE/CVF Winter Conference on Applications of_
_Computer Vision, pages 1300–1309._
Peter Anderson, Xiaodong He, Chris Buehler, Damien
Teney, Mark Johnson, Stephen Gould, and Lei
Zhang. 2018. Bottom-up and top-down attention for
image captioning and visual question answering. In
_Proceedings of the IEEE conference on computer vi-_
_sion and pattern recognition, pages 6077–6086._
Cem Anil, Yuhuai Wu, Anders Johan Andreassen,
Aitor Lewkowycz, Vedant Misra, Vinay Venkatesh
Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan
Dyer, and Behnam Neyshabur. 2022. [Exploring](https://openreview.net/forum?id=zSkYVeX7bC4)
[length generalization in large language models. In](https://openreview.net/forum?id=zSkYVeX7bC4)
_Advances in Neural Information Processing Sys-_
_tems._
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten
Bosma, Henryk Michalewski, David Dohan, Ellen
Jiang, Carrie Cai, Michael Terry, Quoc Le, et al.
2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. _arXiv preprint_
_arXiv:1409.0473._
Kshitij Bansal, Sarah Loos, Markus Rabe, Christian
Szegedy, and Stewart Wilcox. 2019. Holist: An environment for machine learning of higher order logic
theorem proving. In International Conference on
_Machine Learning, pages 454–463. PMLR._
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Yann Coscoy, David Delahaye, Daniel
de Rauglaudre, Jean-Christophe Filliâtre, Eduardo
Giménez, Hugo Herbelin, et al. 1999. The coq proof
assistant reference manual. INRIA, version, 6(11).
Taylor Berg-Kirkpatrick and Daniel Spokoyny. 2020.
[An empirical investigation of contextualized number](https://doi.org/10.18653/v1/2020.emnlp-main.385)
[prediction. In Proceedings of the 2020 Conference](https://doi.org/10.18653/v1/2020.emnlp-main.385)
_on Empirical Methods in Natural Language Process-_
_ing (EMNLP), pages 4754–4764, Online. Associa-_
tion for Computational Linguistics.
Daniel G Bobrow. 1964. Natural language input for
a computer problem solving system. AI Technical
_Reports._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in Neural Information Process_ing Systems (NeurIPS), 33:1877–1901._
Jie Cao and Jing Xiao. 2022. An augmented benchmark dataset for geometric question answering
through dual parallel text encoding. In Proceedings
_of the 29th International Conference on Computa-_
_tional Linguistics, pages 1511–1520._
-----
Yixuan Cao, Feng Hong, Hongwei Li, and Ping Luo.
2021. A bottom-up dag structure extraction model
for math word problems. In Proceedings of the
_AAAI Conference on Artificial Intelligence, pages_
39–46.
François Charton. 2021. Linear algebra with transformers. arXiv preprint arXiv:2112.01898.
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin,
Chongyu Chen, and Xiaodan Liang. 2022a. Unigeo:
Unifying geometry logical reasoning via reformulating mathematical expression. In The 2022 Con_ference on Empirical Methods in Natural Language_
_Processing (EMNLP)._
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan
Liang, Lingbo Liu, Eric Xing, and Liang Lin. 2021a.
Geoqa: A geometric question answering benchmark
towards multimodal numerical reasoning. In Find_ings of the Association for Computational Linguis-_
_tics: ACL-IJCNLP 2021, pages 513–523._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde de Oliveira Pinto, Jared Kaplan,
Harri Edwards, Yuri Burda, Nicholas Joseph, Greg
Brockman, et al. 2021b. Evaluating large language models trained on code. _arXiv preprint_
_arXiv:2107.03374._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022b. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. _arXiv preprint_
_arXiv:2211.12588._
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena
Shah, Iana Borova, Dylan Langdon, Reema Moussa,
Matt Beane, Ting-Hao Huang, Bryan R Routledge,
et al. 2021c. Finqa: A dataset of numerical reasoning over financial data. In Proceedings of the 2021
_Conference on Empirical Methods in Natural Lan-_
_guage Processing (EMNLP), pages 3697–3711._
Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang
Ma, Sameena Shah, and William Yang Wang. 2022c.
Convfinqa: Exploring the chain of numerical reasoning in conversational finance question answering.
_arXiv preprint arXiv:2210.03849._
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-aligned equation generation for
solving and reasoning math word problems. In
_Proceedings of the 2019 Conference of the North_
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
_Volume 1 (Long and Short Papers), pages 2656–_
2668.
Kyunghyun Cho, Bart van Merrienboer Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares Holger
Schwenk, and Yoshua Bengio. 2014. Learning
phrase representations using rnn encoder–decoder
for statistical machine translation. In Proceedings of
_the 2014 Conference on Empirical Methods in Nat-_
_ural Language Processing (EMNLP), pages 1724–_
1734.
Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong
Zhang. 1996. Automated generation of readable
proofs with geometric invariants. Journal of Auto_mated Reasoning, 17(3):325–347._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Peter Clark, Oren Etzioni, Tushar Khot, Daniel
Khashabi, Bhavana Mishra, Kyle Richardson,
Ashish Sabharwal, Carissa Schoenick, Oyvind
Tafjord, Niket Tandon, et al. 2020. From ‘f’to ‘a’on
the ny regents science exams: An overview of the
aristo project. AI Magazine, 41(4):39–53.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint
_arXiv:2110.14168._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceed_ings of the 2019 Conference of the North American_
_Chapter of the Association for Computational Lin-_
_guistics: Human Language Technologies, Volume 1_
_(Long and Short Papers), pages 2368–2378._
Edward A Feigenbaum et al. 1963. _Computers and_
_thought. McGraw-Hill._
Yu Feng, Jing Zhang, Xiaokang Zhang, Lemao Liu,
Cuiping Li, and Hong Chen. 2021. Injecting numerical reasoning skills into knowledge base question answering models. arXiv preprint arXiv:2112.06109.
Deborah Ferreira and André Freitas. 2020a. [Natu-](https://aclanthology.org/2020.lrec-1.266)
[ral language premise selection: Finding supporting](https://aclanthology.org/2020.lrec-1.266)
[statements for mathematical text.](https://aclanthology.org/2020.lrec-1.266) In Proceedings
_of the Twelfth Language Resources and Evaluation_
_Conference, pages 2175–2182, Marseille, France._
European Language Resources Association.
[Deborah Ferreira and André Freitas. 2020b. Premise](https://doi.org/10.18653/v1/2020.acl-main.657)
[selection in natural language mathematical texts. In](https://doi.org/10.18653/v1/2020.acl-main.657)
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 7365–_
7374, Online. Association for Computational Linguistics.
-----
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark,
and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. _arXiv preprint_
_arXiv:2210.00720._
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020.
The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language
models. arXiv preprint arXiv:2211.10435.
Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu,
Steven CH Hoi, Xiaogang Wang, and Hongsheng Li.
2019. Dynamic fusion with intra-and inter-modality
attention flow for visual question answering. In The
_IEEE Conference on Computer Vision and Pattern_
_Recognition (CVPR), pages 6639–6648._
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. 2021. [Tactic-](https://doi.org/10.1007/s10817-020-09580-x)
[Toe: Learning to Prove with Tactics. Journal of Au-](https://doi.org/10.1007/s10817-020-09580-x)
_tomated Reasoning._
Jonas Gehring, Michael Auli, David Grangier, Denis
Yarats, and Yann N Dauphin. 2017. Convolutional
sequence to sequence learning. In International
_conference on machine learning, pages 1243–1252._
PMLR.
Herbert Gelernter, James R Hansen, and Donald W
Loveland. 1960. Empirical explorations of the geometry theorem machine. In Papers presented at the
_May 3-5, 1960, western joint IRE-AIEE-ACM com-_
_puter conference, pages 143–149._
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
Injecting numerical reasoning skills into language
models. arXiv preprint arXiv:2004.04487.
Kevin Gimpel, Dipanjan Das, and Noah A Smith. 2010.
Distributed asynchronous online learning for natural
language processing. In Proceedings of the Four_teenth Conference on Computational Natural Lan-_
_guage Learning, pages 213–222._
Amelia Glaese, Nat McAleese, Maja Tr˛ebacz, John
Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth
Rauh, Laura Weidinger, Martin Chadwick, Phoebe
Thacker, et al. 2022. Improving alignment of dialogue agents via targeted human judgements. arXiv
_preprint arXiv:2209.14375._
Adam Grabowski, Artur Korniłowicz, and Adam Naumowicz. 2015. Four decades of mizar. Journal of
_Automated Reasoning, 55(3):191–198._
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In Inter_national Conference on Machine Learning, pages_
3929–3938. PMLR.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W
Ayers, and Stanislas Polu. 2022. Proof artifact cotraining for theorem proving with language models.
In International Conference on Learning Represen_tations (ICLR)._
Yihan Hao, Mingliang Zhang, Fei Yin, and Linlin Huang. 2022. Pgdp5k: A diagram parsing
dataset for plane geometry problems. arXiv preprint
_arXiv:2205.09947._
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on
_computer vision and pattern recognition, pages 770–_
778.
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob
Steinhardt. 2020a. Measuring massive multitask language understanding. _arXiv preprint_
_arXiv:2009.03300._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. In 35th Con_ference on Neural Information Processing Systems_
_(NeurIPS) Track on Datasets and Benchmarks._
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam
Dziedzic, Rishabh Krishnan, and Dawn Song.
2020b. Pretrained transformers improve out-ofdistribution robustness. In Proceedings of the 58th
_Annual Meeting of the Association for Computa-_
_tional Linguistics, pages 2744–2751._
Jonathan Herzig, Pawel Krzysztof Nowak, Thomas
Mueller, Francesco Piccinno, and Julian Eisenschlos. 2020. Tapas: Weakly supervised table parsing
via pre-training. In Proceedings of the 58th Annual
_Meeting of the Association for Computational Lin-_
_guistics (ACL), pages 4320–4333._
Sepp Hochreiter and Jürgen Schmidhuber. 1997.
Long short-term memory. _Neural computation,_
9(8):1735–1780.
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, and
Song-Chun Zhu. 2021a. Learning by fixing: Solving math word problems with weak supervision. In
_Proceedings of the AAAI Conference on Artificial In-_
_telligence, pages 4959–4967._
Yining Hong, Qing Li, Ran Gong, Daniel Ciao, Siyuan
Huang, and Song-Chun Zhu. 2021b. Smart: A situation model for algebra story problems via attributed
grammar. In AAAI, pages 13009–13017.
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
[Oren Etzioni, and Nate Kushman. 2014. Learning](https://aclanthology.org/D14-1058)
[to solve arithmetic word problems with verb catego-](https://aclanthology.org/D14-1058)
[rization. In Proceedings of the 2014 Conference on](https://aclanthology.org/D14-1058)
_Empirical Methods in Natural Language Processing_
_(EMNLP)._
-----
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya
Sutskever. 2019. Gamepad: A learning environment
for theorem proving. In ICLR.
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
2018. Neural math word problem solver with reinforcement learning. In Proceedings of the 27th Inter_national Conference on Computational Linguistics,_
pages 213–223.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and
Jian Yin. 2017. Learning fine-grained expressions
to solve math word problems. In Proceedings of
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 805–814._
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (ACL), pages 887–896._
Geoffrey Irving, Christian Szegedy, Alexander A
Alemi, Niklas Eén, François Chollet, and Josef Urban. 2016. Deepmath-deep sequence models for
premise selection. Advances in neural information
_processing systems, 29._
Albert Q Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygó´zd´z, Piotr Miło´s,
Yuhuai Wu, and Mateja Jamnik. 2022a. Thor:
Wielding hammers to integrate language models
and automated theorem provers. _arXiv preprint_
_arXiv:2205.10893._
Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda
Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix,
[Yuhuai Wu, and Guillaume Lample. 2022b. Draft,](https://arxiv.org/abs/2210.12283)
[sketch, and prove: Guiding formal theorem provers](https://arxiv.org/abs/2210.12283)
[with informal proofs. In Submitted to The Eleventh](https://arxiv.org/abs/2210.12283)
_International Conference on Learning Representa-_
_tions._
Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han,
and Yuhuai Wu. 2021. Lisa: Language models of
isabelle proofs. In 6th Conference on Artificial Intel_ligence and Theorem Proving (AITP 2021)._
Zhanming Jie, Jierui Li, and Wei Lu. 2022. Learning to reason deductively: Math word problem solving as complex relation extraction. arXiv preprint
_arXiv:2203.10316._
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and
Yejin Choi. 2022. Maieutic prompting: Logically consistent reasoning with recursive explanations. ArXiv, abs/2205.11822.
Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. Dvqa: Understanding data visualizations via question answering. In Proceedings of
_the IEEE Conference on Computer Vision and Pat-_
_tern Recognition (CVPR), pages 5648–5656._
Samira Ebrahimi Kahou, Vincent Michalski, Adam
Atkinson, Ákos Kádár, Adam Trischler, and Yoshua
Bengio. 2017. Figureqa: An annotated figure dataset for visual reasoning. _arXiv preprint_
_arXiv:1710.07300._
Cezary Kaliszyk, François Chollet, and Christian
Szegedy. 2017. Holstep: A machine learning dataset
for higher-order logic theorem proving. In ICLR.
Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal, and Peter Clark.
2021. How much coffee was consumed during
emnlp 2019? fermi problems: A new reasoning challenge for ai. In Proceedings of the 2021 Conference
_on Empirical Methods in Natural Language Process-_
_ing, pages 7318–7328._
Nikhil Kandpal, H. Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. _ArXiv,_
abs/2211.08411.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish
Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single qa system. In Find_ings of the Association for Computational Linguis-_
_tics (EMNLP), pages 1896–1907._
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao
Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. 2022. Decomposed prompting: A modular
approach for solving complex tasks. arXiv preprint
_arXiv:2210.02406._
Bugeun Kim, Kyung Seo Ki, Donggeon Lee, and Gahgene Gweon. 2020. Point to the expression: Solving algebraic word problems using the expressionpointer transformer model. In Proceedings of the
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 3768–3779._
Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang.
2018. Bilinear attention networks. In Advances in
_Neural Information Processing Systems (NeurIPS),_
pages 1571–1581.
Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt:
Vision-and-language transformer without convolution or region supervision. In Proceedings of the
_38th International Conference on Machine Learning_
_(ICML), pages 5583–5594._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. _arXiv_
_preprint arXiv:2205.11916._
Rik Koncel-K., Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A
math word problem repository. In Proceedings
_of the 2016 Conference of the North American_
_Chapter of the Association for Computational Lin-_
_guistics: Human Language Technologies (NAACL),_
pages 1152–1157.
-----
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics (TACL), 3:585–597._
Kundan Krishna, Jeffrey Bigham, and Zachary C
Lipton. 2021. Does pretraining for summarization require knowledge transfer? _arXiv preprint_
_arXiv:2109.04953._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (ACL), pages 271–281._
Guillaume Lample and François Charton. 2020. Deep
learning for symbolic mathematics. In International
_Conference on Learning Representations (ICLR)._
Guillaume Lample, Marie-Anne Lachaux, Thibaut
Lavril, Xavier Martinet, Amaury Hayat, Gabriel
Ebner, Aurélien Rodriguez, and Timothée Lacroix.
2022. Hypertree proof search for neural theorem
proving. arXiv preprint arXiv:2205.11491.
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan,
Bing Tian Dai, Yan Wang, Dongxiang Zhang, and
Ee-Peng Lim. 2022. Mwptoolkit: an open-source
framework for deep learning-based math word problem solvers. In Proceedings of the AAAI Conference
_on Artificial Intelligence, pages 13188–13190._
Zhenzhong Lan, Mingda Chen, Sebastian Goodman,
Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. _arXiv preprint_
_arXiv:1909.11942._
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick
Haffner. 1998. Gradient-based learning applied to
document recognition. _Proceedings of the IEEE,_
86(11):2278–2324.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer
Levy, Veselin Stoyanov, and Luke Zettlemoyer.
2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation,
and comprehension. In Proceedings of the 58th An_nual Meeting of the Association for Computational_
_Linguistics, pages 7871–7880, Online. Association_
for Computational Linguistics (ACL).
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative
reasoning problems with language models. _arXiv_
_preprint arXiv:2206.14858._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
Bing Tian Dai, and Dongxiang Zhang. 2019. Modeling intra-relation in math word problems with different functional multi-head attentions. In Proceedings
_of the 57th Annual Meeting of the Association for_
_Computational Linguistics, pages 6162–6167._
Jiwei Li, Alexander H Miller, Sumit Chopra,
Marc’Aurelio Ranzato, and Jason Weston. 2016.
Dialogue learning with human-in-the-loop. _arXiv_
_preprint arXiv:1611.09823._
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui
Hsieh, and Kai-Wei Chang. 2020a. What does bert
with vision look at? In Proceedings of the 58th An_nual Meeting of the Association for Computational_
_Linguistics (ACL), pages 5265–5275._
Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu,
Fengyuan Xu, and Sheng Zhong. 2020b. Graphto-tree neural networks for learning structured inputoutput translation with applications to semantic parsing and math word problem. In Findings of the As_sociation for Computational Linguistics: EMNLP_
_2020, pages 2841–2852._
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C Paulson. 2021. Isarstep: a benchmark for high-level
mathematical reasoning. In International Confer_ence on Learning Representations (ICLR)._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
Jian-Guang Lou, and Weizhu Chen. 2022a. On the
advance of making language models better reasoners. arXiv preprint arXiv:2206.02336.
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou,
Chao Li, Hongzhi Liu, and Yunbo Cao. 2022b.
Seeking patterns, not just memorizing procedures:
Contrastive learning for solving math word problems. In Findings of the Association for Computa_tional Linguistics: ACL 2022, pages 2486–2496._
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris
Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian
Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022a. Holistic evaluation of language
models. arXiv preprint arXiv:2211.09110.
Percy Liang and Dan Klein. 2009. Online em for unsupervised models. In Proceedings of human lan_guage technologies: The 2009 annual conference of_
_the North American chapter of the association for_
_computational linguistics, pages 611–619._
Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin,
Yunshi Lan, Jie Shao, and Xiangliang Zhang. 2022b.
Mwp-bert: Numeracy-augmented pre-training for
math word problem solving. In Findings of the
_Association for Computational Linguistics: NAACL_
_2022, pages 997–1009._
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! numersense:
Probing numerical commonsense knowledge of pretrained language models. In Proceedings of the
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 6862–6868._
-----
Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen,
Qi Liu, Hao Wang, and Shijin Wang. 2021. Hms:
A hierarchical solver with dependency-enhanced understanding for math word problem. In Proceedings
_of the AAAI Conference on Artificial Intelligence,_
pages 4232–4240.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguistics_
_(ACL), pages 158–167._
Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B
Dolan, Lawrence Carin, and Weizhu Chen. 2022a.
What makes good in-context examples for gpt-3?
In Proceedings of Deep Learning Inside Out (Dee_LIO 2022): The 3rd Workshop on Knowledge Ex-_
_traction and Integration for Deep Learning Architec-_
_tures, pages 100–114._
Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi
Lin, Weizhu Chen, and Jian-Guang Lou. 2022b.
[TAPEX: Table pre-training via learning a neural](https://openreview.net/forum?id=O50443AsCP)
[SQL executor.](https://openreview.net/forum?id=O50443AsCP) In International Conference on
_Learning Representations._
Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng,
Daisuke Kawahara, and Sadao Kurohashi. 2020.
Reverse operation based data augmentation for
solving math word problems. _arXiv preprint_
_arXiv:2010.01556._
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke
Kawahara. 2019a. Tree-structured decoding for
solving math word problems. In Proceedings of
_the 2019 conference on empirical methods in natu-_
_ral language processing and the 9th international_
_joint conference on natural language processing_
_(EMNLP-IJCNLP), pages 2370–2379._
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Sarah Loos, Geoffrey Irving, Christian Szegedy, and
Cezary Kaliszyk. 2017. Deep network guided proof
search. arXiv preprint arXiv:1701.06972.
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
Huang, Xiaodan Liang, and Song-Chun Zhu. 2021a.
Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. In
_The 59th Annual Meeting of the Association for Com-_
_putational Linguistics (ACL)._
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. 2022a. Learn to explain:
Multimodal reasoning via thought chains for science
question answering. In The 36th Conference on Neu_ral Information Processing Systems (NeurIPS 2022)._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian
Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter
Clark, and Ashwin Kalyan. 2022b. Dynamic
prompt learning via policy gradient for semistructured mathematical reasoning. arXiv preprint
_arXiv:2209.14610._
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao,
Wei Zhang, Zhou Yu, Xiaodan Liang, and SongChun Zhu. 2021b. Iconqa: A new benchmark for
abstract diagram understanding and visual language
reasoning. In The 35th Conference on Neural In_formation Processing Systems (NeurIPS) Track on_
_Datasets and Benchmarks._
Yao Lu, Max Bartolo, Alastair Moore, Sebastian
Riedel, and Pontus Stenetorp. 2022c. Fantastically
ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceed_ings of the 60th Annual Meeting of the Association_
_for Computational Linguistics (ACL), pages 8086–_
8098.
[The mathlib Community. 2020. The lean mathemati-](https://doi.org/10.1145/3372885.3373824)
[cal library. In CPP 2020 - Proceedings of the 9th](https://doi.org/10.1145/3372885.3373824)
_ACM SIGPLAN International Conference on Certi-_
_fied Programs and Proofs, co-located with POPL_
_2020._
Norman D. Megill and David A. Wheeler. 2019. Meta_math:_ _A Computer Language for Mathematical_
_Proofs._ Lulu Press, Morrisville, North Carolina.
http://us.metamath.org/downloads/metamath.pdf.
Yuanliang Meng and Anna Rumshisky. 2019. Solving math word problems with double-decoder transformer. arXiv preprint arXiv:1908.10924.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceed_ings of the 58th Annual Meeting of the Association_
_for Computational Linguistics, pages 975–984._
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe,
Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations:
What makes in-context learning work? Proceedings
_of Empirical Methods in Natural Language Process-_
_ing (EMNLP)._
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao.
2021. Deep learning–based text classification: a
comprehensive review. _ACM Computing Surveys_
_(CSUR), 54(3):1–40._
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard
Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark,
and Ashwin Kalyan. 2022a. Lila: A unified benchmark for mathematical reasoning. In Proceedings of
_the 2022 Conference on Empirical Methods in Natu-_
_ral Language Processing (EMNLP)._
-----
Swaroop Mishra, Arindam Mitra, Neeraj Varshney,
Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and
Ashwin Kalyan. 2022b. Numglue: A suite of fundamental yet challenging mathematical reasoning
tasks. In Proceedings of the 60th Annual Meet_ing of the Association for Computational Linguistics_
_(ACL), pages 3505–3523._
Eric Mitchell, Joseph J. Noh, Siyan Li, William S.
Armstrong, Ananth Agarwal, Patrick Liu, Chelsea
[Finn, and Christopher D. Manning. 2022. Enhanc-](https://ericmitchell.ai/concord.pdf)
[ing self-consistency and performance of pretrained](https://ericmitchell.ai/concord.pdf)
[language models with nli.](https://ericmitchell.ai/concord.pdf) In Proceedings of the
_2022 Conference on Empirical Methods in Natu-_
_ral Language Processing (EMNLP). Association for_
Computational Linguistics.
Leonardo de Moura, Soonho Kong, Jeremy Avigad,
Floris van Doorn, and Jakob von Raumer. 2015.
The lean theorem prover (system description). In
_International Conference on Automated Deduction,_
pages 378–388. Springer.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu,
Long Ouyang, Christina Kim, Christopher Hesse,
Shantanu Jain, Vineet Kosaraju, William Saunders,
et al. 2021. Webgpt: Browser-assisted questionanswering with human feedback. _arXiv preprint_
_arXiv:2112.09332._
[A. Newell, J. C. Shaw, and H. A. Simon. 1957. Empiri-](https://doi.org/10.1145/1455567.1455605)
[cal explorations of the logic theory machine: A case](https://doi.org/10.1145/1455567.1455605)
[study in heuristic.](https://doi.org/10.1145/1455567.1455605) In Proceedings of the Western
_Joint Computer Conference, IRE-AIEE-ACM 1957._
Ansong Ni, Jeevana Priya Inala, Chenglong Wang,
Oleksandr Polozov, Christopher Meek, Dragomir
Radev, and Jianfeng Gao. 2022. Learning from
self-sampled correct and partially-correct programs.
_arXiv preprint arXiv:2205.14318._
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language
models. arXiv preprint arXiv:2112.00114.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _arXiv preprint_
_arXiv:2203.02155._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are nlp models really able to solve simple math word problems? In Proceedings of the
_2021 Conference of the North American Chapter of_
_the Association for Computational Linguistics: Hu-_
_man Language Technologies (NAACL), pages 2080–_
2094.
[Lawrence C. Paulson. 1994. Isabelle - A Generic The-](https://doi.org/10.1007/BFb0030541)
_[orem Prover (with a contribution by T. Nipkow),](https://doi.org/10.1007/BFb0030541)_
volume 828 of Lecture Notes in Computer Science.
Springer.
Ethan Perez, Florian Strub, Harm De Vries, Vincent
Dumoulin, and Aaron Courville. 2018. Film: Visual reasoning with a general conditioning layer. In
_Proceedings of the AAAI Conference on Artificial In-_
_telligence._
Stanislas Polu, Jesse Michael Han, Kunhao Zheng,
Mantas Baksys, Igor Babuschkin, and Ilya Sutskever.
2022. Formal mathematics statement curriculum
learning. ArXiv, abs/2202.01344.
Stanislas Polu and Ilya Sutskever. 2020. Generative
language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393._
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng
Tang, and Liang Lin. 2021. Neural-symbolic solver
for math word problems with auxiliary tasks. In Pro_ceedings of the 59th Annual Meeting of the Associa-_
_tion for Computational Linguistics and the 11th In-_
_ternational Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pages 5870–_
5881.
Jinghui Qin, Lihui Lin, Xiaodan Liang, Rumin Zhang,
and Liang Lin. 2020. Semantically-aligned universal tree-structured solver for math word problems.
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 3780–3789.
Liang Qiu, Yizhou Zhao, Jinchao Li, Pan Lu, Baolin
Peng, Jianfeng Gao, and Song-Chun Zhu. 2022a.
Valuenet: A new dataset for human value driven dialogue system. IEEE Trans. on Pattern Analysis and
_Machine Intelligence (TPAMI), 44(5):2468–2484._
Liang Qiu, Yizhou Zhao, Yuan Liang, Pan Lu, Weiyan
Shi, Zhou Yu, and Song-chun Zhu. 2022b. Towards
socially intelligent agents with mental state transition and human value. In Proceedings of the 23rd
_Annual Meeting of the Special Interest Group on Dis-_
_course and Dialogue, pages 146–158._
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao,
Ning Dai, and Xuanjing Huang. 2020. Pretrained models for natural language processing: A
survey. _Science China Technological Sciences,_
63(10):1872–1897.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2020. Language models are unsupervised multitask learners.
_OpenAI Blog._
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie
Millican, Jordan Hoffmann, Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models:
Methods, analysis & insights from training gopher.
_arXiv preprint arXiv:2112.11446._
-----
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text
transformer. Journal of Machine Learning Research
_(JMLR), 21:1–67._
Abhilasha Ravichander, Aakanksha Naik, Carolyn
Rose, and Eduard Hovy. 2019. Equate: A benchmark evaluation framework for quantitative reasoning in natural language inference. In Proceedings of
_the 23rd Conference on Computational Natural Lan-_
_guage Learning (CoNLL), pages 349–361._
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot reasoning. ArXiv,
abs/2202.07206.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian
Sun. 2015. Faster r-cnn: Towards real-time object
detection with region proposal networks. Advances
_in neural information processing systems, 28._
Ryokan Ri and Yoshimasa Tsuruoka. 2022. Pretraining with artificial language: Studying transferable
knowledge in language models. In Proceedings
_of the 60th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 7302–7315.
Benjamin Robaidek, Rik Koncel-Kedziorski, and Hannaneh Hajishirzi. 2018. Data-driven methods for
solving algebra word problems. _arXiv preprint_
_arXiv:1804.10718._
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752._
Subhro Roy and Dan Roth. 2017. Unit dependency
graph and its application to arithmetic word problem
solving. In Proceedings of the AAAI Conference on
_Artificial Intelligence (AAAI)._
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transac_tions of the Association for Computational Linguis-_
_tics (TACL), 6:159–172._
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transac_tions of the Association for Computational Linguis-_
_tics (TACL), 3:1–13._
Ohad Rubin, Jonathan Herzig, and Jonathan Berant.
2022. Learning to retrieve prompts for in-context
learning. North American Chapter of the Associa_tion for Computational Linguistics (NAACL)._
Mrinmaya Sachan, Kumar Dubey, and Eric Xing. 2017.
From textbooks to knowledge: A case study in
harvesting axiomatic knowledge from textbooks to
solve geometry problems. In Proceedings of Em_pirical Methods in Natural Language Processing_
_(EMNLP), pages 773–784._
Mrinmaya Sachan and Eric Xing. 2017. Learning
to solve geometry problems from natural language
demonstrations in textbooks. In Proceedings of the
_6th Joint Conference on Lexical and Computational_
_Semantics, pages 251–261._
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2020. Analysing mathematical reasoning abilities of neural models. In International
_Conference on Learning Representations (ICLR)._
Tal Schuster, Ashwin Kalyan, Alex Polozov, and
Adam Tauman Kalai. 2021. Programming puzzles.
In Thirty-fifth Conference on Neural Information
_Processing Systems Datasets and Benchmarks Track_
_(Round 1)._
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words
with subword units. In Proceedings of the 54th An_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 1715–_
1725.
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of Empirical Methods in Nat_ural Language Processing (EMNLP), pages 1466–_
1476.
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin
Jiang, Ming Zhang, and Qun Liu. 2021. Generate &
rank: A multi-task framework for math word problems. In Findings of the Association for Computa_tional Linguistics: EMNLP 2021, pages 2269–2279._
Yibin Shen and Cheqing Jin. 2020. Solving math word
problems with multi-encoders and multi-decoders.
In Proceedings of the 28th International Conference
_on Computational Linguistics, pages 2924–2934._
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 conference on
_empirical methods in natural language processing_
_(EMNLP), pages 1132–1142._
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence
pre-training for language generation. arXiv preprint
_arXiv:1905.02450._
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi,
and Claire Cardie. 2019. Dream: A challenge data
set and models for dialogue-based reading comprehension. Transactions of the Association for Com_putational Linguistics, 7:217–231._
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014.
Sequence to sequence learning with neural networks.
_Advances in neural information processing systems,_
27.
-----
Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau
Yih, and Ashish Sabharwal. 2019. Quarel: A dataset
and models for answering questions about qualitative relationships. In Proceedings of the AAAI Con_ference on Artificial Intelligence, pages 7063–7071._
Kai Sheng Tai, Richard Socher, and Christopher D
Manning. 2015. Improved semantic representations
from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meet_ing of the Association for Computational Linguistics_
_and the 7th International Joint Conference on Natu-_
_ral Language Processing (Volume 1: Long Papers),_
pages 1556–1566.
Avijit Thawani, Jay Pujara, and Ashwin Kalyan. 2022.
Estimating numbers without regression. In 36th
_Conference on Neural Information Processing Sys-_
_tems (NeurIPS 2022) Workshop on MATH-AI._
Shyam Upadhyay and Ming-Wei Chang. 2015. Draw:
A challenging and diverse algebra word problem set.
Technical report, Citeseer.
Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and
dataset for algebra word problems. In Proceed_ings of the 15th Conference of the European Chap-_
_ter of the Association for Computational Linguistics_
_(ACL), pages 494–504._
Josef Urban. 2006. Mptp 0.2: Design, implementation,
and initial experiments. Journal of Automated Rea_soning, 37(1):21–43._
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. In Advances in Neural Information Pro_cessing Systems (NeurIPS), pages 5998–6008._
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh,
and Matt Gardner. 2019. Do nlp models know numbers? probing numeracy in embeddings. In Proceed_ings of the 2019 Conference on Empirical Methods_
_in Natural Language Processing and the 9th Inter-_
_national Joint Conference on Natural Language Pro-_
_cessing (EMNLP-IJCNLP), pages 5307–5315._
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math word
problem to a expression tree. In Proceedings of the
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 1064–1069._
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI
_Conference on Artificial Intelligence._
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers
with recursive neural networks. In Proceedings
_of the AAAI Conference on Artificial Intelligence,_
pages 7144–7151.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, and Denny Zhou. 2022. Self-consistency
improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In
_Proceedings of the 2017 Conference on Empirical_
_Methods in Natural Language Processing (EMNLP),_
pages 845–854.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large
language models. arXiv preprint arXiv:2201.11903.
Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun Cho.
2021. Naturalproofs: Mathematical theorem proving in natural language. In Thirty-fifth Conference
_on Neural Information Processing Systems Datasets_
_and Benchmarks Track (Round 1)._
Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh
Hajishirzi, and Yejin Choi. 2022a. Naturalprover:
Grounded mathematical proof generation with language models. arXiv preprint arXiv:2205.12910.
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman,
Tianxiao Shen, Daniel Khashabi, and Yejin Choi.
2022b. Generating sequences by learning to selfcorrect. ArXiv, abs/2211.00053.
Sean Welleck, Peter West, Jize Cao, and Yejin Choi.
[2022c. Symbolic brittleness in sequence models: on](https://arxiv.org/pdf/2109.13986.pdf)
[systematic generalization in symbolic mathematics.](https://arxiv.org/pdf/2109.13986.pdf)
In AAAI.
Wu Wen-Tsun. 1986. Basic principles of mechanical
theorem proving in elementary geometries. Journal
_of automated Reasoning, 2(3):221–252._
Daniel Whalen. 2016. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv
_preprint arXiv:1608.02644._
Sanghyun Woo, Jongchan Park, Joon-Young Lee, and
In So Kweon. 2018. Cbam: Convolutional block
attention module. In Proceedings of the European
_conference on computer vision (ECCV), pages 3–19._
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuan-Jing
Huang. 2020. A knowledge-aware sequence-to-tree
network for math word problem solving. In Proceed_ings of the 2020 Conference on Empirical Methods_
_in Natural Language Processing (EMNLP), pages_
7137–7146.
Qinzhuo Wu, Qi Zhang, and Zhongyu Wei. 2021a. An
edge-enhanced hierarchical graph-to-tree network
for math word problem solving. In Findings of the
_Association for Computational Linguistics: EMNLP_
_2021, pages 1473–1482._
-----
Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuan-Jing
Huang. 2021b. Math word problem solving with
explicit numerical values. In Proceedings of the
_59th Annual Meeting of the Association for Compu-_
_tational Linguistics and the 11th International Joint_
_Conference on Natural Language Processing (Vol-_
_ume 1: Long Papers), pages 5859–5869._
Xingjiao Wu, Luwei Xiao, Yixuan Sun, Junhang
Zhang, Tianlong Ma, and Liang He. 2022a. A survey of human-in-the-loop for machine learning. Fu_ture Generation Computer Systems._
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V
Le, Mohammad Norouzi, Wolfgang Macherey,
Maxim Krikun, Yuan Cao, Qin Gao, Klaus
Macherey, et al. 2016. Google’s neural machine
translation system: Bridging the gap between human and machine translation. _arXiv preprint_
_arXiv:1609.08144._
Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Baker
Grosse. 2021c. Int: An inequality benchmark for
evaluating generalization in theorem proving. In
_International Conference on Learning Representa-_
_tions (ICLR)._
Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li,
Markus Norman Rabe, Charles E Staats, Mateja
[Jamnik, and Christian Szegedy. 2022b. Autoformal-](https://openreview.net/forum?id=IUikebJ1Bf0)
[ization with large language models. In Advances in](https://openreview.net/forum?id=IUikebJ1Bf0)
_Neural Information Processing Systems._
Yuhuai Wu, Felix Li, and Percy Liang. 2022c. Insights
into pre-training via simpler synthetic tasks. arXiv
_preprint arXiv:2206.10139._
Yuhuai Wu, Markus N Rabe, Wenda Li, Jimmy Ba,
Roger B Grosse, and Christian Szegedy. 2021d.
Lime: Learning inductive bias for primitives of
mathematical reasoning. In International Confer_ence on Machine Learning, pages 11251–11262._
PMLR.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven
tree-structured neural model for math word problems. In IJCAI, pages 5299–5305.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho,
Aaron Courville, Ruslan Salakhudinov, Rich Zemel,
and Yoshua Bengio. 2015. Show, attend and tell:
Neural image caption generation with visual attention. In International conference on machine learn_ing, pages 2048–2057. PMLR._
Kaiyu Yang and Jia Deng. 2019. Learning to prove
theorems via interacting with proof assistants. In In_ternational Conference on Machine Learning, pages_
6984–6994. PMLR.
Zheng Ye, Shang-Ching Chou, and Xiao-Shan Gao.
2008. An introduction to java geometry expert. In
_International workshop on automated deduction in_
_geometry, pages 189–195. Springer._
Wei Yu, Mengzhu Wang, Xiaodong Wang, Xun Zhou,
Yongfu Zha, Yongjian Zhang, Shuyu Miao, and Jingdong Liu. 2021a. Geore: A relation extraction
dataset for chinese geometry problems. In 35th Con_ference on Neural Information Processing Systems_
_(NeurIPS 2021) Workshop on Math AI for Education_
_(MATHAI4ED)._
Weijiang Yu, Yingpeng Wen, Fudan Zheng, and Nong
Xiao. 2021b. Improving math word problems with
pre-trained knowledge and hierarchical reasoning.
In Proceedings of the 2021 Conference on Empiri_cal Methods in Natural Language Processing, pages_
3384–3394.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu,
Mingxuan Ju, Soumya Sanyal, Chenguang Zhu,
Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. _arXiv preprint_
_arXiv:2209.10063._
Klim Zaporojets, Giannis Bekoulis, Johannes Deleu,
Thomas Demeester, and Chris Develder. 2021. Solving arithmetic word problems by scoring equations
with recursive neural networks. Expert Systems with
_Applications, 174:114704._
Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei
Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a.
Teacher-student networks with multiple decoders for
solving math word problem. In IJCAI.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graphto-tree learning for solving math word problems. In
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 3928–_
3937.
Ming-Liang Zhang, Fei Yin, Yi-Han Hao, and ChengLin Liu. 2022a. Learning to understand plane geometry diagram. In 36th Conference on Neural In_formation Processing Systems (NeurIPS 2022) Work-_
_shop on MATH-AI._
Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang
Wang, Yang Wang, Jing Jiang, and Ee-Peng Lim.
2021. Noahqa: Numerical reasoning with interpretable graph question answering dataset. In Find_ings of the Association for Computational Linguis-_
_tics: EMNLP 2021, pages 4147–4161._
Wenhe Zhang, Chi Zhang, Yixin Zhu, and Song-Chun
Zhu. 2020c. Machine number sense: A dataset of
visual arithmetic problems for abstract and relational
reasoning. In Proceedings of the AAAI Conference
_on Artificial Intelligence, pages 1332–1340._
Xikun Zhang, Deepak Ramachandran, Ian Tenney,
[Yanai Elazar, and Dan Roth. 2020d. Do language](https://doi.org/10.18653/v1/2020.blackboxnlp-1.27)
[embeddings capture scales?](https://doi.org/10.18653/v1/2020.blackboxnlp-1.27) In Proceedings of the
_Third BlackboxNLP Workshop on Analyzing and In-_
_terpreting Neural Networks for NLP, pages 292–299,_
Online. Association for Computational Linguistics.
-----
Xikun Zhang, Deepak Ramachandran, Ian Tenney,
Yanai Elazar, and Dan Roth. 2020e. Do language
embeddings capture scales? In Proceedings of
_the Third BlackboxNLP Workshop on Analyzing and_
_Interpreting Neural Networks for NLP, pages 292–_
299.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen,
Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing
Liu, and Bill Dolan. 2019. Dialogpt: Large-scale
generative pre-training for conversational response
generation. arXiv preprint arXiv:1911.00536.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
Smola. 2022b. Automatic chain of thought prompting in large language models. _arXiv preprint_
_arXiv:2210.03493._
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and
Jingming Liu. 2020. Ape210k: A large-scale and
template-rich dataset of math word problems. arXiv
_preprint arXiv:2009.11506._
Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang.
2022. Multihiertt: Numerical reasoning over multi
hierarchical tabular and textual data. In Proceed_ings of the 60th Annual Meeting of the Association_
_for Computational Linguistics (ACL), pages 6588–_
6600.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and
Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models.
In International Conference on Machine Learning
_(ICML), pages 12697–12706. PMLR._
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu.
2022. Minif2f: a cross-system benchmark for formal olympiad-level mathematics. In International
_Conference on Learning Representations (ICLR)._
Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan
[Roth. 2019. "Going on a vacation" takes longer than](https://cogcomp.seas.upenn.edu/papers/ZKNR19.pdf)
["Going for a walk": A Study of Temporal Common-](https://cogcomp.seas.upenn.edu/papers/ZKNR19.pdf)
[sense Understanding. In Proc. of the Conference on](https://cogcomp.seas.upenn.edu/papers/ZKNR19.pdf)
_Empirical Methods in Natural Language Processing_
_(EMNLP)._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao
Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and
Tat-Seng Chua. 2021. Tat-qa: A question answering
benchmark on a hybrid of tabular and textual content
in finance. In Proceedings of the 59th Annual Meet_ing of the Association for Computational Linguis-_
_tics and the 11th International Joint Conference on_
_Natural Language Processing (ACL-JCNLP), pages_
3277–3287.
-----
| [
"Sean, Welleck",
"Pan, Lu",
"Wenhao, Yu",
"Liang, Qiu",
"Kai-Wei, Chang"
] | 2022-12-20T00:00:00 | ACL 2023 | true | 99 | 16 | null | http://arxiv.org/abs/2212.10535 | https://arxiv.org/abs/2212.10535 | https://www.semanticscholar.org/paper/2dbec38fe353ab0e495ad09263389dbc9260824d |
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data | Chain-of-thought (CoT) advances the reasoning abilities of large language models (LLMs) and achieves superior performance in complex reasoning tasks. However, most CoT studies rely on carefully designed human-annotated rational chains to prompt LLMs, posing challenges for real-world applications where labeled data is available without rational chains. This paper proposes a new strategy, AutomateCoT (Automatic Prompt Augmentation and Selection with Chain-of-Thought), that can bypass human engineering of CoT by automatically augmenting rational chains from a small labeled dataset, and then pruning low-quality chains to construct a candidate pool of machinegenerated rationale chains based on the labels. Finally, it selects the optimal combination of several rationale chains from the pool for CoT prompting by employing a variance-reduced policy gradient strategy to estimate the significance of each example. Automate-CoT enables a quick adaptation of the CoT technique to different tasks. Experimental results demonstrate the effectiveness of our method, where competitive results are achieved on arithmetic reasoning (+2.7%), commonsense reasoning (+3.4%), symbolic reasoning (+3.2%), and non-reasoning tasks (+2.5%). | A new strategy, Automate-CoT (Automatic Prompt Augmentation and Selection with Chain-of-Thought), that can bypass human engineering of CoT by automatically augmenting rational chains from a small labeled dataset, and then pruning low-quality chains to construct a candidate pool of machine-generated rationale chains based on the labels. | ## Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data
**Kashun Shum[♡∗], Shizhe Diao[♡∗], Tong Zhang[♡]**
_♡The Hong Kong University of Science and Technology_
{ksshumab, sdiaoaa, tongzhang}@ust.hk
**Abstract**
Chain-of-thought (CoT) advances the reasoning abilities of large language models (LLMs)
and achieves superior performance in complex
reasoning tasks. However, most CoT studies
rely on carefully designed human-annotated
rational chains to prompt LLMs, posing challenges for real-world applications where labeled data is available without rational chains.
This paper proposes a new strategy, Automate**CoT (Automatic Prompt Augmentation and**
Selection with Chain-of-Thought), that can bypass human engineering of CoT by automatically augmenting rational chains from a small
labeled dataset, and then pruning low-quality
chains to construct a candidate pool of machinegenerated rationale chains based on the labels.
Finally, it selects the optimal combination of
several rationale chains from the pool for CoT
prompting by employing a variance-reduced
policy gradient strategy to estimate the significance of each example. Automate-CoT enables a quick adaptation of the CoT technique
to different tasks. Experimental results demonstrate the effectiveness of our method, where
competitive results are achieved on arithmetic
reasoning (+2.7%), commonsense reasoning
(+3.4%), symbolic reasoning (+3.2%), and nonreasoning tasks (+2.5%).[1]
basic idea of CoT prompting is adding a few rationale chains to the answer as exemplars to illustrate the intermediate reasoning steps. Following
CoT, several recent studies improve it by leveraging
self-consistency (Wang et al., 2023), explanation
learning (Lampinen et al., 2022), complexity-based
prompting (Fu et al., 2023), self-training (Huang
et al., 2022), voting verifier (Li et al., 2022a), and
bootstrapping (Zelikman et al., 2022).
However, most of them are constrained to a few
_fixed human-written exemplars, which require sig-_
nificant human efforts to create and adapt to new
datasets. The annotation process is nontrivial because humans need to not only select the questions but also carefully design the reasoning steps
for each question. In the process of searching for
the perfect exemplars, we identify four critical factors that affect the performance of chain-of-thought
prompting and require large human effort to deal
with: (1) order sensitivity (Zhao et al., 2021): the
order combination of the exemplars; (2) complexity (Sugawara et al., 2018; Lai et al., 2021; Fu
et al., 2023): the number of reasoning steps of
the rationale chains; (3) diversity: the combination of different complex-level exemplars; (4) style
sensitivity (Papadopoulos et al., 2010): the writing/linguistic style of the rationale chains. Detailed
analysis of the four factors is covered in Section 2.
All of these sensitivities make human-based prompt
engineering costly and motivate us to find an automatic and task-agnostic way to adapt chain-ofthought exemplars to any downstream tasks.
In this paper, we solve these problems by a CoT
augmentation and selection process to find suitable
exemplars automatically. This can be divided into
three steps: (1) Augment: The language model
generates multiple pseudo-chains for query questions automatically. (2) Prune: Based on an assumption: Generating correct reasoning is a nec_essary condition for generating correct answers._
This assumption is natural because the answer is
**1** **Introduction**
The recent success in large language models
(LLMs) has shown that properly prompted LLMs
demonstrate emergent capabilities on complex understanding and question-answering tasks (Wei
et al., 2022a). Especially, with the recently proposed chain-of-thought (CoT) prompting (Wei
et al., 2022b), LLMs are capable of solving reasoning tasks including arithmetic reasoning, commonsense reasoning, and symbolic reasoning. The
*Equal Contribution.
1The code is available at [https://github.com/](https://github.com/SHUMKASHUN/Automate-CoT)
[SHUMKASHUN/Automate-CoT.](https://github.com/SHUMKASHUN/Automate-CoT)
-----
generated after several reasoning steps. When a
correct answer is generated, the rationale chain
of these steps is most likely correct, contributing
to the final correctness. As a result, We prune
the pseudo-chains according to the consistency between generated and ground-truth answers to reduce the noise. (3) Select: Given that all the data
have been annotated with rationale paths, we propose to apply a variance-reduced policy gradient
strategy (Williams, 1992; Dong et al., 2020; Zhou
et al., 2021; Diao et al., 2022) to estimate the gradients and optimize the selection process to find the
most helpful chain-of-thought for each task. Compared to prior manually written CoT, AutomateCoT could find the optimal and diverse CoT automatically, adaptable to any task without human
effort. Compared with Auto-CoT (Zhang et al.,
2023), which samples diverse questions by clustering and generates rationale chains, AutomateCoT considers and mitigates the aforementioned
sensitivity issues, while achieving a greater performance boost for each task. Automate-CoT is
a fully automatic pipeline for finding better chainof-thought prompts, mitigating the sensitivity issues of manually written exemplars, and further
improving the performance by a large margin. Experimental results demonstrate the effectiveness of
Automate-CoT on arithmetic reasoning (+2.7%),
commonsense reasoning (+3.4%), symbolic reasoning (+3.2%), and non-reasoning tasks (+2.5%).
**2** **Motivation**
Recent studies observed sensitivity issues of GPT3’s few-shot learning caused by different selections of in-context examples such as order instability (Zhao et al., 2021; Zhang et al., 2022; Liu et al.,
2022; Lu et al., 2022). Based on their findings,
we first investigate whether these sensitivities still
exist in chain-of-thought methods. Then we further
explore other factors that would not only affect the
performance but require human efforts to deal with.
We conclude with the following four factors:
_• Order Sensitivity: Different orders of few-shot_
exemplars may cause a huge impact on the performance in traditional few-shot prompting (Lu
et al., 2022). Thus we conduct experiments on
GPT-3 to test if there is such sensitivity in chainof-thought methods. Although Manual-CoT (Wei
et al., 2022b) reports that the human-written CoT
is robust to order changes (<2%) with the LaMDA
model, we observed that the performance of GPT-3
Figure 1: The performance across different numbers of
hops (reasoning steps of rationale chains) on GSM8K.
Manual-CoT refers to the human-written chain-ofthought by Wei et al. (2022b). Complex-CoT refers
to the chain-of-thought using 9-hop rationale chains.
fluctuates with different orders of chain-of-though
exemplars. For the GSM8K dataset, we simply
randomly shuffle the order of the exemplars in
Manual-CoT 10 times and the lowest accuracy can
be 59.8% which is 3.3% lower than the average
accuracy (63.1%) they report, suggesting that order
sensitivity still exists.
_• Complexity: We first define complexity as the_
number of hops (reasoning steps) in an exemplar
where more steps indicate greater complexity. It
is observed that human-written CoT tends to be
simple (≤3 hops), achieving good accuracy in simple math questions while suffering from complex
questions, as shown in Figure 1. In addition, a previous study (Fu et al., 2023) suggested that using all
complex exemplars can improve CoT performance.
However, in our experiments (Figure 1), we found
that Complex-CoT can improve the accuracy of
complex questions, but perform poorly in simple
questions. Therefore, we conjecture that the inconsistency between the hops of provided exemplars
and the required hops of the real question causes
the performance drop, suggesting that determining
the appropriate complexity level of exemplars is
crucial.
_• Diversity: Based on the above discovery about_
complexity, a natural question is what combination
of different complex-level exemplars is most effective. However, testing various combinations is a
challenging task for humans and requires significant effort to determine the optimal one. In our
-----
|1|2|3|
|---|---|---|
|K-2|K-1|K|
|---|---|---|
|1|2|3|
|---|---|---|
|K-2|K-1|K|
|---|---|---|
|1|2|3|
|---|---|---|
|K-2|K-1|K|
|---|---|---|
(1) Augment
**Manual-CoT** **Training Stage**
Q: There are 15 trees in the grove. Grove workers will… (2) Select Index+ ~
A: There are 15 trees originally…… The answer is 6. Training ……
… Batch
✅ ✅ ✅ ❌
Q: Olivia has $23. She bought five bagels for $3 each…
A: Olivia had 23 dollars. 5 bagels…… The answer is 8.
Pool Construction Question+ ℒ (prediction, label)
Q: Natalia sold clips to 48 of her friends in April, and …OR 1 2 3 … 9 …… K-2 K-1 K Estimate Gradient :𝒈𝒗𝒓𝒑𝒊 = 𝑰−𝟏𝟏 𝒌%𝟏[&]𝑰 𝓛𝑻 [𝒌] − [𝟏]𝑰 [&]𝒋%𝟏𝑰 𝓛𝑻 [𝒋] 𝜵𝒑𝒊𝒍𝒐𝒈𝑷(𝒕𝒊)
**Lack of Manual-CoT (Zero-shot)**
Q: Natalia sold clips to 48 of her friends in April, and … Update 𝒑𝒊 𝒑𝒊 ←𝒑𝒓𝒐𝒋𝑪 𝒑𝒊 −𝜼( 𝒈𝒗𝒓𝒑𝒊, 𝒊= 𝟏, (((, 𝒏
A: Let’s think step by step.
1 2 3 … 9 …… K-2 K-1 K **Inference Stage**
Index = argmax = [3, 9, …, K]
**LLM**
Q: Ralph is going to practice playing tennis with a tennis ball …
… **5-hop** A:Ralph started with 175 tennis balls. He hit 2/5 of the first 100 balls, so he hit 2/5 * 100 = 40 balls. He hit 1/3 of the next 75 balls, so he hit 1/3 * 75 = 25 balls. In total he hit 40 + 25 = 65
A: She sold 48 clips in April. In May she sold half as many, … balls. He did not hit 175 - 65 = 110 balls. The answer is 110.
so she sold 48 / 2 = 24 clips. In total she sold 48 + 24 = 72 clips. The answer is 72. **6-hop2-hop** …
Q: Hans booked a room in a hotel. The hotel has 10 floors …
Ground Truth : 72 1 2 3 … 9 …… K-2 K-1 K … A: here are 10 floors with 10 rooms each. The last floor is unavailable. So there are 9 * 10 = 90 rooms available.The
Not Match Match **3-hop** answer is 90.
+
Drop Add to Pool Pool Size = K Q: Janet’s ducks lay 16 eggs per day. She eats three for Test Question
breakfast…
Prune
Figure 2: Illustrations of our proposed approach. The left and middle parts of the figure contain two steps of our
method: (1) Augment and Prune and (2) Select. The right part illustrates the training stage (right top) and the
inference stage (right bottom), respectively.
**3** **Approach**
Our approach receives a training dataset D containing n questions Q = _q1, q2, ..., qn_, and n
_{_ _}_
answers A = _a1, a2, ..., an_ . The overall archi_{_ _}_
tecture of our approach is shown in Figure 2. In
this section, we start with a detailed description
of augment and prune operation and end with an
illustration of select operation.
**3.1** **Augment and Prune**
Inspired by Wang et al. (2022), which shows that
the generated rationale chains are of comparable
quality to the human-annotated ones, we aim to
automatically generate the rationale chains to augment the candidate exemplars. Given m fixed rationale chains C = _c1, c2, ..., cm_, a question q,
_{_ _}_
we ask the large language model G to generate k
rationale chains for each q. A larger k can form a
larger pool and some post-processes can be done
to improve the quality of the pool. Considering the
cost and efficiency, we choose k = 1 for our experiments. Our method works well even without C
(i.e., m = 0), which is based on zero-shot prompting. Then we prune those incorrect ones out and
only keep those with the correct final answer. In
other words, the final answer should be consistent
with the ground-truth answer. After pruning, we
experiments (Figure 1), we found that a combination of different complex-level exemplars outperforms CoT with all complex exemplars, suggesting
a complexity-diversity trade-off.
_• Style Sensitivity: Previous research in educa-_
tional psychology found that different learning
styles would limit the cognitive benefit for students
from the prompting (Papadopoulos et al., 2010).
We further argue that students with specific learning styles benefit to varying degrees from different
styles of prompting. In addition, the empirical evidence from Manual-CoT (Wei et al., 2022b) shows
that different annotators can cause up to 28.2%
accuracy difference in a symbolic reasoning task,
verifying our conjecture. As a result, some bad
styles may lead to a huge performance drop. However, humans cannot determine the performance of
a particular style beforehand, so it requires trial and
error by checking on the validation set, which further increases the effort of writing chain-of-thought
exemplars.
In light of this empirical evidence, we are motivated to design a framework not only to augment
rationale chains but also to select helpful chains
adaptively. With this framework, it is expected to
bypass the order and style sensitivities and reach
a better complexity-diversity trade-off without human effort, finally boosting performance.
-----
obtain a pool of K high-quality exemplars.
**3.2** **Select**
With a large pool of high-quality exemplars, we
cannot directly apply all of them due to four considerations: (1) context length limit: the maximum
length is 2,048 for GPT-3, so we cannot feed too
many exemplars into the model. (2) fair comparison: existing studies usually take 4-8 questionanswer pairs as exemplars following Wei et al.
(2022b). (3) sensitivity: the model performance
may be sensitive to the contexts (Jiang et al., 2020),
orders (Lu et al., 2022) and lengths (Lester et al.,
2021) from the observation of prompt learning literature. (4) adaptation: different downstream tasks
may require different exemplars. Therefore, a natural idea is to select the most suitable 4-8 exemplars
automatically.
The process can be deemed as optimizing a supervised model with latent variables. For each
chain-of-thought index i, we initialize a latent variable ji ∼ Cat(pi). The random variable ji is
sampled with the probability distribution pi =
[pi,1, _, pi,N_ ] over the N candidate demonstra_· · ·_
tion indexes, where pi ∈C and C = {p : ∥p∥1 =
1, 0 ⪯ **_p ⪯_** 1}. Since pi is independent of each
other, the joint probability of the whole input exemplars is P (T ) = Π[n]i=1[P] [(][t][i][) = Π][n]i=1[p][i,j]i[. The]
loss is formulated as L(G([T, S], y)), where T represents the full few-shot exemplars, ti denotes the
i-th exemplar, and S is the current question (user’s
query). However, directly updating the prompts
by back-propagating through **_pi_** ( ([T, S], y))
_∇_ _L_ _G_
is not possible because of the inaccessible gradients, where y is the label. We resort to the
variance-reduced policy gradient estimator (VRPGE) (Williams, 1992; Dong et al., 2020; Zhou
et al., 2021; Diao et al., 2022), a kind of reinforcement learning method to optimize the loss function
via forward propagation with:
ET [L(T )] = _L(T_ )P (T ) dT, (1)
Z
and estimate the gradient of pi by:
gradient descent algorithm:
**_pi_** proj (pi _η_ **_gp[vr]i_** [)][, i][ = 1][,][ · · ·][, n] (3)
_←_ _C_ _−_ _·_
where η is the learning rate, I is the sample size,
and proj is the projection calculation (details are
_C_
presented in the Appendix A).
**4** **Experimental Settings**
In this section, we first introduce the setting of
eleven datasets and their corresponding evaluation
metrics (§ 4.1). Then the baseline models (§ 4.2)
and implementation details (§ 4.3) are presented in
the following two subsections, respectively. Full
details about the experimental setting are illustrated
in Appendix B.
**4.1** **Datasets and Evaluation Metrics**
Following Wei et al. (2022b), we conduct our experiments on eight reasoning tasks, including five math
word problem datasets: GSM8K, ASDiv, SVAMP,
AQuA, and SingleOp; two commonsense reasoning datasets: CommonsenseQA (CSQA) and StrategyQA, and one symbolic reasoning task: Last
Letter Concatenation (Letter (4)). We also generalize our method to non-reasoning tasks including
one question-answering task (OpenBookQA), one
natural language inference task (e-SNLI), and one
sentiment analysis task (SST-2). The detailed statistics of the datasets are listed in Table 5. The evaluation metric for all tasks is the exact match accuracy.
First, we conduct pre-processing for predictions
to remove all the special symbols. For example,
"$100,000" will be processed to "100000". Then
we check if it has the same value as the ground
truth to calculate the exact match accuracy.
**4.2** **Baselines**
We compare our method with the following
baseline methods: chain-of-thought (ManualCoT) (Wei et al., 2022b), self-consistency
(SC) (Wang et al., 2023), and Auto-CoT (Zhang
et al., 2023). And we utilize the public
APIs from OpenAI’s services[2] and test with
text-davinci-002 and code-davinci-002.
**4.3** **Implementation**
**Augment and Prune: Following Wei et al.**
(2022b) and Wang et al. (2022), we keep the same
number of exemplars (4-8) listed in Table 5. For
main experiments, we augment and prune a pool
of 100 high-quality exemplars for all datasets.
[2https://openai.com/api/](https://openai.com/api/)
_I_ _I_
1
**_gp[vr]i_** [=] (T [(][k][)]) (T [(][j][)]) **_pi log P_** (ti)
_I −_ 1 _k=1_ _L_ _−_ _I[1]_ _j=1_ _L_ ! _∇_
X X
(2)
where T [(][k][)], k = 1, · · ·, I are sampled independently from P (T ). Therefore, the exemplar distribution pi can be updated by a projected stochastic
-----
|METHOD|GSM8K ASDIV SVAMP AQUA SINGLEOP CSQA STQA LETTER (4) OBQA E-SNLI SST-2|AVG.|
|---|---|---|
|Prior Best*|55.0a 75.3b 57.4c 37.9d - 91.2e 73.9f - - - 97.5g|-|
|---|---|---|
_text-davinci-002_
|Auto-CoT Manual-CoT + Automate-CoT SC + Automate-CoT|47.9 - 69.5 36.5 - 74.4 65.4 59.7 - - - 46.9 71.3 68.9 35.8 88.8 73.5 65.4 56.6 75.5 79.1 86.2 49.7↑2.8 74.2↑2.9 73.3↑4.4 37.9↑2.1 90.0↑1.2 76.1↑2.6 67.9↑2.5 58.9↑2.3 79.1↑3.6 82.3↑3.2 87.5↑1.3 58.2 76.9 78.2 41.8 90.8 72.9 70.7 57.6 81.5 83.4 89.2 67.8↑9.6 78.9↑2.0 80.5↑2.3 43.4↑1.6 91.9↑1.1 80.2↑7.3 76.3↑5.6 60.8↑3.2 84.8↑3.3 86.4↑3.0 90.6↑1.4|- 68.0 70.6↑2.6 72.8 76.5↑3.7|
|---|---|---|
_code-davinci-002_
|Auto-CoT Manual-CoT + Automate-CoT SC + Automate-CoT|62.8 - - - - - - - - - - 63.1 80.4 76.4 45.3 91.8 77.9 73.2 70.4 80.4 67.5 89.7 67.6↑4.5 83.1↑2.7 78.2↑1.8 47.8↑2.5 92.4↑0.6 81.3↑3.4 75.3↑2.1 75.0↑4.6 83.2↑2.8 71.2↑3.7 90.8↑1.1 78.0 87.8 86.8 52.0 92.8 81.5 79.8 73.4 88.4 74.8 91.5 82.4↑4.4 88.9↑1.1 87.8↑1.0 55.6↑3.6 94.0↑1.2 84.0↑2.5 80.6↑0.8 76.2↑2.8 89.7↑1.3 78.3↑3.5 92.8↑1.3|- 74.2 76.9↑2.7 80.6 82.8↑2.2|
|---|---|---|
Table 1: The overall performance of Automate-CoT and the comparison against existing models on eleven
downstream tasks. Manual-CoT and SC represent chain-of-thought (Wei et al., 2022b) and self-consistency (Wang
et al., 2023) methods. Bold denotes the best in code-davinci-002-based methods and Underline denotes the best
in text-davinci-002-based methods. *: Prior Best is the best performance before CoT comes out. a: Cobbe et al.
(2021), b: Lan et al. (2022), c: Pi et al. (2022), d: Amini et al. (2019), e: Xu et al. (2022), f : Chowdhery et al. (2022),
_g: Raffel et al. (2020). Most statistics of Manual-CoT and SC have been obtained directly from their latest version._
For some entries they did not report, we obtain the result from DIVERSE (Li et al., 2022b).
**Select: Both the training and validation sets have**
a size of 100 to reach a performance and cost tradeoff. Then by utilizing the log probability returned
by API calls, we calculate the cross-entropy loss
of the answer token. Finally, we optimize the latent variables by AdamW (Loshchilov and Hutter,
2019) for 5 epochs with a learning rate of 1 × 10[−][3]
and batch size of 10. After optimization, we choose
the exemplars combination (arg max pi) with the
highest validation accuracy to be further evaluated
on the test set. By default, we query the language
model once to get the answer. Under the selfconsistency setting, similar to Wang et al. (2023),
we query the language model 40 times and choose
the most consistent one as the final answer.
**Hyper-parameter Setting: Under few-shot set-**
ting, we set max_tokens = 256 for all augmentation, selection and inference. In addition, we set
logprobs = 5 when training. Moreover, we set temperature = 0.7 for evaluation under self-consistency
while temperature = 0 for all other cases.
**5** **Experimental Results**
The experimental results are shown in Table 1. We
discuss our results in three sections based on the
task categories. Automate-CoT are averaged over
three runs, and the variance over different runs is
reported in Appendix Table 7. Overall, AutomateCoT achieves superior results on all tasks. With
text-davinci-002, Automate-CoT outperforms
Manual-CoT and SC by 2.6% and 3.7% on average.
With code-davinci-002, Automate-CoT also outperforms Manual-CoT and SC by 2.7% and 2.2%,
respectively.
**Arithmetic Reasoning: For text-davinci-002,**
Automate-CoT improves Manual-CoT by 2.7%
over five arithmetic reasoning tasks. In addition, under the self-consistency setting, AutomateCoT improves SC by a large margin by an average of 3.3%. Moreover, compared to AutoCoT, Automate-CoT also outperforms it on all
three arithmetic tasks (GSM8K, SVAMP, and
AQuA). While for code-davinci-002, AutomateCoT achieves an average of 2.4% improvement
across all five arithmetic reasoning tasks, illustrating the effectiveness of our proposed approach with
different language models. Additionally, AutomateCoT outperforms Auto-CoT in GSM8K by 4.8%,
since Auto-CoT only constructs experiments on
GSM8K under code-davinci-002. AutomateCoT demonstrates consistent improvement over
arithmetic tasks, especially on GSM8K, where it
can outperform Manual-CoT by a large margin. Finally, under the self-consistency setting, AutomateCoT also shows similar trends to improve the SC
baseline, demonstrating the synergistic effects of
our proposed method and self-consistency method.
**Commonsense and Symbolic Reasoning Sim-**
ilarly, on commonsense and symbolic reasoning tasks, Automate-CoT demonstrates significant improvement over Manual-CoT, SC, and
Auto-CoT. It achieves an average of 2.5% and
-----
Figure 3: Comparisons between Random Selection, Manual-CoT and Automate-CoT on six datasets.
3.4% improvement on text-davinci-002 and
code-davinci-002 respectively, demonstrating
that our method is effective on different task types.
More surprisingly, the improvement in the Letter
(4) is significant, demonstrating our method’s robustness to deal with out-of-distribution data.
**Non-Reasoning Tasks Automate-CoT has also**
reached great success on question answering (OpenBookQA), natural language inference (eSNLI), and sentiment analysis (SST-2) tasks by
an improvement of 2.8%, 3.4% and 1.3%, respectively. The results show that our method can be
generalized to various task types and is not limited
to reasoning tasks.
**6** **Additional Experiments and Analysis**
We further conduct several experiments to evaluate
the effectiveness of Automate-CoT and analyze
the contributions of each module. Since queries
to text-davinci-002 are limited and expensive,
most additional experiments are conducted with
code-davinci-002.
**6.1** **Effects of Selection Algorithm**
After obtaining a large pool of exemplars, a natural
question would be what is the performance if we
randomly select from the pool regardless of order.
In Figure 3, we compare the accuracy obtained by
random selection, human-written (Manual-CoT),
and our Automate-CoT. For random selection, we
randomly sample exemplars from the pool and combine them regardless of order to form the prompts.
We repeat this process five times and report the
accuracy with an error bar. The results show that
random selection suffers from high variance and relatively low accuracy compared to Manual-CoT and
Automate-CoT. Surprisingly, we observed the average performance of a random selection from modelgenerated exemplars can outperform Manual-CoT
in some datasets (e.g. GSM8K, CSQA). This also
suggests that manual prompt engineering needs to
take efforts to design carefully in terms of difficulty,
Figure 4: The performance across different pool sizes
of Automate-CoT compare with Manual-CoT. Pool size
refers to the number of exemplars in the pool.
diversity, and style. In conclusion, if we simply randomly select the exemplars from the pool, it is very
likely to obtain a much lower accuracy than the
manually written method. However, our AutomateCoT can consistently outperform random selection
and Manual-CoT which shows the effectiveness of
our method.
**6.2** **Effects of Pool Size**
We further conduct a set of experiments to test different pool sizes. As shown in Figure 4, if the
pool size is limited to only 10, the performance of
Automate-CoT is worse than Manual-CoT or comparable with Manual-CoT. It turns out that if the
pool size is small, Automate-CoT is unable to select a good combination to beat carefully designed
Manual-CoT. However, Automate-CoT can outperform Manual-CoT when the pool size reaches 20
or larger. The trends show that the performance
would be better as pool size keeps increasing. This
is intuitive and matches our hypothesis because as
pool size increases, there would be more complex,
diverse exemplars to choose from. It is expected
that the performance would keep increasing, but
since more queries for GPT-3 are time-consuming
and expensive, we limited these additional experiments to have a max pool size of 150.
**6.3** **Effects of Chain Complexity**
It is observed that exemplars written by human
are rather simple, so we further explore how chain
complexity affect performance. We randomly pick
-----
|RUNS|GSM8K|SVAMP|Letter(4)|
|---|---|---|---|
|Rand(Training Set) 1 Rand(Training Set) 2 Rand(Training Set) 3 Variance|67.55 67.93 67.25 0.077|78.2 77.8 77.6 0.062|75.0 76.6 75.8 0.426|
|---|---|---|---|
Table 2: The effect of different randomly chosen training set on performance over three datasets.
8 exemplars with complex rationale chains (each
has 9 hops) and refer to them as Complex-CoT.
For human-written exemplars (Manual-CoT) Wei
et al. (2022b), exemplars are all 2-3 hops. We compare them with our Automate-CoT which has an
average hop of 4 and ranges from 2-hop to 6-hop
on GSM8K dataset. From Figure 1, Manual-CoT
has an overall accuracy of 62%, achieving good
results on simple questions. However, it suffers
from complex math questions, especially 7-hop
and 8-hop questions. Complex-CoT can improve
the accuracy on complex questions by a large margin but it performs poorly on simple questions,
which only has an overall accuracy of 60%. In
contrast, our Automate-CoT can select a combination of different complex-level exemplars automatically. It achieves good results on simple questions
and reasonable results on complex questions at the
same time, outperforming both Manual-CoT and
Complex-CoT by a large margin. The result shows
the superiority of our method because it can automatically achieve a complexity-diversity trade-off.
**6.4** **Effects of Training Example Selection**
Since training examples to construct CoT are randomly chosen, we also measure the performance
vary regarding this random selection. Three different randomly chosen training sets are used to
train Automate-CoT and the results are reported in
Table 2. According to the result, Automate-CoT
shows its robustness to training examples. Randomly chosen training examples have quite a small
impact on the result.
**6.5** **Bypass Manual Effort by Zero-shot-CoT**
Starting with 4-8 manually constructed chain-ofthought exemplars, our methods show great success
in automatically generating, pruning, and selecting
suitable exemplars for each task. After that, we
raise a new question: Can we further bypass the
_effort of writing the initial chain-of-thought exem-_
_plars? Based on current research of Zero-Shot-_
CoT (Kojima et al., 2022), we found it is possible.
|METHOD|GSM8K|SVAMP|Letter (4)|
|---|---|---|---|
|Zero-Shot-CoT Manual-CoT Auto-CoT Zero-Shot-Automate-CoT Automate-CoT|40.7 46.9 48.9 49.1 49.7|62.1 73.5 69.5 74.3 76.1|57.6 56.6 59.7 59.3 58.9|
|---|---|---|---|
Table 3: The performance of Automate-CoT in zeroshot setting compared with other baselines. Lightgray
highlights our main model which uses a manually constructed chain-of-thought and is not intended for comparison. We list it here only for reference.
Instead of using 4-8 manual-written exemplars to
generate the chains, we simply add "Let’s think step
_by step." and let LLMs generate the chains. We_
test the result under text-davinci-002 model on
GSM8K, SVAMP, and Letter (4) and compare it
with Zero-shot-CoT, Manual-CoT and Auto-CoT.
Surprisingly, we observe the result can be comparable and even outperform Manual-CoT and AutoCoT a bit as shown in Table 3. The results further demonstrate that our method can effectively
select a suitable combination of exemplars even
from a pool that may contain low-quality chains.
In conclusion, if a dataset already has manually
written chains, our method can be applied to boost
the performance. If a dataset does not have manually written chains, our method can still be used
to achieve higher accuracy than if it had manually
written chains, demonstrating the superiority of our
method.
**7** **Ablation Study**
In this section, we further conduct ablation experiments to verify the advantage of the generated
prompts on four factors, respectively.
**Advantage over Order Factor The advantages**
of Automate-CoT on order factor can be viewed in
two ways. Firstly, it requires a large human effort
to determine a good order by trying many different
orders on validation sets. However, Automate-CoT
can automatically construct the exemplars without
further adjustment to have a good result. Secondly,
Automate-CoT is less affected by the order sensitivity. We further conduct an experiment to compare
selected exemplars and random permutations of
Automate-CoT’s selected exemplars as shown in
Table 4. We randomly permutate the selected exemplars to see how performance varies compared to
the selected order by Automate-CoT. It is observed
that the order sensitivity still exists and our se
-----
plars S1 = [A1, B1, C1, D1, E1, F1, G1, H1] for
GSM8K. Then we copy this set and edit its written /
linguistic style manually to be worse while keeping
the order, complexity, and diversity the same which
gives S2 = [A2, B2, C2, D2, E2, F2, G2, H2].
Now we have 16 examplars says _S_ =
[A1, A2, B1, B2, ..., H1, H2]. A-H represents the
No.1-8 exemplars. Subscript 1 represents the originally selected exemplars and 2 represents the edited
ones. Then, Automate-CoT selects 8 exemplars
from the previous 16 exemplars. Note that we limit
Automate-CoT to select exactly one of [A1, A2]
and [B1, B2] ... and keep the same order A-H.
Subsequently, when we perform Automate-CoT algorithm, we observe that Automate-CoT is able to
successfully select the original exemplars S1. Furthermore, we find that the selected exemplars can
outperform the non-selected exemplars by 2%.
**8** **Related Work**
In this section, we first review the recent progress of
prompt-based learning (§8.1) and chain-of-thought
prompting (§8.2), and then discuss the black-box
optimization methods (§8.3).
**8.1** **Prompt-based Learning**
Prompt-based Learning (Prompting) aims to leverage large language models (LLMs) (Devlin et al.,
2018; Liu et al., 2019; He et al., 2021; Diao et al.,
2020, 2021) to trigger helpful knowledge for downstream tasks. Existing prompting methods can be
categorized into two types based on their nature:
1) discrete prompts (Wallace et al., 2019; Shin
et al., 2020; Jiang et al., 2020; Yuan et al., 2021;
Haviv et al., 2021; Gao et al., 2021; Ben-David
et al., 2022; Davison et al., 2019; Su et al., 2022;
Diao et al., 2022; Guo et al., 2023) and continuous
prompts (Zhong et al., 2021; Qin and Eisner, 2021;
Hambardzumyan et al., 2021; Liu et al., 2021; Han
et al., 2021; Li and Liang, 2021; Yang et al., 2023).
Discrete prompts optimize a sequence of discrete
tokens, while continuous prompts optimize a sequence of vectors. One of the most important advantages of prompting is saving fine-tuning costs
by refraining from the parameter changes of large
language models, and we only need to optimize a
small set of parameters.
**8.2** **Chain-of-thought Prompting**
Chain-of-thought (Wei et al., 2022b) introduces
a chain of rationale steps for each exemplar of
|RUNS|GSM8K|SVAMP|Letter(4)|
|---|---|---|---|
|perm(Automate-CoT) 1 perm(Automate-CoT) 2 perm(Automate-CoT) 3 perm(Automate-CoT) 4 perm(Automate-CoT) 5|66.7 66.6 66.9 67.8 67.5|77.2 78.4 78.0 78.2 78.1|73.0 72.6 72.0 74.2 75.0|
|---|---|---|---|
|Automate-CoT Mean ±std|68.4 67.3 ±0.64|78.7 78.2 ±0.46|75.2 73.7 ±1.21|
|---|---|---|---|
Table 4: Comparison of different permutations orders
of Automate-CoT’s selected examplars.
lected exemplars have better performance than that
of all 5 random permutation runs, demonstrating
Automate-CoT can automatically choose a good
order without any human effort.
**Advantage over Complexity Factor As dis-**
cussed in the complexity factor of Section 2, we
show that the complexity of manually written
chains is quite simple (less than or equal to 3 hops).
It would require more human effort to design complex rationales. However, Automate-CoT can automatically augment and select examples with different complexity, reaching a better trade-off accuracy
between simple questions and complex questions
(Appendix Table 9).
**Advantage over Diversity Factor The diversity**
of Manual-CoT or Complexity-CoT is limited.
For example, every exemplar of Complexity-CoT
has the same complexity and every exemplar of
Manual-CoT ranges from 1-3 hops as illustrated in
the motivation section. However, Automate-CoT
can automatically select an optimal combination of
complexity that best suits the dataset. For example,
our selected exemplars on GSM8K have an average hop of 5.4 and range from 3 hops to 8 hops
as shown in Appendix G. It contains both simple
exemplars and complex exemplars which reach the
best performance.
**Advantage over Style Factor Our extensive ex-**
perience with multiple experiments indicates that
a good linguistic style is typically formal and detailed. This style entails the use of (1) explicit and
logical connection words (e.g., "so", "that means"),
(2) detailed reasoning steps within a single sentence, (3) symbols when appropriate (e.g., using the
$ symbol to denote monetary values), and (4) minimizing the use of abbreviations. We further conduct an ablation experiment to test how our method
can choose the examples with better style. Firstly,
we use Automate-CoT to select 8 rationale exem
-----
in-context learning and significantly improves the
performance on several complex tasks like arithmetic reasoning, commonsense reasoning, and
symbolic reasoning. Based on this simple yet effective idea, many following works propose different
strategies to improve it: self-consistency (Wang
et al., 2023), explanation learning (Lampinen et al.,
2022), complexity-based prompting (Fu et al.,
2023), self-training (Huang et al., 2022), voting
verifier (Li et al., 2022a), zero-shot prompting (Kojima et al., 2022; Fung et al., 2022), and bootstrapping (Zelikman et al., 2022).
**8.3** **Black-box Optimization**
Nowadays, large language models provide services
as commercial APIs deployed in the cloud, such
as OpenAI’s GPT-3 (Brown et al., 2020) and ChatGPT[3]. It usually accepts query inputs and outputs
the predictions with a web interface. Their model
parameters and gradients are not accessible, causing difficulties in optimization with gradients. Previous research on black-box optimization mainly
focuses on score-based black-box adversarial attack (Ilyas et al., 2018, 2019; Huang and Zhang,
2020; Andriushchenko et al., 2020; Cheng et al.,
2019). Most recently, black-box prompt learning (Diao et al., 2022; Sun et al., 2022; Prasad
et al., 2022) is introduced, aiming to optimize the
prompts without accessing gradients, but their models suffer from limited reasoning abilities and are
limited to zero-shot settings with classification task.
**9** **Conclusion**
In this paper, we proposed a chain-of-thought optimization method consisting of three steps: augment, prune, and select. Automate-CoT first generates rationale chains according to the standard CoT
process with several exemplars, and then prunes
those incorrect ones according to the consistency of
the predicted answer and ground-truth answer. Finally, we apply a variance-reduced policy gradient
strategy to estimate the gradients and optimize the
latent variables to select better CoTs. Experimental
results demonstrate the effectiveness of our method
on arithmetic reasoning, commonsense reasoning,
symbolic reasoning tasks, and non-reasoning tasks.
**10** **Limitations**
It is shown that Automate-CoT demonstrates superior performance over previous chain-of-thought
[3https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/)
prompting methods. However, despite these exciting results, there are still some limitations to our
current work, as well as potential opportunities for
future research.
**Comparision with Fine-tuning : Our main base-**
lines include original chain-of-thought (Wei et al.,
2022b), self-consistency (Wang et al., 2023) which
are manual-written based prompt method. In
addition, we also compare the clustering-based
and retrieval-based methods to select the prompt
exemplars like Auto-CoT (Zhang et al., 2023),
BM25 (Robertson, 2009), PromptPG (Lu et al.,
2023). As large language models are dominating
the field, the performance of training the large language models by using these labeled data might be
interesting. However, it is not covered in this study
due to the prompt setting of this study and limited
resources.
**Prompt Style Definition : Another limitation of**
this work is that it does not provide a rigorous
definition of what constitutes good versus bad linguistic style. While we have observed several patterns of good and bad style during numerous experiments, and the results show that Automate-CoT
is able to mitigate style sensitivity in Manual-CoT,
we cannot determine what perfect style entails. As
such, we acknowledge that defining what constitutes good versus bad linguistic style can be a challenging task and an important area for further exploration and development.
**Acknowledgments**
We thank the anonymous reviewers for their valuable suggestions. This work was supported by the
General Research Fund (GRF) of Hong Kong (No.
16310222). Shizhe Diao was supported by the
Hong Kong Ph.D. Fellowship Scheme (HKPFS).
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. MathQA: Towards interpretable math](https://doi.org/10.18653/v1/N19-1245)
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/N19-1245)
[malisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367, Minneapolis, Minnesota. Association for
Computational Linguistics.
Maksym Andriushchenko, Francesco Croce, Nicolas
Flammarion, and Matthias Hein. 2020. Square at
-----
tack: a query-efficient black-box adversarial attack
via random search. In Computer Vision–ECCV 2020:
_16th European Conference, Glasgow, UK, August 23–_
_28, 2020, Proceedings, Part XXIII, pages 484–501._
Springer.
Eyal Ben-David, Nadav Oved, and Roi Reichart. 2022.
[PADA: Example-based prompt learning for on-the-](https://doi.org/10.1162/tacl_a_00468)
[fly adaptation to unseen domains. Transactions of the](https://doi.org/10.1162/tacl_a_00468)
_Association for Computational Linguistics, 10:414–_
433.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners. In Ad-](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
_vances in Neural Information Processing Systems 33:_
_Annual Conference on Neural Information Process-_
_ing Systems 2020, NeurIPS 2020, December 6-12,_
_2020, virtual._
Oana-Maria Camburu, Tim Rocktäschel, Thomas
[Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natu-](https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html)
[ral language inference with natural language explana-](https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html)
[tions. In Advances in Neural Information Processing](https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html)
_Systems 31: Annual Conference on Neural Informa-_
_tion Processing Systems 2018, NeurIPS 2018, Decem-_
_ber 3-8, 2018, Montréal, Canada, pages 9560–9572._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
[Greg Brockman, et al. 2021. Evaluating large lan-](https://arxiv.org/abs/2107.03374)
[guage models trained on code.](https://arxiv.org/abs/2107.03374) _ArXiv preprint,_
abs/2107.03374.
Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su,
[and Jun Zhu. 2019. Improving black-box adversar-](https://proceedings.neurips.cc/paper/2019/hash/32508f53f24c46f685870a075eaaa29c-Abstract.html)
[ial attacks with a transfer-based prior. In Advances](https://proceedings.neurips.cc/paper/2019/hash/32508f53f24c46f685870a075eaaa29c-Abstract.html)
_in Neural Information Processing Systems 32: An-_
_nual Conference on Neural Information Processing_
_Systems 2019, NeurIPS 2019, December 8-14, 2019,_
_Vancouver, BC, Canada, pages 10932–10942._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
[Sebastian Gehrmann, et al. 2022. Palm: Scaling](https://arxiv.org/abs/2204.02311)
[language modeling with pathways. ArXiv preprint,](https://arxiv.org/abs/2204.02311)
abs/2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. 2021. [Training veri-](https://arxiv.org/abs/2110.14168)
[fiers to solve math word problems. ArXiv preprint,](https://arxiv.org/abs/2110.14168)
abs/2110.14168.
Joe Davison, Joshua Feldman, and Alexander Rush.
[2019. Commonsense knowledge mining from pre-](https://doi.org/10.18653/v1/D19-1109)
[trained models. In Proceedings of the 2019 Confer-](https://doi.org/10.18653/v1/D19-1109)
_ence on Empirical Methods in Natural Language Pro-_
_cessing and the 9th International Joint Conference_
_on Natural Language Processing (EMNLP-IJCNLP),_
pages 1173–1178, Hong Kong, China. Association
for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2018. BERT: Pre-training of](https://doi.org/10.48550/arXiv.1810.04805)
[Deep Bidirectional Transformers for Language Un-](https://doi.org/10.48550/arXiv.1810.04805)
[derstanding. arXiv.](https://doi.org/10.48550/arXiv.1810.04805)
Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and
Yonggang Wang. 2020. Zen: Pre-training chinese
text encoder enhanced by n-gram representations.
In Findings of the Association for Computational
_Linguistics: EMNLP 2020, pages 4729–4740._
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li,
[Yong Lin, and Tong Zhang. 2022. Black-box prompt](https://arxiv.org/abs/2201.08531)
[learning for pre-trained language models.](https://arxiv.org/abs/2201.08531) _ArXiv_
_preprint, abs/2201.08531._
Shizhe Diao, Ruijia Xu, Hongjin Su, Yilei Jiang, Yan
Song, and Tong Zhang. 2021. Taming pre-trained
language models with n-gram representations for lowresource domain adaptation. In Proceedings of the
_59th Annual Meeting of the Association for Compu-_
_tational Linguistics and the 11th International Joint_
_Conference on Natural Language Processing (Vol-_
_ume 1: Long Papers), pages 3336–3349._
[Zhe Dong, Andriy Mnih, and George Tucker. 2020. Dis-](https://proceedings.neurips.cc/paper/2020/hash/d880e783834172e5ebd1868d84463d93-Abstract.html)
[arm: An antithetic gradient estimator for binary latent](https://proceedings.neurips.cc/paper/2020/hash/d880e783834172e5ebd1868d84463d93-Abstract.html)
[variables. In Advances in Neural Information Pro-](https://proceedings.neurips.cc/paper/2020/hash/d880e783834172e5ebd1868d84463d93-Abstract.html)
_cessing Systems 33: Annual Conference on Neural_
_Information Processing Systems 2020, NeurIPS 2020,_
_December 6-12, 2020, virtual._
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023. Complexity-based prompting for](https://openreview.net/forum?id=yf1icZHC-l9)
[multi-step reasoning. In International Conference on](https://openreview.net/forum?id=yf1icZHC-l9)
_Learning Representations._
Yi R Fung, Tuhin Chakraborty, Hao Guo, Owen
Rambow, Smaranda Muresan, and Heng Ji. 2022.
Normsage: Multi-lingual multi-cultural norm discovery from conversations on-the-fly. arXiv preprint
_arXiv:2210.08604._
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
[Making pre-trained language models better few-shot](https://doi.org/10.18653/v1/2021.acl-long.295)
[learners. In Proceedings of the 59th Annual Meet-](https://doi.org/10.18653/v1/2021.acl-long.295)
_ing of the Association for Computational Linguistics_
_and the 11th International Joint Conference on Natu-_
_ral Language Processing (Volume 1: Long Papers),_
pages 3816–3830, Online. Association for Computational Linguistics.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://doi.org/10.1162/tacl_a_00370)
[use a laptop? a question answering benchmark with](https://doi.org/10.1162/tacl_a_00370)
[implicit reasoning strategies. Transactions of the](https://doi.org/10.1162/tacl_a_00370)
-----
_Association for Computational Linguistics, 9:346–_
361.
Chunxi Guo, Zhiliang Tian, Jintao Tang, Shasha Li,
Zhihua Wen, Kaixuan Wang, and Ting Wang. 2023.
Retrieval-augmented gpt-3.5-based text-to-sql framework with sample-aware prompting and dynamic revision chain. arXiv preprint arXiv:2307.05074.
Karen Hambardzumyan, Hrant Khachatrian, and
[Jonathan May. 2021. WARP: Word-level Adversarial](https://doi.org/10.18653/v1/2021.acl-long.381)
[ReProgramming. In Proceedings of the 59th Annual](https://doi.org/10.18653/v1/2021.acl-long.381)
_Meeting of the Association for Computational Lin-_
_guistics and the 11th International Joint Conference_
_on Natural Language Processing (Volume 1: Long_
_Papers), pages 4921–4933, Online. Association for_
Computational Linguistics.
Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu,
and Maosong Sun. 2021. [PTR: Prompt Tuning](https://arxiv.org/abs/2105.11259)
[with Rules for Text Classification. ArXiv preprint,](https://arxiv.org/abs/2105.11259)
abs/2105.11259.
Adi Haviv, Jonathan Berant, and Amir Globerson. 2021.
[BERTese: Learning to speak to BERT. In Proceed-](https://doi.org/10.18653/v1/2021.eacl-main.316)
_ings of the 16th Conference of the European Chap-_
_ter of the Association for Computational Linguistics:_
_Main Volume, pages 3618–3623, Online. Association_
for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and
Weizhu Chen. 2021. [DEBERTA: Decoding-](https://openreview.net/forum?id=XPZIaotutsD)
[enhanced bert with disentangled attention. In Inter-](https://openreview.net/forum?id=XPZIaotutsD)
_national Conference on Learning Representations._
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu,
Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022.
[Large language models can self-improve.](https://arxiv.org/abs/2210.11610) _ArXiv_
_preprint, abs/2210.11610._
[Zhichao Huang and Tong Zhang. 2020. Black-box ad-](https://openreview.net/forum?id=SJxhNTNYwB)
[versarial attack with transferable model-based embed-](https://openreview.net/forum?id=SJxhNTNYwB)
[ding. In 8th International Conference on Learning](https://openreview.net/forum?id=SJxhNTNYwB)
_Representations, ICLR 2020, Addis Ababa, Ethiopia,_
_April 26-30, 2020. OpenReview.net._
Andrew Ilyas, Logan Engstrom, Anish Athalye, and
[Jessy Lin. 2018. Black-box adversarial attacks with](http://proceedings.mlr.press/v80/ilyas18a.html)
[limited queries and information. In Proceedings of](http://proceedings.mlr.press/v80/ilyas18a.html)
_the 35th International Conference on Machine Learn-_
_ing, ICML 2018, Stockholmsmässan, Stockholm, Swe-_
_den, July 10-15, 2018, volume 80 of Proceedings_
_of Machine Learning Research, pages 2142–2151._
PMLR.
Andrew Ilyas, Logan Engstrom, and Aleksander Madry.
[2019. Prior convictions: Black-box adversarial at-](https://openreview.net/forum?id=BkMiWhR5K7)
[tacks with bandits and priors. In 7th International](https://openreview.net/forum?id=BkMiWhR5K7)
_Conference on Learning Representations, ICLR 2019,_
_New Orleans, LA, USA, May 6-9, 2019. OpenRe-_
view.net.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham
[Neubig. 2020. How can we know what language](https://doi.org/10.1162/tacl_a_00324)
[models know? Transactions of the Association for](https://doi.org/10.1162/tacl_a_00324)
_Computational Linguistics, 8:423–438._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://openreview.net/forum?id=e2TBb5y0yFf)
[guage models are zero-shot reasoners. In Advances](https://openreview.net/forum?id=e2TBb5y0yFf)
_in Neural Information Processing Systems._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
Yuxuan Lai, Chen Zhang, Yansong Feng, Quzhe Huang,
[and Dongyan Zhao. 2021. Why machine reading](https://doi.org/10.18653/v1/2021.findings-acl.85)
[comprehension models learn shortcuts?](https://doi.org/10.18653/v1/2021.findings-acl.85) In Find_ings of the Association for Computational Linguis-_
_tics: ACL-IJCNLP 2021, pages 989–1002, Online._
Association for Computational Linguistics.
Andrew Lampinen, Ishita Dasgupta, Stephanie Chan,
Kory Mathewson, Mh Tessler, Antonia Creswell,
James McClelland, Jane Wang, and Felix Hill. 2022.
[Can language models learn from explanations in con-](https://aclanthology.org/2022.findings-emnlp.38)
[text? In Findings of the Association for Computa-](https://aclanthology.org/2022.findings-emnlp.38)
_tional Linguistics: EMNLP 2022. Association for_
Computational Linguistics.
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan,
Bing Tian Dai, Yan Wang, Dongxiang Zhang, and
Ee-Peng Lim. 2022. Mwptoolkit: an open-source
framework for deep learning-based math word problem solvers. In Proceedings of the AAAI Conference
_on Artificial Intelligence, volume 36, pages 13188–_
13190.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
[The power of scale for parameter-efficient prompt](https://doi.org/10.18653/v1/2021.emnlp-main.243)
[tuning. In Proceedings of the 2021 Conference on](https://doi.org/10.18653/v1/2021.emnlp-main.243)
_Empirical Methods in Natural Language Processing,_
pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
[Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:](https://doi.org/10.18653/v1/2021.acl-long.353)
[Optimizing continuous prompts for generation. In](https://doi.org/10.18653/v1/2021.acl-long.353)
_Proceedings of the 59th Annual Meeting of the Asso-_
_ciation for Computational Linguistics and the 11th_
_International Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pages 4582–_
4597, Online. Association for Computational Linguistics.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2022a. On the](https://arxiv.org/abs/2206.02336)
[advance of making language models better reasoners.](https://arxiv.org/abs/2206.02336)
_ArXiv preprint, abs/2206.02336._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2022b. On the](https://arxiv.org/abs/2206.02336)
[advance of making language models better reasoners.](https://arxiv.org/abs/2206.02336)
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
-----
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancouver,_
Canada. Association for Computational Linguistics.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan,
Lawrence Carin, and Weizhu Chen. 2022. [What](https://doi.org/10.18653/v1/2022.deelio-1.10)
[makes good in-context examples for GPT-3?](https://doi.org/10.18653/v1/2022.deelio-1.10) In
_Proceedings of Deep Learning Inside Out (DeeLIO_
_2022): The 3rd Workshop on Knowledge Extrac-_
_tion and Integration for Deep Learning Architectures,_
pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding,
[Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT](https://arxiv.org/abs/2103.10385)
[Understands, Too. ArXiv preprint, abs/2103.10385.](https://arxiv.org/abs/2103.10385)
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
[RoBERTa: A Robustly Optimized BERT Pretrain-](https://doi.org/10.48550/arXiv.1907.11692)
[ing Approach. arXiv.](https://doi.org/10.48550/arXiv.1907.11692)
[Ilya Loshchilov and Frank Hutter. 2019. Decoupled](https://openreview.net/forum?id=Bkg6RiCqY7)
[weight decay regularization. In 7th International](https://openreview.net/forum?id=Bkg6RiCqY7)
_Conference on Learning Representations, ICLR 2019,_
_New Orleans, LA, USA, May 6-9, 2019. OpenRe-_
view.net.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu,
Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark,
and Ashwin Kalyan. 2023. Dynamic prompt learning
via policy gradient for semi-structured mathematical
reasoning. In International Conference on Learning
_Representations (ICLR)._
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel,
[and Pontus Stenetorp. 2022. Fantastically ordered](https://doi.org/10.18653/v1/2022.acl-long.556)
[prompts and where to find them: Overcoming few-](https://doi.org/10.18653/v1/2022.acl-long.556)
[shot prompt order sensitivity. In Proceedings of the](https://doi.org/10.18653/v1/2022.acl-long.556)
_60th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
8086–8098, Dublin, Ireland. Association for Computational Linguistics.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and developing](https://doi.org/10.18653/v1/2020.acl-main.92)
[English math word problem solvers. In Proceedings](https://doi.org/10.18653/v1/2020.acl-main.92)
_of the 58th Annual Meeting of the Association for_
_Computational Linguistics, pages 975–984, Online._
Association for Computational Linguistics.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish
[Sabharwal. 2018. Can a suit of armor conduct elec-](https://doi.org/10.18653/v1/D18-1260)
[tricity? a new dataset for open book question an-](https://doi.org/10.18653/v1/D18-1260)
[swering. In Proceedings of the 2018 Conference on](https://doi.org/10.18653/v1/D18-1260)
_Empirical Methods in Natural Language Processing,_
pages 2381–2391, Brussels, Belgium. Association
for Computational Linguistics.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. [Training language models to follow in-](https://arxiv.org/abs/2203.02155)
[structions with human feedback.](https://arxiv.org/abs/2203.02155) _ArXiv preprint,_
abs/2203.02155.
Pantelis Papadopoulos, Stavros Demetriadis, Ioannis
Stamelos, and Ioannis Tsoukalas. 2010. [The ef-](https://doi.org/10.1108/17504971011075192)
[fect of prompting to students with different learn-](https://doi.org/10.1108/17504971011075192)
[ing styles. Multicultural Education and Technology](https://doi.org/10.1108/17504971011075192)
_Journal, 4:198–213._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin,
Qiang Fu, Yan Gao, Jian-Guang Lou, and Weizhu
[Chen. 2022. Reasoning like program executors. In](https://aclanthology.org/2022.emnlp-main.48)
_Proceedings of the 2022 Conference on Empirical_
_Methods in Natural Language Processing, pages 761–_
779. Association for Computational Linguistics.
Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit
[Bansal. 2022. Grips: Gradient-free, edit-based in-](https://arxiv.org/abs/2203.07281)
[struction search for prompting large language models.](https://arxiv.org/abs/2203.07281)
_ArXiv preprint, abs/2203.07281._
[Guanghui Qin and Jason Eisner. 2021. Learning how](https://doi.org/10.18653/v1/2021.naacl-main.410)
[to ask: Querying LMs with mixtures of soft prompts.](https://doi.org/10.18653/v1/2021.naacl-main.410)
In Proceedings of the 2021 Conference of the North
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
pages 5203–5212, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
[Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the](http://jmlr.org/papers/v21/20-074.html)
[limits of transfer learning with a unified text-to-text](http://jmlr.org/papers/v21/20-074.html)
[transformer. Journal of Machine Learning Research,](http://jmlr.org/papers/v21/20-074.html)
21(140):1–67.
S. Robertson. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends®
_in Information Retrieval, 3(4):333–389._
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric
[Wallace, and Sameer Singh. 2020. AutoPrompt: Elic-](https://doi.org/10.18653/v1/2020.emnlp-main.346)
[iting Knowledge from Language Models with Auto-](https://doi.org/10.18653/v1/2020.emnlp-main.346)
[matically Generated Prompts. In Proceedings of the](https://doi.org/10.18653/v1/2020.emnlp-main.346)
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 4222–4235,_
Online. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason
Chuang, Christopher D. Manning, Andrew Ng, and
[Christopher Potts. 2013. Recursive deep models for](https://aclanthology.org/D13-1170)
[semantic compositionality over a sentiment treebank.](https://aclanthology.org/D13-1170)
In Proceedings of the 2013 Conference on Empiri_cal Methods in Natural Language Processing, pages_
1631–1642, Seattle, Washington, USA. Association
for Computational Linguistics.
-----
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta, Adrià
[Garriga-Alonso, et al. 2022. Beyond the imitation](https://arxiv.org/abs/2206.04615)
[game: Quantifying and extrapolating the capabilities](https://arxiv.org/abs/2206.04615)
[of language models. ArXiv preprint, abs/2206.04615.](https://arxiv.org/abs/2206.04615)
Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi,
Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf,
[Luke Zettlemoyer, Noah A Smith, et al. 2022. Selec-](https://arxiv.org/abs/2209.01975)
[tive annotation makes language models better few-](https://arxiv.org/abs/2209.01975)
[shot learners. ArXiv preprint, abs/2209.01975.](https://arxiv.org/abs/2209.01975)
Saku Sugawara, Kentaro Inui, Satoshi Sekine, and
[Akiko Aizawa. 2018. What makes reading compre-](https://doi.org/10.18653/v1/D18-1453)
[hension questions easier?](https://doi.org/10.18653/v1/D18-1453) In Proceedings of the
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 4208–4219, Brussels,_
Belgium. Association for Computational Linguistics.
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing
[Huang, and Xipeng Qiu. 2022. Black-box tuning for](https://proceedings.mlr.press/v162/sun22e.html)
[language-model-as-a-service. In Proceedings of the](https://proceedings.mlr.press/v162/sun22e.html)
_39th International Conference on Machine Learning,_
volume 162 of Proceedings of Machine Learning
_Research, pages 20841–20855. PMLR._
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. CommonsenseQA: A ques-](https://doi.org/10.18653/v1/N19-1421)
[tion answering challenge targeting commonsense](https://doi.org/10.18653/v1/N19-1421)
[knowledge. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1421)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4149–4158, Minneapolis, Minnesota. Association for
Computational Linguistics.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gard[ner, and Sameer Singh. 2019. Universal adversarial](https://doi.org/10.18653/v1/D19-1221)
[triggers for attacking and analyzing NLP. In Proceed-](https://doi.org/10.18653/v1/D19-1221)
_ings of the 2019 Conference on Empirical Methods_
_in Natural Language Processing and the 9th Inter-_
_national Joint Conference on Natural Language Pro-_
_cessing (EMNLP-IJCNLP), pages 2153–2162, Hong_
Kong, China. Association for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc
Le, Ed Chi, and Denny Zhou. 2022. [Rationale-](https://arxiv.org/abs/2207.00747)
[augmented ensembles in language models. ArXiv](https://arxiv.org/abs/2207.00747)
_preprint, abs/2207.00747._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_International Conference on Learning Representa-_
_tions._
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy
[Liang, Jeff Dean, and William Fedus. 2022a. Emer-](https://openreview.net/forum?id=yzkSU5zdwD)
[gent abilities of large language models. Transactions](https://openreview.net/forum?id=yzkSU5zdwD)
_on Machine Learning Research. Survey Certifica-_
tion.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022b. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement
learning. Machine learning, 8(3):229–256.
Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi
Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao,
Pengcheng He, Michael Zeng, and Xuedong Huang.
[2022. Human parity on commonsenseqa: Augment-](https://doi.org/10.24963/ijcai.2022/383)
[ing self-attention with external attention. In Pro-](https://doi.org/10.24963/ijcai.2022/383)
_ceedings of the Thirty-First International Joint Con-_
_ference on Artificial Intelligence, IJCAI-22, pages_
2762–2768. Main Track.
Ke Yang, Charles Yu, Yi R Fung, Manling Li, and Heng
Ji. 2023. Adept: A debiasing prompt framework.
In Proceedings of the AAAI Conference on Artificial
_Intelligence, volume 37, pages 10780–10788._
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
[BARTScore: Evaluating generated text as text gener-](https://openreview.net/forum?id=5Ya8PbvpZ9)
[ation. In Advances in Neural Information Processing](https://openreview.net/forum?id=5Ya8PbvpZ9)
_Systems._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Good[man. 2022. STar: Bootstrapping reasoning with rea-](https://openreview.net/forum?id=_3ELRdg2sgI)
[soning. In Advances in Neural Information Process-](https://openreview.net/forum?id=_3ELRdg2sgI)
_ing Systems._
[Yiming Zhang, Shi Feng, and Chenhao Tan. 2022. Ac-](https://aclanthology.org/2022.emnlp-main.622)
[tive example selection for in-context learning. In Pro-](https://aclanthology.org/2022.emnlp-main.622)
_ceedings of the 2022 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 9134–_
9148.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
[Smola. 2023. Automatic chain of thought prompting](https://openreview.net/forum?id=5NTt8GFjUHkr)
[in large language models. In International Confer-](https://openreview.net/forum?id=5NTt8GFjUHkr)
_ence on Learning Representations._
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and
[Sameer Singh. 2021. Calibrate before use: Improv-](http://proceedings.mlr.press/v139/zhao21c.html)
[ing few-shot performance of language models. In](http://proceedings.mlr.press/v139/zhao21c.html)
_Proceedings of the 38th International Conference on_
_Machine Learning, ICML 2021, 18-24 July 2021, Vir-_
_tual Event, volume 139 of Proceedings of Machine_
_Learning Research, pages 12697–12706. PMLR._
Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021.
[Factual probing is [MASK]: Learning vs. learning](https://doi.org/10.18653/v1/2021.naacl-main.398)
[to recall. In Proceedings of the 2021 Conference](https://doi.org/10.18653/v1/2021.naacl-main.398)
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, pages 5017–5033, Online. Association_
for Computational Linguistics.
-----
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe
[Diao, and Tong Zhang. 2021. Efficient neural net-](https://proceedings.neurips.cc/paper/2021/file/80f2f15983422987ea30d77bb531be86-Paper.pdf)
[work training via forward and backward propagation](https://proceedings.neurips.cc/paper/2021/file/80f2f15983422987ea30d77bb531be86-Paper.pdf)
[sparsification. In Advances in Neural Information](https://proceedings.neurips.cc/paper/2021/file/80f2f15983422987ea30d77bb531be86-Paper.pdf)
_Processing Systems, volume 34, pages 15216–15229._
-----
**Algorithm 1 The black-box optimization procedures.**
**Require: Input batch S, Label batch Y, Parameter of categorical distribution p1, · · ·, pn, Prediction**
model G, Loss function L.
1: for k ≤ _I do_
2: Sample j1[(][k][)] Cat(p1), _, jn[(][k][)]_ Cat(pn)
_∼_ _· · ·_ _∼_
3: _T_ [(][k][)] = t[(]1[k][)] _t[(]n[k][)]_ = [j1[(][k][)][]][ · · · V][[][j]n[(][k][)][]]
_· · ·_ _V_
4: end for
5: avg = _I[1]_ _Ik=1_
8:6:7: L forpg ip[vr]i ≤i [=]nproj doIP−1 1 (pIk[L]i=1[(][G][∇]η[[][T][p][ (] gi[k][log][)]p[vr][, S]i [)][ P][]][, Y][(][t][ )]i[(][k][)])(L(G[T [(][k][)], S], Y ) −Lavg)
_←_ _CP_ _−_ _·_
9: end for
10: return p1, · · · pn
**A** **Algorithm Details**
In this section, we provide more details about the derivation of the equation (1) in Section 3.2. Given the
loss function:
ET [L(T )] = _L(T_ )P (T ) dT (4)
Z
We can estimate the gradient of pi by:
_∇piET [L(T_ )] = _L(T_ )∇piP (T ) dT
Z
= (T ) _[P]_ [(][T] [)]
_L_ _P_ (T ) _[∇][p][i][P]_ [(][T] [) d][T]
Z
= _P_ (T )L(T )∇pi log P (T ) dT
Z
=EP (T ) _L(T_ )∇pi log Πj[n]=1[P] [(][t][j][)]
(5)
=EP (T ) (T ) **_pi_** log P (tj)
L _∇_
_j=1_
X
=EP (T ) [L(T )∇pi log P (ti)]
The j-th component of ∇pi log P (ti) could be solved explicitly by:
_∇pi,j log P_ (ti) = ∇pi,j log pi,ji (6)
When j = ji, it is obvious that ∇pi,j log P (ti) =
1
_pi,ji_ [. When][ j][ ̸][=][ j][i][, equation (][6][) is calculated by:]
_∇pi,j log P_ (ti) =∇pi,j log(1 −
_pi,k)_
_k=1X,k≠_ _ji_
(7)
= − 1 _k=1,k=ji_ _[p][i,k]_
_−_ [P][N] _̸_
1
=
_−_ _pi,ji_
Therefore, we adopted a variance-reduced policy gradient estimator (VR-PGE) as described in Williams
(1992); Dong et al. (2020); Zhou et al. (2021) to mitigate the high-variance issue of PGE. The estimated
gradient is calculated by:
-----
(T [(][k][)])
_L_ _−_ _I[1]_
**_gp[vr]i_** [=]
_L(T_ [(][j][)])
_j=1_
X
_∇pi log P_ (ti) (8)
_I −_ 1
_k=1_
where T [(][k][)], k = 1, · · ·, I are sampled independently from P (T ).
Thus, the prompt token distribution pi can be updated by a projected stochastic gradient descent
algorithm:
**_pi_** proj (pi _η_ **_gp[vr]i_** [)][, i][ = 1][,][ · · ·][, n] (9)
_←_ _C_ _−_ _·_
where η is the learning rate of prompt learning, I is the sample size, and proj is the projection calculation.
_C_
The detailed training procedure of our VR-PGE algorithm is displayed in Algorithm 1.
**B** **Detailed Experimental Setting**
|DATASET|TASK TYPE|# EX.|# EVAL.|EVAL. SPLIT|TRANSFERRED|
|---|---|---|---|---|---|
|GSM8K (Cobbe et al., 2021) ASDiv (Miao et al., 2020) SVAMP (Patel et al., 2021) AQuA (Ling et al., 2017) SingleOp♣ CSQA♦(Talmor et al., 2019) StrategyQA♦(Geva et al., 2021) Letter (4) (Wei et al., 2022b) OpenBookQA (Mihaylov et al., 2018) e-SNLI♥(Camburu et al., 2018) SST-2♦(Socher et al., 2013)|Arithmetic Arithmetic Arithmetic Arithmetic Arithmetic Commonsense Commonsense Symbolic Question Answering Narural Language Inference Sentiment Analysis|8 8 8 4 8 7 6 4 4 6 6|1319 2096 1000 254 562 1221 1880 500 500 1000 872|Test Test Test Test Test Validation Validation Test (OOD) Test Test Validation|✗ ✓ ✓ ✗ ✓ ✗ ✗ ✗ ✗ ✗ ✗|
|---|---|---|---|---|---|
Table 5: The overall statistics of the datasets. # EX.: the number of few-shot chain-of-thought exemplars used to
prompt each task. # EVAL.: the number of evaluation data. EVAL. SPLIT: evaluation split. TRANSFERRED: a
checkmark means that the exemplars are generated and trained from other datasets and then applied to this task. [♣]:
SingleOp is a subset of MAWPS (Koncel-Kedziorski et al., 2016). [♦]: CSQA, StrategyQA, and SST-2 do not have
publicly available test set labels, so we simply follow the setting by Wei et al. (2022b) and Wang et al. (2022) to
evaluate the performance of the validation set. [♥]: Following Wang et al. (2022), we evaluate the first 1,000 data
points for a fair comparison.
**B.1** **Datasets and Evaluation Metrics**
Following Wei et al. (2022b), we conduct our experiments on eight reasoning tasks, including five math
word problem datasets: GSM8K, ASDiv, SVAMP, AQuA, and SingleOp; two commonsense reasoning
datasets: CommonsenseQA (CSQA) and StrategyQA, and one symbolic reasoning task: Last Letter
Concatenation (Letter (4)). We also generalize our method to non-reasoning tasks including one questionanswering task (OpenBookQA), one natural language inference task (e-SNLI), and one sentiment analysis
task (SST-2). The detailed statistics of the datasets are listed in Table 5.
To make a fair comparison with our baselines, we use the same number of exemplars as Wei et al. (2022b)
and Wang et al. (2022), as shown in Table 5. We keep the same setting for the evaluation split as well.
By default, we use the test split for evaluation, and for datasets that do not have publicly available test
set labels, we evaluate the validation set instead. In addition, for last letter concatenation, since the
model has already achieved almost 100% accuracy under the in-distribution setting, we only test the
out-of-distribution (OOD) setting, Letter (4), where prompts are 2-letters, and test examples are 4-letters.
The evaluation metric for all tasks is the exact match accuracy. First, we conduct pre-processing for
predictions to remove all the special symbols. For example, "$100,000" will be processed to "100000".
Then we check if it has the same value as the ground truth to calculate the exact match accuracy.
**B.2** **Baselines**
In our experiments, the following three methods serve as the main baselines:
_• chain-of-thought (Manual-CoT) (Wei et al., 2022b): standard chain-of-thought prompting which_
provides manual-written intermediate reasoning steps.
-----
_• self-consistency (SC) (Wang et al., 2023): an improved version of CoT. Instead of greedy decoding, it_
samples a diverse set of reasoning paths and chooses the most common answer.
_• Auto-CoT (Zhang et al., 2023): an automatic exemplars construction method that applies clustering_
techniques to sample questions and then generates chains.
Our experiments are conducted with two popular large language models:
_• GPT-3 (Brown et al., 2020): we test an advanced version of GPT-3, text-davinci-002, which_
corresponds to InstructGPT (Ouyang et al., 2022) model.
_• CodeX (Chen et al., 2021): we test code-davinci-002 which has better code representation ability._
We utilize the public APIs directly from OpenAI’s services[4]. In our main experiments, we test on both
text-davinci-002 and code-davinci-002 engines. However, in additional experiments, we mainly
test on code-davinci-002 for two reasons : (1) It is the most capable model available at the time we
were conducting our experiments, consistent with the observations in previous studies (Wei et al., 2022b;
Wang et al., 2023; Miao et al., 2020). (2) Compared to costly text-davinci-002, it is free of charge
because we are in the initial limited beta period during our experiments process.
**B.3** **Implementation**
**Augment and Prune: Following Wei et al. (2022b) and Wang et al. (2022), we keep the same number of**
exemplars (4-8) listed in Table 5. For main experiments, we augment and prune a pool of 100 high-quality
exemplars for all datasets. Firstly, pool construction questions are randomly sampled and then fed to
LLMs to construct model-generated answers with rationale chains. Given that some datasets only have the
test split, we use the pool result of GSM8K and transferred it to these datasets for further inference. Here
for arithmetic reasoning tasks, pool construction questions are randomly sampled from the training split
of GSM8K and AQuA. For CSQA and StrategyQA, exemplars are randomly sampled from the official
training split (Talmor et al., 2019) and question-only set from BIG-bench collaboration (Srivastava et al.,
2022). For letter concatenation, exemplars are randomly sampled from the 2-letter set. After the pool is
constructed, we use labels to prune the incorrect model-generated exemplars and retain 100 high-quality
exemplars.
**Select: The train set and validation set are also randomly sampled following the same rule as above**
except Letter (4) dataset. Since LLM has already reached almost 100% accuracy on the 2-letter set,
we choose to optimize the model based on the 3-letter OOD set. Thus the train set and validation set
are randomly sampled from the 3-letter set. Both the train and validation sets have a size of 100 to
reach a performance and cost trade-off. Then by utilizing the log probability returned by API calls,
we calculate the cross-entropy loss of the answer token. Finally, we optimize the latent variables by
AdamW (Loshchilov and Hutter, 2019) for 5 epochs with a learning rate of 1 × 10[−][3] and batch size
of 10. After optimization, as shown in Figure 2 inference stage, we choose the exemplars combination
(arg max pi) with the highest validation accuracy to be further evaluated on the test set. By default, we
query the language model once to get the answer. Under the self-consistency setting, similar to Wang et al.
(2023), we query the language model 40 times and choose the most consistent one as the final answer.
**Hyper-parameter Setting: Under few-shot setting, we set max_tokens = 256 for all augmentation,**
selection and inference. In addition, we set logprobs = 5 when training. Moreover, we set temperature =
0.7 for evaluation under self-consistency while temperature = 0 for all other cases. Under zero-shot setting
(§6.5), we keep the same hyper-parameters as Kojima et al. (2022) which first uses max_tokens = 128 for
generating the rationale chains and then uses max_tokens = 32 for generating the answers to construct the
pool. The hyper-parameters for selecting and evaluating are the same as the few-shot setting above.
**C** **More Experiment Results**
**C.1** **Experiments under ChatGPT**
To further verify the effectiveness of Automate-CoT, we further conduct the experiments on gpt-3.5-turbo.
Automate-CoT also shows consistent improvement on each task with 2.8% improvement on arithmetic
[4https://openai.com/api/](https://openai.com/api/)
-----
|METHOD|GSM8K ASDIV SVAMP AQUA SINGLEOP CSQA STQA LETTER (4) OBQA E-SNLI SST-2|AVG.|
|---|---|---|
|Manual-CoT + BM25 + PromptPG + K-Means + Automate-CoT|63.1 77.1 78.1 44.9 90.0 77.5 59.7 73.0 80.0 80.9 85.3 64.2 73.7 73.8 45.3 87.9 76.1 58.9 73.4 81.4 76.3 87.2 66.6 76.7 75.6 46.1 89.1 77.8 60.2 74.8 81.8 77.8 87.8 66.4 76.6 77.6 45.7 89.7 79.0 60.0 73.6 80.4 78.4 84.1 68.0↑4.9 81.7 ↑4.6 79.1↑1.0 46.9↑2.0 91.5↑1.5 80.5↑3.0 64.5↑4.8 76.2↑3.2 83.0↑3.0 81.4↑0.5 87.7↑2.4|73.6 72.6 74.0 73.8 76.4↑2.8|
|---|---|---|
Table 6: The overall performance of Automate-CoT under gpt-3.5-turbo and the comparison with retrieval-based
and clustering-based exemplars selection methods.
|METHOD|GSM8K ASDIV SVAMP AQUA SINGLEOP CSQA STQA LETTER (4)|
|---|---|
_text-davinci-002_
|Automate-CoT Automate-CoT(SC)|0.14 0.29 0.17 0.21 0.08 0.06 0.26 0.04 0.02 0.18 0.06 0.14 0.04 0.01 0.07 0.04|
|---|---|
_code-davinci-002_
|Automate-CoT Automate-CoT(SC)|0.19 0.78 0.33 0.09 0.05 0.17 0.95 0.02 0.09 0.09 0.13 0.01 0.06 0.03 0.09 0.08|
|---|---|
Table 7: The variance of the results in Table 1 over 3 runs. (SC) denotes under self-consistency setting.
reasoning, 3.9% improvement on commonsense reasoning, 3.2% on symbolic reasoning, and 2.8%
improvement overall as shown in Table 6.
**C.2** **Comparison with Retrieval Methods**
We also compare Automate-CoT with simple retrieval method BM25 (Robertson, 2009) and reinforcement
learning-based retrieval method PromptPG (Lu et al., 2023). We first implemented a BM25 selection
method and tested the performance on all the datasets. The results are shown in Table 6. It indicates that
retrieval-based methods can only select examples with similar meaning to the query question while the
diversity is overlooked. As shown in the table, the average performance of the BM25 retrieval-based
method even has a 1% degradation compared to Manual-CoT, and 3.8% lower than Automate-CoT. A
similar phenomenon is observed in Auto-CoT (Zhang et al., 2023), which indicates that with similar
questions being sampled for test questions, Retrieval-Q-CoT is negatively affected by misleading by
similarity.
In addition, we also compare with PromptPG (Lu et al., 2023), a dynamic example-selection baseline.
We adopt the same setting as ours for PomptPG, where the number of training examples is 100, the
size of the candidate pool is 100, and the backbone model is gpt-3.5-turbo. Further, we keep the same
prompt format as the original chain-of-thought and ours. The other settings we use are consistent with the
settings provided by their original code. The results are shown in Table 6. It indicates that Automate-CoT
outperforms PromptPG.
**C.3** **Comparison with Clustering Methods**
We further conduct additional experiments to compare Automate-CoT with methods selecting demonstration exemplars through clustering. We use K-Means as the clustering method and create k clusters
according to the number of exemplars specified in Table 5. Then we use these k representative exemplars
as the demonstration exemplars to prompt the language models. The results are shown in Table 6. It indicates that clustering-based methods can select examples with different semantic meanings and generally
perform better than Manual-CoT. However, the complexity and diversity are overlooked. For example,
most of the selected few-shot exemplars in GSM8K have around 3-4 hops where complex questions and
moderately difficult questions are overlooked. As a result, it generally performs worse than Automate-CoT
with a 2.6% gap.
**C.4** **Variance Report**
Since Automate-CoT’s results in Table 1 are averaged over three runs, we also report the variance in Table
7 here. It is observed that Automate-CoT achieves quite a low variance, especially compared to the large
variance of Manual-CoT as shown in § 2 Motivation.
-----
**D** **Additional Comparison with Fine-tuning**
Since our method uses a training-based pipeline, we also compare it with fine-tuning large language
models in terms of the number of parameters, training cost, estimated total training cost, and required
training set size. As shown in the study of Cobbe et al. (2021), fine-tuning on gpt-3 requires thousands
(e.g., 8000) of training examples to be effective while Automate-CoT only needs 100 training examples.
In addition, fine-tuning has a larger training and inference cost than Automate-CoT because it not only
requires a one-off fine-tuning cost but also has a higher unit price on subsequent usage.
For Automate-CoT, under the setting of gpt-3.5-turbo, the direct usage is $ 0.0015 / 1k tokens for input and
$ 0.002 / 1k tokens for output. With the training epochs of 3, a training set size of 100 and a validation set
size of 100, an input length of around 750 tokens and an average output length of 150 tokens, it takes about
(750/1000 · 0.0015 + 150/1000 · 0.002) · 100 · 10 · 3 + (750/1000 · 0.0015 + 150/1000 · 0.002) · 100 · 3=
$ 4.7. However, for fine-tuneing, given the training price of gpt-3.5-turbo is $ 0.008 / 1K tokens, the usage
of finetuned gpt-3.5-turbo is $ 0.0015 / 1K tokens for input and $ 0.002 / 1K tokens for output tokens.
Under the finetuning setting, suppose the average length of training examples is 300 tokens, and training a
whole training set of 8000 examples for 3 epochs takes about 300/1000 · 8000 · 3 · 0.008= $ 57.6, which
costs 12x more than Automate-CoT.
It is also worth noting that the further usage of finetuned gpt-3.5-turbo is $ 0.012 / 1K tokens for input
and $ 0.016 / 1K tokens for output while Automate-CoT remains the normal cost, which is 8x less cost
than fine-tuning.
|METHOD|# of Training Params|Cost|Est. Total Cost|Train Set Size|
|---|---|---|---|---|
|Fine-tuning|Unknown but should ≥175B|$0.008/1K tokens (Train) $0.012/1K tokens (Input Usage) $0.016/1K tokens (Output Usage)|$ 9.1 $ 12.7 $ 20.0 $ 34.3 $ 63.1|500 1000 2000 4000 8000|
|---|---|---|---|---|
|Automate-CoT|# of exemplars × Pool Size|$0.0015/1K tokens (Input Usage) $0.002/1K tokens (Output Usage)|$ 6.6|100|
|---|---|---|---|---|
Table 8: Comparison between Fine-tuning and Automate-CoT on GSM8K. The cost is copied from the OpenAI
official website. [5]
**E** **Additional Analysis**
We list some additional analysis here that cannot be put in the main section because of the page limit.
**E.1** **Effects of Several Tricks**
Previous studies have found some tricks like add "Let’s think step by step." before each rationale chain
and replace "Q:" with "Question:" (Fu et al., 2023; Kojima et al., 2022) can boost the performance on
top of Manual-CoT. Following their settings, we also test Automate-CoT with tricks on GSM8K as an
additional experiment. By adding tricks, Automate-CoT can further boost the accuracy to 69.8% (+2.2%)
under the normal setting and 83.0% (+0.6%) under the self-consistency setting, respectively.
**F** **Exact Match Number over Each Hop**
The exact match number over each hop of Figure 1 is reported in Table 9.
[5https://openai.com/pricing](https://openai.com/pricing)
-----
|HOP|Total Number|Manual-CoT|Automate-CoT|Complex-CoT|
|---|---|---|---|---|
|2 3 4 5 6 7 8 9 11|326 370 298 174 88 40 20 2 1|269 = 82.52% 265 = 71.62% 159 = 53.35% 83 = 47.70% 26 = 29.54% 10 = 25% 3 = 15% 0 0|272 = 83.44% 268 = 72.44% 196 = 65.77% 110 = 63.21% 37 = 42.05% 12 = 30% 4 = 20% 0 0|227 = 69.63% 232 = 62.70% 166 = 55.70% 98 = 56.32% 39 = 44.32% 16 = 40% 5 = 25% 0 1|
|---|---|---|---|---|
|Overall|1319|815 = 61.8%|899 = 68.2%|784 = 59.4%|
|---|---|---|---|---|
Table 9: The exact match number across the different numbers of hops on GSM8K. Bold represents the best among
each hop. The percentage accuracy is calculated for each hop.
**G** **Full Exemplars generated by Automate-CoT**
-----
|DATASET|Exemplars|Hops|
|---|---|---|
|GSM8K|Q: Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for Ralph to hit. He loads up the machine with 175 tennis balls to start with. Out of the first 100 balls, he manages to hit 2/5 of them. Of the next 75 tennis balls, he manages to hit 1/3 of them. Out of all the tennis balls, how many did Ralph not hit? A: Ralph started with 175 tennis balls. He hit 2/5 of the first 100 balls, so he hit 2/5 * 100 = 40 balls. He hit 1/3 of the next 75 balls, so he hit 1/3 * 75 = 25 balls. In total he hit 40 + 25 = 65 balls. He did not hit 175 - 65 = 110 balls. The answer is 110. Q: Hans booked a room in a hotel. The hotel has 10 floors with 10 identical rooms on each floor. Because of an accident, the last floor is unavailable for the guests. Considering there are no other guests, in how many different rooms could Hans be checked in? A: There are 10 floors with 10 rooms each. The last floor is unavailable. So there are 9 * 10 = 90 rooms available. The answer is 90. Q: James earns $20 an hour while working at his main job. He earns 20% less while working his second job. He works 30 hours at his main job and half that much at his second job. How much does he earn per week? A: James earns $20 an hour at his main job. He earns 20% less at his second job. That means he earns $20 - 20% of $20 = $20 - $4 = $16 an hour at his second job. He works 30 hours at his main job and half that much at his second job. That means he works 30 / 2 = 15 hours at his second job. In total he works 30 + 15 = 45 hours. He earns $20 an hour at his main job and $16 an hour at his second job. In total he earns $20 * 30 + $16 * 15 = $600 + $240 = $840. The answer is 840. Q: Leah earned $28 working odd jobs around the neighborhood. She spent a seventh of it on a milkshake and put half of the rest in her savings account. She left the remaining money in her wallet. Her dog got ahold of her wallet and shredded all the money inside but $1. How many dollars did Leah lose? A: Leah earned $28. She spent a seventh of it on a milkshake, so she spent $28 / 7 = $4 on the milkshake. She put half of the rest in her savings account, so she put $28 - $4 / 2 = $12 in her savings account. She left the remaining money in her wallet. She lost all the money in her wallet except $1. So she lost $28 - $4 - $12 - $1 =$11. The answer is 11. Q: Sam and Jeff had a skipping competition at recess. The competition was split into four rounds. Sam completed 1 more skip than Jeff in the first round. Jeff skipped 3 fewer times than Sam in the second round. Jeff skipped 4 more times than Sam in the third round. Jeff got tired and only completed half the number of skips as Sam in the last round. If Sam skipped 16 times in each round, what is the average number of skips per round completed by Jeff? A: Sam skipped 16 times in each round. In the first round, Jeff skipped 16 - 1 = 15 times. In the second round, Jeff skipped 16 - 3 = 13 times. In the third round, Jeff skipped 16 + 4 = 20 times. In the last round, Jeff skipped 16 / 2 = 8 times. The average number of skips per round completed by Jeff is (15 + 13 + 20 + 8) / 4 = 56 / 4 = 14. The answer is 14. Q: A Statistics student wants to find out the average daily allowance of the middle school students. According to his survey, 2/3 of the students receive an average of $6 allowance per day while the rest gets an average of $4 a day. If he surveyed 60 students, what is the total amount of money those 60 students get in a day? A: 2/3 of the students receive $6 a day. The rest receive $4 a day. The total number of students is 60. So, 2/3 of 60 students receive $6 a day. The rest of the students, which is 60 - 2/3 * 60 = 20 students, receive $4 a day. The total amount of money those 60 students get in a day is 2/3 * 60 * $6 + 20 * $4 = $240 + $80 = $320. The answer is 320. Q: Ali had a collection of seashells. He started with 180 seashells. He then gave away 40 seashells to his friends. He also gave 30 seashells to his brothers. If he sold half of the remaining seashells, how many seashells did he have left? A: He started with 180 seashells. He gave away 40 to his friends and 30 to his brothers. That means he gave away 40 + 30 = 70 seashells. He has 180 - 70 = 110 seashells left. He sold half of them, so he has 110 / 2 = 55 seashells left. The answer is 55. Q: Marvin and Tina were selling candy bars to help fund their class trip. The candy bars cost $2 each. Marvin sold 35 candy bars total. Tina sold three times the number of candy bars as Marvin. How much more money did Tina make for the class trip selling candy bars compared to Marvin? A: Marvin sold 35 candy bars. Tina sold 3 times as many, so she sold 35 * 3 = 105 candy bars. Each candy bar cost $2, so Marvin made 35 * $2 = $70. Tina made 105 * $2 = $210. The difference is $210 - $70 = $140. The answer is 140.|4-hop 3-hop 8-hop 6-hop 6-hop 6-hop 5-hop 5-hop|
|---|---|---|
Table 10: One example of selected model-generated exemplars with rationale chains of average hops = 5.4. This set
of exemplars is trained and selected on GSM8K and transferred to other arithmetic reasoning tasks.
-----
|DATASET|Exemplars|
|---|---|
|AQuA|Q: If Tim had lunch at $50 and he gave 20% tip, how much did he spend? Answer Choices: (a) $60.00 (b) $35.42 (c) $60.60 (d) $21.56 (e) $78.45 A: The tip is 20% of what he paid for lunch. tip = 20% of 50.00 = (20/100)*50.00 = = $10.00. Total spent 50.00 + 10.00 = $60.00. The answer is (a). Q: A person can walk at a constant rate of 8mph and can bike at a rate of 16mph. If he wants to travel 64 miles in 8 hours using bike and walking at their constant rates, how much distance would he require to walk? Answer Choices: (a) 20 (b) 30 (c) 48 (d) 64 (e) 72 A: Total distance = 64. Distance = Speed * Time. Walking speed = s1 = 8. Walking time = t1. Bike speed = s2 = 16. Time traveled in bike = t2. d1 + d2 = 64. s1t1 + s2t2 = 64. 8*t1 + 16*t2 = 64. t1 + 2*t2 = 8 —– (1). Given: t1 + t2 = 8 —– (2). (1) - (2) −−> t2 = 0 and t1 = 8 - 0 = 8. Walking distance = s1*t1 = 8*8 = 64. The answer is (d). Q: The output of a factory was increased by 10% to keep up with rising demand. To handle the holiday rush, this new output was increased by 20%. By approximately what percent would the output now have to be decreased in order to restore the original output? Answer Choices: (a) 20% (b) 24% (c) 30% (d) 32% (e) 79% A: Let initial output is O then after 10% increase it will be 1.1O and after 20% increase on this new output the latest output will be 1.1O * 1.20 = 1.32O. Now we have to decrease the output by some percentage so that the new output is same as the starting output (O). so, 1.32O * (1-x/100) = O. => x = 24.24%. So, answer will be B. The answer is (b). In a graduate physics course, 70 percent of the students are male and 30 percent of the students are married. If two-sevenths of the male students are married, what fraction of the male students is single? Answer Choices: (a) 2/7 (b) 1/3 (c) 1/2 (d) 2/3 (e) 5/7 A: let assume there are 100 students of which 70 are male and 30 are females. if 30 are married then 70 will be single. now its given that two-sevenths of the male students are married that means 2/7 of 70 = 20 males are married. if 30 is the total number of students who are married and out of that 20 are males then the remaining 10 will be females who are married. total females = 70. married males = 20. then single males = 70-20 = 50. we need to find the fraction of male students who are single i.e single male students / total male student = 50/70 = 5/7 [E]. The answer is (e).|
|---|---|
Table 11: One example of selected model-generated exemplars with rationale chains. Note that there are newlines
between the answer choices which are omitted in the table to save space.
-----
|DATASET|Exemplars|
|---|---|
|CSQA|Q: John was punching and punching at the wall but succeeded only in bloodying his knuckles. This was bad. He would be unable to hold his tools if he injured what? Answer Choices: (a) hostility (b) anger (c) nose (d) fists (e) hands A: The answer should be something that John would be unable to hold his tools if he injured it. Of the above choices, only hands make sense. So the answer is (e). Q: What is required to be good at playing tennis? Answer Choices: (a) tennis racket (b) skill (c) competition (d) tennis court (e) legs A: The answer must be something that is required to be good at playing tennis. Of the above choices, only skill is required to be good at playing tennis. So the answer is (b). Q: What is someone who gets angry after getting drunk likely to participate in? Answer Choices: (a) violence (b) falling down (c) vomiting (d) vomiting (e) lower standards A: The answer should be something that someone who gets angry after getting drunk is likely to do. Of the above choices, the closest answer is violence. So the answer is (a). Q: James felt himself losing consciousness. He was frightened. He didn’t want this. He was scared of what? Answer Choices: (a) death (b) coma (c) peace (d) forgetfulness (e) loss of life A: The answer should be something that James would be scared of. Of the above choices, the closest thing is death. So the answer is (a). Q: Sally lost her kite because she wasn’t careful. She thought that there was more string on the spool, but it slipped out of her fingers when she reached the what?. Answer Choices: (a) child’s hand (b) the last straw (c) hobby shop (d) end of line (e) toy store A: The answer should be the end of the string. Of the above choices, the end of the line makes the most sense. So the answer is (d). Q: The sensor would just the distance then set off an alarm, the installation expert explained it was called a what kind of sensor? Answer Choices: (a) near (b) closeness (c) here (d) proximity (e) this A: The answer must be a type of sensor that would set off an alarm when something is near. Of the above choices, only proximity sensor makes sense. So the answer is (d). Q: The student needed to get some new pencils, where did he go? Answer Choices: (a) store (b) classroom (c) backpack (d) cabinet (e) teacher A: The answer should be the place where pencils are sold. Of the above choices, only store makes sense. So the answer is (a).|
|---|---|
Table 12: One example of selected model-generated exemplars with rationale chains. This set of exemplars is
trained and selected on CommonsenseQA. Note that there are newlines between the answer choices which are
omitted in the table to save space.
-----
|DATASET|Exemplars|
|---|---|
|StrategyQA|Q: Did the death of Helen Palmer have a significant effect on Dr. Seuss? A: Helen Palmer was Dr. Seuss’s wife. The death of a spouse can have a significant effect on a person. Thus, the death of Helen Palmer could have a significant effect on Dr. Seuss. So the answer is yes. Q: Would a moose hypothetically be too much for a minotaur to devour whole? A: A moose is about 1,000 pounds. A minotaur is about 1,000 pounds. Thus, a moose would be too much for a minotaur to devour whole. So the answer is yes. Q: Yes or no: Is H’s most common two letter pair partner a freebie in Wheel of Fortune bonus round? A: The most common two letter pair partner for H is E. E is a freebie in the Wheel of Fortune bonus round. Thus, H’s most common two letter pair partner is a freebie in the Wheel of Fortune bonus round. So the answer is yes. Q: Yes or no: Would bald eagle deliver an urgent message before B-52? A: The top speed of a bald eagle is about 80 mph. The top speed of a B-52 is about 650 mph. Thus, a B-52 would be faster than a bald eagle. So the answer is no. Q: Yes or no: Does an individual oceanographer study many sciences? A: Oceanography is the study of the ocean. The ocean is a complex system. Thus, oceanography is a multidisciplinary science. So the answer is yes. Q: Yes or no: Could Steven Spielberg send emails as a child? A: Steven Spielberg was born in 1946. Email was invented in 1971. Thus, Steven Spielberg could not send emails as a child. So the answer is no.|
|---|---|
Table 13: One example of selected model-generated exemplars with rationale chains. This set of exemplars is
trained and selected on StrategyQA. Note that there are newlines between the answer choices which are omitted in
the table to save space.
|DATASET|Exemplars|
|---|---|
|Letter (4)|Q: Take the last letters of the words in "Delmer Andersen" and concatenate them. A: The last letter of "Delmer" is "r". The last letter of "Andersen" is "n". Concatenating them is "rn". So the answer is rn. Q: Take the last letters of the words in "Jason Gonzalez" and concatenate them. A: The last letter of "Jason" is "n". The last letter of "Gonzalez" is "z". Concatenating them is "nz". So the answer is nz. Q: Take the last letters of the words in "Ulysses Brown" and concatenate them. A: The last letter of "Ulysses" is "s". The last letter of "Brown" is "n". Concatenating them is "sn". So the answer is sn. Q: Take the last letters of the words in "Frank Ortiz" and concatenate them. A: The last letter of "Frank" is "k". The last letter of "Ortiz" is "z". Concatenating them is "kz". So the answer is kz.|
|---|---|
Table 14: One example of selected model-generated exemplars with rationale chains. This set of exemplars is
trained on Letter (3) and selected on Letter (2).
-----
|DATASET|Exemplars|
|---|---|
|OpenBookQA|Q: As a car approaches you in the night (a) the headlights become more intense (b) the headlights recede into the dark (c) the headlights remain at a constant (d) the headlights turn off A: The headlights become more intense as the car approaches. The answer is (a). Q: Many animals that give birth to live young have (a) gills (b) scales (c) exoskeletons (d) legs A: Animals that give birth to live young are mammals. Mammals have hair and give milk to their young. The answer is (a). Q: A person is lost in a dense forest, and needs to find their home. They know their home is to the south, and they are headed north. They can find home by using a (a) northern-directing device (b) northern light reader (c) northeastern winds (d) north central credit A: he person needs to find south, so they need a northern-directing device. The answer is (a). Q: When the weather changes as it does from Christmas to Easter, (a) the air may chill (b) the ground may freeze (c) the plants may die (d) the ground may warm A: The weather changes from Christmas to Easter, the ground may warm. The answer is (d).|
|---|---|
Table 15: One example of selected exemplars with rationale chains. This set of exemplars is trained and selected on
OpenBookQA.
-----
|DATASET|Exemplars|
|---|---|
|e-SNLI|Premise: "A woman wearing all white and eating, walks next to a man holding a briefcase." Based on this premise,can we conclude the hypothesis "A female is next to a man." is true? OPTIONS: - yes - no - it is not possible to tell A: A woman is another word for a female. The answer is yes. Premise: "A couple play in the tide with their young son." Based on this premise,can we conclude the hypothesis "The family is sitting down for dinner." is true? OPTIONS: - yes - no - it is not possible to tell A: The family cannot simultaneously be playing in the tide and sitting down to dinner The answer is no. Premise: "A man, woman, and child enjoying themselves on a beach." Based on this premise,can we conclude the hypothesis "A child with mom and dad, on summer vacation at the beach." is true? OPTIONS: - yes - no - it is not possible to tell A: A man and a woman with a child are not necessarily its mom and dad. A man, woman, and child at a beach are not necessarily on summer vacation. The answer is it is not possible to tell. Premise: "Two women, holding food carryout containers, hug." Based on this premise,can we conclude the hypothesis "The two sisters saw each other across the crowded diner and shared a hug, both clutching their doggie bags." is true? OPTIONS: - yes - no - it is not possible to tell A: Just because two women are hugging does not mean they are sisters or they are in a crowded diner. The answer is it is not possible to tell. Premise: "A few people in a restaurant setting, one of them is drinking orange juice." Based on this premise,can we conclude the hypothesis "The people are eating omelettes." is true? OPTIONS: - yes - no - it is not possible to tell A: We only know that someone is drinking orange juice, we do not know that they are eating omelettes The answer is it is not possible to tell. Premise: "A man and a woman cross the street in front of a pizza and gyro restaurant." Based on this premise,can we conclude the hypothesis "Near a couple of restaurants, two people walk across the street." is true? OPTIONS: - yes - no - it is not possible to tell A: man and woman are people. The answer is yes.|
|---|---|
Table 16: One example of selected exemplars with rationale chains. This set of exemplars is trained and selected on
e-SNLI.
-----
|DATASET|Exemplars|
|---|---|
|SST-2|What is the sentiment of the following sentence? "more than another " best man " clone by weaving a theme throughout this funny film" A: "weaving a theme throughout this funny film" indicates positive sentiment. The answer is positive. What is the sentiment of the following sentence? "that ’s far too tragic to merit such superficial treatment" A: "far too tragic" and "to merit such superfciial treatment" both mean negative sentiments. The answer is negative. What is the sentiment of the following sentence? "are more deeply thought through than in most ’ right-thinking ’ films" A: "more deeply thought through" indicates positive sentiment. The answer is positive. What is the sentiment of the following sentence? "excruciatingly unfunny and pitifully unromantic" A: "excruciatingly unfunny" and "pitifully unromantic" both mean negative sentiments. The answer is negative.. What is the sentiment of the following sentence? "with his usual intelligence and subtlety" A: "with his usual intelligence and subtlety" indicates positive sentiment. The answer is positive. What is the sentiment of the following sentence? "goes to absurd lengths" A: "goes to absurd lengths" is a negative sentiment. The answer is negative.|
|---|---|
Table 17: One example of selected exemplars with rationale chains. This set of exemplars is trained and selected on
SST-2.
-----
| [
"KaShun, Shum",
"Tong, Zhang",
"Shizhe, Diao"
] | 2024-02-27T00:00:00 | EMNLP 2023 Findings | false | 99 | 4 | null | http://arxiv.org/abs/2302.12822 | https://arxiv.org/abs/2302.12822 | https://www.semanticscholar.org/paper/1358f90705b05cdb20ebe6799b02196205e7e9f0 |
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs | The formalization of existing mathematical proofs is a notoriously difficult process. Despite decades of research on automation and proof assistants, writing formal proofs remains arduous and only accessible to a few experts. While previous studies to automate formalization focused on powerful search algorithms, no attempts were made to take advantage of available informal proofs. In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems. We investigate two relevant setups where informal proofs are either written by humans or generated by a language model. Our experiments and ablation studies show that large language models are able to produce well-structured formal sketches that follow the same reasoning steps as the informal proofs. Guiding an automated prover with these sketches enhances its performance from $20.9\%$ to $39.3\%$ on a collection of mathematical competition problems. | Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems, is introduced. | ## DRAFT, SKETCH, AND PROVE: GUIDING FORMAL THEOREM PROVERS WITH INFORMAL PROOFS
**Albert Q. Jiang[1][,][2][,][†]** **Sean Welleck[3][,][4][,][†]** **Jin Peng Zhou[5][,][6][,][†]**
**Wenda Li[2]** **Jiacheng Liu[3]** **Mateja Jamnik[2]**
**Timoth´ee Lacroix[1]** **Yuhuai Wu[5][,][7][,][‡]** **Guillaume Lample[1][,][‡]**
1Meta AI 2University of Cambridge 3University of Washington 4Allen Institute for AI
5Google Research 6Cornell University 7Stanford University
ABSTRACT
The formalization of existing mathematical proofs is a notoriously difficult process.
Despite decades of research on automation and proof assistants, writing formal
proofs remains arduous and only accessible to a few experts. While previous studies
to automate formalization focused on powerful search algorithms, no attempts were
made to take advantage of available informal proofs. In this work, we introduce
_Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof_
sketches, and uses the sketches to guide an automated prover by directing its search
to easier sub-problems. We investigate two relevant setups where informal proofs
are either written by humans or generated by a language model. Our experiments
and ablation studies show that large language models are able to produce wellstructured formal sketches that follow the same reasoning steps as the informal
proofs. Guiding an automated prover with these sketches enhances its performance
from 20.9% to 39.3% on a collection of mathematical competition problems.
Figure 1: Draft, Sketch, and Prove. Starting with an informal statement, our framework yields a formal proof
through a three-stage process: drafting informal proofs, mapping them into formal sketches, and proving the
remaining conjectures. Concretely, an informal statement is a mathematical problem described in a mixture
of natural and mathematical languages (e.g., formulae in LATE[X). Then, we use a large language model to]
autoformalize each informal proof into a formal sketch, which is a skeleton of the formal proof with open
conjectures left unproven (indicated by the <proof> blocks). The formal sketch mirrors the structure of the
informal proof. Finally, the open conjectures/gaps inside each formal sketch are proved by an off-the-shelf prover.
_†Equal contributions as leading authors. Correspondence to: [email protected]._
_‡Equal contributions as senior authors._
-----
1 INTRODUCTION
Formal proof automation is a challenging task that has been the focus of increased attention in recent
years (Bansal et al., 2019b; Polu & Sutskever, 2020; Lample et al., 2022; Jiang et al., 2022; Wu
et al., 2022). However, deep learning approaches have not been as successful as in other domains,
mainly because of the scarcity of formal data. Indeed, formalizing proofs is notoriously difficult and
only accessible to a handful of experts, which makes large annotation endeavors unrealistic (Wiedijk,
2008). The largest formal proof corpus is written in Isabelle (Paulson, 1994), and amounts to less
than 0.6 GB in size, orders of magnitude smaller than datasets commonly used in vision (Deng et al.,
2009) or natural language processing (Brown et al., 2020). To address the scarcity of formal proofs,
previous studies have proposed to use synthetic data (Wu et al., 2021), self-supervision (Polu &
Sutskever, 2020; Han et al., 2022), or reinforcement learning (Bansal et al., 2019a; Polu et al., 2022)
to synthesize additional formal training data. Although these methods alleviate the data insufficiency
to some degree, none are able to capitalize on the bulk of human-written mathematical proofs.
Unlike formal mathematics, informal mathematical data is abundant and widely available. Recently,
large language models trained on informal mathematical data showcased impressive quantitative
reasoning abilities (Lewkowycz et al., 2022; Welleck et al., 2022). However, they often generate
erroneous proofs and it is challenging to detect the faulty reasoning in these proofs automatically. Our
work devises a novel approach called Draft, Sketch, and Prove (DSP) to translate informal mathematical proofs into formal ones and thus enjoy both the logical rigor provided by formal systems and the
wealth of informal data. We give a schematic diagram of the DSP method in Figure 1 and describe it
in Section 3. Recent work (Wu et al., 2022) demonstrates the feasibility of automatically translating
informal statements into formal ones with large language models. DSP goes beyond and leverages
large language models to generate formal proof sketches (Wiedijk, 2003) from informal proofs. Proof
sketches consist of high-level reasoning steps that can be interpreted by formal systems such as
interactive theorem provers. They differ from complete formal proofs in that they contain sequences
of intermediate conjectures without justification. An example of informal proof with its corresponding
formal proof sketch is provided in Figure 2. In the last step of DSP, we elaborate the formal proof
sketch into a full formal proof using an automated prover to prove all intermediate conjectures.
We perform experiments to generate formal proofs of problems from the miniF2F dataset (Zheng
et al., 2022) and show that a large portion of theorems can be proved automatically with this method.
We investigate two settings where the informal proofs are either written by humans or drafted by
a large language model trained on mathematical text. These two settings correspond to situations
frequently occurring during the formalization of existing theories, where informal proofs are usually
available, but sometimes left as exercises to the reader or missing due to space limits in the margin.
**Contributions:**
- We introduce a novel approach to leverage informal proofs to guide automated provers with
formal proof sketches.
- To evaluate our approach, we build a dataset of manually curated informal statements and
informal proofs aligned with formal statements in the miniF2F dataset (Zheng et al., 2022).
- We increase the proportion of problems solved by an automated prover on miniF2F from
20.9% to 38.9% given language-model-generated informal proofs, and up to 39.3% when
proofs are written by humans.
- Through three ablation studies, we demonstrate the performance benefit of drafting informal
proofs, annotating sketches with informal segments, and using automated provers to close
open conjectures for the autoformalization of proofs.
2 BACKGROUND AND RELATED WORK
**Interactive theorem proving** Modern verification systems for mathematics are centered around
_interactive theorem provers (ITPs), such as Isabelle (Paulson, 1994), Lean (Moura et al., 2015),_
Coq (Barras et al., 1997), or Metamath (Megill & Wheeler, 2019). ITPs embed the mathematical
definitions and theorems onto a solid logical foundation (e.g., Higher-Order Logic, Dependent Type
Theory) implemented by their kernels. Every theorem must be checked by the kernel to be recognized
-----
by the ITP. To be proved formally, a theorem is first stated in the ITP’s programming language, and
iteratively simplified into simpler objectives (or subgoals), until it can be reduced to already proven
facts. In this paper, we will refer to proofs verified by a formal theorem prover as formal proofs, and
proofs written in “standard” mathematics (e.g. in LATE[X) as][ informal proofs][.]
**Machine learning for formal proof synthesis** Several approaches propose to combine machine
learning with modern interactive theorem provers (Yang & Deng, 2019; Gauthier et al., 2021), and
build upon the recent success of language models (Polu & Sutskever, 2020; Han et al., 2022; Polu et al.,
2022; Jiang et al., 2022; Lample et al., 2022). These methods typically rely on sequence-to-sequence
models (Sutskever et al., 2014) to generate the next step of a proof given the current proof state and
perform search over the generated subgoals using powerful search methods such as MCTS (Silver
et al., 2018). Because search is computationally expensive, these language models are relatively small
(with fewer than 1 billion parameters). Our method contrasts with these approaches in that we use a
significantly reduced number of calls to the models, but also much larger language models (with up
to 175 billion parameters) that showcase outstanding few-shot learning abilities (Brown et al., 2020).
**Machine learning for informal reasoning** Language models have also been used in the context
of purely informal mathematics (Lample & Charton, 2020; Hendrycks et al., 2021; Welleck et al.,
2021; Drori et al., 2022; Welleck et al., 2022). Nevertheless, Lewkowycz et al. (2022) note that for
quantitative question answering, models are prone to generate false positives: the model guesses
the right answer while providing an incorrect proof. These errors are hard to spot without human
inspection. Worryingly, the frequency of false positives increases with the difficulty of the problem.
Our method builds on these findings and translates informal proofs into formal proofs. Since ITPs
are logically grounded, once a formal proof is checked by them, we are guaranteed its correctness.
**Autoformalization** In a position paper, Szegedy (2020) argued for attaining formal mathematical
data from informal sources with neural networks. Wang et al. (2020) performed preliminary experiments where the evaluation was limited to text-level similarities on synthetic datasets. Recently, Wu
et al. (2022) found that large language models (Chen et al., 2021; Chowdhery et al., 2022) are capable
of few-shot statement autoformalization. Namely, a small number of examples are enough for them
to learn to perform informal-to-formal translation of statements. In this paper, we investigate whether
these findings can generalize to proof autoformalization, i.e., whether large language models can be
used to translate informal proofs into formal ones.
3 METHOD
In this section, we describe our Draft, Sketch, and Prove (DSP) method for formal proof automation,
which leverages informal proofs to guide automated formal theorem provers with proof sketches. We
assume that each problem comes with an informal statement and a formal statement describing the
problem. Our pipeline consists of three stages (depicted in Figure 1), which we present below.
3.1 DRAFTING INFORMAL PROOFS
The initial phase of the DSP method consists in finding informal proofs for a problem according to
its description in natural mathematical language (possibly with LATE[X). The resulting informal proof]
is seen as a draft for the subsequent phases. In mathematical textbooks, proofs of theorems are in
general provided, but are sometimes missing or incomplete. Therefore, we consider two settings
corresponding to the presence or absence of the informal proofs. In the first, we assume that a
“ground-truth” informal proof (i.e., one written by a human) is available, which is the typical scenario
in the practice of formalizing existing mathematical theories. In the second setting, we make a more
general assumption that the ground-truth informal proof is not given, and draft proof candidates
with a large language model trained on informal mathematical data. The language model removes
the dependence on human proofs and can produce multiple alternative solutions for every problem.
Although there is no easy way to automatically verify the correctness of these proofs, the informal
proof only needs to be useful for producing a good formal proof sketch in the next stage.
-----
**Informal Statement: Show that for any real number a, 10a ≤** 28a[2] + 1.
**Informal Proof:**
It suffices to show 0 28a[2] 10a + 1. First, consider completing the square for 28a[2] 10a and
observe that _a −_ 285 _≤2 = a2_ _−−1028_ _[a][+(5][/][28)][2][. Since][ 0][ ≤]_ _a −_ 285 2, we get 0 ≤ _a2_ _−_ 1028 _[a]−[+(5][/][28)][2][.]_
Multiplying by 28 and simplifying gives 0 28a[2] 10a + (25/28). Since 25/28 < 1, the result
_≤_ _−_
follows.
**Formal Proof Sketch:**
**theorem algebra_binomnegdiscrineq_10alt28asqp1:**
**fixes a :: real**
**shows "10 * a ≤** 28 * aˆ2 + 1"
**proof - (* it suffices to show 0 <= 28aˆ2 - 10a + 1 *)**
**have c0: "0 ≤** 28aˆ2 - 10a + 1"
**proof - (* observe that (a - (5/28))ˆ2 = aˆ2 - (10/28)a + (5/28)ˆ2 *)**
**have c1: "(a - (5/28))ˆ2 = aˆ2 - 10/28a + (5/28)ˆ2" < · · · >**
**(* we get 0 <= aˆ2 - (10/28)a + (5/28)ˆ2 *)**
**have c2: "0 ≤** aˆ2 - 10/28a + (5/28)ˆ2" using c1 < · · · >
**(* Multiplying by 28 and simplifying gives 0 <= 28aˆ2 - 10a + (25/28) *)**
**have c3: "0 ≤** 28aˆ2 - 10a + 28((5/28)ˆ2)" using c2 < · · · >
**have c4: "0 ≤** 28aˆ2 - 10a + 28((5/28)*(5/28))" using c3 < · · · >
**have c5: "0 ≤** 28aˆ2 - 10a + (25/28)" using c4 < · · · >
**(* Since 25/28 < 1, the result follows. *)**
**show ?thesis using c5 < · · · >**
**qed**
**show ?thesis < · · · >**
**qed**
Figure 2: A proof sketch in Isabelle. The problem “Show that for any real number a, 10a ≤ 28a[2] + 1”
is given with an informal proof and an associated formal proof sketch. The sketch first rewrites the original
statement (c0), which is proved through 5 intermediary conjectures (c1..c5). We use a special token (< · · · >)
to indicate that the conjecture is “open” and should be tackled by an automated prover later. To facilitate the
alignment between the informal and formal languages, we annotate the formal proof sketch examples with
informal proof segments (shown in red), which are immediately followed by their formal counterparts.
3.2 MAPPING INFORMAL PROOFS INTO FORMAL SKETCHES
A formal proof sketch encodes the structure of a solution and leaves out low-level details (Wiedijk,
2003). Intuitively, it is a partial proof that outlines high-level conjecture statements. A concrete
example of a proof sketch is shown in Figure 2. Although informal proofs often leave aside low-level
details, (e.g., by stating their triviality), these details cannot be discharged in a formal proof, making
straightforward informal-to-formal proof translation difficult. Instead, we propose to map informal
proofs to formal proof sketches that share the same high-level structures. The low-level details
missing from a proof sketch can later be filled by an automated prover. Since large informal-formal
parallel corpora do not exist, standard machine translation methods are unsuitable for this task.
Rather, we use the few-shot learning abilities of a large language model. Specifically, we prompt the
model with a few example pairs containing informal proofs and their corresponding formal sketches,
followed by an informal proof yet to be translated. We then let the model generate the subsequent
tokens to obtain the desired formal sketch. We refer to this model as an autoformalizer.
3.3 PROVING OPEN CONJECTURES IN THE SKETCHES
As the last part of the process, we execute off-the-shelf automated provers to fill in the missing
details in proof sketches, where “automated provers” refers to systems capable of producing formally
verifiable proofs. Our framework is agnostic to the specific choice of the automated prover: it can be
symbolic provers such as heuristic proof automation tools, neural-network-based provers, or hybrid
approaches. If the automated prover successfully closes all the gaps in the proof sketch, it returns the
final formal proof which can be checked against the problem’s specification. If the automated prover
fails (e.g., it exceeds the allocated time limit), we consider the evaluation to be unsuccessful.
-----
4 EXPERIMENTS
4.1 DATASET AND EVALUATION
We evaluate our method on the miniF2F dataset (Zheng et al., 2022). The dataset contains the formal
_statements of 488 problems from high-school mathematical competitions, written in three formal_
languages: Lean, HOL-Light, and Isabelle. They are split into a valid set and a test set, composed
of 244 problems each. In this work, we choose to experiment with Isabelle for three reasons: (1)
Isabelle’s proof corpus is one of the largest among interactive theorem provers, conducive to the
language models’ mastery of its syntax; (2) Isabelle supports the declarative proof style (detailed
discussion in Appendix A), enabling formal proof sketches (Wiedijk, 2003) which are central to our
method; (3) although automated proving tools are available in other interactive theorem provers, none
are as developed and effective as Sledgehammer (Paulson, 2010) in Isabelle for proving conjectures.
The miniF2F dataset is comprised of problems from three source categories: (1) 260 problems
sampled from the MATH dataset (Hendrycks et al., 2021); (2) 160 problems from actual high-school
mathematical competitions (AMC, AIME, and IMO); (3) 68 crafted problems at the same difficulty
level as (2). We employ three methods to obtain informal statements and proofs from these sources.
For source (1), we access the informal statements and proofs from the MATH dataset; for (2), we
retrieve their informal statements and proofs from the AOPS website [1]; and for (3), we manually write
down their informal statements and proofs. Thus we gather a parallel set of 488 informal statements,
informal proofs, and formal statements. This dataset provides the informal statements and proofs for
our experiment in the human-as-informal-proof-writer setting and will be made available shortly.
Our task is to generate formal proofs for problems as they are formally stated in miniF2F. We consider
a proof valid if and only if it (a) does not contain “cheating” keywords (sorry and oops) that exit
a proof without completing it, and (b) Isabelle is able to verify the corresponding formal statement
with the proof. We use the Portal-to-ISAbelle API by Jiang et al. (2021) to interact with Isabelle.
4.2 BASELINES
**Sledgehammer** As a baseline, we attempt to prove the formal statement directly with Sledgehammer, a popular proof automation tool in Isabelle. We use the default Sledgehammer configuration in
Isabelle2021, including a 120-second timeout and the five automated theorem provers (Z3, CVC4,
SPASS, Vampire, E). Appendix B gives a more thorough introduction to Sledgehammer.
**Sledgehammer + heuristics** Occasionally, Sledgehammer may fail without trying simple yet effective tactics. As a second, stronger baseline, we create an automated prover that tries 11 common tactics
(auto, simp, blast, fastforce, force, eval, presburger, sos, arith, linarith,
auto simp: field simps) for high-school level algebra and number theory problems. If every
attempted tactic fails, or times out after 10 seconds, it falls back to Sledgehammer.
**Language models for proof search** Finally, we include baselines which are representative of
state-of-the-art neural theorem proving in Isabelle, specifically Thor (Jiang et al., 2022) and Thor
with expert iteration on autoformalized data (Wu et al., 2022). The methods GPT-f with expert
iteration (Polu et al., 2022), and HyperTree Proof Search (HTPS) (Lample et al., 2022) can solve
36.6% and 41.0% of the problems on miniF2F-test. However, they rely on the Lean theorem
prover instead of Isabelle, which greatly influences the performance due to the different tactics and
automation, and are not directly comparable to our method.
4.3 EXPERIMENTAL SETUP
**Drafting** When informal proofs are generated, we condition a large language model on informal
statements to sample 100 informal proofs per problem. Specifically, we use the Codex code-davinci002 model (Chen et al., 2021) through the OpenAI API, and the 8B, 62B, and 540B versions of the
Minerva model from Lewkowycz et al. (2022). We use greedy decoding for Codex, and nucleus
sampling (Holtzman et al., 2019) with temperature T = 0.6 and top p = 0.95 for Minerva models.
[1https://artofproblemsolving.com/community](https://artofproblemsolving.com/community)
-----
Table 1: Proving success rates on the miniF2F dataset with Isabelle In the table are the success rates of four
baselines, the DSP method with human and language model informal proofs, as well as three ablation studies,
on the validation and the test sets of miniF2F. The highest success rates on each set are highlighted in bold. The
performance difference between ablation studies and DSP with human informal proofs are enclosed in brackets.
Success rate miniF2F-valid miniF2F-test
_Baselines_
Sledgehammer 9.9% 10.4%
Sledgehammer + heuristics 18.0% 20.9%
Thor (Jiang et al., 2022) 28.3% 29.9%
Thor + expert iteration (Wu et al., 2022) 37.3% 35.2%
_Draft, Sketch, and Prove_
Human informal proof 42.6% **39.3%**
Codex informal proof 40.6% 35.3%
8B Minerva informal proof 40.6% 35.3%
62B Minerva informal proof **43.9%** 37.7%
540B Minerva informal proof 42.6% 38.9%
_Ablations (with human informal statements and proofs)_
**– In-line comments** 37.7% (−4.9%) 36.5% (−2.8%)
**– Informal proofs** 38.9% (−3.7%) 34.0% (−5.3%)
**– Automated provers** 32.8% (−9.8%) 30.3% (−9.0%)
**Sketching** For sketching, we manually prepare 20 autoformalization examples of the format
(informal statement, informal proof, formal statement, formal sketch), to form a pool of high-quality
demonstrations. Of these 20 examples, 10 are of the algebra type and 10 are of the number theory
type. All examples are from the validation set of the miniF2F dataset and can be found in the
supplementary materials. The sketches contain in-line comments as in Figure 2. If the name of the
problem gives away its type (algebra or number theory), we only use examples of the corresponding
type. We also ensure that the sampled few-shot examples do not contain the problem being solved.
The prompt is composed of 3 uniformly randomly sampled example from the pool and the current
problem’s (informal statement, informal proof, formal statement). We use this prompt to query the
same Codex model to get the desired proof sketches. We use greedy decoding and a maximum of
2048 tokens in the generated sequence. For all the experiments, unless stated otherwise, we control
the total number of queries made to Codex per problem to be 100. This means 100 queries per human
informal solution and one query per language-model-generated solution.
**Proving** To prove the conjectures left open by the formal sketch, we use the Sledgehammer +
heuristics automated prover described in Subsection 4.2. We execute the automated prover on every
open conjecture in the sketch to synthesize a formal proof that can be verified by Isabelle.
4.4 RESULTS
In Table 1, we display the proportion of successful formal proofs found on the miniF2F dataset . The
results include the four baselines described in Subsection 4.2 and the DSP method with human-written
proofs and model-generated proofs. From the table, we can see that the automated prover with 11
additional heuristic tactics significantly increases the performance of Sledgehammer, boosting its
success rate from 9.9% to 18.0% on the validation set of miniF2F and from 10.4% to 20.9% on the
test set. The two baselines using language models and proof search (Thor and Thor + expert iteration)
achieve success rates of 29.9% and 35.2% on the test set of miniF2F, respectively.
With informal proofs written by humans, the DSP method achieves success rates of 42.6% and
39.3% on the validation and test sets of miniF2F. A total of 200 out of 488 problems can be proved
in this way. The Codex model and the Minerva (8B) model give very similar results in solving
problems on miniF2F: they both guide the automated prover to solve 40.6% and 35.3% of problems
on the validation and the test sets respectively. This is corroborated by Lewkowycz et al. (2022)’s
observation that these two models have comparable performance in solving mathematical problems.
-----
When we switch to the Minerva (62B) model, the success rates rise up to 43.9% and 37.7% respectively. Compared to human-written informal proofs, its success rates are 1.3% higher on the validation
set and 1.6% lower on the test set. In total, the Minerva (62B) model is able to solve 199 problems on
miniF2F, one fewer than with human proofs. The Minerva (540B) model solves 42.6% and 38.9% of
problems in the validation and the test sets of miniF2F, also resulting in 199 successful proofs. The
_DSP method is effective in guiding the automated prover under both settings: using human informal_
proofs or language-model-generated informal proofs. DSP almost doubles the prover’s success rate
and results in a new state-of-the-art performance on miniF2F with Isabelle. Moreover, the larger
Minerva models are almost as helpful as a human in guiding the automated formal prover.
5 ANALYSIS
5.1 ABLATION STUDIES
**Ablation of in-line comments** To facilitate the alignment between the informal proofs and the
formal proof sketches, we copy relevant segments of the informal proofs as in-line comments in
the sketches. In the manually constructed prompt examples, these comments are prefixed to the
corresponding Isabelle code blocks, as shown in Figure 2 (the text in red). We hypothesize that this
technique is beneficial for large language models to synthesize formal sketches. To validate this
hypothesis, we perform an ablation study by removing the in-line comments in the prompt examples
before running the experiment. The results are displayed in Table 1. We find that without in-line
comments, the success rates drop by 4.9% and 2.8% on the validation and test sets respectively. We
conclude that having in-line comments is helpful for generating formal proof sketches.
**Ablation of informal proof drafts** Drafting informal proofs is the first step of the DSP method.
To investigate the necessity of this step, we perform an experiment of formal sketching and proving
without informal proofs at all. Because formal proof sketches are written in the declarative proof style,
they are fairly similar to the informal proof drafts already. Concretely, we remove the informal proofs
and the in-line comments (because they are copied segments of the informal proofs) in the prompt
examples. This removes the need for the informal proof writer, whether a human or a neural network.
The results of this setup are shown in Table 1. It can be seen that the success rates on the validation
and the test sets of miniF2F drop by 3.7% and 5.3% respectively compared to with human-written
proofs. They are also inferior to success rates obtained with language-model-generated informal
proofs. This demonstrates the importance of drafting informal proofs before sketching and proving.
**Ablation of automated provers** Using an autoformalizer to generate proof sketches which are
then completed by an automated prover is central to our method. The effect of utilizing an automated
prover to close open conjectures in proof sketches is worth studying, so we conduct an ablation
experiment for it. Namely, we replace the proof sketches in the prompt examples with complete
formal proofs. The complete formal proofs still follow the declarative proof style, but do not contain
any open conjectures. As a result, the large language model will also generate full proofs instead of
sketches, and we directly check whether these generated proofs are valid. The results in this setup are
presented in Table 1. The results reveal that without an automated prover to close open conjectures,
the success rate on miniF2F decreases by 9.8% and 9.0% on the validation and test sets respectively.
The drastic performance difference indicates the essential role of automated provers in our approach.
**Scaling properties of ablation studies** To understand the effect of the ablations on the DSP
method’s scaling properties, we vary the number of autoformalization attempts per problem and plot
the number of successful proofs found on the miniF2F dataset in Figure 3 (left). Four methods are
contrasted: the original DSP method with human informal proofs, the DSP method without in-line
comments, the DSP method without informal proofs, and the DSP method without formal proof
sketches. It can be seen from the figure that with the original DSP method, the performance reaches
a plateau (no new proofs are found) after 70 autoformalization attempts are made for each problem.
For the ablation study with no in-line comments, the plateau is reached much faster, after around 50
autoformalization attempts. This method solves 181 problems in total. The ablation study without
informal proofs also reaches a plateau at around 70 autoformalization attempts, solving 178 problems
in total. The ablation study without sketching can solve 154 problems on miniF2F. In comparison,
with human informal proofs, only 7 autoformalization attempts are required to reach this performance.
-----
MiniF2F Problems Solved (out of 488) MiniF2F Problems Solved (out of 488)
200 200
150 150
Human informal proof drafts
100 DSP with human proofs 100 Minerva (540B) proof drafts
Ablation: no in-line comments Minerva (62B) proof drafts
#Successful Proofs Ablation: no informal proofs #Successful Proofs Minerva (8B) proof drafts
50 Ablation: no automated provers 50 Codex proof drafts
0 20 40 60 80 100 0 20 40 60 80 100
#Autoformalization Attempts Per Problem #Autoformalization Attempts Per Problem
Figure 3: Number of problems solved on miniF2F against the number of autoformalization attempts per
**problem. Left: The figure displays the experiments carried out with the DSP method and three ablations on**
it. The curves represent the DSP method (blue), formal proof sketches without the in-line comments (orange),
without informal proofs altogether (green), and without the automated provers (red). Right: The figure compares
the experimental results with informal proof drafts written by humans (blue), the 540B Minerva model (orange),
the 62B Minerva model (green), the 8B Minerva model (red), and the Codex model (purple).
5.2 LANGUAGE-MODEL-GENERATED PROOFS
Our experiments demonstrated that model-generated informal proofs from Minerva and Codex can
help guide a formal theorem prover. In this section, we analyze the properties of these proofs further.
We focus on the informal proofs the 62B and 540B Minerva models produce in this section, as they
give the best overall performances and achieve the highest success rate on miniF2F.
**Minerva helps solve one IMO problem** Interestingly, our approach manages to solve one problem
from the International Mathematical Olympiad (imo 1959 1) with a Minerva-generated solution,
but not with the human proof. For this problem, we present the successful Minerva-generated
informal proof draft and the formal proof in Figure 4. We hypothesize that the reason behind this
phenomenon is that human proofs might leave gaps between conjectures that are too difficult for
automated provers to solve. On the other hand, the diversity in language model informal proofs
makes some of them more amenable to automated provers. In Appendix C, we analyze the human
and the Minerva informal proofs for this problem in greater detail.
**Manual evaluation of Minerva proofs** Next, we analyze the relationship between the validity of
the formal proofs and the correctness of the informal proofs. For our analysis, we randomly sample
50 Minerva proofs of different problems, which are then successfully converted to formal proofs. We
then manually evaluate the correctness of these 50 informal proofs. Among them, 29 proofs (58%)
are entirely correct, 16 are incorrect with a clearly identifiable incorrect step, and 5 “proofs” are
nonsensical and simply rephrase the final conclusions of the problems.
Seeing that a total of 16 + 5 = 21 incorrect informal proofs can lead to successful formal proofs,
we study how they guide the automated formal prover despite having flaws themselves. The 21
proofs divide into 2 cases: In the first case, we find 13 problems for which the informal proofs are
mostly ignored, and the automated prover can find proofs by itself; In the other 8 problems, although
the informal proofs are wrong, the autoformalizer manages to correct them, either by ignoring the
erroneous steps or by stating their correct versions in the formal proof sketches. This suggests
that the autoformalizer has some understanding of the mathematical statements and is not merely
translating them from an informal language to a formal language. It is robust to slight noises in its
input. In Appendix D, we present 4 case studies comparing the human and Minerva informal proofs.
Particularly, Figure 10 shows a completely correct example and one example of each pathological case.
Is there a way to detect which Minerva proofs are correct, without human evaluation? For a
preliminary investigation, we filter out all the problems that can be solved directly with the automated
prover from the 50 and are left with 27 informal proofs. Of these 27, 21 are completely correct, 6
still contain small errors, but none are nonsensical. With this simple filter, we achieve a precision of
77.8% and a recall of 72.4% in identifying correct Minerva informal proofs.
-----
**Statement: Prove that the fraction** [21]14[n]n[+4]+3 [is irreducible for every natural number][ n][.]
**Informal Proof (Minerva 62B):**
We must show that gcd(21n + 4, 14n + 3) = 1.
The Euclidean algorithm gives
21n + 4 = 1 · (14n + 3) + 7n + 1
14n + 3 = 2 · (7n + 1) + 1.
Since gcd(7n + 1, 1) = 1, we have gcd(21n + 4, 14n + 3) = 1.
**Formal Proof:**
**theorem imo_1959_p1:**
**fixes n :: nat**
**shows "gcd (21*n + 4) (14*n + 3) = 1"**
**proof -**
**(* The Euclidean algorithm gives**
**21n+4=1\cdot(14n+3)+7n+1**
**14n+3=2\cdot(7n+1)+1. *)**
**have c0: "21*n + 4 = 1*(14*n + 3) + 7*n + 1"**
_<ATP> by auto </ATP>_
**have c1: "14*n + 3 = 2*(7*n + 1) + 1" using c0**
_<ATP> by auto </ATP>_
**(* Since \gcd(7n+1,1)=1, we have \gcd(21n+4,14n+3)=1. *)**
then have "gcd (7*n + 1) 1 = 1"
**using c1**
_<ATP> by auto </ATP>_
then have "gcd (21*n + 4) (14*n + 3) = 1"
**using c1**
_<ATP> by (smt (z3) BitM_plus_one ab_semigroup_add_class.add_ac(1)_
add.assoc c0 gcd.commute gcd_add2 gcd_add_mult mult_numeral_1
numeral_One numeral_eq_Suc numerals(1) semiring_norm(3)) </ATP>
then show ?thesis
**using c1**
_<ATP> by blast </ATP>_
**qed**
Figure 4: IMO proof guided by a Minerva informal proof An informal proof of the International Math
Olympiad problem imo 1959 p1 generated by Minerva that leads to a successful formal proof. The steps
enclosed by the ATP delimiters are generated by an automated prover and all other steps are by the autoformalizer.
**Scaling properties of human and Minerva proofs** To understand the influence of different informal proof sources on the scaling properties of DSP, we plot the number of successful proofs found on
miniF2F against the number of autoformalization attempts per problem in Figure 3 (right). Note that
for each problem, we have 1 informal proof by a human and 100 informal proof drafts by each language model. The one human proof is used 100 times for formal proof sketch generation, while each
language model proof draft is used only once. We notice that the 62B Minerva model and the 540B
Minerva model always have comparable performances. Considering that the 540B Minerva model is
more capable of mathematical reasoning (Lewkowycz et al., 2022, Table 3) than the 62B model, we
hypothesize that the bottleneck in the DSP process shifts from drafting to sketching and proving. I.e.,
informal proof drafts of higher quality do not necessarily lead to more successful formal proofs due to
the limitation of sketching and proving. Both the 62B and the 540B models result in more successful
proofs than the smaller (8B) Minerva model and the Codex model, consistently for any number of attempts. The 8B Minerva model and the Codex model behave similarly, both finding 185 proofs in the
end. Informal proofs written by humans help solve more problems than those by Minerva models for
1 − 100 autoformalization attempts. However, the difference is small (1 problem) when 100 are made.
-----
MiniF2F Problems Solved (out of 488)
200
150
100
#Successful Proofs Human informal proof drafts
Minerva (540B) proof drafts
0 50 100 150 200
#Autoformalization Attempts Per Problem
Drafts Per Problem
1 5 10 20 100
1
180
160 5
140
10
120
# Problem Solved100 Sketches Per Draft 20
more compute
80
100
60
Figure 5: Number of problems solved on miniF2F. Left: The figure displays the number of successful proofs
with human and Minerva-generated informal proof drafts when up to 200 autoformalization attempts are made
per problem. The Minerva (540B) proof drafts solve more problems than human proof drafts when more than
130 attempts are made per problem. Right: The figure displays the number of successful proofs with different
combinations of drafts per problem and sketches per draft. The drafts are by the Minerva (540 B) model.
Noticing that the number of successful proofs does not plateau for the Minerva-generated proofs, we
investigate how further increasing the number of autoformalization attempts changes the number of
problems solved for human-written and language-model-generated proofs. For each problem, we use
1 human informal proof and sample 200 sketches for it; we use the same 100 informal proof drafts
by the Minerva (540B) language model and sample 2 sketches for each draft. The total number of
sketches per problem is 200 in both settings. We plot the number of proofs solved with respect to the
number of sketches in Figure 5 (left). We find that with human informal proofs, 203 theorems (106/97
on valid/test) have successful formal proofs after 200 attempts. While with language-model-generated
informal proofs, 209 theorems (111/98 on valid/test) have successful formal proofs after the same
number of autoformalization attempts. This suggests that with enough autoformalization attempts,
the diversity in language-model-generated informal proofs can benefit the automated formalization
process more than the “ground-truth” human informal proofs.
**Allocation of autoformalization budget** In Section 4, language models generate 100 informal
proof drafts for each mathematical problem and the autoformalizer is used once on each draft. It
is likely that some drafts have the potential to be formalized correctly, but do not get to produce a
successful sketch because the randomly sampled examples in the prompt are not suitable. We would
like to reduce this variance by attempting autoformalization multiple times, but it is expensive to do
so. Therefore we conduct an experiment to investigate what the optimal way of allocating drafts and
sketches per draft. For the Minerva (540 B) model, we vary the number of informal proof drafts and
the number of formal proof sketches per draft, under the constraint that the total number of sketches
per problem is fewer than 100. We present the number of miniF2F problems solved under every
combination in Figure 5 (right). The plot shows that when the total number of autoformalization
attempts is fixed, increasing the number of drafts per problem yields the most successes on miniF2F.
5.3 MEMORIZATION
This work utilizes two language models that have been trained on a large amount of internet data.
Several prior works (Trinh & Le, 2018; Carlini et al., 2022) pointed out that such models can memorize some fraction of the data they encounter during training. For drafting informal proofs, we mainly
experimented with Minerva. Lewkowycz et al. (2022, Section 5) discussed the memorization effects
within Minerva and concluded that they could not find evidence that its abilities are due to memorization. For the autoformalization of proof sketches, the Codex (code-davinci-002) model
was used. Its training data was collected before June 2021[2], at which time the miniF2F dataset had
not been made public. So the model cannot benefit from memorizing the exact problems and proofs.
Therefore, it is inappropriate to attribute the abilities of models used in this paper to memorization.
[2https://beta.openai.com/docs/models/codex-series-private-beta](https://beta.openai.com/docs/models/codex-series-private-beta)
-----
6 CONCLUSION
In this paper, we introduced Draft, Sketch, and Prove (DSP), a novel approach that takes advantage
of informal proofs to synthesize formal proofs. We demonstrated its feasibility and effectiveness
by reaching state-of-the-art performance on the miniF2F dataset with the Isabelle theorem prover.
Central to our method are formal proof sketches that mirror the high-level reasoning structures of
informal proofs. Our ablations showed that the ability to automatically convert informal proofs to
proof sketches is critical to the success of DSP.
Our DSP method differs fundamentally from previous applications of machine learning to formal
proof synthesis in two aspects. Firstly, while most approaches in the field focus on improving proof
search, our method seeks to construct the entire formal proof structure from the informal proof in one
decoding operation. The task of the automated prover is then simplified to filling the gaps between
intermediate conjectures. Secondly, while existing approaches operate exclusively on formal data,
_DSP by design benefits from informal proofs._
In this work, we utilized a purely symbolic automated prover to close the gaps in proof sketches.
In the future, we aim to equip DSP with more powerful mechanisms, such as HyperTree Proof
Search (Lample et al., 2022), to broaden the scope of provable theorems. Similar to AlphaCode (Li
et al., 2022), we found that the number of generations is crucial for performance. The computational
cost of the autoformalizer being a bottleneck in our method, we seek to develop approaches able to
generate high-quality proof sketches more efficiently.
ACKNOWLEDGEMENTS
We thank Rui Yuan and Kunhao Zheng for helping with the informal solutions used in our dataset.
We thank Christian Szegedy for his feedback on the early draft.
FUNDING DISCLOSURE
WL is supported by the ERC Advanced Grant ALEXANDRIA (Project GA 742178).
LIST OF CONTRIBUTIONS
AQJ conceived the idea of using proof sketches and conducted the experiments. SW constructed the
first version of the pipeline and the initial autoformalization prompts. JPZ produced the Minerva
informal proofs, and helped conduct autoformalization experiments. GL proposed to use inline
comments in formal proof sketches to improve alignment. JPZ, YW, and SW performed the case
analyses of Minerva solutions. AQJ, TL, GL, SW, and JL contributed to the dataset. AQJ and WL
wrote the final autoformalization prompts. MJ is AQJ’s PhD supervisor. AQJ, GL, SW, and TL wrote
the paper. YW and GL directed the project.
REFERENCES
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, and Christian Szegedy. Learning to reason in large
[theories without imitation. CoRR, abs/1905.10501, 2019a. URL http://arxiv.org/abs/](http://arxiv.org/abs/1905.10501)
[1905.10501.](http://arxiv.org/abs/1905.10501)
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An
environment for machine learning of higher order logic theorem proving. In Kamalika Chaudhuri
and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine
_Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings_
_[of Machine Learning Research, pp. 454–463. PMLR, 2019b. URL http://proceedings.](http://proceedings.mlr.press/v97/bansal19a.html)_
[mlr.press/v97/bansal19a.html.](http://proceedings.mlr.press/v97/bansal19a.html)
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicael Courant, Jean-Christophe Filliatre, Eduardo¨
Gimenez, Hugo Herbelin, Gerard Huet, Cesar Munoz, Chetan Murthy, et al. The Coq proof
_assistant reference manual: Version 6.1. PhD thesis, Inria, 1997._
-----
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler,
Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot
learners. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan,
and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual
_Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12,_
_[2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)_
[1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan`
Zhang. Quantifying memorization across neural language models. CoRR, abs/2202.07646, 2022.
[URL https://arxiv.org/abs/2202.07646.](https://arxiv.org/abs/2202.07646)
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison
Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger,
Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick
Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis,
Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, S. Arun
Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa,
Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating
large language models trained on code. ArXiv, abs/2107.03374, 2021.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh,
Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam
Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James
Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin
Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret
Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick,
Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang,
Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern,
Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways. CoRR, abs/2204.02311, 2022. doi: 10.48550/arXiv.2204.02311. URL
[https://doi.org/10.48550/arXiv.2204.02311.](https://doi.org/10.48550/arXiv.2204.02311)
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale
hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition,
pp. 248–255. Ieee, 2009.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu,
Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates
university math problems by program synthesis and few-shot learning at human level. Proceedings
_of the National Academy of Sciences, 119(32):e2123433119, 2022._
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. Tactictoe:
learning to prove with tactics. Journal of Automated Reasoning, 65(2):257–286, 2021.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. In The Tenth International Conference on
_Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022._
[URL https://openreview.net/forum?id=rpxJc9j04U.](https://openreview.net/forum?id=rpxJc9j04U)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
-----
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text
degeneration. arXiv preprint arXiv:1904.09751, 2019.
Albert Q. Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. LISA: Language models of Isabelle
proofs. In 6th Conference on Artificial Intelligence and Theorem Proving, 2021.
Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygozdz, Piotr´
Milos, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models
and automated theorem provers. CoRR, abs/2205.10893, 2022. doi: 10.48550/arXiv.2205.10893.
[URL https://doi.org/10.48550/arXiv.2205.10893.](https://doi.org/10.48550/arXiv.2205.10893)
Guillaume Lample and Franc¸ois Charton. Deep learning for symbolic mathematics. In International
_[Conference on Learning Representations, 2020. URL https://openreview.net/forum?](https://openreview.net/forum?id=S1eZYeHFDS)_
[id=S1eZYeHFDS.](https://openreview.net/forum?id=S1eZYeHFDS)
Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel
Ebner, Aurelien Rodriguez, and Timoth´ ee Lacroix. Hypertree proof search for neural theorem´
[proving. CoRR, abs/2205.11491, 2022. doi: 10.48550/arXiv.2205.11491. URL https://doi.](https://doi.org/10.48550/arXiv.2205.11491)
[org/10.48550/arXiv.2205.11491.](https://doi.org/10.48550/arXiv.2205.11491)
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V.
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with
[language models. CoRR, abs/2206.14858, 2022. doi: 10.48550/arXiv.2206.14858. URL https:](https://doi.org/10.48550/arXiv.2206.14858)
[//doi.org/10.48550/arXiv.2206.14858.](https://doi.org/10.48550/arXiv.2206.14858)
Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Remi Leblond,´
Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy,
Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl,
Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson,
Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level
code generation with alphacode. CoRR, abs/2203.07814, 2022. doi: 10.48550/arXiv.2203.07814.
[URL https://doi.org/10.48550/arXiv.2203.07814.](https://doi.org/10.48550/arXiv.2203.07814)
Norman D. Megill and David A. Wheeler. _Metamath:_ _A_ _Computer_ _Language_
_for_ _Mathematical_ _Proofs._ Lulu Press, Morrisville, North Carolina, 2019.
http://us.metamath.org/downloads/metamath.pdf.
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer. The
lean theorem prover (system description). In International Conference on Automated Deduction,
pp. 378–388. Springer, 2015.
Lawrence C. Paulson. Isabelle - A Generic Theorem Prover (with a contribution by T. Nipkow),
volume 828 of Lecture Notes in Computer Science. Springer, 1994. ISBN 3-540-58244-4. doi:
[10.1007/BFb0030541. URL https://doi.org/10.1007/BFb0030541.](https://doi.org/10.1007/BFb0030541)
Lawrence C. Paulson. Three years of experience with sledgehammer, a practical link between
automatic and interactive theorem provers. In Renate A. Schmidt, Stephan Schulz, and Boris
Konev (eds.), Proceedings of the 2nd Workshop on Practical Aspects of Automated Reasoning,
_PAAR-2010, Edinburgh, Scotland, UK, July 14, 2010, volume 9 of EPiC Series in Computing, pp._
[1–10. EasyChair, 2010. doi: 10.29007/tnfd. URL https://doi.org/10.29007/tnfd.](https://doi.org/10.29007/tnfd)
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_[CoRR, abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393.](https://arxiv.org/abs/2009.03393)_
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. CoRR, abs/2202.01344, 2022.
[URL https://arxiv.org/abs/2202.01344.](https://arxiv.org/abs/2202.01344)
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez,
Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement
learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):
1140–1144, 2018.
-----
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.
_Advances in neural information processing systems, 27, 2014._
Donald Syme. DECLARE: A prototype declarative proof system for higher order logic. Citeseer,
1997.
Christian Szegedy. A promising path towards autoformalization and general artificial intelligence. In
Christoph Benzmuller and Bruce R. Miller (eds.),¨ _Intelligent Computer Mathematics - 13th Interna-_
_tional Conference, CICM 2020, Bertinoro, Italy, July 26-31, 2020, Proceedings, volume 12236 of_
_[Lecture Notes in Computer Science, pp. 3–20. Springer, 2020. doi: 10.1007/978-3-030-53518-6\ 1.](https://doi.org/10.1007/978-3-030-53518-6_1)_
[URL https://doi.org/10.1007/978-3-030-53518-6_1.](https://doi.org/10.1007/978-3-030-53518-6_1)
Trieu H. Trinh and Quoc V. Le. A simple method for commonsense reasoning. CoRR, abs/1806.02847,
[2018. URL http://arxiv.org/abs/1806.02847.](http://arxiv.org/abs/1806.02847)
Qingxiang Wang, Chad E. Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine
translation in autoformalization of mathematics in mizar. In Jasmin Blanchette and Catalin Hritcu
(eds.), Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs
_and Proofs, CPP 2020, New Orleans, LA, USA, January 20-21, 2020, pp. 85–98. ACM, 2020. doi:_
[10.1145/3372885.3373827. URL https://doi.org/10.1145/3372885.3373827.](https://doi.org/10.1145/3372885.3373827)
Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun Cho.
Naturalproofs: Mathematical theorem proving in natural language. In Thirty-fifth Conference on
_Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021. URL_
[https://openreview.net/forum?id=Jvxa8adr3iY.](https://openreview.net/forum?id=Jvxa8adr3iY)
Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. Naturalprover:
Grounded mathematical proof generation with language models. CoRR, abs/2205.12910, 2022. doi:
[10.48550/arXiv.2205.12910. URL https://doi.org/10.48550/arXiv.2205.12910.](https://doi.org/10.48550/arXiv.2205.12910)
Freek Wiedijk. Formal proof sketches. In Stefano Berardi, Mario Coppo, and Ferruccio Damiani
(eds.), Types for Proofs and Programs, International Workshop, TYPES 2003, Torino, Italy,
_April 30 - May 4, 2003, Revised Selected Papers, volume 3085 of Lecture Notes in Computer_
_[Science, pp. 378–393. Springer, 2003. doi: 10.1007/978-3-540-24849-1\ 24. URL https:](https://doi.org/10.1007/978-3-540-24849-1_24)_
[//doi.org/10.1007/978-3-540-24849-1_24.](https://doi.org/10.1007/978-3-540-24849-1_24)
Freek Wiedijk. Formal proof – getting started. Notices of the American Mathematical Society, 55:
1408–1414, 2008.
Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Baker Grosse. INT: An inequality benchmark
for evaluating generalization in theorem proving. In International Conference on Learning
_[Representations, 2021. URL https://openreview.net/forum?id=O6LPudowNQm.](https://openreview.net/forum?id=O6LPudowNQm)_
Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, and Christian
Szegedy. Autoformalization with large language models. CoRR, abs/2205.12615, 2022. doi:
[10.48550/arXiv.2205.12615. URL https://doi.org/10.48550/arXiv.2205.12615.](https://doi.org/10.48550/arXiv.2205.12615)
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
_International Conference on Machine Learning (ICML), 2019._
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. miniF2F: a cross-system benchmark
for formal olympiad-level mathematics. In The Tenth International Conference on Learning
_Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL_
[https://openreview.net/forum?id=9ZPegFuFTFv.](https://openreview.net/forum?id=9ZPegFuFTFv)
-----
### APPENDIX
A CONJECTURES AND THE DECLARATIVE PROOF STYLE
Interactive theorem provers such as Isabelle and Mizar use a declarative proof style (Syme, 1997), in
which a proof is interleaved with conjectures and their corresponding proofs. Syme (1997) stated
that the list of conjectures in a declarative proof should be analogous to a proof sketch found in a
mathematical textbook and sufficiently convincing for the reader. In practice, ITP users often prove a
theorem by writing down a list of conjectures (a “formal sketch”), then attempt to find a proof of
each conjecture (fill a “gap”) with an automated system.
B SLEDGEHAMMER
Sledgehammer (Paulson, 2010) is a powerful system that automates reasoning with the interactive
theorem prover Isabelle. It works by flattening the goals encoded in the higher-order logic used by
Isabelle/HOL into other logics (e.g., first-order logic) which can then be fed into automated theorem
provers such as E [3], CVC4 [4], Z3 [5], Vampire [6], and SPASS [7]. If any of these automated theorem
provers succeeds in finding the proof in their own corresponding format, Sledgehammer reconstructs
the proof in Isabelle/HOL with certified provers (metis, meson, and smt), which is relatively
more interpretable by humans.
As a practical example of using Sledgehammer, one can declare a conjecture in Isabelle/HOL:
have "4 dvd (a::nat) =⇒ 2 dvd a" and call Sledgehammer immediately afterwards.
If Sledgehammer succeeds, it will return a proof step that proves the conjecture. In this example,
the step is by (meson dvd trans even numeral), which uses the meson resolution prover
and two facts: that the division relation is transitive and that 4 is an even number. If Sledgehammer
does not find the proof or timeouts, it will report failure.
C A PROOF TO AN INTERNATIONAL MATHEMATICAL OLYMPIAD PROBLEM
With the Minerva-generated solutions, a proof to the problem imo 1959 p1 is discovered. This is
the first problem of the first ever International Mathematical Olympiad (IMO). The informal problem
statement, Minerva-generated informal solution, and DSP’s formal proof are shown in Figure 6.
In Figure 6, we can see that the autoformalizer in DSP (a large language model), copies over parts of
the informal proof generated by Minerva as in-line comments to precede their corresponding formal
proof blocks. The formal proof does not use the first sentence of the informal proof solution as it
is already identical to the formal statement. We also notice that the large language model selects
relevant premises after writing down the conjectures (the steps starting with using) despite not
every premise is strictly needed.
The formal proof creates 5 conjectures (4 have statements and 1 show statement) which are all
subsequently proved by our automated theorem prover. The step to prove the statement have "gcd
(21*n + 4) (14*n + 3) = 1" involves 2 verified low-level provers smt and z3 and 10
lemmas/facts from outside the scope of the language model. It is highly unlikely that either the large
language model or the automated theorem prover can finish this proof on its own.
**Unsuccessful human-written proof.** In contrast, the human-written informal proof of this IMO
problem did not lead to a successful formal proof. The human-written proof is:
Denoting the greatest common divisor of a, b as (a, b), we use the Euclidean algorithm:
(21n + 4, 14n + 3) = (7n + 1, 14n + 3) = (7n + 1, 1) = 1
[3https://wwwlehre.dhbw-stuttgart.de/ sschulz/E/E.html](https://wwwlehre.dhbw-stuttgart.de/~sschulz/E/E.html)
[4https://cvc4.github.io/index.html](https://cvc4.github.io/index.html)
[5https://github.com/Z3Prover/z3](https://github.com/Z3Prover/z3)
[6https://vprover.github.io/](https://vprover.github.io/)
[7https://www.spass-prover.org/download/index.html](https://www.spass-prover.org/download/index.html)
-----
**Statement: Prove that the fraction** [21]14[n]n[+4]+3 [is irreducible for every natural number][ n][.]
**Informal Proof (Minerva 62B):**
We must show that gcd(21n + 4, 14n + 3) = 1.
The Euclidean algorithm gives
21n + 4 = 1 · (14n + 3) + 7n + 1
14n + 3 = 2 · (7n + 1) + 1.
Since gcd(7n + 1, 1) = 1, we have gcd(21n + 4, 14n + 3) = 1.
**Formal Proof:**
**theorem imo_1959_p1:**
**fixes n :: nat**
**shows "gcd (21*n + 4) (14*n + 3) = 1"**
**proof -**
**(* The Euclidean algorithm gives**
**21n+4=1\cdot(14n+3)+7n+1**
**14n+3=2\cdot(7n+1)+1. *)**
**have c0: "21*n + 4 = 1*(14*n + 3) + 7*n + 1"**
_<ATP> by auto </ATP>_
**have c1: "14*n + 3 = 2*(7*n + 1) + 1" using c0**
_<ATP> by auto </ATP>_
**(* Since \gcd(7n+1,1)=1, we have \gcd(21n+4,14n+3)=1. *)**
then have "gcd (7*n + 1) 1 = 1" using c1
_<ATP> by auto </ATP>_
then have "gcd (21*n + 4) (14*n + 3) = 1" using c1
_<ATP> by (smt (z3) BitM_plus_one ab_semigroup_add_class.add_ac(1)_
add.assoc c0 gcd.commute gcd_add2 gcd_add_mult mult_numeral_1
numeral_One numeral_eq_Suc numerals(1) semiring_norm(3)) </ATP>
then show ?thesis using c1 <ATP> by blast </ATP>
**qed**
Figure 6: IMO proof guided by a Minerva informal proof An informal proof of the International
Math Olympiad problem imo 1959 p1 generated by Minerva that led to a successful formal proof.
The steps enclosed by ATP delimiters are generated by an automated theorem prover and the rest are
by the DSP autoformalizer.
It follows that [21]14[n]n[+4]+3 [is irreducible. Q.E.D.]
A key difference between the Minerva proof and the human proof is the way that invoking the
Euclidean algorithm is described. The Minerva proof explicitly writes out the results of the Euclidean
algorithm (e.g. 21n + 4 = 1 · (14n + 3) + 7n + 1), which are translated into the sketch (c1 in
Figure 6). The human proof introduces new notation to express the results indirectly in terms of
greatest common divisors, which ends up being less suitable for sketching. For example, below is
a sketch generated with the human proof, which has a conjecture that is semantically incorrect and
hence cannot be closed by the automated prover:
**theorem**
**fixes n :: nat**
**shows "gcd (21*n + 4) (14*n + 3) = 1"**
**proof -**
**have "(21*n + 4, 14*n + 3) = (7*n + 1, 14*n + 3)"**
_ATP_ **(* <--- UNSUCCESSFUL *)**
**also have "... = (7*n + 1, 1)"**
_ATP_
**finally show ?thesis**
_ATP_
**qed**
-----
D MORE CASE ANALYSES OF HUMAN AND MINERVA INFORMAL PROOFS
**Statement: Show that for any real number x, if |x −** 1| + |x| + |x + 1| = x + 2, then 0 ≤ _x ≤_ 1.
**Informal Proof (Human):**
If x ≤−1, then |x−1|+|x|+|x+1| = −(x−1)−x−(x+1) = −3x. So, −3x = x+2 and x = − 2[1] [,]
which is a contradiction. If −1 < x < 0, then |x−1|+|x|+|x+1| = −(x−1)−x+(x+1) = 2−x.
So, 2 − _x = x + 2 and x = 0, which is a contradiction. If x > 1, then |x −_ 1| + |x| + |x + 1| =
_x −_ 1 + x + (x + 1) = 3x. So, 3x = x + 2 and x = 1, which is a contradiction. As a result, the only
possible values for x are between 0 and 1 and 0 ≤ x ≤ 1.
**Formal Proof:**
**theorem algebra_absxm1pabsxpabsxp1eqxp2_0leqxleq1:**
**fixes x ::real assumes "abs (x - 1) + abs x + abs (x + 1) = x + 2"**
**shows "0 \<le> x \<and> x \<le> 1"**
**proof -**
**(* If x ≤−1, then |x −** 1| + |x| + |x + 1| = −(x − 1) − _x −_ (x + 1) = −3x.
**So, −3x = x + 2 and x = −** [1]2[, which is a contradiction. *)]
**have c0: "x \<le> -1 \<Longrightarrow> False"**
**proof -**
**assume c1: "x \<le> -1"**
**have c2: "abs(x-1) + abs x + abs(x+1) = -(x-1) - x - (x+1)" using c1**
_<ATP> by auto </ATP>_
then have c3: "abs(x-1) + abs x + abs(x+1) = -3*x" <ATP> by auto </ATP>
then have c4: "-3*x = x+2" using assms c3 <ATP> by auto </ATP>
then have c5: "x = -1/2" <ATP> by auto </ATP>
then show ?thesis using c1 <ATP> by auto </ATP>
**qed**
**(* If −1 < x < 0, then |x −** 1| + |x| + |x + 1| = −(x − 1) − _x + (x + 1) = 2 −_ _x._
**So, 2 −** _x = x + 2 and x = 0, which is a contradiction. *)_
**have c6: "-1 < x \<Longrightarrow> x < 0 \<Longrightarrow> False"**
**proof -**
**assume c7: "-1 < x" assume c8: "x < 0"**
**have c9: "abs(x-1) + abs x + abs(x+1) = -(x-1) - x + (x+1)" using c7 c8**
_<ATP> by auto </ATP>_
then have c10: "abs(x-1) + abs x + abs(x+1) = 2-x" <ATP> by auto </ATP>
then have c11: "2-x = x+2" using assms c10 <ATP> by auto </ATP>
then have c12: "x = 0" <ATP> by auto </ATP>
then show ?thesis using c8 <ATP> by auto </ATP>
**qed**
**(* If x > 1, then |x −** 1| + |x| + |x + 1| = x − 1 + x + (x + 1) = 3x.
**So, 3x = x + 2 and x = 1, which is a contradiction. *)**
**have c13: "x > 1 \<Longrightarrow> False"**
**proof -**
**assume c14: "x > 1"**
**have c15: "abs(x-1) + abs x + abs(x+1) = x-1 + x + (x+1)" using c14**
_<ATP> by auto </ATP>_
then have c16: "abs(x-1) + abs x + abs(x+1) = 3*x" <ATP> by auto </ATP>
then have c17: "3*x = x+2" using assms c16 <ATP> by auto </ATP>
then have c18: "x = 1" <ATP> by auto </ATP>
then show ?thesis using c14 <ATP> by auto </ATP>
**qed**
**(* As a result, the only possible values for x are between 0 and 1 and 0 ≤** _x ≤_ 1.*)
then show ?thesis using c0 c6 c13 <ATP> by fastforce </ATP>
**qed**
Figure 7: Algebra example with human informal proof. A human informal proof is successful in guiding
the formal proof sketch to divide the problem into three cases and drive contradictions to each of them before
showing the final objective. The complexity and the consistency of the formal proof sketch are impressive.
-----
**Statement: Given that f + 3z = 11 and 3(f −** 1) − 5z = −68, show that f = −10 and z = 7.
**Informal Proof (Minerva 62B):**
We can solve for f in the first equation, obtaining f = 11 − 3z.
Then we can substitute this expression for f into the second equation,
obtaining 3(11 − 3z − 1) − 5z = −68.
Solving for z, we obtain z = 7.
Then, we can substitute this value of z into the expression for f, obtaining f = 11 − 3 · 7 = −10.
**Formal Proof:**
**theorem algebra_2varlineareq_fp3zeq11_3tfm1m5zeqn68_feqn10_zeq7:**
**fixes f z::complex**
**assumes h0: "f + 3*z = 11"**
**and h1: "3*(f - 1) - 5*z = -68"**
**shows "f = -10 \<and> z = 7"**
**proof -**
**(* We can solve for f in the first equation, obtaining f = 11 - 3z. *)**
**have c0: "f = 11 - 3*z"**
**using h0**
_<ATP> by (auto simp: field_simps) </ATP>_
**(* Then we can substitute this expression for f into the second equation,**
**obtaining 3(11 - 3z - 1) - 5z = -68. *)**
**have c1: "3*(11 - 3*z - 1) - 5*z = -68"**
**using h1 c0**
_<ATP> by auto </ATP>_
**(* Solving for z, we obtain z = 7. *)**
**have c2: "z = 7"**
**using c1**
_<ATP> by auto </ATP>_
**(* Then, we can substitute this value of z into the expression for f,**
**obtaining f = 11 - 3 \cdot 7 = -10.*)**
**have "f = 11 - 3*7"**
**using c0 c2**
_<ATP> by auto </ATP>_
then have c3: "f = -10"
_<ATP> by auto </ATP>_
**show ?thesis**
**using c2 c3**
_<ATP> by auto </ATP>_
**qed**
Figure 8: Algebra example with Minerva informal proof. An informal proof generated by Minerva that led
to a successful formal proof. The autoformalizer generated a proof sketch containing all lines of the formal proof
except for those delimited by the ATP tags. The sketch is structured according to the informal proof, containing
five intermediate conjectures based on the informal proof. The autoformalizer generated in-line comments in the
proof sketch (shown in red), which correctly identified an alignment between the formal and informal proofs.
-----
**Statement: For any integer a, show that a[2]** _≡_
0 (mod 4) or a[2] _≡_ 1 (mod 4).
**Informal Proof (Human):**
_a (mod 4) ∈{0, 1, 2, 3}._
Using that for any natural number k, a ≡ _b_
(mod 4) implies a[k] _≡_ _b[k]_ (mod 4), we have
_a[2]_ (mod 4) ∈{0, 1, 4, 9}.
Since 4 ≡ 0 (mod 4) and 9 ≡ 1 (mod 4),
the result follows.
**Formal Proof:**
**theorem numbertheory_sqmod4in01d:**
**fixes a :: int**
**shows "(aˆ2 mod 4 = 0)**
\<or> (aˆ2 mod 4 = 1)"
**proof -**
**(* a \pmod 4 \in {0, 1, 2, 3}. *)**
**have c0: "a mod 4 \<in> {0, 1, 2, 3}"**
_<ATP> by auto </ATP>_
**(* Using that for any natural number k,**
**a \equiv b \pmod 4 implies**
**aˆk \equiv bˆk \pmod 4,*)**
**have "aˆ2 mod 4 = (a mod 4)\<ˆsup>**
2 mod 4" <ATP> by (smt (z3)
numeral_eq_Suc power_mod) </ATP>
**(* we have**
**aˆ2 \pmod 4 \in {0, 1, 4, 9}. *)**
**also have "... \<in> {0, 1, 4, 9}"**
**using c0**
_<ATP> by auto </ATP>_
**(* Since 4 \equiv 0 \pmod 4 and**
**9 \equiv 1 \pmod 4,**
**the result follows. *)**
**finally show ?thesis**
_<ATP> by auto </ATP>_
**qed**
**Statement: For any integer a, show that a[2]** _≡_
0 (mod 4) or a[2] _≡_ 1 (mod 4).
**Informal Proof (Minerva 62B):**
Let a = 2k for some integer k.
Then a[2] = 4k[2].
Since 4k[2] is divisible by 4, we have a[2] _≡_ 0
(mod 4).
Now, let a = 2k + 1 for some integer k.
Then a[2] = 4k[2] + 4k + 1 = 4(k[2] + k) + 1.
Since k[2] + k is an integer, 4(k[2] + k) + 1 is
not divisible by 4.
Thus, a[2] _≡_ 1 (mod 4).
**Formal Proof:**
**theorem numbertheory_sqmod4in01d:**
**fixes a :: int**
**shows "(aˆ2 mod 4 = 0)**
\<or> (aˆ2 mod 4 = 1)"
**proof (cases "even a")**
case True
**(* Let a=2k for some integer k.**
**Then aˆ2=4kˆ2.**
**Since 4kˆ2 is divisible by 4,**
**we have aˆ2 \equiv 0 \pmod{4}.*)**
then obtain k where "a=2*k"
**using evenE**
_<ATP> by auto </ATP>_
then have "aˆ2 = 4*kˆ2"
unfolding power2_eq_square
_<ATP> by auto </ATP>_
then have "aˆ2 mod 4 = 0"
_<ATP> by auto </ATP>_
then show ?thesis
_<ATP> by auto </ATP>_
next
case False
**(* Now, let a=2k+1 for some integer k.**
**Then aˆ2=4kˆ2+4k+1=4(kˆ2+k)+1.**
**Since kˆ2+k is an integer,**
**4(kˆ2+k)+1 is not divisible by 4.**
**Thus, aˆ2 \equiv 1 \pmod{4}.*)**
then obtain k where "a=2*k+1"
**using oddE <ATP> by auto </ATP>**
then have "aˆ2 = 4*kˆ2+4*k+1"
unfolding power2_eq_square <ATP>
by (auto simp: field_simps) </ATP>
then have "aˆ2 mod 4 = 1"
_<ATP> by presburger </ATP>_
then show ?thesis
_<ATP> by auto </ATP>_
**qed**
Figure 9: Alternative proofs: Minerva-generated (left) and human-written (right). In both proofs, the
formal sketch is structured based on the informal proof. The Minerva informal proof and its sketch break the
proof into even and odd cases. These cases are not explicitly stated in the informal proof, and the formal sketch
makes them explicit (cases “even a”). Each case has three conjectures that are directly based on the informal
proof. The sketch excludes parts that are not needed in the formal proof, for instance “Since k[2] + k is an integer,
4(k[2] + k) + 1 is not divisible by 4”. The human proof uses a different strategy than the Minerva proof, based on
the facts that a (mod 4) ∈{0, 1, 2, 3} and a[2] (mod 4) ∈{0, 1, 4, 9}. The sketch uses these as conjectures,
synthesizes an intermediate step not in the informal proof, and excludes the last step of the informal proof.
-----
**Statement: Show that for any real number x**
and any natural number n, if x > −1, then
(1 + nx) ≤ (1 + x)[n].
**Informal Proof (Minerva 62B):**
This is true for n = 1. Now, suppose that this
is true for n.
Then we have that
(1 + nx)[n] = 1 + n[2]x + n[3]x[2] + · · · + nx[n]
_≤_ 1 + nx + n[2]x[2] + · · · + nx[n] = (1 + x)[n]
and
(1 + nx)[n][+1] = (1 + nx)[n](1 + nx)
_≤_ (1 + x)[n](1 + x) = (1 + x)[n][+1].
Therefore, this is true for n + 1.
**Formal Proof:**
**theorem induction_1pxpownlt1pnx:**
**fixes x :: real and n :: nat**
**assumes "-1 < x"**
**shows "(1 + n*x) \<le> (1 + x)ˆn"**
**proof (induct n)**
case 0
then show ?case
_<ATP> by auto </ATP>_
next case (Suc n)
then show ?case
_<ATP> by (smt (z3)_
Bernoulli_inequality assms) </ATP>
**qed**
**Statement: Determine the remainder of 1529**
(mod 6). Show that it is 5.
**Informal Proof (Minerva 62B):**
By definition, 1529 ≡ 5 (mod 6).
**Formal Proof:**
**theorem mathd_numbertheory_551 :**
"1529 mod 6 = (5::nat)"
_<ATP> by auto </ATP>_
**Statement: For each plumbing repair job, Mr.**
Wrench charges N dollars for coming out to
the house plus x dollars per hour that he works
at the house. He charged $97 for a one-hour
repair job and $265 for a five-hour repair job.
What is his charge for a two-hour repair job?
Show that it is $139.
**Informal Proof (Minerva 62B):**
Let N be the number of dollars charged for
coming out to the house and let x be the number of dollars per hour that Mr. Wrench works.
For the one-hour repair job, we have that the
total cost is N + x = 97.
For the five-hour repair job, we have that the
total cost is N + 5x = 265.
We subtract the first equation from the second
equation to obtain 4x = 168 ⇒ _x = 42._
Plugging this back into the first equation, we
obtain that N = 55.
Therefore, for a two-hour repair job, the total
cost is 55 + 2x = 55 + 2(42) = $139 .
**Formal Proof:**
**theorem mathd_algebra_160:**
**fixes n x ::real**
**assumes "n + x = 97"**
**and "n + 5 * x = 265"**
**shows "n + 2 * x = 139"**
**proof -**
**(* We subtract the first equation**
**from the second equation to obtain**
**4x = 168 ⇒** **x = 42. *)**
**have "4 * x = 168"**
**using assms <ATP> by auto </ATP>**
then have "x = 42"
_<ATP> by auto </ATP>_
**(* Plugging this back into**
**the first equation, we obtain that**
**N = 55. *)**
then have "n = 55"
**using assms <ATP> by auto </ATP>**
**(* Therefore, for a two-hour repair**
**job, the total cost is**
**55 + 2x = 55 + 2(42) = $139. *)**
then show ?thesis
_<ATP> by (smt (z3) ⟨x = 42⟩) </ATP>_
**qed**
Figure 10: Three Types of Minerva proofs: correct proof (left), incorrect proof (right top), nonsensical
**proof (right bottom) In the correct Minerva proof, the formal sketch is structured based on the informal proof**
and steps are well-aligned. In the incorrect Minerva proof, the step ”This is true for n = 1” is corrected by Codex
in the formal sketch to ”case 0” which starts the base case with n = 0 since natural numbers include 0. This
is an explicit correction made by Codex and makes a slightly incorrect Minerva proof formalized successfully.
Lastly, the meaningless proof contains only a single statement without any calculation or justification. However,
Codex also chooses to directly show the statement without any calculation. This suggests that the problem itself
could be considered simple by Codex.
-----
| [
"Guillaume, Lample",
"Sean, Welleck",
"Jiacheng, Liu",
"Albert Q., Jiang",
"Wenda, Li",
"Yuhuai, Wu",
"Jin Peng, Zhou",
"Timothée, Lacroix",
"Mateja, Jamnik"
] | 2022-11-07T00:00:00 | ICLR 2023 | true | 99 | 17 | [
"Isabelle"
] | http://arxiv.org/abs/2210.12283 | https://arxiv.org/abs/2210.12283 | https://www.semanticscholar.org/paper/7de36d6b14aadc8cdb6ad1340b9ca64b15375bca |
Proof Artifact Co-Training for Theorem Proving with Language Models | Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce as such libraries require years of concentrated effort by human specialists to be built. This is particularly challenging when applying large Transformer language models to tactic prediction, because the scaling of performance with respect to model size is quickly disrupted in the data-scarce, easily-overfitted regime. We propose PACT (Proof Artifact Co-Training), a general methodology for extracting abundant self-supervised data from kernel-level proof terms for joint training alongside the usual tactic prediction objective. We apply this methodology to Lean,an interactive proof assistant which hosts some of the most sophisticated formalized mathematics to date. We instrument Lean with a neural theorem prover driven by a Transformer language model and show that PACT improves theorem proving success rate on a held-out suite of test theorems from 32% to 48%. | PACT is proposed, a general methodology for extracting abundant self-supervised data from kernel-level proof terms for co-training alongside the usual tactic prediction objective and applied to Lean, an interactive proof assistant which hosts some of the most sophisticated formalized mathematics to date. | ## PROOF ARTIFACT CO-TRAINING FOR THEOREM PROV### ING WITH LANGUAGE MODELS
**Jesse Michael Han**
University of Pittsburgh
OpenAI
**Jason Rute** **Yuhuai Wu**
IBM Research[∗] Google Research
Stanford University[†]
**Edward W. Ayers** **Stanislas Polu**
Carnegie Mellon University[‡] OpenAI
ABSTRACT
Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce, as such libraries require years of concentrated effort
by human specialists to be built. This is particularly challenging when applying
large Transformer language models to tactic prediction, because the scaling of
performance with respect to model size is quickly disrupted in the data-scarce,
easily-overfitted regime. We propose PACT (Proof Artifact Co-Training), a general
methodology for extracting abundant self-supervised data from kernel-level proof
terms for joint training alongside the usual tactic prediction objective. We apply
this methodology to Lean, a proof assistant host to some of the most sophisticated
formalized mathematics to date. We instrument Lean with a neural theorem prover
driven by a Transformer language model and show that PACT improves theorem
proving success rate on a held-out suite of test theorems from 32% to 48%.
1 INTRODUCTION
Deep learning-driven automated theorem proving in large libraries of formalized mathematics (henceforth “neural theorem proving”) has been the focus of increased attention in recent years. Labeled
data for imitation learning of theorem proving is scarce—formalization is notoriously labor-intensive,
with an estimated cost of 2.5 man-years per megabyte of formalized mathematics (Wiedijk, 2000),
and complex projects require years of labor from human specialists. Within a fixed corpus of (possibly
unproven) theorem statements, it is possible to augment a seed dataset of human proofs with new
successful trajectories using reinforcement learning or expert iteration. However, for some large
models this can be quite computationally intensive, and without a way to expand the curriculum of
theorems, the agent will inevitably saturate and suffer from data starvation.
Data scarcity is a particularly thorny obstruction for applying large language models (LLMs) to
neural theorem proving. LLMs have achieved spectacular success in data-rich regimes such as plain
text (Brown et al., 2020), images (Dosovitskiy et al., 2021), and joint text-image modeling (Radford
et al., 2021), and the performance of decoder-only Transformers has been empirically shown to
obey scaling power laws in model and data size (Henighan et al., 2020). However, existing datasets
of human proof steps for neural theorem proving are extremely small and exist at scales at which
overfitting occurs extremely rapidly, disrupting the scaling of performance with respect to model
size (Kaplan et al., 2020).
We make two contributions towards addressing the problem of data scarcity in the context of formal
mathematics. First, we introduce PACT (Proof Artifact Co-Training), a general methodology for
extracting self-supervised auxiliary tasks for jointly training a language model alongside a tactic
prediction objective for interactive theorem proving. Second, we present LEANSTEP, a collection of
_∗Work performed while Jason Rute was at CIBO Technologies._
_†Work performed while Yuhuai Wu was at University of Toronto._
_‡Work performed while Edward W. Ayers was at University of Cambridge._
-----
datasets and a machine learning environment for the Lean 3 theorem prover with support for PACT,
supervised learning of tactic prediction, theorem proving evaluation, and reinforcement learning.
We train large language models on these data and demonstrate that PACT significantly improves
theorem proving success rate on a held-out suite of test theorems, from 32% to 48%. We then embark
on a careful study of the effects of pre-training vs. co-training and show that PACT combined with
_WebMath pre-training (Polu & Sutskever, 2020) achieves the best validation loss and theorem proving_
success rate. Finally, on an out-of-distribution collection of thousands of theorems (some involving
novel definitions) added to Lean’s mathematical library after we extracted our train/test data, we
achieve a theorem proving success rate of 37%, suggesting strong generalization and usefulness at
the frontier of formalized mathematics.
2 BACKGROUND AND RELATED WORK
LEAN Lean is an interactive theorem prover and functional programming language (de Moura et al.,
2015). It has an extremely active community and is host to some of the most sophisticated formalized
mathematics in the world, including scheme theory (Buzzard et al., 2021), forcing (Han & van Doorn,
2020), perfectoid spaces (Buzzard et al., 2020), and condensed mathematics (Scholze, 2020). Lean’s
foundational logic is a dependent type theory called the calculus of inductive constructions (Pfenning
& Paulin-Mohring, 1989). This design means that terms, types and proofs are all represented with a
single datatype called an expression. A proof term is a Lean expression whose type is a proposition,
_i.e. a theorem. This proof term serves as a checkable artifact for verifying the proposition. Lean uses a_
small, trusted kernel to verify proof terms. The primary repository of formalized mathematics in Lean
is mathlib (mathlib, 2020). At the time of writing, 140 contributors have added almost 500,000
lines of code; mathlib contains over 46,000 formalized lemmas backed by over 21,000 definitions,
covering topics such as algebraic geometry, computability, measure theory, and category theory. The
range of topics and the monolithic, unified organization of mathlib make it an excellent foundation
for a neural theorem proving dataset.
MACHINE LEARNING IN INTERACTIVE THEOREM PROVING In a tactic-based interactive theorem
prover (ITP) such as Lean, a proof is a list of tactics, i.e. small proof-term-generating programs.
Tactics can be simple one-word commands, e.g. refl, or be composed of many nested parts, e.g.
simpa [le_antisymm_iff, norm_nonneg] using @norm_eq_zero α _ g
Here the brackets enclose a list of simplifier rules (which often are just lemmas from the library), and
@norm_eq_zero α _ g is a proof term applying the lemma norm_eq_zero to the local variables
_α and g._
Other ML and neural theorem provers for tactic-based ITPs take one of two approaches to tactic
generation. TacticToe (Gauthier et al., 2021) for HOL4 and Tactician (Blaauwbroek et al., 2020)
for Coq use k-NN to select similar tactics in the training set and apply modifications to the result,
_e.g. swapping the tactic variables with those found in the local context. HOList/DeepHOL (Bansal_
et al., 2019b;a; Paliwal et al., 2020) for HOL Light; TacticZero (Wu et al., 2021a) for HOL4; and
CoqGym/ASTactic (Yang & Deng, 2019) and ProverBot9001 (Sanchez-Stern et al., 2020) for Coq
hard-code the DSL for every tactic command. The model chooses a tactic command, and then fills in
the tactic arguments using specialized argument selectors (such as a lemma selector, a local hypothesis
selector, and/or a variable selector). None of these selectors currently synthesize arbitrary terms. This
prevents the tactic synthesis from constructing tactics with proof terms, such as @norm_eq_zero α
_ g, or directly proving an existential, e.g. ∃ (x : R), x + 3 = 0, by supplying the witnessing
term -3.
Directly applying generative language modeling to tactic generation allows this setup to be considerably simplified. Our tactic generator is able to synthesize tactics of any form found in mathlib
including, for example, the simpa example above as a one line proof to a test theorem, even though
the string @norm_eq_zero does not occur in our dataset. (See more examples in Appendix D.) We
leave as future work the possibility of re-integrating specialized components, e.g. lemma selection,
found in other works (possibly as, say, a source of additional prompts for the language model).
Language models have also been explored in the first-order ITP Mizar for conjecturing and proof
synthesis (Urban & Jakubuv, 2020). While their work shows the promise of such approaches,
-----
is not intended as a complete end-to-end theorem prover. For Metamath, which does not use
tactics, language modeling approaches have been quite successful. Holophrasm (Whalen, 2016),
MetaGen (Wang & Deng, 2020), and GPT-f (Polu & Sutskever, 2020) all use RNNs or Transformers
to generate proof steps. Indeed, our paper builds on the work of Metamath GPT-f (Polu & Sutskever,
2020) (MM GPT-f). Whereas MM GPT-f trained primarily on the Metamath proof step objective
(i.e. guessing the next lemma to be applied to a goal, which is similar to our NEXTLEMMA task
in Section 3.2), we co-train on a diverse suite of self-supervised tasks extracted from Lean proof
terms and demonstrate significant improvements in theorem proving performance when doing so.
This is our main result.
REASONING WITH TRANSFORMERS Besides theorem proving, a number of recent papers have
shown that language models, especially Transformers, are capable of something like mathematical
and logical reasoning in integration (Lample & Charton, 2020), differential equations (Charton et al.,
2021), Boolean satisfiability (Hahn et al., 2021), and inferring missing proof steps (Li et al., 2021).
A closely-related vein of work has shown that pre-training Transformers on data engineered to reflect
inductive biases conducive to mathematical reasoning is beneficial for downstream mathematical
reasoning tasks (Rabe et al., 2021; Wu et al., 2021b). Our work both builds on and departs from
these ideas in several ways. Unlike skip-tree training (Rabe et al., 2021), which focuses solely on
predicting masked subterms of theorem statements, PACT derives its self-supervised training data
from far more complex proofs. Unlike LIME (Wu et al., 2021b), which uses purely synthetic data and
is presented as a pre-training methodology, our self-supervised tasks are extracted from non-synthetic
human proofs. Moreover, we show that not only are Transformers capable of performing well on
auxiliary tasks gathered from low-level proof artifact data, but that we can directly leverage this
low-level data by jointly training a language model to greatly improve its performance at high-level
theorem proving.
MACHINE LEARNING WITH PROOF ARTIFACTS The idea of mining low-level proof artifacts was
previously explored by Kaliszyk and Urban in the context of automated lemma extraction (Kaliszyk
& Urban, 2015b; Kaliszyk et al., 2015). It has also been previously observed that training on fully
elaborated Coq terms (Nie et al., 2020) helps with a downstream theorem naming task. However,
similar to previous work on skip-tree training, their dataset focuses solely on theorem statements,
_i.e. types, does not cover the far more complex proof terms, and does not evaluate the effect of such_
training on theorem proving evaluations.
While there exist environments and datasets for other formal mathematics libraries (Kaliszyk et al.,
2017; Li et al., 2021; Huang et al., 2019; Kaliszyk & Urban, 2015a), LEANSTEP is the first and only
tactic proof dataset for the Lean theorem prover. This makes available a large set of formal mathematical data to researchers covering a diverse and deep spectrum of pure mathematics. Moreover,
LEANSTEP is unique in that it contains both high-level human-written tactics as well as kernel-level
proof terms, which enables the extraction of self-supervised tasks for PACT (Section 3.2).
3 THE LEANSTEP DATASETS AND MACHINE LEARNING ENVIRONMENT
3.1 HUMAN TACTIC PROOF STEPS
Tactics in Lean are metaprograms (Ebner et al., 2017), which can construct Lean expressions, such as
proof terms. A tactic state which tracks the list of open goals and other metadata (like the partial
proof term constructed so far) is threaded through each tactic invocation. Lean has special support for
treating tactics as an extensible domain-specific language (DSL); this DSL is how Lean is typically
used as an interactive theorem prover. The DSL amounts to a linear chain of comma-separated
invocations. The Lean proof step task is to predict the next tactic given this goal state. We refer the
reader to Appendix A for examples and further explanation.
Our human tactic proof step dataset consists of source-target pairs of strings, one for each tactic
invocation in the Lean core library and in mathlib. The source string is the pretty-printed tactic
state. The target string is the tactic invocation as entered by a human author of the source code. This
data is gathered by hooking into the Lean parser and Lean’s compilation process. We refer to the task
of predicting the next human tactic proof step given a tactic state as the proofstep objective.
-----
3.2 PROOF ARTIFACT CO-TRAINING
In this section, we describe the PACT task suite and how data for these tasks are extracted.
For every proof term τ, we record the type Γ of τ, its name nm, and a list ps of all premises (i.e.
named references to other lemmas in the library) which are used in τ . We then recurse through
_τ_, tracking a list bs of bound variables which we update whenever navigating into the body of a
_λ-expression. At every sub-term τ_ _[′]_ _⊆_ _τ we record τ_ _[′], its type Γ[′], the current state of bs, and the_
following data:
1. A tactic state, where the goal is set to be Γ[′] and the list of hypotheses in the local context is
set to be the list bs, i.e. those bound variables in scope at τ _[′]._
2. A partial proof term, i.e. τ with τ _[′]_ masked out.
3. A premise selection bitmask, i.e. Boolean labels for every p in ps indicating whether p is
used in τ _[′]._
4. A local context bitmask, i.e. similar Boolean labels for every b in bs indicating whether b
is used in τ _[′]._
5. An optional next lemma: if the first step of τ _[′]_ is to apply a premise p in ps, we record p.
Whenever we record a term, we record both pretty-printed and far more explicit fully elaborated
versions of it. The fully elaborated terms explicitly display enormous amounts of type information
which are usually silently inferred by Lean. From these data, we assemble the following language
modeling tasks:
1. Next lemma prediction. Given the tactic state, predict the next lemma to be applied.
2. Proof term prediction. Given the tactic state, predict the entire proof term τ _[′]._
3. Skip-proof. Given the partial proof term, predict the masked subterm τ _[′]._
4. Type prediction. Given the partial proof term, predict the type Γ[′] of the masked subterm
_τ_ _[′]._
5. Tactic state elaboration. Given the tactic state, predict the fully elaborated tactic state.
6. Proof term elaboration. Given τ, predict the fully elaborated version of τ .
7. Premise classification. Given the tactic state and a premise p ∈ ps, predict either <TRUE>
or <FALSE> according to the premise selection bitmask.
8. Local context classification. Given the tactic state (which consists of a list of local assumptions bs and the goal Γ[′]), predict the sublist of bs which is true on the local context
bitmask.
9. Theorem naming. Given the type Γ of the top-level proof term τ, predict the name nm.
We remark that our next lemma prediction task is precisely the low-level PROOFSTEP objective
studied in (Polu & Sutskever, 2020), and our skip-proof task superficially resembles, but is much
more difficult than the skip-tree task studied in (Rabe et al., 2021), as proof terms tend to be far more
complex than the syntax trees of theorem statements.
3.3 THE LEANSTEP MACHINE LEARNING ENVIRONMENT
We instrument Lean for automatic theorem proving with a language model, including utilities for
(1) setting the runtime environment at a particular theorem (ensuring proofs are never circular), (2)
serializing the tactic state as environment observations for a theorem-proving agent, (3) exposing
Lean’s parser to re-parse strings emitted by a language model into tactic invocations, and (4) executing
and capturing the results of the re-parsed tactics, enabling the recording of trajectories for expert
iteration and reinforcement learning.
In addition to this general instrumentation, we implement a generic best-first search algorithm for
theorem proving; it forms the basis for our evaluations and is written entirely in Lean itself. The
algorithm is parametrized by an oracle (Ω : tactic_state → list (string × float))
that accepts a tactic state and returns a list of strings and heuristic scores. The search is controlled
-----
by a priority queue of search nodes, which consist of a tactic state (i.e. a partial proof) and search
metadata. In the outer loop of the algorithm—which continues until either the theorem is completely
proved (i.e. no goals are remaining on the current node), the priority queue is empty (i.e. the search
has failed), or a pre-set timeout or budget of iterations is exceeded—we pop a node off the queue,
serialize the associated tactic state and use it to query the oracle, producing a list of candidates
cs : list (string × float). We then loop over the candidates cs to produce a list of new
search nodes, by re-parsing each string into a tactic and adding a new node if the parsed tactic
advances the proof without raising errors. These new search nodes are then re-inserted into the
queue in order of decreasing priority and the search continues. We optionally constrain the search
by enforcing maximum width and depth limits wmax and dmax that guard insertion into the queue.
When considering nodes for insertion, any node whose depth exceeds dmax is ignored, and all
nodes are ignored if the queue size is strictly larger than wmax. Due to the flexibility in assigning
heuristic scores and in choosing the maximum width and depth hyperparameters, our algorithm is
quite general—for example, it reduces to (1) a greedy depth-first search when wmax = 0, and (2) a
naïve breadth-first search when heuristic scores are identical and wmax = dmax = ∞.
4 EXPERIMENTS
TRAINING In all of our experiments, we use decoder-only Transformers similar to GPT-3 (Brown
et al., 2020). Unless mentioned otherwise, all of our models have 24 layers with dmodel = 1536 and
24 heads, accruing to 837M trainable parameters. They are also pre-trained on WebMath (Polu &
Sutskever, 2020) for 72B tokens. We use the standard BPE encoding (Brown et al., 2020), a batch
size of 512 and a learning rate of 0.00025 with a cosine schedule and a 100-step ramp-up.
We use an 80-5-15 train-validation-test split. We split all datapoints deterministically by theorem
_name, by hashing each name to a float in (0, 1). This ensures, for example, that proof steps used to_
prove a test theorem never appear in the training data and vice-versa.
When fine-tuning a model we load its saved parameters but re-initialize the optimizer. We start each
training for a fixed number of tokens (defining the cosine schedule) and record the number of tokens
consumed as we reach a minimal validation loss. We use the minimum validation loss snapshot to
evaluate each model on our held-out test set.
We partition our datasets into three groups:
1. tactic: the dataset described in Section 3.1.
2. mix1: the union of the PACT tasks next lemma prediction and proof term predic**tion (Section 3.2), selected because of their close relation to tactic.**
3. mix2: all other datasets described in Section 3.2.
This grouping is motivated by the impossibility to ablate each dataset separately given our compute
budget. They nonetheless enable us to study the effect of tasks that are very close to the tactic
objective in comparison to others. Our choice of next lemma prediction and proof term prediction
for mix1 is motivated by the observation that these tasks are closely related to the theorem proving
objective: a proof can be given entirely in terms of a sequence of lemmas to apply (as in Metamath),
or the proof can be finished in one step by supplying the entire proof term. Despite their logical
similarity to the PROOFSTEP objective, we nevertheless use different keywords in the prompt to the
model to disambiguate (NEXTLEMMA and PROOFTERM) from (PROOFSTEP) because the data is
noisy and represents a significant distribution shift: during pretty-printing, subtrees of proof terms
beyond a certain depth are dropped entirely, there is generally no guarantee that they can be re-parsed,
and the data is much more verbose than what humans typically supply in source code.
THEOREM PROVING EVALUATION We run theorem-proving evaluations on our held-out test
set, comprising 3071 theorems. Since the split was conducted by theorem name, the proofs of these
theorems never appear in the training data. For each theorem in the test set, we set the runtime
environment to the location where the theorem is proved in the source code, preventing the use of
theorems defined later in mathlib and ensuring that we never derive circular proofs. We compare
against existing proof automation In Lean by also evaluating the tactics refl, which attempts to
prove statements via definitional equality, and tidy, which conducts a greedy depth-first search
-----
tactic
**tactic proof steps** GOAL <TacticState> PROOFSTEP <Tactic>
mix1
**next lemma prediction** GOAL <TacticState> NEXTLEMMA apply (<NextLemma>)
**proof term prediction** GOAL <TacticState> PROOFTERM exact (<ProofTerm>)
mix2
**skip proof** RESULT <MaskedProofTerm> SKIPPROOF <ProofTerm>
**type prediction** RESULT <MaskedProofTerm> PREDICTTYPE <Type>
**tactic state elaboration** GOAL <TacticState> ELABGOAL <ElaboratedTacticState>
**proof term elaboration** PROOFTERM <ProofTerm> ELABPROOFTERM <ElaboratedProofTerm>
**premise classification** GOAL <TacticState> CLASSIFYPREMISE <Premise> <True|False>
**local context classification** GOAL <TacticState> CLASSIFYLOCALS <LocalsList>
**theorem naming** TYPE <Type> NAME <Name>
Figure 1: Auto-regressive objectives used for each task described in Section 3. Placeholders represented with brackets (such as <TacticState>) are substituted by the context-completion pairs
from each datasets in the prompts above. Each task is presented to the model with its respective keyword (PROOFSTEP, NEXTLEMMA,...). We wrap the completions of mix1 tasks (with apply(...)
and exact(...) respectively) as a hint that they are related to the respective Lean tactics; this is
not directly possible for the other tasks.
using a fixed list of tactics at each step. We re-implement tidy as a special case of our best-first
search algorithm using an oracle which always emits the same list of tactics, and so henceforth
refer to it as tidy-bfs. In all of our experiments, we use a maximum width of wmax = 16, a
maximum depth of dmax = 128, a maximum budget of 512 iterations of the outer loop, a timeout of
5 seconds per tactic execution, and a global timeout of 600 seconds per theorem. Because sampling
completions from our models is much slower (≈ 1 second) than querying the constant tidy-bfs
oracle (instantaneous), the tidy-bfs search runs many more iterations than gptf before timeout.
We report the pass-rate (i.e. percentage of theorems proved) from the randomly-chosen held-out test
set, following (Whalen, 2016), (Bansal et al., 2019c), and others. We provide an alternative pass-rate
at the end of this section, using theorems added to mathlib after our dataset was collected. We
average over three evaluation runs when reporting the pass rate.
EFFECT OF CO-TRAINING VS PRE-TRAINING We first study the effects of pre-training versus
co-training with the mix1 and mix2 datasets. We pre-train using the methodology described above
(potentially pre-training first on WebMath, and then on a PACT dataset in sequence). For co-training,
we simply concatenate and shuffle the datasets together without applying any particular weight to a
given dataset.
The main results are presented in Figure 2. Pre-training exhibits an effective transfer from mix-1
and/or mix-2 but the best result is achieved by co-training with both these datasets. With this
setup, we are able to train for much longer (71B tokens vs 22B+18B for the best pre-training setup)
before overfitting on the PROOFSTEP task. We hypothesize that PACT regularizes overfitting to
the PROOFSTEP task while still imparting useful knowledge to the model due to large amounts of
mutual information, and that this is the main driver of increased performance.
ABLATING WEBMATH PRE-TRAINING Next, we ablate the effect of WebMath pre-training (instead
starting with a model pre-trained on the same English language mix as GPT-3). As expected, cotrained models suffer from a performance drop without Webmath pretraining. but we were more
interested in measuring the effect on pre-trained models on mix-1 and mix-2, as they may not
benefit from WebMath as much due to the two successive pre-training steps.
We report the optimal validation losses in Figure 3. WebMath appears as substantially beneficial
even in the sequential pre-training setup. This indicates that PACT is not a replacement for WebMath
pre-training, but rather a complementary method for enhancing the performance of language models
for theorem proving.
-----
Tokens
Model elapsed mix1 mix2 tactic Pass-rate
_Baselines_
refl 1.1%
tidy-bfs 9.9%
WebMath > tactic 1B 1.02 32.2%
_Pre-training_
WebMath > mix1 11B _0.08_
WebMath > mix2 16B _0.08_
WebMath > mix1 + mix2 22B _0.11_ _0.08_
WebMath > mix1 > tactic 1B 1.00 39.8%
WebMath > mix1 + mix2 > tactic 1B 0.97 44.0%
_Co-training (PACT)_
WebMath > mix1 + tactic 18B _0.08_ 0.94 40.0%
WebMath > mix2 + tactic 75B _0.09_ 0.93 46.0%
WebMath > mix1 + mix2 + tactic 71B _0.09_ _0.09_ **0.91** **48.4%**
_Pre-training and co-training_
WebMath > mix2 > mix1 + tactic 18B _0.08_ 0.93 46.9%
Figure 2: Comparison of pre-training and co-training on mix-1 and mix-2. > denotes a pre-training
step and + denotes a co-training. As an example, WebMath > mix2 > mix1 + tactic
signifies a model successively pre-trained on WebMath then mix2 and finally co-trained as a finetuning step on mix1 and tactic. Columns mix1, mix2, tactic report the optimal validation
loss achieved on these respective datasets. We provide a detailed description of experiment runtime
and computing infrastructure in Appendix B.
Tokens Tokens
Model budget elapsed mix1 mix2 tactic Pass-rate[†]
_Baselines_
tactic 32B 1B 1.59 —
_Pre-training_
mix1 32B 20B _0.12_
mix2 32B 25B _0.10_
mix1 + mix2 32B 27B _0.13_ _0.10_
mix1 > tactic 32B 1B 1.26 —
mix1 + mix2 > tactic 32B 1B 1.16 —
_Co-training_
mix1 + tactic 32B 27B _0.11_ 1.12 —
mix2 + tactic 96B 75B _0.10_ **1.02** 40.4%
mix1 + mix2 + tactic 96B 71B _0.10_ _0.11_ 1.07 —
_Pre-training and co-training_
mix2 > mix1 + tactic 32B 26B _0.11_ 1.09 —
Figure 3: Validation losses achieved in the pre-training and co-training setups without WebMath
pre-training. See Figure 2 for a description of the columns and the models nomenclature used. _[†]Due_
to technical constraints, we are unable to provide pass-rates for some of the models.
ABLATING REGULARIZATION We rule out the possibility that the benefits from PACT come from
simply regularizing our models on the scarce tactic data alone. We checked that a WebMath >
tactic model trained with 15% residual dropout achieved a minimum validation loss of 1.01 and
33.6% pass rate, far below the 48.4% PACT pass rate.
-----
Tokens Tokens
Model budget elapsed mix1 mix2 tactic Pass-rate
121m 96B 82B _0.13_ _0.10_ 1.23 35.1%
163m 96B 80B _0.12_ _0.09_ 1.11 39.8%
837m 96B 71B _0.09_ _0.09_ **0.91** **48.4%**
Figure 4: Validation losses and pass-rates achieved for various model sizes using PACT. See Figure 2
for a description of the columns. The setup used is WebMath > mix1 + mix2 + tactic.
EFFECT OF MODEL SIZE Finally, we study how performance scales with respect to model size. We
use the best training setup reported in Figure 2, WebMath > mix1 + mix2 + tactic. The
837m model is our main model. The 163m and 121m models respectively have 12 and 6 layers,
with dmodel = 768. The learning rates are respectively adjusted to 0.0014 and 0.0016.
As demonstrated by Figure 4, performance is highly correlated with model size, with larger models
generally achieving better generalization even in the overfitted regime. We leave as future work a
careful study of how evaluation performance is affected when scaling to multi-billion parameter
models, as well as the feasibility of deploying them for interactive use by Lean users.
TIME-STRATIFIED EVALUATION In the 5 week period that separated our last dataset extraction
and the writing of this paper, mathlib grew by 30K lines of code, adding 2807 new theorems.
Evaluating our models on these new theorem statements gives a unique way to assess their capability
to assist humans in formalizing proofs and to test their generalization to completely unseen theorems
and definitions. This evaluation set also addresses one of the weaknesses of using a random split
of theorems from a formal mathematics library, namely that the split is non-chronological; e.g. test
theorems can appear as lemmas in proofs of train theorems.
We call this temporally held-out test set future-mathlib and evaluate our best model as well
as the refl and tidy-bfs baselines on it. In contrast to evaluation on our test split, the refl
baseline (simply attempting a proof by the refl tactic) closes 328 proofs (11.6%), demonstrating
an important skew towards trivial boilerplate lemmas generally defined to provide alternate interfaces to new definitions. The tidy-bfs baseline closed 611 proofs (21.8%), and our best model
wm-tt-m1-m2 closed 1043 proofs (37.1%), proving 94% of the refl lemmas. We attribute the
weaker performance to heavy distribution shift: by the nature of the dataset, the future-mathlib
theorems frequently involve new definitions and concepts which the model was never exposed to
during training. Nevertheless, the success rate remains high enough to suggest strong generalization
and usefulness at the frontier of formalized mathematics.
5 DISCUSSION
CHAINED TACTIC PREDICTIONS In Lean, multiple tactic commands can be chained together using
semicolons. Our data pipeline treats these tactic chains as a single sequence in our training data, and
they are occasionally predicted by the model. Such chained tactic applications are difficult for human
formalizers to synthesize on their own, as they require reasoning about the semantics of multiple
tactics in sequence and their effects on the tactic state, and the examples present in the training data
are usually optimized by hand from longer, less succinct proofs. We observed that PACT significantly
boosts the capability of our models to successfully predict longer chained tactic applications. This
occurs despite the fact that the tactic chaining idiom is specific to the tactic proofstep dataset and
does not appear in the PACT data whatsoever. We supply more detail in Appendix C.1.
THEOREM NAMING We also evaluate our best PACT model (wm-to-tt-m1-m2) on the theorem
naming task, using the theorem statements and human-supplied names from the future-mathlib
evaluation set. It achieved 20% acc@1, 27% acc@10, and 30% acc@16. An inspection of its outputs
reveals that even when its predictions diverge from the ground truth, they are often idiomatic and
semantically correct alternatives. We supply more detail in Appendix C.2.
-----
IMPACT ON LEAN COMMUNITY Lean’s mathlib (mathlib, 2020) is a rapidly growing open
source library of formal mathematics which has grown considerably in size each year for the past
four years.[1] Our work has been welcomed by members of this community, with Lean power users
describing some of the new proofs found by GPT-f as “nontrivial” and “clever”. More than one-third
of the proofs found by our models are shorter and produce smaller proof terms (sometimes by several
orders of magnitude) than the ground truth. Manually inspecting a small, non-cherry picked sample
of these shorter proofs has led to 19 GPT-f co-authored commits to mathlib, some of which reduce
proof term sizes and theorem compilation times by an order of magnitude (see Appendix D).
POTENTIAL SOCIETAL IMPACT Strong automated reasoning systems have enormous potential
impact for mathematical research and scientific progress in other disciplines. The methods that we
discuss in this paper could accelerate the development of strong automated reasoning systems. We
have also observed that our language models absorb stylistic biases from their training data which
could be amplified via reinforcement learning. However, since we focus on mathematics codified in
proof assistants, we believe that there is little immediate negative societal impact from our work.
FUTURE DIRECTIONS There are many elaborations on the training data, training methodology, and
tree search wrapping lean-gptf which can be reasonably expected to improve its performance
at theorem proving. Our dataset can be synthetically augmented using similar methods as (Polu &
Sutskever, 2020). Our dataset could be cleaned further, and proofs minimized. Merely making the
decoded rewrites robust by only using the largest prefix of successful rewrites significantly boosts the
success rate of suggested rewrites. In a similar vein, predicted lemmas generated as arguments to
unsuccessful tactic applications could be cached and re-used as hints for an intermittently-queried
hammer. The increased success rate of chained tactic predictions mentioned above shows the
feasibility of having language models perform multiple reasoning steps in a single query, potentially
improving the efficiency of the proof search. From the experiments described in Section 4, it is clear
that the composition of the dataset used for co-training significantly affects performance on theorem
proving. Although we uniformly sampled across all co-training tasks, it would be interesting to
optimize a dynamic mixture schedule, perhaps annealing towards a desired task.
CONCLUSION There is a sense in which PACT is merely an application of the well known principle
that compute in the form of search should be exchanged for training signal whenever possible. In
Lean, typeclass inference relies on a backtracking Prolog-style search; the elaborator performs search
to disambiguate overloaded notation and infer types; Lean tactics have complex semantics precisely
because they can perform search to find subproofs automatically. The work done by these subroutines
is preserved in the proof artifacts, and PACT can be viewed as a way of extracting this information
offline for more training signal.
We have presented PACT as a way of addressing the data scarcity issue for learning theorem proving
from human tactic scripts in proof assistant libraries. Another well-studied solution for this is expert
iteration and reinforcement learning. In the setting of HOL Light, and under the assumption of a
hardcoded finite action space of tactics, Bansal et al. (2019a) in conjunction with supervised seed
data was able to achieve up to 70% proof success rate on the HOList theorem proving task. Similarly,
in a set-up much closer to ours, MM GPT-f demonstrated the feasibility of expert iteration when
using generative language models for theorem proving.
Within a fixed corpus of theorems (and hence proof terms), however, both PACT and RL are
fundamentally constrained by a lack of exploration—as the performance of the theorem proving
agent improves, it will eventually saturate and become starved for data, and its curriculum will need
to be expanded. Although self-supervised methods such as PACT represent a way to significantly
improve the data-efficiency of reinforcement learning loops over existing theorem prover libraries, the
development of continuously self-improving and infinitely scalable neural theorem provers remains
contingent on sufficiently powerful exploration and automated curriculum generation; we consider
these challenges to be of paramount importance.
[1See https://leanprover-community.github.io/mathlib_stats.html for up-to-date](https://leanprover-community.github.io/mathlib_stats.html)
statistics on mathlib’s size and growth over time.
-----
6 ACKNOWLEDGMENTS
We thank the members of the Lean community, in particular Kevin Buzzard, Simon Hudon, Johan
Commelin, Mario Carneiro, Bhavik Mehta, and Gabriel Ebner for their valuable feedback on our
work. We are indebted to Markus Rabe and Christian Szegedy for many hours of helpful discussion.
We also thank Daniel Selsam, Tom Hales, and Josef Urban for feedback on earlier drafts of this
paper.
7 REPRODUCIBILITY STATEMENT
The source code used to generate the Lean datasets and run the evaluation is open source and made
available in the following repositories:
**Lean theorem proving environment :**
[https://github.com/jesse-michael-han/lean-tpe-public](https://github.com/jesse-michael-han/lean-tpe-public)
**Tactic step data pipeline :**
[https://github.com/jasonrute/lean_proof_recording](https://github.com/jasonrute/lean_proof_recording)
**PACT data pipeline :**
[https://github.com/jesse-michael-han/lean-step-public](https://github.com/jesse-michael-han/lean-step-public)
Our Transformer model was pre-trained on two proprietary datasets. The first is the same mix used
by GPT-3 (Brown et al., 2020) and the second is WebMath (Polu & Sutskever, 2020). More details
can be found in Appendix B.
While our weights and the API through which we query our models are not currently public, techniques for training decoder-only transformers and efficiently performing inference with them are
well-known. Our released theorem proving code is agnostic to these implementation details and will
work with any language model exposed via an HTTP server. The provided code also supports querying a locally hosted Transformer from the open-source library fairseq via the Fairseq CLI (Ott
et al., 2019).
We have released a simplified version of the proof search described in Section 3.3 as a tactic to
the Lean community in a public beta, opening the way for our models to directly accelerate the
development of formalized mathematics and for human experts to provide feedback and additional
[training signal in a virtuous cycle. The tactic and code are available at https://github.com/](https://github.com/jesse-michael-han/lean-gptf)
[jesse-michael-han/lean-gptf, and users who sign up for the beta are granted access to](https://github.com/jesse-michael-han/lean-gptf)
our Transformer model through an API.
REFERENCES
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, and Christian Szegedy. Learning to reason in large
[theories without imitation. CoRR, abs/1905.10501, 2019a. URL http://arxiv.org/abs/](http://arxiv.org/abs/1905.10501)
[1905.10501.](http://arxiv.org/abs/1905.10501)
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An
environment for machine learning of higher order logic theorem proving. In Kamalika Chaudhuri
and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine
_Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings_
_[of Machine Learning Research, pp. 454–463. PMLR, 2019b. URL http://proceedings.](http://proceedings.mlr.press/v97/bansal19a.html)_
[mlr.press/v97/bansal19a.html.](http://proceedings.mlr.press/v97/bansal19a.html)
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An
environment for machine learning of higher-order theorem proving (extended version). CoRR,
[abs/1904.03241, 2019c. URL http://arxiv.org/abs/1904.03241.](http://arxiv.org/abs/1904.03241)
Lasse Blaauwbroek, Josef Urban, and Herman Geuvers. The tactician - A seamless, interactive
tactic learner and prover for coq. In Christoph Benzmüller and Bruce R. Miller (eds.), Intelligent
_Computer Mathematics - 13th International Conference, CICM 2020, Bertinoro, Italy, July 26-_
_31, 2020, Proceedings, volume 12236 of Lecture Notes in Computer Science, pp. 271–277._
-----
[Springer, 2020. doi: 10.1007/978-3-030-53518-6\_17. URL https://doi.org/10.1007/](https://doi.org/10.1007/978-3-030-53518-6_17)
[978-3-030-53518-6_17.](https://doi.org/10.1007/978-3-030-53518-6_17)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler,
Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot
learners. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan,
and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual
_Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12,_
_[2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)_
[1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
Kevin Buzzard, Johan Commelin, and Patrick Massot. Formalising perfectoid spaces. In Jasmin Blanchette and Catalin Hritcu (eds.), Proceedings of the 9th ACM SIGPLAN Interna_tional Conference on Certified Programs and Proofs, CPP 2020, New Orleans, LA, USA,_
_January 20-21, 2020, pp. 299–312. ACM, 2020._ doi: 10.1145/3372885.3373830. URL
[https://doi.org/10.1145/3372885.3373830.](https://doi.org/10.1145/3372885.3373830)
Kevin Buzzard, Chris Hughes, Kenny Lau, Amelia Livingston, Ramon Fernández Mir, and Scott
Morrison. Schemes in lean. Experimental Mathematics, 0(0):1–9, 2021. doi: 10.1080/10586458.
[2021.1983489. URL https://doi.org/10.1080/10586458.2021.1983489.](https://doi.org/10.1080/10586458.2021.1983489)
François Charton, Amaury Hayat, and Guillaume Lample. Learning advanced mathematical
computations from examples. In 9th International Conference on Learning Representations,
_ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021._ [URL https:](https://openreview.net/forum?id=-gfhS00XfKj)
[//openreview.net/forum?id=-gfhS00XfKj.](https://openreview.net/forum?id=-gfhS00XfKj)
Leonardo Mendonça de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von
Raumer. The lean theorem prover (system description). In Amy P. Felty and Aart Middeldorp
(eds.), Automated Deduction - CADE-25 - 25th International Conference on Automated Deduction,
_Berlin, Germany, August 1-7, 2015, Proceedings, volume 9195 of Lecture Notes in Computer_
_[Science, pp. 378–388. Springer, 2015. doi: 10.1007/978-3-319-21401-6\_26. URL https:](https://doi.org/10.1007/978-3-319-21401-6_26)_
[//doi.org/10.1007/978-3-319-21401-6_26.](https://doi.org/10.1007/978-3-319-21401-6_26)
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas
Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit,
and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale.
In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria,
_[May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=YicbFdNTTy)_
[YicbFdNTTy.](https://openreview.net/forum?id=YicbFdNTTy)
Gabriel Ebner, Sebastian Ullrich, Jared Roesch, Jeremy Avigad, and Leonardo de Moura. A metaprogramming framework for formal verification. Proc. ACM Program. Lang., 1(ICFP):34:1–34:29,
[2017. doi: 10.1145/3110278. URL https://doi.org/10.1145/3110278.](https://doi.org/10.1145/3110278)
M. Ganesalingam and W. T. Gowers. A fully automatic theorem prover with human-style output.
_[J. Autom. Reason., 58(2):253–291, 2017. doi: 10.1007/s10817-016-9377-1. URL https:](https://doi.org/10.1007/s10817-016-9377-1)_
[//doi.org/10.1007/s10817-016-9377-1.](https://doi.org/10.1007/s10817-016-9377-1)
Thibault Gauthier and Cezary Kaliszyk. Sharing HOL4 and HOL light proof knowledge. In Martin
Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov (eds.), Logic for Programming,
_Artificial Intelligence, and Reasoning - 20th International Conference, LPAR-20 2015, Suva, Fiji,_
_November 24-28, 2015, Proceedings, volume 9450 of Lecture Notes in Computer Science, pp._
[372–386. Springer, 2015. doi: 10.1007/978-3-662-48899-7\_26. URL https://doi.org/](https://doi.org/10.1007/978-3-662-48899-7_26)
[10.1007/978-3-662-48899-7_26.](https://doi.org/10.1007/978-3-662-48899-7_26)
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. Tactictoe: Learning to prove with tactics. J. Autom. Reason., 65(2):257–286, 2021. doi: 10.1007/
[s10817-020-09580-x. URL https://doi.org/10.1007/s10817-020-09580-x.](https://doi.org/10.1007/s10817-020-09580-x)
-----
Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus Norman Rabe, and Bernd Finkbeiner.
Teaching temporal logics to neural networks. In 9th International Conference on Learning
_Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL_
[https://openreview.net/forum?id=dOcQK-f4byz.](https://openreview.net/forum?id=dOcQK-f4byz)
Jesse Michael Han and Floris van Doorn. A formal proof of the independence of the continuum
hypothesis. In Jasmin Blanchette and Catalin Hritcu (eds.), Proceedings of the 9th ACM SIGPLAN
_International Conference on Certified Programs and Proofs, CPP 2020, New Orleans, LA, USA,_
_[January 20-21, 2020, pp. 353–366. ACM, 2020. doi: 10.1145/3372885.3373826. URL https:](https://doi.org/10.1145/3372885.3373826)_
[//doi.org/10.1145/3372885.3373826.](https://doi.org/10.1145/3372885.3373826)
Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo
Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford,
Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. Scaling laws for autoregressive generative modeling. CoRR, abs/2010.14701, 2020. URL
[https://arxiv.org/abs/2010.14701.](https://arxiv.org/abs/2010.14701)
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. Gamepad: A learning environment
for theorem proving. In 7th International Conference on Learning Representations, ICLR 2019,
_[New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.](https://openreview.net/forum?id=r1xwKoR9Y7)_
[net/forum?id=r1xwKoR9Y7.](https://openreview.net/forum?id=r1xwKoR9Y7)
Cezary Kaliszyk and Josef Urban. Mizar 40 for mizar 40. _J. Autom. Reason., 55(3):245–_
256, 2015a. doi: 10.1007/s10817-015-9330-8. [URL https://doi.org/10.1007/](https://doi.org/10.1007/s10817-015-9330-8)
[s10817-015-9330-8.](https://doi.org/10.1007/s10817-015-9330-8)
Cezary Kaliszyk and Josef Urban. Learning-assisted theorem proving with millions of lemmas. J.
_[Symb. Comput., 69:109–128, 2015b. doi: 10.1016/j.jsc.2014.09.032. URL https://doi.org/](https://doi.org/10.1016/j.jsc.2014.09.032)_
[10.1016/j.jsc.2014.09.032.](https://doi.org/10.1016/j.jsc.2014.09.032)
Cezary Kaliszyk, Josef Urban, and Jirí Vyskocil. Lemmatization for stronger reasoning in large
theories. In Carsten Lutz and Silvio Ranise (eds.), Frontiers of Combining Systems - 10th In_ternational Symposium, FroCoS 2015, Wroclaw, Poland, September 21-24, 2015. Proceedings,_
volume 9322 of Lecture Notes in Computer Science, pp. 341–356. Springer, 2015. doi: 10.1007/
[978-3-319-24246-0\_21. URL https://doi.org/10.1007/978-3-319-24246-0_](https://doi.org/10.1007/978-3-319-24246-0_21)
[21.](https://doi.org/10.1007/978-3-319-24246-0_21)
Cezary Kaliszyk, François Chollet, and Christian Szegedy. Holstep: A machine learning dataset for
higher-order logic theorem proving. In 5th International Conference on Learning Representations,
_ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net,_
[2017. URL https://openreview.net/forum?id=ryuxYmvel.](https://openreview.net/forum?id=ryuxYmvel)
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models.
_[CoRR, abs/2001.08361, 2020. URL https://arxiv.org/abs/2001.08361.](https://arxiv.org/abs/2001.08361)_
Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In 8th
_International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia,_
_[April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=S1eZYeHFDS)_
[S1eZYeHFDS.](https://openreview.net/forum?id=S1eZYeHFDS)
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Isarstep: a benchmark for high-level
mathematical reasoning. In International Conference on Learning Representations, 2021. URL
[https://openreview.net/forum?id=Pzj6fzU6wkj.](https://openreview.net/forum?id=Pzj6fzU6wkj)
mathlib. The lean mathematical library. In Jasmin Blanchette and Catalin Hritcu (eds.), Proceedings
_of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs, CPP 2020,_
_New Orleans, LA, USA, January 20-21, 2020, pp. 367–381. ACM, 2020. doi: 10.1145/3372885._
[3373824. URL https://doi.org/10.1145/3372885.3373824.](https://doi.org/10.1145/3372885.3373824)
Pengyu Nie, Karl Palmskog, Junyi Jessy Li, and Milos Gligoric. Deep generation of coq lemma
names using elaborated terms. In Nicolas Peltier and Viorica Sofronie-Stokkermans (eds.), Au_tomated Reasoning - 10th International Joint Conference, IJCAR 2020, Paris, France, July 1-4,_
-----
_2020, Proceedings, Part II, volume 12167 of Lecture Notes in Computer Science, pp. 97–118._
[Springer, 2020. doi: 10.1007/978-3-030-51054-1\_6. URL https://doi.org/10.1007/](https://doi.org/10.1007/978-3-030-51054-1_6)
[978-3-030-51054-1_6.](https://doi.org/10.1007/978-3-030-51054-1_6)
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and
Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Waleed Ammar, Annie
Louis, and Nasrin Mostafazadeh (eds.), Proceedings of the 2019 Conference of the North American
_Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-_
_HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations, pp. 48–53. Association for_
[Computational Linguistics, 2019. doi: 10.18653/v1/n19-4009. URL https://doi.org/10.](https://doi.org/10.18653/v1/n19-4009)
[18653/v1/n19-4009.](https://doi.org/10.18653/v1/n19-4009)
Aditya Paliwal, Sarah M. Loos, Markus N. Rabe, Kshitij Bansal, and Christian Szegedy. Graph
representations for higher-order logic and theorem proving. In The Thirty-Fourth AAAI Conference
_on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intel-_
_ligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial_
_Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 2967–2974. AAAI Press,_
[2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/5689.](https://aaai.org/ojs/index.php/AAAI/article/view/5689)
Frank Pfenning and Christine Paulin-Mohring. Inductively defined types in the calculus of constructions. In Michael G. Main, Austin Melton, Michael W. Mislove, and David A. Schmidt (eds.),
_Mathematical Foundations of Programming Semantics, 5th International Conference, Tulane_
_University, New Orleans, Louisiana, USA, March 29 - April 1, 1989, Proceedings, volume 442_
of Lecture Notes in Computer Science, pp. 209–228. Springer, 1989. doi: 10.1007/BFb0040259.
[URL https://doi.org/10.1007/BFb0040259.](https://doi.org/10.1007/BFb0040259)
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_[CoRR, abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393.](https://arxiv.org/abs/2009.03393)_
Markus Norman Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical reasoning
via self-supervised skip-tree training. In 9th International Conference on Learning Representations,
_[ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://](https://openreview.net/forum?id=YmqAnY0CMEy)_
[openreview.net/forum?id=YmqAnY0CMEy.](https://openreview.net/forum?id=YmqAnY0CMEy)
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.
Learning transferable visual models from natural language supervision. In Marina Meila and
Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning,
_ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning_
_[Research, pp. 8748–8763. PMLR, 2021. URL http://proceedings.mlr.press/v139/](http://proceedings.mlr.press/v139/radford21a.html)_
[radford21a.html.](http://proceedings.mlr.press/v139/radford21a.html)
Alex Sanchez-Stern, Yousef Alhessi, Lawrence K. Saul, and Sorin Lerner. Generating correctness
proofs with neural networks. In Koushik Sen and Mayur Naik (eds.), Proceedings of the 4th
_ACM SIGPLAN International Workshop on Machine Learning and Programming Languages,_
_MAPL@PLDI 2020, London, UK, June 15, 2020, pp. 1–10. ACM, 2020. doi: 10.1145/3394450._
[3397466. URL https://doi.org/10.1145/3394450.3397466.](https://doi.org/10.1145/3394450.3397466)
Peter Scholze. Liquid tensor experiment. [https://xenaproject.wordpress.com/](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
[2020/12/05/liquid-tensor-experiment/, 2020. Formalization available at https:](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
[//github.com/leanprover-community/lean-liquid.](https://github.com/leanprover-community/lean-liquid)
Josef Urban and Jan Jakubuv. First neural conjecturing datasets and experiments. In Christoph
Benzmüller and Bruce R. Miller (eds.), Intelligent Computer Mathematics - 13th International
_Conference, CICM 2020, Bertinoro, Italy, July 26-31, 2020, Proceedings, volume 12236 of Lecture_
_Notes in Computer Science, pp. 315–323. Springer, 2020. doi: 10.1007/978-3-030-53518-6\_24._
[URL https://doi.org/10.1007/978-3-030-53518-6_24.](https://doi.org/10.1007/978-3-030-53518-6_24)
Mingzhe Wang and Jia Deng. Learning to prove theorems by learning to generate theorems. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and
Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Con_ference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12,_
-----
_[2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/](https://proceedings.neurips.cc/paper/2020/hash/d2a27e83d429f0dcae6b937cf440aeb1-Abstract.html)_
[d2a27e83d429f0dcae6b937cf440aeb1-Abstract.html.](https://proceedings.neurips.cc/paper/2020/hash/d2a27e83d429f0dcae6b937cf440aeb1-Abstract.html)
Qingxiang Wang, Cezary Kaliszyk, and Josef Urban. First experiments with neural translation of
informal to formal mathematics. In Florian Rabe, William M. Farmer, Grant O. Passmore, and
Abdou Youssef (eds.), Intelligent Computer Mathematics - 11th International Conference, CICM
_2018, Hagenberg, Austria, August 13-17, 2018, Proceedings, volume 11006 of Lecture Notes in_
_Computer Science, pp. 255–270. Springer, 2018. doi: 10.1007/978-3-319-96812-4\_22. URL_
[https://doi.org/10.1007/978-3-319-96812-4_22.](https://doi.org/10.1007/978-3-319-96812-4_22)
Qingxiang Wang, Chad E. Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine
translation in autoformalization of mathematics in mizar. In Jasmin Blanchette and Catalin Hritcu
(eds.), Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs
_and Proofs, CPP 2020, New Orleans, LA, USA, January 20-21, 2020, pp. 85–98. ACM, 2020. doi:_
[10.1145/3372885.3373827. URL https://doi.org/10.1145/3372885.3373827.](https://doi.org/10.1145/3372885.3373827)
Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. CoRR,
[abs/1608.02644, 2016. URL http://arxiv.org/abs/1608.02644.](http://arxiv.org/abs/1608.02644)
[Freek Wiedijk. The De Bruijn factor, 2000. URL http://www.cs.ru.nl/F.Wiedijk/](http://www.cs.ru.nl/F.Wiedijk/factor/factor.pdf)
[factor/factor.pdf.](http://www.cs.ru.nl/F.Wiedijk/factor/factor.pdf)
Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. Tacticzero: Learning to prove
theorems from scratch with deep reinforcement learning. In A. Beygelzimer, Y. Dauphin, P. Liang,
and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021a. URL
[https://openreview.net/forum?id=edmYVRkYZv.](https://openreview.net/forum?id=edmYVRkYZv)
Yuhuai Wu, Markus N. Rabe, Wenda Li, Jimmy Ba, Roger B. Grosse, and Christian Szegedy. LIME:
learning inductive bias for primitives of mathematical reasoning. In Marina Meila and Tong Zhang
(eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24
_July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 11251–_
[11262. PMLR, 2021b. URL http://proceedings.mlr.press/v139/wu21c.html.](http://proceedings.mlr.press/v139/wu21c.html)
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International
_Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA,_
volume 97 of Proceedings of Machine Learning Research, pp. 6984–6994. PMLR, 2019. URL
[http://proceedings.mlr.press/v97/yang19a.html.](http://proceedings.mlr.press/v97/yang19a.html)
-----
A ADDITIONAL BACKGROUND
PROOF TERMS Lean’s fundamental logic is a dependent type theory called the calculus of inductive
constructions Pfenning & Paulin-Mohring (1989). This design means that terms (4, x + y, f ), types
(N, list Z, α → _β) and proofs are all represented with a single datatype called an expression. Given_
an environment of available constants and definitions and a context Γ of variables, Lean can infer
a type α for each well-formed expression t. A proof term is a Lean expression whose type is a
proposition. This proof term serves as a checkable artifact for verifying the proposition. Lean uses a
small, trusted kernel to verify proof terms.
TACTICS Tactics in Lean are metaprograms Ebner et al. (2017), which can construct Lean expressions, such as terms. A tactic state which tracks the list of open goals and other metadata is threaded
through each tactic invocation. Lean has special support for treating tactics as an extensible domainspecific language (DSL); this DSL is how Lean is typically used as an interactive theorem prover. The
DSL amounts to a linear chain of comma-separated invocations. The process of interactive proving is
mediated through Lean’s language server, which will present the context and type for the current goal
in the proof to the user, dependent on where their cursor is in the source text. The tactic prediction
task is to predict the next tactic given this goal state. We extract supervised training data for this task
by extracting all human-supplied proof steps from Lean’s mathlib.
An object called the tactic state is threaded through each invocation of a tactic. Among other things,
the tactic state maintains a context of metavariables: placeholders in to which expressions will be
substituted later. At each point in the proof, one or more of these metavariables are selected as the
_goal of the tactic state which is present As the proof progresses, there are multiple values to be found_
EXAMPLE Consider this (modified) example of a tactic proof from the library.
theorem int.sub_ne_zero_of_ne : ∀ (a b : Z), a ̸= b -> a - b ̸= 0 :=
begin
intros a b h hab,
apply h,
apply int.eq_of_sub_eq_zero hab,
end
Each tactic line modifies the proof state, which we explicitly annotate below with comments between
each tactic.
theorem int.sub_ne_zero_of_ne : ∀ (a b : Z), a ̸= b -> a - b ̸= 0 :=
begin
-- ⊢ _∀_ (a b : Z), a ̸= b → a - b ̸= 0
intros a b h hab,
-- a b : Z,
-- h : a ̸= b,
-- hab : a - b = 0
-- ⊢ false
apply h,
-- a b : Z,
-- h : a ̸= b,
-- hab : a - b = 0
-- ⊢ a = b
apply int.eq_of_sub_eq_zero hab,
-- no goals
end
Our proofstep objective is to predict the tactic applied to a given tactic state.
Lean stores this proof internally as a proof term:
theorem int.sub_ne_zero_of_ne : ∀ (a b : Z), a ̸= b → a - b ̸= 0 :=
_λ (a b : Z) (h : a ̸= b), id (λ (hab : a - b = 0), h_
(int.eq_of_sub_eq_zero hab))
Since this proof term is just stored internally as a tree, any branch of this term tree can be removed,
to create a hole _, for example:
-----
_λ (a b : Z) (h : a ̸= b), id (λ (hab : a - b = 0), h_ _)
Lean will automatically provide a list of both the local context and the type of a term needed to fill
that hole as shown below. Notice this is the same as a tactic state we saw from the term proof above.
a b : Z,
h : a ̸= b,
hab : a - b = 0
_⊢_ a = b
Using this methodology of following proof term trees, we can mine low level proof data for every
node of a term proof to produce the PACT dataset described in Section 3.2.
B DATASETS
B.1 PRE-TRAINING DATASETS
We pre-train on WebMath as described in (Polu & Sutskever, 2020). All models, including the
WebMath pre-trained models, and the non-WebMath models used in ablations, were first pretrained on the mix used by GPT-3 (Brown et al., 2020) which includes a filtered CommonCrawl,
WebText2, Book1, Book2 and Wikipedia. WebMath includes Python-only GitHub data, as
well as arXiv and Math StackExchange.
From these datasets, a potential risk for test-set contamination (presence of mathlib) exists for the
crawled datasets, namely CommonCrawl, WebText2, and (in case of a filtering bug) Python-only
GitHub. The other datasets (in particular arXiv and Math StackExchange) may contain short
references of mathlib code but in shape and forms that would not lead to effective contamination.
To assess the contamination risk related with the crawled datasets, we searched CommonCrawl,
WebText2, arXiv, Python-only GitHub, and Math StackExchange for test theorems. For
example, given the test theorem nat.div_eq_sub_div we searched for any occurrences of the
string div_eq_sub_div. Of over 3000 test theorem names, we found 595 which occurred in the
datasets. Many instances were innocuous, but some were in Lean files, and in some cases there was a
proof of a test theorem. There were also 160 additional test theorems with no underscore in their
name, which we did not check, but whose name is likely to be found in the datasets. (There is no need
to check for training theorems since they are already in the training data and it would not constitute
contamination.) We re-calculated the pass-rates of the results in Figure 2 omitting these 755 test
theorems. This decreases the reported pass-rates slightly, ranging from 0.6 to 1.1 percentage points.
The adjusted pass-rate of our best model WebMath > mix1 + mix2 + tactic is 47.4%, a
decrease of 1 percentage point. Our main results still hold even with the adjusted pass-rates.
Additionally we also look at the results for the 1,350 test theorems in our dataset that were added
to Lean and mathlib after April 18, 2020, which is after CommonCrawl and WebText2 were
gathered, and the 544 test theorems added after September 11, 2020, which is after WebMath was
gathered. Unlike future-mathlib, these theorems were part of the originally extracted data. The
pass-rates for the WebMath > mix1 + mix2 + tactic model on these restricted sets of test
theorems are 45.6% and 43.3%, respectively.
We also looked for the following Metamath specific and HOL specific strings in CommonCrawl,
WebText2, and Python-only GitHub:
Metamath:
"( ph -> A = C )"
"( ph -> A R C )"
"( sqrt ‵ 2 ) e/ QQ"
HOL:
"apply (rule "
"apply (drule "
We found 0 occurrence of the Metamath-related strings but interestingly found a non-negligible
amount of HOL-related documents, which does not constitute a test-set contamination but potentially
benefits the downstream tasks studied in this paper.
-----
While our results show a significant benefit to pre-training on WebMath, it is unclear exactly
how pre-training helps. Since Lean’s theorem names are made of coded mathematical phases,
e.g. affine.simplex.dist_circumcenter_eq_circumradius, it is not unreasonable
to suspect that important statistical connections are extracted from math sources. It is even possible
that simple instances of auto-formalization or ITP translation are happening. There is prior work
(Gauthier & Kaliszyk, 2015; Wang et al., 2018; 2020) suggesting that both of these are possible.
From the point of view of a lean-gptf end-user, any such extraction of prior, publicly available
data is useful and helpful. Nonetheless, our results are of a different nature than other AI for theorem
proving research which do not use data outside of a given theorem proving library. This should be
taken into account in any future comparisons and benchmarks.
B.2 DATASET SIZES
- tactic: ≈128K examples.
- mix1
**– Next lemma prediction: ≈2.5M examples**
**– Proof term prediction: ≈2.9M examples**
- mix2
**– Skip-proof: ≈1.7M examples**
**– Type-prediction: ≈1.7M examples**
**– Tactic state elaboration: ≈346K examples**
**– Proof term elaboration: ≈1.0M examples**
**– Premise classification: ≈9.3M examples**
**– Local context classification: ≈2.0M examples**
**– Theorem naming: ≈32K examples.**
B.3 EXAMPLE DATAPOINTS
We present datapoints extracted from a toy example, namely the proof of the Peirce identity, viz.
lemma peirce_identity {P Q :Prop} : ((P → Q) → P) → P :=
begin
apply or.elim (em P),
intros h _,
exact h,
tauto!
end
From this, we can extract four tactic datapoints (i.e. human-generated tactic proof steps):
-- GOAL P Q : Prop ⊢ ((P → Q) → P) → P PROOFSTEP apply or.elim (em P)
-- GOAL P Q : Prop ⊢ P → ((P → Q) → P) → P P Q : Prop ⊢¬P → ((P →
Q) → P) → P PROOFSTEP intros h _
-- GOAL P Q : Prop, h : P, ˇα : (P → Q) → P ⊢ P P Q : Prop ⊢¬P → ((P
_→_ Q) → P) → P PROOFSTEP exact h
-- GOAL P Q : Prop ⊢¬P → ((P → Q) → P) → P PROOFSTEP tauto!
In contrast, we can extract dozens of raw PACT datapoints. Due to space constraints, we list a
representative sample of four such datapoints, from each of which we can derive the nine selfsupervised auxiliary PACT tasks studied in our present work. For example, proof term prediction is
precisely predicting the "proof_term" given the concatenation of "hyps", "⊢", and the "goal",
skip-proof is predicting the "proof_term" given "result", etc.
DATAPOINT:
--{ "decl_nm":"peirce_identity",
"decl_tp":"∀ {P Q : Prop}, ((P → Q) → P) → P",
"hyps":[["P", "Prop"], ["Q", "Prop"], ["αˇ", "¬P"], ["αˇ_1", "(P → Q) →
P"], ["αˇ_1", "¬(P → Q)"]],
-----
"hyps_mask":[true, false, false, false, false],
"decl_premises":[["absurd", "∀ {a b : Prop}, a →¬a → b"],
["absurd", "∀ {a b : Prop}, a →¬a → b"],
["decidable.not_imp", "∀ {a b : Prop} [_inst_1 : decidable a], ¬(a →
b) ↔ a ∧¬b"],
["iff.mp", "∀ {a b : Prop}, (a ↔ b) → a → b"],
["and.dcases_on",
"∀ {a b : Prop} {C : a ∧ b → Prop} (n : a ∧ b), (∀ (left : a)
(right : b), C _) → C n"],
["decidable.not_or_of_imp", "∀ {a b : Prop} [_inst_1 : decidable a],
(a → b) →¬a ∨ b"],
["or.dcases_on",
"∀ {a b : Prop} {C : a ∨ b → Prop} (n : a ∨ b), (∀ (h : a), C _) →
(∀ (h : b), C _) → C n"],
["em", "∀ (p : Prop), p ∨¬p"],
["or.elim", "∀ {a b c : Prop}, a ∨ b → (a → c) → (b → c) → c"]],
"decl_premises_mask":[false, false, true, false, false, false, false,
false, false],
"goal":"∀ {b : Prop} [_inst_1 : decidable P], ¬(P → b) ↔ P ∧¬b",
"proof_term":"decidable.not_imp",
"result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q) → P),
h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P), (decidable.not_or_of_imp ˇα
_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), ((PREDICT Q
(classical.prop_decidable P)).mp ˇα_1).dcases_on (λ (αˇ_1_left : P)
(αˇ_1_right : ¬Q), absurd ˇα_1_left ˇα)) (λ (αˇ_1 : P), absurd ˇα_1 ˇα))",
"next_lemma":["decidable.not_imp", "∀ {a b : Prop} [_inst_1 :
decidable a], ¬(a → b) ↔ a ∧¬b"],
"goal_is_prop":true,
"verbose_proof_term":"@decidable.not_imp P",
"verbose_goal":"∀ {b : Prop} [_inst_1 : decidable P], ¬(P → b) ↔ P ∧
_¬b",_
"verbose_result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q)
_→_ P), h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P),
(@decidable.not_or_of_imp (P → Q) P (classical.prop_decidable (P →
Q)) ˇα_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), (@iff.mp (¬(P → Q)) (P ∧¬
Q) (PREDICT Q (classical.prop_decidable P)) ˇα_1).dcases_on (λ
(αˇ_1_left : P) (αˇ_1_right : ¬Q), @absurd P P ˇα_1_left ˇα)) (λ (αˇ_1 :
P), @absurd P P ˇα_1 ˇα))"}
--
DATAPOINT:
--{ "decl_nm":"peirce_identity",
"decl_tp":"∀ {P Q : Prop}, ((P → Q) → P) → P",
"hyps":[["P", "Prop"], ["Q", "Prop"], ["αˇ", "¬P"], ["αˇ_1", "(P → Q) →
P"], ["αˇ_1", "¬(P → Q)"]],
"hyps_mask":[false, true, false, false, false],
"decl_premises":[["absurd", "∀ {a b : Prop}, a →¬a → b"],
["absurd", "∀ {a b : Prop}, a →¬a → b"],
["decidable.not_imp", "∀ {a b : Prop} [_inst_1 : decidable a], ¬(a →
b) ↔ a ∧¬b"],
["iff.mp", "∀ {a b : Prop}, (a ↔ b) → a → b"],
["and.dcases_on",
"∀ {a b : Prop} {C : a ∧ b → Prop} (n : a ∧ b), (∀ (left : a)
(right : b), C _) → C n"],
["decidable.not_or_of_imp", "∀ {a b : Prop} [_inst_1 : decidable a],
(a → b) →¬a ∨ b"],
["or.dcases_on",
"∀ {a b : Prop} {C : a ∨ b → Prop} (n : a ∨ b), (∀ (h : a), C _) →
(∀ (h : b), C _) → C n"],
["em", "∀ (p : Prop), p ∨¬p"],
["or.elim", "∀ {a b c : Prop}, a ∨ b → (a → c) → (b → c) → c"]],
"decl_premises_mask":[false, false, false, false, false, false, false,
false, false],
"goal":"Prop",
-----
"proof_term":"Q",
"result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q) → P),
h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P), (decidable.not_or_of_imp ˇα
_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), (decidable.not_imp.mp ˇα
_1).dcases_on (λ (αˇ_1_left : P) (αˇ_1_right : ¬Q), absurd ˇα_1_left ˇα
)) (λ (αˇ_1 : P), absurd ˇα_1 ˇα))",
"next_lemma":["Q", "Prop"],
"goal_is_prop":false,
"verbose_proof_term":"Q",
"verbose_goal":"Prop",
"verbose_result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q)
_→_ P), h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P),
(@decidable.not_or_of_imp (P → Q) P (classical.prop_decidable (P →
Q)) ˇα_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), ((@decidable.not_imp P
PREDICT (classical.prop_decidable P)).mp ˇα_1).dcases_on (λ (αˇ_1_left
: P) (αˇ_1_right : ¬Q), @absurd P P ˇα_1_left ˇα)) (λ (αˇ_1 : P),
@absurd P P ˇα_1 ˇα))"}
--
DATAPOINT:
--{ "decl_nm":"peirce_identity",
"decl_tp":"∀ {P Q : Prop}, ((P → Q) → P) → P",
"hyps":[["P", "Prop"], ["Q", "Prop"], ["αˇ", "¬P"], ["αˇ_1", "(P → Q) →
P"], ["αˇ_1", "¬(P → Q)"]],
"hyps_mask":[true, true, false, false, false],
"decl_premises":[["absurd", "∀ {a b : Prop}, a →¬a → b"],
["absurd", "∀ {a b : Prop}, a →¬a → b"],
["decidable.not_imp", "∀ {a b : Prop} [_inst_1 : decidable a], ¬(a →
b) ↔ a ∧¬b"],
["iff.mp", "∀ {a b : Prop}, (a ↔ b) → a → b"],
["and.dcases_on",
"∀ {a b : Prop} {C : a ∧ b → Prop} (n : a ∧ b), (∀ (left : a)
(right : b), C _) → C n"],
["decidable.not_or_of_imp", "∀ {a b : Prop} [_inst_1 : decidable a],
(a → b) →¬a ∨ b"],
["or.dcases_on",
"∀ {a b : Prop} {C : a ∨ b → Prop} (n : a ∨ b), (∀ (h : a), C _) →
(∀ (h : b), C _) → C n"],
["em", "∀ (p : Prop), p ∨¬p"],
["or.elim", "∀ {a b c : Prop}, a ∨ b → (a → c) → (b → c) → c"]],
"decl_premises_mask":[false, false, true, false, false, false, false,
false, false],
"goal":"∀ [_inst_1 : decidable P], ¬(P → Q) ↔ P ∧¬Q",
"proof_term":"decidable.not_imp",
"result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q) → P),
h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P), (decidable.not_or_of_imp ˇα
_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), ((PREDICT
(classical.prop_decidable P)).mp ˇα_1).dcases_on (λ (αˇ_1_left : P)
(αˇ_1_right : ¬Q), absurd ˇα_1_left ˇα)) (λ (αˇ_1 : P), absurd ˇα_1 ˇα))",
"next_lemma":["decidable.not_imp", "∀ {a b : Prop} [_inst_1 :
decidable a], ¬(a → b) ↔ a ∧¬b"],
"goal_is_prop":true,
"verbose_proof_term":"@decidable.not_imp P Q",
"verbose_goal":"∀ [_inst_1 : decidable P], ¬(P → Q) ↔ P ∧¬Q",
"verbose_result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q)
_→_ P), h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P),
(@decidable.not_or_of_imp (P → Q) P (classical.prop_decidable (P →
Q)) ˇα_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), (@iff.mp (¬(P → Q)) (P ∧¬
Q) (PREDICT (classical.prop_decidable P)) ˇα_1).dcases_on (λ
(αˇ_1_left : P) (αˇ_1_right : ¬Q), @absurd P P ˇα_1_left ˇα)) (λ (αˇ_1 :
P), @absurd P P ˇα_1 ˇα))"}
--
DATAPOINT:
-----
--{ "decl_nm":"peirce_identity",
"decl_tp":"∀ {P Q : Prop}, ((P → Q) → P) → P",
"hyps":[["P", "Prop"], ["Q", "Prop"], ["αˇ", "¬P"], ["αˇ_1", "(P → Q) →
P"], ["αˇ_1", "¬(P → Q)"]],
"hyps_mask":[false, false, false, false, false],
"decl_premises":[["absurd", "∀ {a b : Prop}, a →¬a → b"],
["absurd", "∀ {a b : Prop}, a →¬a → b"],
["decidable.not_imp", "∀ {a b : Prop} [_inst_1 : decidable a], ¬(a →
b) ↔ a ∧¬b"],
["iff.mp", "∀ {a b : Prop}, (a ↔ b) → a → b"],
["and.dcases_on",
"∀ {a b : Prop} {C : a ∧ b → Prop} (n : a ∧ b), (∀ (left : a)
(right : b), C _) → C n"],
["decidable.not_or_of_imp", "∀ {a b : Prop} [_inst_1 : decidable a],
(a → b) →¬a ∨ b"],
["or.dcases_on",
"∀ {a b : Prop} {C : a ∨ b → Prop} (n : a ∨ b), (∀ (h : a), C _) →
(∀ (h : b), C _) → C n"],
["em", "∀ (p : Prop), p ∨¬p"],
["or.elim", "∀ {a b c : Prop}, a ∨ b → (a → c) → (b → c) → c"]],
"decl_premises_mask":[false, false, false, false, false, false, false,
false, false],
"goal":"Π (a : Prop), decidable a",
"proof_term":"classical.prop_decidable",
"result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q) → P),
h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P), (decidable.not_or_of_imp ˇα
_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), (decidable.not_imp.mp ˇα
_1).dcases_on (λ (αˇ_1_left : P) (αˇ_1_right : ¬Q), absurd ˇα_1_left ˇα
)) (λ (αˇ_1 : P), absurd ˇα_1 ˇα))",
"next_lemma":["classical.prop_decidable", "Π (a : Prop), decidable a"],
"goal_is_prop":false,
"verbose_proof_term":"classical.prop_decidable",
"verbose_goal":"Π (a : Prop), decidable a",
"verbose_result":"λ {P Q : Prop}, (em P).elim (λ (h : P) (αˇ : (P → Q)
_→_ P), h) (λ (αˇ : ¬P) (αˇ_1 : (P → Q) → P),
(@decidable.not_or_of_imp (P → Q) P (PREDICT (P → Q)) ˇα
_1).dcases_on (λ (αˇ_1 : ¬(P → Q)), ((@decidable.not_imp P Q
(PREDICT P)).mp ˇα_1).dcases_on (λ (αˇ_1_left : P) (αˇ_1_right : ¬Q),
@absurd P P ˇα_1_left ˇα)) (λ (αˇ_1 : P), @absurd P P ˇα_1 ˇα))"}
--
C EXPERIMENTS
C.1 CHAINED TACTIC PREDICTION
Individual Lean tactics are chained together with commas. However, the Lean interactive tactic DSL
also includes a number of other tactic combinators for creating composite tactics. A frequently used
combinator is the infix semicolon t; s which will perform the tactic t and then apply the tactic
s to each of the resulting subgoals produced by t. Our data pipeline for human tactic proof steps
treats these semicolon-chained tactics as a single string for the language modeling objective. Thus,
our models learn to occasionally emit multiple-step tactic predictions using semicolons. For example,
wm-to-tt-m1-m2 solved the following lemma in category theory with a single prediction chaining
four tactics in a row:
theorem category_theory.grothendieck.congr
{X Y : grothendieck F} {f g : X −→ Y} (h : f = g) :
f.fiber = eq_to_hom (by subst h) ≫ g.fiber :=
begin
rcases X; rcases Y; subst h; simp
end
-----
One way of measuring the sophistication of predicted tactics is to consider the number of successful
proofs on the evaluation set which have this composite form using semicolon-chaining. We display
this analysis in Table 1, which shows that training with PACT in addition to the human-made tactics
causes longer semicolon-chained tactics to be successfully predicted during theorem proving. This is
remarkable because the semicolon idiom is specific to the tactic DSL which does not occur in the
PACT data whatsoever, and yet the co-training causes longer and more frequent successful composite
tactic predictions.
C.2 THEOREM NAMING CASE STUDY
We included theorem naming as part of the PACT task suite. By mathlib convention, theorem
names are essentially snake-cased, natural language summaries of the type signature of a theorem,
and so the theorem naming task is analogous to a formal-to-informal translation task. We evaluate
the ability of our best model (in terms of theorem proving success rate) wm-to-tt-m1-m2 on
its ability to guess theorem names on the completely unseen future-mathlib set of theorems.
The distribution shift inherent in the future-mathlib dataset particularly impacts the theorem
naming task, because many of the ground-truth names will involve names for concepts that were only
defined in mathlib after we extracted our training data.
On the ≈2.8K future-mathlib theorems, we queried wm-to-tt-m1-m2 for up to N = 16
candidates. We order these candidates into a list xs by decreasing cumulative log-probability and
calculate the top-K accuracy by checking if any of the first K candidates of xs match the ground
truth exactly. The model wm-to-tt-m1-m2 was able to achieve 20.1% top-1 accuracy, 21.1%
top-3 accuracy, 26.7% top-10 accuracy, and 30.0% top-16 accuracy. We display a sample of correct
top-1 guesses (Figure 5) and a sample of failed guesses in (Figure 6). We note that the failed guesses,
while containing no syntactic matches, are both semantically reasonable and syntactically very similar
to the ground truth.
C.3 TEST SET EVALUATION BREAKDOWN BY MODULE
Lean’s mathlib is organized into top-level modules, which roughly organize theorems into mathematical subject area. In Figure 7, we break down the evaluation results on our test set between
our PACT-trained models wm-to-tt-m1-m2 and wm-to-tt-m1 and our baselines wm-to-tt
and tidy. We see that full PACT mostly dominates over co-training on just the mix1 tasks over all
subject areas, and that wm-to-tt-m1 dominates the model wm-to-tt trained on human tactic
proof steps only.
C.4 BASELINE DESCRIPTION
The tidy backend is determined by a constant oracle
Ω : tactic_state → list (string × float)
which always returns the same list of tactics, namely:
meta def tidy_default_tactics : list (string × float) :=
list.map (flip prod.mk 0.0) [
"refl"
, "exact dec_trivial"
, "assumption"
Table 1: Counting the number of semicolon-chained tactics predicted by our models that appear
_in successful proofs. Each column headed by a number n; indicates the number of times that a_
suggestion appeared with n occurrences of ‘;’.
MODEL 1; 2; 3; 4; MEAN
wm-to-tt 215 49 2 0 1.199
wm-to-tt-m1 186 39 5 1 1.225
wm-to-tt-m1-m2 **328** **82** **12** **3** **1.271**
-----
**Correct top-1 guesses**
_∀_ {α : Type u_1} {β : Type u_2} [_inst_1 : decidable_eq α]
[_inst_2 : decidable_eq β] (s : finset α) (t : finset β),
s.product t = s.bUnion
(λ (a : α), finset.image (λ (b : β), (a, b)) t)
**Theorem statement**
**Ground truth** finset.product_eq_bUnion
_∀_ {α : Type u_1} {β : Type u_2} [_inst_1 : topological_space α]
[_inst_2 : topological_space β] {f : α → _β},_
quotient_map f → function.surjective f
**Theorem statement**
**Ground truth** quotient_map.surjective
_∀_ {α : Type u_1} {β : Type u_2} (f : α → option β)
(x : option α), x.pbind (λ (a : α) (_x : a ∈ x), f a) = x.bind f
**Theorem statement**
**Ground truth** option.pbind_eq_bind
{C : Type u1} [_inst_1 : category_theory.category C]
_∀_
{G : C ⇒ C} [_inst_2 : category_theory.comonad G]
{A B : category_theory.comonad.coalgebra G} (h : A.A = B.A)
_[∼]_
(w : A.a ≫ G.map h.hom = h.hom ≫ B.a),
(category_theory.comonad.coalgebra.iso_mk h w).hom.f = h.hom
**Theorem statement**
**Ground truth** category_theory.comonad.coalgebra.iso_mk_hom_f
_∀_ {k : Type u_1} {E : Type u_2} [_inst_1 : is_R_or_C,k]
[_inst_2 : inner_product_space k E]
[_inst_4 : normed_space R E] [_inst_5 : is_scalar_tower R k E]
(p x : E × E),
_⇑(fderiv_inner_clm p) x =_
has_inner.inner p.fst x.snd + has_inner.inner x.fst p.snd
**Theorem statement**
**Ground truth** fderiv_inner_clm_apply
Figure 5: A sample of correct top-1 guesses by our best model wm-to-tt-m1-m2 on the theorem
_naming task. We performed this experiment on the future-mathlib evaluation set, which_
comprises entirely unseen theorems added to mathlib only after we last extracted training data.
-----
**Incorrect guesses**
_∀_ {α : Type u_1} (t : ordnode α) (x : α),
t.dual.find_min[′] x = ordnode.find_max[′] x t
ordinal.find_min[′]_eq, ordinal.find_min[′]_eq_max[′], ordinal.find_min[′]_def,
ordinal.find_min[′]_eq_max, ordinal.find_min[′], ordinal.dual_find_min[′],
ordinal.find_min[′]_gt, ordinal.find_min[′]_q
**Theorem statement**
**Guesses (top 8)**
**Ground truth** ordnode.find_min[′]_dual
_∀_ {α : Type u_1} {β : Type u_3} {γ : Type u_5} [_inst_1 :
measurable_space α] [_inst_3 : measurable_space β]
[_inst_5 : measurable_space γ] {µ : measure_theory.measure α}
{ν : measure_theory.measure β}
[_inst_8 : measure_theory.sigma_finite ν]
{f : α × β → _γ},_
ae_measurable f (µ.prod ν) → (∀[m](x : α) ∂µ,
ae_measurable (λ (y : β), f (x, y)) ν)
measure_theory.ae_prod, measure_theory.ae_of_ae_prod,
measure_theory.ae_eq_prod_of_ae, measure_theory.ae_ae_of_ae_prod,
measure_theory.ae_measure_prod_mk_left,
measure_theory.ae_prod_of_ae_prod,
measure_theory.ae_measure_prod, measure_theory.ae_eq_refl
**Theorem statement**
**Guesses (top 8)**
**Ground truth** ae_measurable.prod_mk_left
_∀_ {α : Type u_1} {β : Type u_2} {γ : Type u_3}
{f : filter α} {h : set α → set β} {m : γ → _β}_
{l : filter γ}, filter.tendsto m l (f.lift[′] h) ↔
_∀_ (s : set α), s ∈ f → (∀[f] (a : γ) in l, m a ∈ h s)
**Theorem statement**
**Guesses (top 8)** filter.tendsto_lift[′]_iff, filter.tendsto_lift[′]_def
**Ground truth** filter.tendsto_lift[′]
_∀_ {R : Type} [_inst_1 : comm_ring R]
{d : Z} (f : Z[√]d +[∗] R),
_→_
_↑(⇑(zsqrtd.lift.symm) f) = ⇑f zsqrtd.sqrtd_
zsqrtd.coe_lift_symm, zsqrtd.coe_lift.symm, zsqrtd.lift.coe_symm_apply,
zsqrtd.lift_symm_apply, zsqrtd.lift.coe_coe_symm,
zsqrtd.lift.coe_symm_coe,
zsqrtd.lift.symm_coe_zsqrtd, zsqrtd.lift_symm_to_zsqrtd
**Theorem statement**
**Guesses (top 8)**
**Ground truth** zsqrtd.lift_symm_apply_coe
Figure 6: A sample of incorrect guesses by our best model wm-to-tt-m1-m2 on the theorem
_naming task. We performed this experiment on the future-mathlib evaluation set, which_
comprises entirely unseen theorems added to mathlib only after we last extracted training data.
Most of the top-8 guesses displayed in the above table are very similar to the ground truth, in some
cases being equivalent up to permutation of underscore-separated tokens. Note that for the first
example, the concept of ordnode was not in the training data whatsoever and all predictions are in
the syntactically similar ordinal namespace.
-----
Test Set Evaluation Breakdown By Modules
0.7
wm-to-tt-m1-m2 (PACT full)
wm-to-tt-m1 (PACT mix1 only)
0.6 wm-to-tt (tactic step only)
tidy (baseline)
0.5
0.4
Success Rate 0.3
0.2
0.1
0.0
logic algebra order data category_theory control group_theory combinatorics topology linear_algebra set_theory analysis geometry dynamics computability number_theory measure_theory field_theory ring_theory
Figure 7: A breakdown of theorem proving success rate on the test set for wm-to-tt-m1-m2,
wm-to-tt-m1, wm-to-tt, and the tidy baseline across top-level modules in Lean’s mathlib.
We see that wm-to-tt-m1-m2 mostly dominates wm-to-tt-m1 and the models trained using
PACT dominate the model wm-to-tt trained on human tactic proof steps.
, "tactic.intros1"
, "tactic.auto_cases"
, "apply_auto_param"
, "dsimp at _∗"_
, "simp at _∗"_
, "ext1"
, "fsplit"
, "injections_and_clear"
, "solve_by_elim"
, "norm_cast"
]
Unlike the gptf backend, which generates a list of candidates in parallel independently, tidy enjoys
the advantage that the list of tactics it emits is carefully chosen and ordered in order to optimize
the proof search—this is based on the “waterfall” technique of the human-style automated theorem
prover described in (Ganesalingam & Gowers, 2017).
C.5 COMPUTATIONAL RESOURCE ESTIMATES
For each evaluation loop over the test set, we distributed the theorems over a pool of 32 CPU
workers whose inference requests were load-balanced over 4 V100 GPUs. Each evaluation required
_≈10 hours with ≈30% GPU utilization. We observed that our evaluation was bottlenecked by_
inference and in practice, we hosted up to three evaluation loops at once on a VM with 80 logical
cores without achieving full CPU utilization. In addition to the wall-clock timeout of 600s, we also
limited the proof search to a logical timeout of 512 iterations, where one iteration corresponds to a
single expansion of a node of the BFS search tree. In practice, so much time was spent either blocked
on inference or performing the tactic executions in the inner loop of each iteration that we rarely
exceeded the logical timeout, usually exceeding the wall-clock timeout instead.
Fine-tuning on our largest dataset mix1 + mix2 + tactic required 26 hours using 64 A100
GPUs exhibiting high FP16 usage, totalling an estimated ≈1.5K A100(FP16)-hours. This gives
an estimated cost of 17.33 A100(FP16)-hours per billion elapsed tokens during training. We note
-----
that when calculating the number of elapsed tokens for training, we overestimate the actual number
of tokens effectively trained on by summing full context windows (in this case, 2048 tokens).
D EXAMPLE PROOFS
Lean’s mathlib is one of the most active open-source software projects in the world. More than
one-third of the proofs found by our models are shorter and produce smaller proof terms than the
ground truth, leading to dozens of GPT-f co-authored commits to mathlib. We examine some of
the proofs found by our models in more detail.
D.1 LIE_ALGEBRA.MORPHISM.M A P_B O T_I F F
This proof produces a proof term which is 4X smaller than the original:
lemma map_bot_iff : I.map f = ⊥↔ I ≤ f.ker :=
by { rw ← le_bot_iff, apply lie_ideal.map_le_iff_le_comap }
The original, human-written proof is much longer, viz.
lemma map_bot_iff : I.map f = ⊥↔ I ≤ f.ker :=
begin
rw le_ker_iff, unfold lie_ideal.map, split; intros h,
{ rwa [eq_bot_iff, lie_submodule.lie_span_le, set.image_subset_iff,
lie_submodule.bot_coe] at h,},
{ suffices : f _′′ I = ↑(⊥_ : lie_ideal R L′), { rw [this,
lie_submodule.lie_span_eq], },
ext x, rw [lie_submodule.bot_coe, set.mem_singleton_iff,
set.mem_image],
split,
{ rintros ⟨y, hy, hx⟩, rw ← hx, exact h y hy, },
{ intros hx, use 0, simp [hx], }, },
end
D.2 PRIMREC.OF_EQUIV
This proof produces a proof term which is 12X smaller than the original:
theorem of_equiv {β} {e : β ≃ _α} :_
by haveI := primcodable.of_equiv α e; exact
primrec e :=
by letI : primcodable β := primcodable.of_equiv α e; exact encode_iff.1
primrec.encode
The author of the original proof and maintainer of that package commented:
encode_iff.1 primrec.encode is clever, it’s a way to translate primrec
across an equivalence when the encode function is defined as encode x =
encode (e x) where e is the isomorphism.
As far as they knew, this trick was never used before in the computability package.
D.3 REAL.TAN_EQ_SIN_DIV_C O S
This proof demonstrates our model’s library knowledge and ability at premise selection.
lemma real.tan_eq_sin_div_cos (x : R) : tan x = sin x / cos x :=
begin
rw ← of_real_inj,
simp only [complex.tan_eq_sin_div_cos, of_real_sin, of_real_cos,
of_real_div, of_real_tan]
end
-----
Our model was able to predict this entire list of simp lemmas in one shot. Note that the lemma
complex.tan_eq_sin_div_cos in this list is the complex number version of the result, i.e. ∀
(x : C), tan x = sin x / cos x. The previous human-written version of the proof did
not use the more general version of the lemma on complex numbers, demonstrating our model’s
ability to find more general cases of lemmas. We contrast this with the human-written ground truth,
which is more complex and performs a case analysis using the complex cosine:
lemma tan_eq_sin_div_cos : tan x = sin x / cos x :=
if h : complex.cos x = 0 then by simp [sin, cos, tan, _∗, complex.tan,_
div_eq_mul_inv] at _∗_
else
by rw [sin, cos, tan, complex.tan, ← of_real_inj, div_eq_mul_inv,
mul_re];
simp [norm_sq, (div_div_eq_div_mul _ _ _).symm, div_self h]; refl
D.4 SYM2.IS_DIAG_IFF_PROJ_E Q
The proof of this lemma is longer than the ground truth and was not contributed to mathlib, but we
describe it here because the proof is original and includes a nontrivial instantiation of an existential
quantifier.
theorem sym2.is_diag_iff_proj_eq (z : α × α) :
is_diag z _↔_ z.1 = z.2 :=
begin
intros,
J K
simp only [is_diag, prod.ext_iff, quot.exists_rep, iff_true,
not_true, eq_self_iff_true],
simp [diag], split,
{ rintros ⟨y, hy⟩, cases hy; refl },
intro h, cases z, existsi z_snd,
cases h, refl,
end
Before existsi z_snd, the goal state is
z_fst z_snd: α
h: (z_fst, z_snd).fst = (z_fst, z_snd).snd
_⊢_ _∃_ (y : α), (y, y) ≈ (z_fst, z_snd)
This goal state never appeared in mathlib.
D.5 NORM_LE_ZERO_IFF
The following proof is remarkable because it uses fewer tactic steps and takes a different route to the
proof than the ground truth, uses a complex idiom simpa [...] using @..., and was predicted
in one shot.
lemma norm_le_zero_iff {α : Type u_1} [_inst_1 : normed_group α]
{g : α} : ||g|| ≤ 0 ↔ g = 0 :=
by { simpa [le_antisymm_iff, norm_nonneg] using @norm_eq_zero α _ g }
-- ground truth:
-- by { rw[←dist_zero_right],
-- exact dist_le_zero }
The lemmas supplied between the square brackets are used to simplify the main goal. The lemma
supplied after the keyword using can further simplify the lemmas supplied between the square
brackets. The @ modifier makes all arguments explicit. The string @norm_eq_zero never appeared
in our training data but the prediction includes the correct number of correctly typed arguments, and
even replaces the second argument with a placeholder _, correctly guessing that it can be inferred
by the elaborator. Finally, this again showcases the strength of our models as premise selectors:
all three lemmas le_antisymm_iff, norm_nonneg, and norm_eq_zero were not used in the
human-supplied proof but are necessary for this proof.
-----
Moving forward, we hope that our neural theorem provers will continue to find ways to improve
mathlib and assist in creating new proofs. More generally, we hope neural theorem proving will
one day be become a routine part of the formalization workflow.
-----
| [
"Stanislas, Polu",
"Jesse Michael, Han",
"Jason, Rute",
"Yuhuai, Wu",
"Edward W., Ayers"
] | 2021-02-01T00:00:00 | ICLR 2022 | true | 98 | 27 | [
"Lean"
] | https://arxiv.org/abs/2102.06203 | https://arxiv.org/abs/2102.06203 | https://www.semanticscholar.org/paper/9231927bc0a9ed10de64cad05640587893eba4b1 |
GamePad: A Learning Environment for Theorem Proving | N/A | A system called GamePad is introduced that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant and addresses position evaluation and tactic prediction tasks, which arise naturally in tactic-based theorem proving. | null | [
"Daniel, Huang",
"Prafulla, Dhariwal",
"Dawn, Song",
"Ilya, Sutskever"
] | 2018-01-01T00:00:00 | ICLR 2019 | true | 97 | 16 | [
"Coq"
] | https://www.semanticscholar.org/paper/87c425f23bcac2f082968abda64a971f91522f73 | null | https://www.semanticscholar.org/paper/87c425f23bcac2f082968abda64a971f91522f73 |
ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning | Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers. These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness (independent of the final answer) is difficult without reliable methods for automatic evaluation. We simply do not know how often the stated reasoning steps actually support the final end task predictions. In this work, we present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics. To evaluate ROSCOE against baseline metrics, we design a typology of reasoning errors and collect synthetic and human evaluation scores on commonly used reasoning datasets. In contrast with existing metrics, ROSCOE can measure semantic consistency, logicality, informativeness, fluency, and factuality — among other traits — by leveraging properties of step-by-step rationales. We empirically verify the strength of our metrics on five human annotated and six programmatically perturbed diagnostics datasets - covering a diverse set of tasks that require reasoning skills and show that ROSCOE can consistently outperform baseline metrics. | ROSCOE is presented, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics and can measure semantic consistency, logicality, informativeness, fluency, and factuality - among other traits - by leveraging properties of step-by-step rationales. | p p
## ROSCOE: A SUITE OF METRICS FOR SCORING STEP-BY- STEP REASONING
**Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer,**
**Maryam Fazel-Zarandi, Asli Celikyilmaz**
Meta AI Research
{olggol, mpchen, spoff, mcorredor, lsz, maryamfazel, aslic}@meta.com
ABSTRACT
Large language models show improved downstream task performance when prompted to
generate step-by-step reasoning to justify their final answers (Nye et al., 2021; Wei et al.,
2022). These reasoning steps greatly improve model interpretability and verification, but
objectively studying their correctness (independent of the final answer) is difficult without
reliable methods for automatic evaluation. We simply do not know how often the stated
reasoning steps actually support the final end task predictions. In this work, we present
ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend
previous text generation evaluation metrics. To evaluate ROSCOE against baseline metrics,
we design a typology of reasoning errors and collect synthetic and human evaluation scores
on commonly used reasoning datasets. In contrast with existing metrics, ROSCOE can
measure semantic consistency, logicality, informativeness, fluency, and factuality — among
other traits — by leveraging properties of step-by-step rationales. We empirically verify
the strength of our metrics on five human annotated and six programmatically perturbed
diagnostics datasets - covering a diverse set of tasks that require reasoning skills and show
that ROSCOE can consistently outperform baseline metrics.[1]
1 INTRODUCTION
Scaling language models has improved state-of-the-art performance on nearly every NLP benchmark (Brown
et al., 2020), with large language models (LLMs) performing impressively as few-shot learners (Brown
et al., 2020). Despite these achievements, even the largest of these models still struggle with tasks including
math word problems (Hendrycks et al., 2021), symbolic manipulation (Rytting & Wingate, 2021), and
commonsense reasoning (West et al., 2022). Recent work has shown that prompting (Wei et al., 2022;
Wang et al., 2022) or fine-tuning (Lampinen et al., 2022) LLMs to generate step-by-step rationales can
lead to improvements on reasoning tasks. Some of these include small-scale analysis of specific error
types within step-by-step rationales (Lewkowycz et al., 2022; Chowdhery et al., 2022), as shown in Table
1. However, existing works primarily focus on end-task performance. Although text generation evaluation
metrics sometimes offer fine-grained quality evaluations (e.g., adequacy, fluency) against human scores (Opitz
& Frank, 2021; Leiter et al., 2022), these metrics generally treat the output as a whole, and many of these
generative metrics operate on tasks such as summarization or machine-translation rather than reasoning.
[1Code can be found at https://github.com/facebookresearch/ParlAI/tree/main/projects/](https://github.com/facebookresearch/ParlAI/tree/main/projects/roscoe)
[roscoe.](https://github.com/facebookresearch/ParlAI/tree/main/projects/roscoe) [Annotated datasets can be downloaded from https://dl.fbaipublicfiles.com/parlai/](https://dl.fbaipublicfiles.com/parlai/projects/roscoe/annotations.zip)
[projects/roscoe/annotations.zip.](https://dl.fbaipublicfiles.com/parlai/projects/roscoe/annotations.zip)
-----
p p
In this paper, we present ROSCOE, a suite of interpretable and fine-grained step-by-step generation evaluation metrics to address the above gaps. Rather than
providing one score that only evaluates the generated
text on the overall, ROSCOE encapsulates fine-grained
metrics under four perspectives: (1) semantic alignment
defines to what extend the generated reasoning is coherent, and grounded with the source context; (2) logical
_inference evaluates if the generated reasoning steps are_
consistent within itself and checks for logical fallacies;
(3) semantic similarity quantifies the degree of similarity between the generated reasoning and the context or
between intermediate steps to capture hallucinations or
repetitions; and (4) language coherence evaluates if the
whole chain flows naturally.
Table 1: Step-by-step reasoning for logical inference
task generated from 2-shot GPT-3 (text-davinci-002)
(Brown et al., 2020). Error types are highlighted: logical
or factual issues, partially correct, somewhat-relevant.
Full 2-shot exemplars are in Appendix B.
To evaluate ROSCOE against existing metrics, we devise _• Tuesday is her travel day, so she will likely not be_
a taxonomy of reasoning errors for multi-step genera- traveling this weekend.
tions and use it to create synthetic data and collect human
**Context: Shelley is from Virginia, but is visiting that**
city with that famous market where they throw the fish.
Going home next Tuesday!
**Question: Is it likely that Shelly will be near the Pacific**
Ocean this weekend?
**GPT-3 Step-by-step reasoning:**
_• No, it is not likely that Shelley will be near the Pacific_
Ocean this weekend.
_• Shelley is from Virginia, which is on the East Coast_
of the United States.
_• The Pacific Ocean is on the West Coast._
_• Shelley is visiting a city with a famous market, which_
could be anywhere in the world.
_• Tuesday is her travel day, so she will likely not be_
traveling this weekend.
evaluations on commonly used reasoning datasets. Our taxonomy and annotated datasets help us gain deeper
insights into the causes of reasoning inconsistencies and weaknesses of LLMs. We evaluate ROSCOE with 18
fine-grained metrics under the above four perspectives. ROSCOE demonstrates performance gains against
baseline evaluation metrics on all tasks that require reasoning over context. Additional sensitivity analysis
shows that ROSCOE is more robust when dealing with tasks that require logical and arithmetic reasoning.
**Contributions. (1) We propose a new taxonomy for reasoning errors, and use it for collecting human**
annotations and creating synthetic datasets. (2) Using our taxonomy, we propose a new suite of metrics that
focus on sequence and step level analysis of step-by-step reasoning. (3) We present extensive comparative
analysis on 11 datasets of varied complex reasoning problems demonstrating the strengths of each metric,
especially in terms of interpretability relative to baselines, and considerations for use.
2 RELATED WORK
**Evaluating Explanations. Free-form natural Language (NL) explanations of model decisions should enable**
accurate representation of the reasoning process and degree of plausibility (Danilevsky et al., 2020; Jacovi &
Goldberg, 2021; Jacovi et al., 2021). A qualitative assessment of NL explanations with correctness labels
collected from human judges was presented in (Camburu et al., 2018). Recent work has also investigated
automatic metrics for natural language generation (NLG) evaluation including word overlap or embedding
based similarly with human written explanations (Clinciu et al., 2021). Though fast and cost-effective,
automatic metrics for NLG are not equipped to measure the logical inconsistencies or information gain
with thinking steps (Reiter, 2019; Celikyilmaz et al., 2020). Explanations have also been evaluated by
collecting datasets, and running correlation analysis to investigate the degree to which an automatic metric
correlates with human judgements of clarity, relevance and informativeness (Leiter et al., 2022; Welleck et al.,
2022).Although reliable, human evaluation is an expensive, domain specific, and time-consuming process. In
comparison, ROSCOE provides generic automatic evaluation procedures that are domain and task specific.
**Automatic Metrics. Many NLG evaluation metrics exist in the literature including ones based on: n-gram**
match (Lin, 2004), regression (Sellam et al., 2020), embedding proximity (Zhang et al., 2020), paraphrasing
(Thompson & Post, 2020), generation as an evaluator (Yuan et al., 2021); information alignment (Deng et al.,
2021); among others. Although these metrics are easy to use, they evaluate the alignment of two texts as a
whole and are not designed to assess individual reasoning steps. The closest metrics to ours are CTC (Deng
-----
p p
Table 2: Taxonomy of Step-by-Step Reasoning Errors. Full list of the error types with examples is illustrated in Table 10.
**Error Type** **Definition**
**Grammar** Faulty, unconventional, or controversial grammar usage
**Factuality** Information about an object (i.e. quantity, characteristics) or a named entity doesn’t match with the input context.
**Hallucination** Information is not provided in the problem statement and is irrelevant or wrong
**Redundancy** Explanation contains redundant information, which even though might be factual, is not required to answer the question
**Repetition** Step paraphrases information already mentioned in previous reasoning steps
**Missing step** The content of the generated reasoning is incomplete and lacks required information to produce the correct answer.
**Coherency** Steps contradict each other or do not follow a cohesive story
**Commonsense** Model lacks relations that should be known from general world (e.g., "all ducks are birds")
**Arithmetic** Error in math calculations
et al., 2021) and BARTScore (Yuan et al., 2021), as both introduce a set of interpretable metrics to evaluate
the similarity between two texts. However, ROSCOE is unique in providing fine-grained interpretations of
reasoning steps, determining contradictions, and identifying ordering issues in the reasoning narrative.
**Self-Consistency with LLMs. Recent work on improving LLMs performance on complex reasoning tasks**
uses an ensemble strategy called self-consistency (Wang et al., 2022). This method samples a diverse set of
reasoning paths from a language model via reasoning traces prompting and returns the most consistent final
answer in the set. Other work evaluates the diversity of a reasoning path (Li et al., 2022), or the consistency
of an inference step (Creswell et al., 2022) or finetune LLMs (Zelikman et al., 2022) to improve on difficult
NLP tasks. In contrast to these works, we present a suit of metrics that focus on determining the type of the
error (e.g., commonsense or logical inconsistency) in a reasoning path, if one exists.
3 REASONING ERROR TAXONOMY AND DATASETS CONSTRUCTION
**Problem Formulation. Our goal is to score step-by-step rationales generated by a language model. We**
assume that the model is given a source context s = _s1,_ _, sT_ of T-sentences indicating a problem
statement followed by a question and is prompted to generate step-by-step reasoning ( { _· · ·_ _}_ Nye et al., 2021). We
refer to this as a hypothesis h = _h1,_ _, hN_ of N-steps, including a final answer as the last step. We do
_{_ _· · ·_ _}_
not assume availability of gold step-by-step reasoning references r = _r1,_ _, rK_ of K-steps.
_{_ _· · ·_ _}_
**Taxonomy. We propose a new taxonomy of generic reasoning errors for language problem solving. We**
first conduct manual preliminary analysis on different types of LLMs reasoning errors using five Human
_judged datasets described below. Based on our analysis, we identified nine error types centered on the_
overall reasoning chain (i.e., the quality of the step-by-step thinking, including consistency with the context
and commonsense reasoning). Our taxonomy also includes fine-grained errors marking inconsistency of a
reasoning step with the previous steps, whether each step contributes to the final decision, and overall logical
inference or fluency issues. The definition of error types is in Table 2, and Table 10 provides examples.
**Datasets and Annotations. To evaluate ROSCOE, we select datasets covering diverse set of tasks that require**
reasoning skills (e.g., logical, arithmetic, and commonsense reasoning tasks). We separate these datasets into
two: (1) Diagnostics datasets that contain gold standard step-wise reasoning chains, where we synthetically
perturb some of the reasoning steps to introduce different generation errors (e.g., missing step, mathematical
error, etc.); (2) Human judged datasets with model generated step-by-step reasoning outputs where the
reasoning error evaluations are solicited from expert judges. We investigate these in §5.
4 REASONING SCORER: ROSCOE
We present our fine-grained metrics under four perspectives: semantic alignment, semantic similarity, logical
_inference and language coherence. Each metric is bounded within [0, 1], where 1 indicates the perfect score_
-----
p p
and 0 corresponds to failure. A metric is reference-free or unsupervised when it uses the source and hypothesis
(h → **_s), while reference-based or supervised when evaluated between hypothesis and reference (h →_** **_r)._**
4.1 SEMANTIC ALIGNMENT METRICS (ROSCOE-SA)
At the core of the ROSCOE semantic alignment[2] metrics is the reasoning alignment vector from the N -step
hypothesis h to the source s of length T : r-align(h **_s) =_** _α1, α2,_ _, αN_, where each alignment value
_→_ _{_ _· · ·_ _}_
_αhypothesis step and most similar sentence in a context, and explicitly measures the grounding of the step-wisei = r-align(hi →_ **_s) = [1 + max[T]j=1[(cos(][h][i][, s][j][)]][/][2][ ∈]_** [[0][,][ 1]][ is the normalized cosine similarity between]
reasoning with respect to the source text (illustrated in App. D, Fig. 3). We estimate the alignment vector
_r-align(h →_ **_s) by matching source text and the reasoning chains on the embeddings of tokens and individual_**
reasoning steps. A similar information alignment score is introduced in CTC (Deng et al., 2021) to measure
the confidence that the information of the i-th source document token sj is grounded by a hypothesis token
_hi. Our reasoning alignment is different in that we measure if a hypothesized reasoning step hi supports the_
source context s. Our proposed metrics are summarized in Table 3.
Table 3: Semantic alignment metrics (ROSCOE-SA).
**Score** **Description**
**Faithfulness-Step** This step-level score is based on the alignment from the hypothesis steps to the source sentences, and is calcu(h → **_s)_** lated as the mean reasoning alignment score over the steps of reasoning (see illustration in Appendix D, Figure 3):
(1/N ) _i=1_ _[r][-align][(][h][i][ →]_ **_[s][)][. Faithfulness measures if the model misinterpreted the problem statement, or the reasoning]_**
chain is too vague, irrelevant, or misuses information.
**Faithfulness-Token** We extend step-level embeddings of the Faithfulness-Step by measuring similarities between the token embeddings:[P][N]
(h → **_s)_** (1/(N + M )) _i=1[[][r][-align][(][h][i][ →]_ **_[s][) +][ P][M]j=1[i]_** _[r][-align][token][(][h][i,j][ →]_ **_[s][)]][, as shown in App.][ D][, Fig.][ 3][.][ M][i][ is the number of]_**
tokens in step hi, M = _i=1_ _[M][i][ is the total number of tokens in the reasoning chain,][ h][i,j][ is the][ j][th token in][ i][th step, and]_
_r-align[token]_ is the alignment vector from tokens in step[P][N] _hi to all tokens in s._
**Informativeness-Step** Measures how well information present in the source is used in the reasoning steps:[P][N] [(1/T ) _t=1_ _[r][-align][(][s][t][ →]_ **_[h][) +]_**
**(Info-Step) (h ↔** **_s)_** (1/N ) _i=1_ _[r][-align][(][h][i][ →]_ **_[s][)]][/][2][. Info-step gives a higher score to reasoning steps that are well-grounded with respect]_**
to the source, and identifies the degree of information from source that is covered by the generated hypothesis. A lower[P][T]
Info-Step score corresponds to the reasoning steps that are not related to the source sentences or have missed information
[P][N]
provided in the context.
**Repetition-Token** To identify repeated, or paraphrased steps, we look at the token alignment scores between all steps in the hypothesis chain:
(hi → _hj)_ 1token alignment, and find those sentences that maximize this alignment score. In other words, Repetition-Token will punish − maxi=2..N maxj=1···i−1[(1/Mi) _l=1_ _[r][-align][token][(][h][i,l][ →]_ _[h][j][)]][. For each pair of sentences, we look at the mean]_
chains where there are at least two steps with high overlap in token embeddings.
[P][M][i]
**Hallucination** To find irrelevant reasoning steps, we use alignment score to identify steps that are both not related to the context and not in
(h → (s, r)) the reference chain (to avoid punishing for possibly relevant commonsense knowledge):s)] · [1 − _r-align(h →_ **_r)]). Here, 1 is an all-ones vector, and (·) is the element-wise product. 1 −_** maxi=1..N ([1 − _r-align(h →_
**Redundancy (h →** **_r)_** with steps that are not required for the correct solution.To find chains that contain information that is not required to solve the problem (i.e., redundant steps), we identify thosehypothesis steps that are least aligned with the the reference steps: mini=1..N r-align(hi → **_r). This score punishes chains_**
**Semantic** This score can be viewed as a measure of how easily a gold reference could be generated by the hypothesis. It compares step
(Coverage-Step(r, h) **_s)_** level grounding of the hypothesis with respect to the source, and the gold reference grounding: |(1/T ) _t=1_ _[r][-align][(][r][t][ →]_
_→_ **_s) −_** (1/N ) _i=1_ _[r][-align][(][h][i][ →]_ **_[s][)][|][, where][ |·|][ indicates absolute value.]_**
**Reasoning Alignment** The most straightforward way to evaluate the correctness of the hypothesis chain is to compare the degree of the overlap[P][K]
(h → **_r)_** between the hypothesis and the reference. One way of doing that is to measure the reasoning alignment between them:[P][N]
(1/N ) _i=1_ _[r][-align][(][h][i][ →]_ **_[r][)][.]_**
**Commonsense** Measures if hypothesis lacks steps that are not stated in the source, but are required to solve the problem such as
(r → (h, s)) We detect such information by extracting steps in the reference reasoning that are not grounded by the source text:general world knowledge (e.g., "velocity is distance divided by time", "1 foot is 12 inches", "all ducks are birds", etc.).[P][N]
1 − maxi=1..K ([1 − _r-align(r →_ **_h)] · [1 −_** _r-align(r →_ **_s)])._**
**Missing Step (r→h)** To identify steps that are missing from the hypothesis but could be required to solve the problem, we look at the alignmentbetween reference and the hypothesis, similar to Redundancy. However, here we go through each step in the reference, and
check if there is a similar step in the hypothesis: mini=1..K (r-align(ri→h)).
2Semantic alignment refers to determination of relations between concepts with the same or a similar intended
meaning (Agirre et al., 2013).
-----
p p
4.2 SEMANTIC SIMILARITY METRICS (ROSCOE-SS)
Semantic similarity metrics quantify the degree of semantic equivalence between pieces of text. As opposed
to the ROSCOE-SA metrics, ROSCOE-SS considers text as a whole, rather than relying on text units
comparisons. We propose the following metrics summarized in Table 4.
Table 4: Semantic similarity metrics (ROSCOE-SS).
**Score** **Description**
**Informativeness-Chain** Similar to Info-Step, this metric quantifies the degree of agreement between the hypothesis chain and the source and is
**(Info-Chain) (h→s)** embeddings in *-Step types of metrics introduced in Tablecalculated as [1 + cos(h, s)]/2. We embed reasoning chain and source context as a whole, as opposed to using step-wise 3.
**Repetition-Step** Measures repetition-related errors on the step level by checking if it paraphrases information already mentioned in the
(hi↔hj) previous steps:individual tokens in pairs of steps, Repetition-Step considers step embeddings similarity and is more robust to changing (1 − maxi=2..N maxj=1···i−1[cos(hi, hj)])/2. Unlike Repetition-Token, which is orderless and compares
contexts.
**Semantic Coverage-** Reflects the overall degree of similarity between the reference and hypothesis chains, comparing reference and hypothesis
**Chain (r ↔** **_h)_** embeddings as a whole: [1 + cos(r, h)]/2.
4.3 LOGICAL INFERENCE METRICS (ROSCOE-LI)
Logical inference metrics (Table 5) measure logical errors between pieces of text. We use an NLI model that
was trained to classify hypothesis-context pairs into entailment, neutral, and contradiction classes (Laurer
et al., 2022) to infer the contradiction probability pcontr.
Table 5: Logical inference metrics (ROSCOE-LI).
**Score** **Description**
(Self-Consistencyhi↔hj) Measures logical entailment errorspunish chains where there is a pair of steps that are likely to contradict each other. within the reasoning steps: 1 − maxi=2..N maxj<i pcontr(hi, hj). This metric will
(Source-Consistencyh ↔ **_s)_** Measures logical entailment errors between any generated reasoningmaxdicts any sentence in the context. We take the maximum probability of contradiction over all steps, following the logic thati=1..N maxj=1..T pcontr(hi, sj). Specifically, for each reasoning step we measure the probability that it contra- h and the source context s: 1 −
a contradiction anywhere in the reasoning chain signals a failure of the overall argument.
4.4 LANGUAGE COHERENCE METRICS (ROSCOE-LC)
To evaluate language coherence (Table 6), we use perplexity PPL as scored by the GPT2-Large model
(Radford et al., 2019), and English grammatical acceptability pgram as scored by the classifier model from
Krishna et al. (2020). Both models were used as-is with no finetuning.
Table 6: Language coherence metrics (ROSCOE-LC).
**Score** **Description**
**Perplexity-Chain (h)** Average perplexity of all tokens in the generated reasoning steps: 1/PPL(h). The context used to score each token is
the previous tokens in the current and all previous steps. Steps are joined with a space character. To keep the range and
orientation consistent with the other scores we invert the perplexity.
**Perplexity-Step (hi)** Average perplexity of all tokens in the generated reasoning steps, where the context used to score each token is only the
previous tokens within the current step: 1/[(1/N ) _i=0_ [PPL(][h][i][)]][. To keep the range and orientation consistent with the]
other scores we invert the perplexity.
**Grammar (hi)** Probability of grammatical acceptability of each step, averaged over all steps:[P][N] (1/N ) _i=0_ _[p][gram][(][h][i][)][.]_
[P][N]
5 EXPERIMENTAL SETUP
**Diagnostics Datasets. We construct our first category of labeled datasets by generating perturbations — i.e.,**
deterministic modifications — on half of the reference reasoning steps and assign binary labels based on
whether or not a chain has been perturbed. We select seven language understanding and entailment datasets
-----
p p
that require complex problem solving skills, and have reference step-by-step explanations: Entailment-Bank
(deductive reasoning) (Dalvi et al., 2021), ProofWriter (logical reasoning) (Tafjord et al., 2021); three
arithmetic reasoning datasets MATH (Hendrycks et al., 2021), ASDIV (Miao et al., 2020) and AQUA (Liang
et al., 2018); EQASC (explanations for commonsense question answering) (Aggarwal et al., 2021), and
**StrategyQA (question answering with implicit reasoning strategies) (Geva et al., 2021) (see dataset details**
in App. E.1). Using our taxonomy, we introduce 12 error perturbation rules and apply on these datasets to
construct our diagnostics datasets (see details in App. E.3).
**Human Judged Datasets. We select our second category of datasets from commonly used complex reasoning**
tasks: GSM8K (arithmetic reasoning) (Cobbe et al., 2021), DROP (discrete reasoning) (Dua et al., 2019),
**ESNLI (deductive and commonsense reasoning) (Camburu et al., 2018), COSMOS-QA (commonsense**
reasoning) (Huang et al., 2019) and SemEVAL (Ostermann et al., 2018) (commonsense reasoning). Wei et al.
(2022) provide model generated chain of thought reasoning steps for GSM8K. We used chains produced
by the 175b_verification model to annotate for reasoning errors. For other datasets, we prompt GPT-3
LLM (Brown et al., 2020) with few-shot in-context examples to obtain step-by-step reasoning sequences (see
examples in App. E.2). We use the error types in our taxonomy in Table 2 as human evaluation perspectives
of reasoning errors where we solicit five expert annotators[3]. The data collection interface provided judges
with the source text (e.g., source and a question, or hypothesis, premise, and a question if they entail) and
associated reasoning text clearly separated into individual steps. Judges were asked to rate the chain as a
whole (e.g., on overall quality) as well as each individual step (e.g., commonsense errors, contradicts with
the previous steps). App. Table 16 summarizes the distribution of error types annotated by the judges. See
App. F for details.
**ROSCOE Training. To obtain reasoning step embeddings, we finetune SimCSE (Gao et al., 2021), a**
supervised sentence similarity model extending the RoBERTa word embedding model (Liu et al., 2019) on
multi-step reasoning datasets we listed in §5 (see details in Table 11)[4]. SimCSE is a contrastive learning
model that is trained on triplets of reference reasoning steps, positive and hard-negative hypothesis reasoning
steps to minimize the cross-entropy objective with in-batch negatives. For contrastive learning, we use the
context and reference reasoning steps as a positive sample (s, r), and context and perturbed reference steps
(s, h) as hard-negative pairs. For finetuning, we embed source context and hypothesis chain as a whole,
without splitting it into steps. With the finetuned model we embed each individual step, as well as a reasoning
chain as a whole. We use the pretrained checkpoint of supervised SimCSE model sup-simcse-roberta-base to
initialize our model, and further train it for five epochs on our synthetic train data (details in App. G). We also
compare ROSCOE scores calculated against sup-simcse-roberta-base SimCSE model, and all-mpnet-base-v2
sentence embedding model (Reimers & Gurevych, 2019) to understand metrics sensitivity to the embedding
method.
**Baseline Metrics. We use text generation evaluation metrics as baseline metrics and comprehensively**
examine the ones outlined in §2, which are: n-gram match based metrics including ROUGE-1, ROUGE-2,
and ROUGE-L (Lin, 2004); pre-trained scores including BLEURT (Sellam et al., 2020), PRISM (Thompson
& Post, 2020), BERTScore (Zhang et al., 2020), BARTScore using the Faithfulness (s → **_h) direction_**
for factuality and relevance, and its finetuned variant BARTScore+CNN+Para BARTScore+ (Yuan et al.,
2021); and information alignment metrics of CTC, CTC-Relevancy and CTC-Consistency. We also include
**BARTScore-P, which we obtain by finetuneing BART (Lewis et al., 2020) on the same reasoning datasets we**
use for finetuning our SimCSE embedding models. Most of our ROSCOE metrics are constructed referencefree. We also have metrics that use reference reasoning steps which we examine against human judgements.
We use the official code for each metric.
3We chose expert annotators over crowd-sourcing, because our annotation task is cognitively challenging and requires
fine-grained annotation.
[4Fine-tuned model is available at https://huggingface.co/facebook/roscoe-512-roberta-base](https://huggingface.co/facebook/roscoe-512-roberta-base)
-----
p p
**Meta Evaluation. We use Somers’ D[5]** (Somers, 1962), which measures the ordinal association between two
measured quantities, to meta-evaluate each scorer against synthetic and human scores. We prefer Somers’
_D over more commonly used Kendall’s τ or Kendall’s τ_ _-b, because it is better in handling the ties of a_
biased random variable (Agresti, 2010, Section 7.1.5), which imposes an upper bound on the possible
values Kendall’s τ (-b) can take. For each score Y considered, our correlations are built against the biased
random variable X ∈ [0, 1], represented by the perturbation or error presence indicator and evaluated using
_D(Y |X) = τ_ (X, Y )/τ (X, X).
6 EXPERIMENTAL RESULTS
**Controlled Experiments with Diagnostics Datasets. Table 7 shows Somers’ D correlation for metrics mea-**
sured reference-free on six different datasets and compares baselines to ROSCOE-* aggregated categories calculated with finetuned embeddings: ROSCOE-SA, ROSCOE-SS, ROSCOE-LI, ROSCOE-LC. Results also
include ROSCOE metrics with all-mpnet-base-v2 (ROSCOE-SA[1], ROSCOE-SS[1]) and sup-simcse-roberta_base (ROSCOE-SA[2], ROSCOE-SS[2]) sentence embedding models. Correlations for ProofWriter are taken_
on its depth-5 subset. We report highest correlation scores across perturbations within each dataset. The
breakdown of all ROSCOE metrics is in App. Table 18.
We observe that: (1) ROSCOE can out- Table 7: Somers’ D correlation of different metrics on six Diagperform all other reference-free methods on **nostics datasets. Metrics are measured reference-free on (s, h).**
all six diagnostic datasets, (2) the gains for We take the maximum score over different perturbations. The two
ROSCOE-SS are more pronounced in four highest correlations for each dataset are bolded and underlined, reout of six diagnostics datasets, which sug- spectively. Correlations that are not significant (p-value ≥ 0.05) are
omitted when aggregating, and "-" denotes an absence of any signifi
gests that ROSCOE can capture hallucinations
cant correlation. Breakdown of all baseline and ROSCOE metrics is
and repetitions in step-wise reasoning. On
shown in App. H.1, Table 18.
Proofwriter, our scorers show lower correlations, because as shown in Table E.1, the **EntBank Math AQUA ProofWr. EQASC ASDIV**
context is a list of facts and rules and the ROUGE-L 0.365 0.156 0.264 0.106 0.315 0.269
reasoning steps can include unordered fact BLEURT 0.257 0.148 0.252 0.024 0.447 -
BERTScore 0.380 0.124 0.220 0.117 0.462 0.322
and rule combinations, but still a correct an- BARTScore 0.358 0.185 0.317 0.081 0.415 -
swer can be deduced. This makes it challeng- BARTScore+ 0.315 0.164 0.251 0.054 0.297 -
ing for ROSCOE to evaluate the steps in se- BARTScore-P 0.186 0.128 0.215 0.011 0.276 -
quence. Overall, the correlations of the base- PRISM 0.453 0.208 0.191 0.235 0.436 -
CTC Relev. 0.258 0.188 0.217 0.394 0.485 0.382
line metrics are much lower than ROSCOE, CTC Consist. 0.310 0.282 0.157 0.513 0.270 0.396
because the baseline metrics are designed to
ROSCOE-SA 0.919 0.939 0.971 0.763 **1.000** **0.879**
capture the semantic or lexical overlap be- ROSCOE-SA[1] 0.913 0.936 0.972 0.771 **1.000** 0.198
tween a reference and hypothesis and it is ROSCOE-SA[2] 0.919 0.939 0.971 0.732 **1.000** 0.515
harder to detect logical consistency without a ROSCOE-SS **0.955** 0.924 0.982 0.624 **1.000** 0.857
golden reference text. ROSCOE is specifically ROSCOE-SS[1] 0.909 0.932 0.982 0.631 **1.000** 0.280
focused on reference-free settings, and can ROSCOE-SS[2] 0.901 **0.949** **0.991** 0.621 **1.000** 0.289
gauge each individual step against the source ROSCOE-LI 0.917 0.331 0.424 0.289 0.793 0.771
and other generated steps. In fact, our met- ROSCOE-LC 0.604 0.392 0.359 **0.788** 0.859 0.485
rics also work well against the baselines in
the reference-based setting (comparing against reference reasoning steps). In App. Table 19 we present
correlations when metrics are measured as reference-based. We also observe that finetuning SimCSE
gives highest improvements on the ASDIV dataset. ASDIV is a 1-step reasoning dataset (see App. Table 12), where step is represented by an equation with one of the arithmetic perturbations added. We
5We use SciPy (Virtanen et al., 2020) to calculate correlations and obtain p-values from a hypothesis test where the
null hypothesis is an absence of association.
-----
p p
hypothesize that including these patterns in finetuning helped the model to better learn relationships between context and equations, and resulted in higher scores. On EQASC dataset, Repetition* scores are
able to catch all duplicated steps in a chain, i.e., we can separate perturbed and non-perturbed chains
based on the given threshold value for the Repetition* scores, and achieve perfect correlation scores (App.
Table 20). To understand if finetuning actually helps to improve scoring, we compare non-aggregated
metrics (see details in App. Table 18). We observe, that finetuning indeed helps to improve ROSCOE: on
average across datasets, all correlations except Repetition_* scores improve (up to 0.556 on InformativenessChain), with mean Repetition-Token not changing, and mean Repetition-Step degrading by 0.005. We
speculate that since we finetune the model using reasoning chains and context as a whole, it helps to
better capture step-by-step rationales, while possibly degrading on word and sentence-level semantics.
**Meta-Evaluations on Human Judge-** Table 8: Somers’ D correlations of metrics with human judgement.
**ment Datasets. Table 8 reports a sum-** We report the maximum over the error types in Table 2. All metrics
mary of meta-evaluation of ROSCOE met- are measured reference-free on (s, h). The highest two correlations in
rics comparing against baselines on hu- each column are bolded and underlined, respectively. Correlations that
man judged datasets. The correlations are not significant (p-value ≥ 0.05) are omitted when aggregating, and
"-" denotes an absence of any significant correlation. Breakdown of all
are measured based on the presence of
baseline and ROSCOE metrics is shown in App. H.2.
a particular error from Table 2 and we
report the highest correlation across all **DROP** **GSM8K** **ESNLI** **COSMOS** **SemEVAL**
error types within each dataset. We ob- Rouge-L 0.278 0.252 0.557 -0.441 -0.478
BLEURT 0.328 0.256 0.541 0.218 -0.356
serve that: (1) on all tasks, ROSCOE met- BERTScore 0.275 0.235 0.590 -0.420 -0.295
rics outperform all other baselines when BARTScore -0.835 -0.546 0.549 -0.544 -
evaluated as reference-free; (2) overall, BARTScore+ -0.665 - 0.482 -0.186 -
BARTScore-P -0.642 - 0.255 -0.207 -
ROSCOE yields considerably better corre- PRISM -0.733 -0.455 0.580 -0.376 -
lations, which indicates that step-by-step CTC-Relevance 0.333 -0.371 0.334 - -0.349
reasoning generations can be more effec- CTC-Consistency 0.462 -0.174 0.647 0.275 -0.301
tively evaluated with ROSCOE. In gen- ROSCOE-SA 0.578 0.392 0.521 0.555 0.337
eral, most correlations with human judge- ROSCOE-SA[1] 0.790 0.500 **0.799** 0.638 0.485
ROSCOE-SA[2] 0.578 0.392 0.599 0.555 0.337
ments are moderate when compared to
the synthetic correlation scores, indicat- ROSCOE-SSROSCOE-SS[1] **0.8240.791** 0.4710.514 0.5300.507 0.5930.642 0.4110.508
ing that step-by-step reasoning evaluation ROSCOE-SS[2] 0.799 **0.638** 0.531 **0.658** **0.535**
is among the cognitively hard tasks for ROSCOE-LI 0.584 0.345 0.531 0.444 0.372
neural models (Deutsch et al., 2022). In- ROSCOE-LC 0.205 -0.184 0.447 -0.212 0.517
terpretable metrics such as ROSCOE can
provide better information about a model’s reasoning skills, thus future work should improve such metrics on
aligning with human judgments. In App. H.2, we show fine-grained experimental analysis per each human
labeled dataset. Specific examples showcasing ROSCOE scoring abilities are summarized in Table 40.
7 ANALYSIS
**How sensitive are ROSCOE metrics against level of errors?** To evaluate how well metric values match
human assessment of reasoning, we measure sensitivity to the level of errors. We perturb sentences in the
MATH (arithmetic) and EntailmentBank (deductive reasoning) diagnostic datasets (similar to § 5) and inject
different levels of errors into the reasoning text. Using randomly selected perturbation types, we construct
up to a maximum of 3 perturbations per instance. We measure the correlation (Somers’ D) between the
reasoning inconsistency level 1, 2, 3 of the reasoning steps (i.e., the number of injected errors) and the metric
score. Fig. 1 illustrates the results averaged over different perturbations.
We expect the metrics correlate with humans better when the level of errors is high. Both semantic alignment
of the reasoning ROSCOE-SA, and the semantic similarity metrics ROSCOE-SS show consistent behavior
-----
p p
on both datasets, while baseline metrics fluctuate with low correlations. Baseline metrics perform better on
EntailmentBank. On MATH, ROSCOE-LC and the baseline metrics show minimal impact, which can be that
some of the perturbations applied on the MATH dataset (e.g., RandomOperation, or ShuffleNumbers) are
harder to detect with language model based (BARTScore) and NLI model based (ROSCOE-LC) metrics.
**What does ROSCOE illuminate about scores across errors and tasks?**
For an ideal scorer based on ease of use, it would be possible to pick a
set of fixed thresholds that had error discrimination power across datasets.
However, we show that this dataset-agnostic ideal is currently not possible
and an issue endemic across scores, including baselines. We study which
metrics correlate strongly with which perturbations, with a focus of consistency across datasets. From this, we plot the interquartile ranges for
strongly correlated metric and perturbation pairs. We show a sample of
these in Fig. 2, though find that the trends generally hold across metrics
and perturbations (see Fig 6). We note that within a given dataset, scores
are well separated: the perturbed version of a dataset for a given score
and perturbation type shows little interquartile overlap with the original
version. However, this does not hold across datasets – e.g., in (Score: InfoChain, Perturbation: Repetition), if one were to set a detective threshold
for the Repetition perturbation based off EntBank (around 0.95), it would
mark almost all values of EQASC as perturbed, even non-perturbed samples. This shows the challenge of using metrics for classification without
calibration for drifts in both mean and variance across datasets, even if a
metric generally correlates well with detecting a given error.
1.00
0.75
0.50
0.25
0.00
|Col1|ROSCOE-SA BARTScore-P ROSCOE-SS BERTScore ROSCOE-LI ROUGE-L ROSCOE-LC PRISM BARTScore BLEURT|Col3|Col4|
|---|---|---|---|
|M|ATH|||
|||||
|||||
|||||
0.75
0.50
0.25
0.00
|E|ntBank|Col3|Col4|
|---|---|---|---|
|||||
|||||
|||||
0.95), it would Level of Error
Figure 1: Sensitivity of selected
metrics on Somers’ D by injecting
levels of error into reasoning steps.
Score: Faithfulness-Step, Perturbation: Negate Step Score: BERTScore, Perturbation: Repetition
Score: Info-Chain, Perturbation: Repetition
0.5 0.6 0.7 0.8 0.9 1.0
Info-Chain score perturbed
(ProofWr., perturbed)
(ProofWr., original)
(Math, perturbed)
(Math, original)
(EntBank, perturbed)
(EntBank, original)
(EQASC, perturbed)
(EQASC, original)
(AQUA, perturbed)
(AQUA, original)
0.2 0.4 0.6 0.8 1.0
Faithfulness-Step score perturbed
0.4 0.2 0.0 0.2 0.4 0.6 0.8
BERTScore score perturbed
perturbed
original
perturbed
original
perturbed
original
Figure 2: Box-and-whisker plots of interquartile ranges of scores, for perturbations and reference-free metrics with
strong Somers’ D values. Scores are split by dataset and perturbation use. While interquartile ranges separate well by
perturbation use within a single dataset, there is overlap across datasets. This shows the drift of neural scores across
datasets and applies to both ROSCOE (left, center) and strong baselines (right).
8 CONCLUSION
In this paper, we introduce ROSCOE, a new suite of interpretable, unsupervised metrics that enables evaluation
of step-by-step reasoning generations of LMs when no golden reference generation exists. We present
a taxonomy of reasoning errors used to generate and evaluate our metrics. Experimental results, from
evaluating on both synthetic and human-labeled datasets exhibiting multiple types of reasoning (commonsense,
arithmetic, and logical inference, etc.), demonstrate superior performance compared to prior semantic and
lexical similarly based baseline metrics for text generation. Our analysis shows improved capability in
evaluation of reasoning exhibiting nuances, such as factual and logical errors in step-wise decisions.
-----
p p
ETHICS STATEMENT
Explainability builds transparency and trust for users, eases bug-fixing and shortens improvement cycles
for metric designers, and will be required by law/regulations for AI systems to be applied to large-scale,
high-stakes domains. In this context, we hope our work will catalyze efforts on the topic of explainable
evaluation metrics for language model rationale generations. We should mention that our evaluation metrics
do not monitor the explanations from integrity or bias perspectives. Our work also uses five human expert
annotators and in the annotation process, annotators need to rate the model generated candidate rationals.
While the model-generated explanations can produce potentially unsafe content, the datasets for annotations
include domains related to logical and arithmetic concepts and general commonsense knowledge. The
anecdotal consensus was that the generations were safe and didn’t include biased statements.
REPRODUCIBILITY STATEMENT
To ensure the reproducibility of our empirical results, we will open source our code to Github, which will
contain: instructions for installing the virtual environment, data preprocessing, all score generation and
correlation scripts (both for ROSCOE and baselines), and trained embedding models. Detailed explanation of
all the finetuned models and metrics are given in the main paper as well as in the Appendices. We will also
release all the diagnostic and human judgment datasets used in our experiments.
REFERENCES
Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and
Dinesh Garg. Explanations for CommonsenseQA: New Dataset and Models. 2021.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. *SEM 2013 shared task:
Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM),
_Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pp._
[32–43, Atlanta, Georgia, USA, June 2013. Association for Computational Linguistics. URL https:](https://aclanthology.org/S13-1004)
[//aclanthology.org/S13-1004.](https://aclanthology.org/S13-1004)
Alan Agresti. Analysis of ordinal categorical data, volume 656. John Wiley & Sons, 2010.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated
corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical
_Methods in Natural Language Processing, pp. 632–642, Lisbon, Portugal, September 2015. Association_
[for Computational Linguistics. doi: 10.18653/v1/D15-1075. URL https://aclanthology.org/](https://aclanthology.org/D15-1075)
[D15-1075.](https://aclanthology.org/D15-1075)
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark,
Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models
[are few-shot learners. 33:1877–1901, 2020. URL https://proceedings.neurips.cc/paper/](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
[2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. e-snli: Natural language
inference with natural language explanations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman,
N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 9539–
9549. Curran Associates, Inc., 2018. [URL http://papers.nips.cc/paper/8163-e-snli-](http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf)
[natural-language-inference-with-natural-language-explanations.pdf.](http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf)
10
-----
p p
Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. Evaluation of text generation: A survey. CoRR,
[abs/2006.14799, 2020. URL https://arxiv.org/abs/2006.14799.](https://arxiv.org/abs/2006.14799)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling
with pathways. arXiv preprint arXiv:2204.02311, 2022.
Miruna-Adriana Clinciu, Arash Eshghi, and Helen Hastie. A study of automatic metrics for the evaluation
of natural language explanations. In Proceedings of the 16th Conference of the European Chapter
_of the Association for Computational Linguistics: Main Volume, pp. 2376–2387, Online, April 2021._
[Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.202. URL https://](https://aclanthology.org/2021.eacl-main.202)
[aclanthology.org/2021.eacl-main.202.](https://aclanthology.org/2021.eacl-main.202)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training
verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language mod[els for interpretable logical reasoning. arXiv, 2022. URL https://arxiv.org/abs/2205.09712.](https://arxiv.org/abs/2205.09712)
Bhavana Dalvi, Peter Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah Smith, Leighanna Pipatanangkura, and
Peter Clark. Explaining answers with entailment trees. EMNLP, 2021.
Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. A survey of
the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the
_Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint_
_Conference on Natural Language Processing, pp. 447–459, Suzhou, China, December 2020. Association_
[for Computational Linguistics. URL https://aclanthology.org/2020.aacl-main.46.](https://aclanthology.org/2020.aacl-main.46)
Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric P. Xing, and Zhiting Hu. Compression, transduction,
and creation: A unified framework for evaluating natural language generation. In EMNLP, 2021. URL
[https://aclanthology.org/2021.emnlp-main.599.pdf.](https://aclanthology.org/2021.emnlp-main.599.pdf)
Daniel Deutsch, Rotem Dror, and Dan Roth. Re-examining system-level correlations of automatic summarization evaluation metrics. arXiv preprint arXiv:2204.10216, 2022.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A
reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the
_2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human_
_Language Technologies, Volume 1 (Long and Short Papers), pp. 2368–2378, Minneapolis, Minnesota,_
[June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246. URL https:](https://aclanthology.org/N19-1246)
[//aclanthology.org/N19-1246.](https://aclanthology.org/N19-1246)
Tianyu Gao, Xingcheng Yao, and Danqi Chen. Simcse: Simple contrastive learning of sentence embeddings.
_arXiv preprint arXiv:2104.08821, 2021._
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle use a
laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association
_for Computational Linguistics, 9:346–361, 2021._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
11
-----
p p
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods
_in Natural Language Processing and the 9th International Joint Conference on Natural Language Process-_
_ing (EMNLP-IJCNLP), pp. 2391–2401, Hong Kong, China, November 2019. Association for Computational_
[Linguistics. doi: 10.18653/v1/D19-1243. URL https://aclanthology.org/D19-1243.](https://aclanthology.org/D19-1243)
Alon Jacovi and Yoav Goldberg. Aligning faithful interpretations with their social attribution. vol[ume 9, pp. 294–310, Cambridge, MA, 2021. MIT Press. doi: 10.1162/tacl_a_00367. URL https:](https://aclanthology.org/2021.tacl-1.18)
[//aclanthology.org/2021.tacl-1.18.](https://aclanthology.org/2021.tacl-1.18)
Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, and Yoav Goldberg. Contrastive explanations for model interpretability. In Proceedings of the 2021 Conference on Empirical
_Methods in Natural Language Processing, pp. 1597–1611, Online and Punta Cana, Dominican Republic,_
November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.120.
[URL https://aclanthology.org/2021.emnlp-main.120.](https://aclanthology.org/2021.emnlp-main.120)
Kalpesh Krishna, John Wieting, and Mohit Iyyer. Reformulating unsupervised style transfer as paraphrase
generation. In Empirical Methods in Natural Language Processing, 2020.
Andrew K Lampinen, Nicholas Roy, Ishita Dasgupta, Stephanie Cy Chan, Allison Tam, James Mcclelland,
Chen Yan, Adam Santoro, Neil C Rabinowitz, Jane Wang, and Felix Hill. Tell me why! Explanations
support learning relational and causal structure. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba
Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on
_Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 11868–11890. PMLR,_
[17–23 Jul 2022. URL https://proceedings.mlr.press/v162/lampinen22a.html.](https://proceedings.mlr.press/v162/lampinen22a.html)
Moritz Laurer, Wouter van Atteveldt, Andreu Casas, and Kasper Welbers. Less annotating, more classifying–
addressing the data scarcity issue of supervised machine learning with deep transfer learning and bert-nli.
2022.
Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, and Steffen Eger.
Towards explainable evaluation metrics for natural language generation. CoRR, abs/2203.11131, 2022.
[URL https://doi.org/10.48550/arXiv.2203.11131.](https://doi.org/10.48550/arXiv.2203.11131)
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin
Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language
generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Associa_tion for Computational Linguistics, pp. 7871–7880, Online, July 2020. Association for Computational_
[Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://aclanthology.org/2020.acl-](https://aclanthology.org/2020.acl-main.703)
[main.703.](https://aclanthology.org/2020.acl-main.703)
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning
problems with language models. arXiv preprint arXiv:2206.14858, 2022.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the
[advance of making language models better reasoners. arXiv, 2022. URL https://arxiv.org/abs/](https://arxiv.org/abs/2206.02336)
[2206.02336.](https://arxiv.org/abs/2206.02336)
Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin, and Keh-Yih Su. A meaning-based statistical English
[math word problem solver. pp. 652–662, June 2018. doi: 10.18653/v1/N18-1060. URL https:](https://aclanthology.org/N18-1060)
[//aclanthology.org/N18-1060.](https://aclanthology.org/N18-1060)
12
-----
p p
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches
_[Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https:](https://aclanthology.org/W04-1013)_
[//aclanthology.org/W04-1013.](https://aclanthology.org/W04-1013)
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv
_preprint arXiv:1907.11692, 2019._
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing english
math word problem solvers. pp. 975–984, 2020.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for
intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021.
Juri Opitz and Anette Frank. Towards a decomposable metric for explainable evaluation of text generation
from AMR. In Proceedings of the 16th Conference of the European Chapter of the Association for
_Computational Linguistics: Main Volume, pp. 1504–1518, Online, April 2021. Association for Compu-_
[tational Linguistics. doi: 10.18653/v1/2021.eacl-main.129. URL https://aclanthology.org/](https://aclanthology.org/2021.eacl-main.129)
[2021.eacl-main.129.](https://aclanthology.org/2021.eacl-main.129)
Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. Semeval-2018
[task 11: Machine comprehension using commonsense knowledge. In *SEMEVAL, 2018. URL https:](https://aclanthology.org/S18-1119.pdf)
[//aclanthology.org/S18-1119.pdf.](https://aclanthology.org/S18-1119.pdf)
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models
are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In
_Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association_
[for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.10084.](https://arxiv.org/abs/1908.10084)
Ehud Reiter. Natural language generation challenges for explainable AI. In Proceedings of the 1st Workshop
_on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019), pp._
[3–7. Association for Computational Linguistics, 2019. doi: 10.18653/v1/W19-8402. URL https:](https://aclanthology.org/W19-8402)
[//aclanthology.org/W19-8402.](https://aclanthology.org/W19-8402)
Christopher Michael Rytting and David Wingate. Leveraging the inductive bias of large language models for
[abstract textual reasoning. 2021. URL https://openreview.net/forum?id=urueR03mkng.](https://openreview.net/forum?id=urueR03mkng)
Thibault Sellam, Dipanjan Das, and Ankur Parikh. BLEURT: Learning robust metrics for text generation. In
_Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7881–7892,_
Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.704. URL
[https://aclanthology.org/2020.acl-main.704.](https://aclanthology.org/2020.acl-main.704)
Robert H Somers. A new asymmetric measure of association for ordinal variables. American sociological
_review, pp. 799–811, 1962._
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. ProofWriter: Generating implications, proofs, and
abductive statements over natural language. In Findings of the Association for Computational Linguistics:
_ACL-IJCNLP 2021, pp. 3621–3634, Online, August 2021. Association for Computational Linguistics._
[doi: 10.18653/v1/2021.findings-acl.317. URL https://aclanthology.org/2021.findings-](https://aclanthology.org/2021.findings-acl.317)
[acl.317.](https://aclanthology.org/2021.findings-acl.317)
13
-----
p p
Brian Thompson and Matt Post. Automatic machine translation evaluation in many languages via zeroshot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language
_Processing (EMNLP), pp. 90–121, Online, November 2020. Association for Computational Linguistics. doi:_
[10.18653/v1/2020.emnlp-main.8. URL https://aclanthology.org/2020.emnlp-main.8.](https://aclanthology.org/2020.emnlp-main.8)
Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni
Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett,
Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric
Larson, C J Carey, [˙]Ilhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold,
Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro,
Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for
Scientific Computing in Python. Nature Methods, 17:261–272, 2020. doi: 10.1038/s41592-019-0686-2.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves
chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of
thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. Naturalprover: Grounded
mathematical proof generation with language models. 2022.
Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean
Welleck, and Yejin Choi. Symbolic knowledge distillation: from general language models to commonsense
models. In Proceedings of the 2022 Conference of the North American Chapter of the Association
_for Computational Linguistics: Human Language Technologies, pp. 4602–4625, Seattle, United States,_
July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.341. URL
[https://aclanthology.org/2022.naacl-main.341.](https://aclanthology.org/2022.naacl-main.341)
Weizhe Yuan, Graham Neubig, and Pengfei Liu. Bartscore: Evaluating generated text as text generation. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan
(eds.), Advances in Neural Information Processing Systems, volume 34, pp. 27263–27277. Curran Associates, Inc., 2021. [URL https://proceedings.neurips.cc/paper/2021/file/](https://proceedings.neurips.cc/paper/2021/file/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Paper.pdf)
[e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Paper.pdf.](https://proceedings.neurips.cc/paper/2021/file/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Paper.pdf)
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. Star: Bootstrapping reasoning with reasoning.
arXiv, 2022.
Tianyi Zhang, Varsha Kishore, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. Bertscore: Evaluating
[text generation with bert. In International Conference on Learning Representations, 2020. URL https:](https://openreview.net/forum?id=SkeHuCVFDr)
[//openreview.net/forum?id=SkeHuCVFDr.](https://openreview.net/forum?id=SkeHuCVFDr)
14
-----
p p
APPENDIX
A LIMITATIONS
Our study is the first initial step that investigates the evaluation of the step-by-step reasoning produced by
large language models. Our taxonomy (in Table 2) covers several reasoning errors and we designed our
metrics to evaluate a spectrum of criteria including the ones in the taxonomy. Even though we cannot say we
cover all possible reasoning errors, our metrics are generic enough, work on natural language rationales, and
consider the alignment with the input context and the generated explanation. Nevertheless, we believe our
study can spur others to investigate different reasoning errors and use our code and datasets as templates to
extend further.
Due to the extensive analysis needed to thoroughly test and communicate the ability of our proposed metrics
to capture reasoning errors, we decided to leave some follow-up questions, such as the application of these
metrics for improving downstream task performance, for future exploration.
B FEW-SHOT PROMPTING EXAMPLES (CONT. FROM § 1)
Below is the 2-shot example we used to generate the explanations from GPT-3 as we show in the Fig. 1.
I will explain how to make logical inferences:
**Context: John wakes to the sound of his children giggling outside the door. They bust in and say**
"Happy Father’s Day!" with a plate full of food. 2022 has been a great year.
**Question: Can we infer that John was born before the inauguration of Barack Obama?**
**Explanation: We can infer that John is a father because of "his children". We can infer that the**
current year is 2022. Barack Obama was inaugurated in 2009, so it has been about 13 years since
Obama was inaugurated. John’s children are old enough to speak, so if John were born after Obama’s
inauguration, he would have been younger than 10 when he had children. So, we can infer that John
was born before Obama’s inauguration. The answer is "yes".
**Context:** Sam sees a piano in the subway stations. Laughing, his friends prod him to play, saying
"Remember how amazing your rendition of Chopsticks was at the Christmas party?"
**Question: Can we infer that John majored in piano at Juilliard?**
**Explanation: Juilliard is a top school for music, so someone who majored in piano at Juilliard would**
be an expert pianist. "Chopsticks" is a very simple song to play on the piano. His friends were
therefore likely being sarcastic when they told him to play, which would imply that Sam was not an
expert piano player. So it is unlikely that Sam majored in piano at Juilliard. The answer is "no".
Table 9 illustrates sample if in-context examples we used as prompts to obtain step-wise explanations.
15
-----
p p
Table 9: Example prompts used to generate step-by-step explanations using GPT-3.
(Used for ESNLI dataset)
Explain if premise entails hypothesis.
**Premise: Tomato sauce (also known as Neapolitan sauce, and referred to in Italy as Salsa di pomodoro) refers to any**
of a very large number of sauces made primarily from tomatoes, usually to be served as part of a dish (rather than as
a condiment). Tomato sauces are common for meat and vegetables, but they are perhaps best known as sauces for
pasta dishes.
**Hypothesis: Tomato sauce is only rarely used to top pasta.**
**Explanation: Salsa di pomodoro is made of peppers. The answer is No.**
**Premise: Kirk Johnson (born 1960) is an American scientist, author, curator, and museum administrator, and is**
currently serving as Director of Smithsonian’s National Museum of Natural History. Dr. Johnson is the host of the
PBS Nova series, Making North America[¨] ¨, which is a three-part series that describes the shaping of North America,
which aired on November 4, 11 and 18, 2015.
**Hypothesis: johnson hosted Making North America when he turned 50.**
**Explanation: Johnson was born in 1960 and hosted Making North America in 2015, so he was 55, not 50. The**
answer is No.
**Premise: A person rides his bicycle in the sand beside the ocean.**
**Hypothesis: A person is on a beach.**
**Explanation: A person rides beside the ocean means is on a beach if there is sand beside the ocean. The answer is**
Yes.
(used for SEMEVAL dataset)
Explain if premise entails hypothesis.
**Premise: I went into my bedroom and flipped the light switch. Oh, I see that the ceiling lamp is not turning on. It**
must be that the light bulb needs replacement. I go through my closet and find a new light bulb that will fit this lamp
and I place it in my pocket. I also get my stepladder and I place it under the lamp. I make sure the light switch is in
the off position. I climb up the ladder and unscrew the old light bulb. I place the old bulb in my pocket and take out
the new one. I then screw in the new bulb. I climb down the stepladder and I place it back into the closet. I then
throw out the old bulb into the recycling bin. I go back to my bedroom and turn on the light switch. I am happy to
see that there is again light in my room. Which room did the light go out in?
**Hypothesis: Bedroom**
**Label: Correct**
**Explanation: The answer is bedroom, because the light went out in the bedroom. The person replaced the light bulb**
in the bedroom.
**Premise: The weather was so nice today that I decided to have a barbecue. I called up some of my friends to invite**
them over. I set up some chairs and tables outside for my friends to sit around. I then took out some hot dogs from
my fridge and brought them over to my barbecue. My barbecue is a gas barbecue, so all I had to do was press the
knob and turn it to ignite it. Once I got a little fire going I put my hot dogs on the grill over it to cook. I closed the lid
and left my hot dogs to cook for about ten minutes. After ten minutes I decided to check on them to see if they were
done. They looked like they were cooked, so I put them all on a plate to serve to my friends. I went to my kitchen
and got hot dog buns, ketchup, mustard, and relish to serve on top of the hot dogs. I laid out all of the food and
condiments on the table so my friends could help themselves. We ate the hot dogs and relaxed around the table. Why
are we having the barbecue, to celebrate an occasion, or just because we want to?
**Hypothesis: Just because they want to.**
**Label: Correct**
**Explanation: The answer is correct, because the weather was nice. They decided to have a barbecue. There is no**
mention of any special occasion.
**Premise: Another Nurse can in to look at me, took one look at my rash and said OMG and ran out of the room . At**
that point I started crying, regretted coming to the hospital alone and started wheezing again . Then the doctor came
in, her eyes were all bugged out . What will I do after the doctor examines me ?
**Hypothesis: I will talk with my friends .**
**Label: Incorrect**
**Explanation: The premise is about the nurse’s reaction to the patient’s rash, and how the patient feels. The hypothesis**
does not follow up from this premise correctly. Instead, it talks about talking with friends.
16
-----
p p
C TAXONOMY OF REASONING ERRORS (CONT. FROM § 3)
To gain deeper insights into the types of reasoning errors introduced by LLMs while explaining their decisions,
we propose a new taxonomy of generic reasoning errors for language problem solving. Specifically, we
sampled from the training portions of the logical inference and commonsense reasoning datasets, and
prompted GPT-3 with reasoning explanations using prompts similar to App. B. We used task specific indomain examples for prompting. We also analyzed model generated explanations shared in Wei et al. (2022).
We then manually looked into each explanation and identified potential errors that are inconsistent with the
source, question or the prompt and within the reasoning chain. Some tasks require a model to classify the
logical relationship between premise and a hypothesis, others are question and answering tasks. We adjusted
our context and prompts according to the type of the task.
Our reasoning error taxonomy is summarized in Table 10. It contains types of errors concerning an overall
chain or an individual step. Specifically, the chain-level coarse-grained evaluations of the overall reasoning
chain deals with overall quality of the step-by-step thinking, coherence, consistency of the explanation within
itself, and consistency with the context, etc. On the other hand the step-level fine-grained evaluations focus on
the consistency of a reasoning step with the previous steps, if a step conveys new and supporting information
over the previous steps, factuality or logical inference issues. We use these error categories to construct
diagnostics datasets with perturbed errors as well as human judged datasets of reasoning errors. In the
taxonomy, we indicate *-step level errors to differentiate from the chain level error types.
17
-----
p p
Table 10: Taxonomy of Step-by-Step Reasoning Errors. Errors used for perturbations in constructing the diagnostic
datasets (Diag.) and for human annotation (Human) of the model generated reasoning chains are also marked. (Cont.
from Table 2.
**Error Type** **Description** **Example** **Diag. Human**
Grammar Faulty, unconventional, or contro- **Reference: He chews 4 pieces of gum a day so over 30 days** ✓ ✓
versial grammar usage he will chew 4*30 = «4*30=120»120 pieces of gum.
**Model Expl: He eats 4 pieces of gum a day and lasts 30 days**
so he eats 4*30 = «4*30=120»120 pieces of gum.
Factuality Information about object (i.e. quan- **Context: Anakin caught 10 fish.** ✓ ✓
tity, characteristics) or a personal **Model Expl: Locsin caught 10 fish.**
named entity does not match information provided in the question
Hallucination Information is not provided in the **Context: The basketball team went to the steakhouse to eat** ✓ ✓
problem statement and is irrelevant dinner. The first player ate a 6-ounce steak. The second
or wrong player ate beef tips, containing 8 beef tips, each an ounce
in size. The third player ate a one-pound steak. And the
fourth and fifth players ordered vegetarian meals. In total,
how many ounces of meat were consumed by the team?
**Model Expl: The fourth and fifth players ordered vegetarian**
meals, for a total of 2 ounces of meat.
Redundancy Explanation contains redundant in- **Context: Bob is cold. If something is quiet then it is blue.** ✓
formation, which even though All blue things are round. If something is round and smart
might be factual, is not required to then it is cold. Question: Bob is cold.
answer the question **Model Expl: If something is round and not smart then it is**
blue. All blue things are round. Answer: true.
Repetition Step paraphrases information al- **Context: Parker chews 4 pieces of gum a day. A pack of gum** ✓ ✓
ready mentioned in previous reason- has 15 pieces of chewing gum per pack. How many packs of
ing steps gum will he need to last him 30 days?
**Model Expl.: He chews 4 pieces of gum a day and there are**
30 days in a month so he chews 4*30 = 120 pieces of gum a
month. Parker chews 120 pieces of gum a month. He needs
120 pieces of gum a month and a pack of gum has 15 pieces
of gum so he will need 120/15 = 8 packs of gum.
Missing step The content of the generated rea- **Context: A farm has 10 2-legged animals and 15 4-legged** ✓ ✓
soning is incomplete and lacks re- animals. If all animals are healthy, how many pairs of animal
quired information to produce the legs in total are on the farm? Answer: 40
correct answer. If this missing steps **Model Expl.: 10 2-legged animals have 10*2=20 legs. 15**
are added, the model would produce 4-legged animals have 15*4=60 legs. In total on the farm,
the correct answer there are 20+60=80 legs.Answer: 80 missing step: Total
number of pairs is 80/2=40
Coherency Steps contradict each other or do **Context: If the reactor is intact, then it is safe. If the reactor** ✓ ✓
not follow a cohesive story is safe, then no graphite is outside it.
**Model Expl.: The reactor is intact. All graphite is inside of**
it. There are some graphite outside of the the reactor.
Commonsense Model lacks relations that should **Question: Jordan has 2 children who wear diapers. Each** ✓
be known from general knowledge child requires 5 diaper changes per day. Jordan’s wife
about the world (i.e. how to com- changes half of the diapers. How many diapers does Jorpute velocity, how many inches in dan change per day?
one foot, all ducks are birds, etc.) **Model Expl.: Jordan has 2*5=10 diapers to change per day.**
Jordan’s wife changes 10/2=5.0 diapers per day. For both
children, Jordan changes 10+5=15 diapers per day.
Arithmetic Error in math calculations **Reference: If a truck was carrying 20 tons of fertiliser packed** ✓ ✓
in bags, the total number of bags in a truck is 20*20 = 400
bags
**Model Expl: If a truck was carrying 20 tons of fertiliser**
packed in bags, the total number of bags in a truck is 20*20
= 40 bags
18
-----
p p
D ROSCOE METRICS DETAILS (CONT. FROM §4)
ROSCOE metrics are constructed under four categories: semantic alignment, semantic similarity, logical
inference, and logical coherence. The details of each metric is explained in §4. At the core of ROSCOE semantic alignment metrics is the reasoning alignment score, which we designed to measure the grounding of
step-by-step reasoning with respect to the source text. Fig. 3 illustrates the reasoning alignment.
Figure 3: Reasoning alignment illustrating the measurement of the Faithfulness-Step and Faithfulness-Token
semantic alignment scores. h = _h1, h2_ is a hypothesis chain with tokens _h1,1, h1,2, h1,3, h2,1, h2,2_,
_{_ _}_ _{_ _}_
and s = _s1, s2, s3_ is a context with tokens _s1,1, s2,1, s2,2, s2,3, s3,1, s3,2, s3,3_ . Alignment scores from
_{_ _}_ _{_ _}_
hypothesis to context are highlighted, and alignment scores from context to hypothesis are underscored. The
reasoning alignment combines token and step level similarities where each alignment value (cell) is the cosine
similarity and explicitly measures the grounding of the token and step-wise reasoning with respect to the
source text.
The variation of scorers of the ROSCOE shares some similarities, thus we explain them here:
**BARTScore (Yuan et al., 2021) claims that more high level text can be generated using sequence to sequence**
model. It can support different evaluation perspectives such as factuality (by evaluating from source to
hypothesis) or informativeness (by evaluating from both directions between reference and hypothesis).
BARTScore is used to measure the probability of generated text from a source text x to a target set y:
_BARTScore =_ _t=1_ _[w][t][ log][ p][(][y][t][|][y][<t][, x, θ][)]_ (1)
BARTScoreintroduce two variations: (1) finetuning, in which the BART model is finetuned on the task
specific dataset to make the pre-training domain closer to the evaluation domain. (2) prompting, in which a[P][m]
task specific textual prompt is appended to the source x to get the y. In our experiments we compare the the
BARTScorebaseline and one with the prompting variant BARTScore+to compare in the experiments.
**CTC (Compression, Transduction, and Creation) (Deng et al., 2021), is a suite of metrics that unifies**
different perspectives of different tasks (e.g, summarization, style transfer, or text rewriting) into information
alignment, which measures weather the information in one generation component is grounded in another. The
information alignment is defined as follows: let x (e.g, dialog context) be the source input, c (e.g., external
world knowledge) be some additional context, and y be the generated output text (e.g., generated response).
The alignment is measured on token level and it is measured as the vector of scores:
_align(a_ _b) =_ _α1,_ _, αN_ (2)
_→_ _⟨_ _· · ·_ _⟩_
where each score αi indicates confidence that the n-th token in a aligns with the whole sentence b. Using the
information alignment they define a list of metrics to evaluate text for different tasks. In our experiments we
use two of these metrics that are closer to ROSCOE: the Relevance (CTC Relevance), which measures the
consistency of the generated text with the source and its balanced between the reference, and the Consistency
(CTC Consistency) which deals with the faithfullness of the generated text to the input context by the
alignment between the two.
19
-----
p p
E EXPERIMENTAL SETUP DETAILS (CONT. FROM § 5)
E.1 DIAGNOSTIC DATASETS
In the following we present details of each diagnostics dataset used in our work. Table 11 illustrates how
each dataset is used in our experiments. StrategyQA dataset is only used to finetune the SimCSE embeddings
model, because it contains reference reasoning chains in train and validation partitions, but not in the test
partition. The rest of the six diagnostic datasets are used for sentence embedding model finetuning, and
evaluating our models as presented in the experiments results. All datasets with examples are summarised in
Table 12.
Table 11: Summary of datasets used in our work. Reasoning Chain represent whether it contains human
written golden step-wise reasoning explanation. Type indicates whether it is used for constructing Diagnostic
or Human judged datasets. Train/Val./Test indicate whether the dataset is used for training, validation and/or
testing. StrategyQA dataset is only used for finetuning SimCSE embedding model.
Dataset Reasoning Type Train Val. Test Annotated
Chain Instances
EntailmentBank (Dalvi et al., 2021) ✓ Diagnostic, Finetuning ✓ ✓ ✓ 1,840
ProofWriter (Tafjord et al., 2021) ✓ Diagnostic, Finetuning ✓ ✓ ✓ 272,430
MATH (Hendrycks et al., 2021) ✓ Diagnostic, Finetuning ✓ ✓ ✓ 12,500
ASDIV (Miao et al., 2020) ✓ Diagnostic, Finetuning ✓ ✓ ✓ 2,305
AQUA (Liang et al., 2018) ✓ Diagnostic, Finetuning ✓ ✓ ✓ 97,975
EQASC (Aggarwal et al., 2021) ✓ Diagnostic, Finetuning ✓ ✓ ✓ 9,060
StrategyQA (Geva et al., 2021) ✓ Finetuning ✓ ✓ ✗ 2,290
DROP (Dua et al., 2019) ✗ Human judged ✗ ✗ ✓ 210
GSM8K (Cobbe et al., 2021) ✓ Human judged ✗ ✗ ✓ 200
ESNLI (Camburu et al., 2018) ✓ Human judged ✗ ✗ ✓ 151
CosmosQA (Huang et al., 2019) ✗ Human judged ✗ ✗ ✓ 195
SemEval (Ostermann et al., 2018) ✗ Human judged ✗ ✗ ✓ 209
**EntailmentBank (EntBank) (Dalvi et al., 2021) is a complex question answering dataset which contains**
multi-step entailment trees, namely a tree of multi-premise entailment steps from facts that are known, through
intermediate conclusions to hypothesis of interest (which in this case the question and answer).
**ProofWriter (Tafjord et al., 2021) is a question answering dataset for logical reasoning. It contains 500k**
questions, answers and proofs over natural-language rulebases. This dataset is mostly used to emulate reasoning over rules expressed in language, including proof generation. The datasets proofs include intermediate
conclusions. In our experiments, we used depth-0, depth-1, depth-2, depth-3, and depth-5 OWA sets.
**MATH (Hendrycks et al., 2021) is a dataset of 12,500 problems from high school math competitions. Given**
a math problem such as in Table 12 models generate a sequence, such as [2]3 [, that encodes the final answer.]
**ASDIV (Miao et al., 2020) (Academia Sinica Diverse MWP Dataset) is a dataset of 2,305 questions on**
diverse math word problem solving. It includes a diverse operations such as basic arithmetic or aggregative
operations (e.g., comparisons, set-operations).
**AQUA (Liang et al., 2018) is a dataset of 100,000 algebraic word problems with step-wise solutions as shown**
below. In the original dataset each question is decomposed in four parts, two inputs and two outputs: the
description of the problem and a question, and the possible (multiple choice) answer options, one being the
20
-----
p p
Table 12: We show instances from seven of the Diagnostics Datasets here. (Continue from §5).
**Dataset** **Instance** **Reference Step-by-Step Solution**
**EntBank** Earth is a kind of celestial object. Stars appear to move
relative to the horizon during the night. A star is a kind
of celestial object celestial body. The earth rotating on its
axis causes stars to appear to move across the sky at night.
Apparent motion is when an object appears move relative
to another object ’s position.
**Question: How does the appearance of a constellation**
change during the night?
**Hypothesis: Solve the following entailment problem:**
"Earth is a kind of celestial object. During the night stars
appear to move"
**ProofWriter** **Facts: The cow is not big. The cow is not green. The lion**
eats the tiger. The lion sees the cow. The lion visits the
cow. The lion does not visit the squirrel. the lion visits the
tiger. The squirrel is big. The squirrel is round. The tiger
is not green. The tiger does not see the cow.
**Rules: if something sees the squirrel and the squirrel eats**
the cow then the cow is round. if something is green then it
eats the tiger. if the squirrel is round then the squirrel visits
the cow. if something eats the cow then it sees the squirrel.
if something sees the tiger and the tiger visits the squirrel
then it is nice. if something is round then it eats the cow.
if something is kind then it eats the cow. if the tiger visits
the cow then the cow sees the squirrel. if something sees
the cow then the cow eats the tiger.
**Question: The cow does not see the squirrel.**
**MATH** **Context: Tom has a red marble, a green marble, a blue**
marble, and three identical yellow marbles.
**Question: How many different groups of two marbles can**
Tom choose?
**ASDIV** **Context: A sandwich is priced at $0.75. A cup of pudding**
is priced at $0.25. Tim bought 2 sandwiches and 4 cups of
pudding.
**Question: How much money should Tim pay?**
**AQUA** **Context: The entrance fee for a fair is $5 for persons under**
the age of 18 and 20% more for persons older. Each ride at
the fair costs $0.50. If Joe goes with her 6 years old twin
brothers, and they each took 3 rides in total.
**Question: How much money does Joe end up spending at**
the fair?
**Step1: earth is a kind of celestial object Its position appears**
**Step2: a star is a kind of celestial object to shift relative /**
celestial body to the horizon.
**Step3: apparent motion is when an object appears to move**
relative to another object ’s position
**Step4 Therefore apparent motion of stars is when stars appear**
to move relative to earth’s position Step5: The earth rotating on
its axis causes stars to appear to move across the sky at night
**Step6: Therefore the earth rotating on its axis causes apparent**
motion of stars
**Step7: Stars appear to move relative to the horizon during the**
night
**Step8: Therefore the earth rotating on its axis causes stars to**
move relative to the horizon during the night.
**Step1: The squirrel is round.**
**Step2: something is round then it eats the cow.**
**Step3: The squirrel eats the cow.**
**Step4: If something sees the squirrel and the squirrel eats the**
cow then the cow is round.
**Step5: The cow is round.**
**Step6: If something is round then it eats the cow.**
**Step7: The cow eats the cow.**
**Step8: if something eats the cow then it sees the squirrel.**
**Step9: the cow sees the squirrel.**
**Answer: True**
**Step1: There are two cases here:**
**Step2: either Tom chooses two yellow marbles (1 result), or he**
chooses two marbles of different colors (Step3: The total number of distinct pairs of marbles Tom can42 =6 results.).
choose is 1 + 6 = 7.
**Answer: 7**
0.75 x 2 + 0.25 x 4 = 2.5
**Answer: 2.5**
**Step1: Total entrance fee is (2*$5)+(1.20*5) = $16**
**Step2: Total rides fee is ( 0.50 * 3 ) * 3 = $4.50**
**Step3: Total money spent is $20.50**
**Answer: 20.5**
**EQASC** **Question: Where is water likely to form beads?** **Step1: Beads of water are formed by water vapor condensing**
**Step2: Moisture builds up in condenses air and the wherever**
the surfaces are cold.
**Answer: Water beads form on cold surfaces.**
**StrategyQA** **Question: Are more people today related to Genghis Khan** **Step1: Julius Caesar had three children.**
than Julius Caesar? **Step2: Genghis Khan had sixteen children.**
**Step3: Modern geneticists have determined that out of every**
200 men today has DNA that can be traced to Genghis Khan.
**Answer: True**
correct one. In this work we only used the context and question, the step-wise solution and the correct answer
to construct our diagnostic dataset.
**EQASC (Aggarwal et al., 2021) is a multi-hop question answering dataset with 98K explanation annotations**
for multi-step factual reasoning. Each instance in the dataset comes with a question, multiple answer choices,
21
-----
p p
explanation of each answer choice and a free flow explanation of the whole context. In our experiments we
used the correct answer’s explanation to construct our diagnostic datasets.
**StrategyQA (Geva et al., 2021) is another multi-step question answering (QA) dataset, that covers a diverse**
set of reasoning skills. StrategyQA consists of 2,780 questions, annotated with their decomposition and
per-step evidence.
E.2 HUMAN JUDGED DATASET CONSTRUCTION
In the following we present details of each human judged datasets used in our work. Table 11 lists each
dataset and illustrates how each dataset is used in our experiments. Specifically, all six datasets are used for
evaluations in the experiments results and model finetuning, and one dataset was used for finetuning only.
The dataset details are explained below.
To construct these datasets, we first sample instances from each dataset (see the number of instances sampled
in Table 11). We use GPT-3 with few-shot in-context examples and a prompt to generate step-by-step
reasoning (e.g., "explain step-by-step") for each sampled instance (see in-context examples and prompts
in App. B). Then, using our taxonomy we constructed a list of evaluation perspectives to label the model
generated step-by-step reasoning step of each of these datasets. We explain the details of the perspectives
used to label human judged datasets in § 5 and App. F. All datasets with examples are summarised in in
Table 13. In the following we present details of each human judged datasets.
**DROP (Dua et al., 2019), Discrete Reasoning Over the content of Paragraphs, is a dataset of 96K of instances**
with context and a question. To solve the tasks, a system must resolve references in the context that match
with the question, and perform discrete operations over them (such as addition, counting, or sorting). These
operations require comprehensive understanding of the content of the input context.
**GSM8K (Cobbe et al., 2021) is a dataset of 8.5K linguistically diverse grade school math word problems. On**
this dataset, even the largest transformer models fail to achieve high test performance, despite the conceptual
simplicity of this problem distribution.
**CosmosQA (Huang et al., 2019) is a dataset of 35K problems that require commonsense-based reading**
comprehension, formulated as multiple-choice questions. The questions focus on reading between the lines
over a diverse collection of people’s everyday narratives, asking such questions as "what might be the possible
_reason of ...?", or "what would have happened if ...?". The dataset does not introduce step-by-step reasoning_
output, and contains multiple choice answers.
**ESNLI (Camburu et al., 2018) is the extended version of the Stanford Natural Language Inference cor-**
pus (Bowman et al., 2015) of 570K labeled sentence pairs with entailment or contradiction labels. ESNLI
includes human labeled explanations of the entailment decision.
**SemEVAL (Ostermann et al., 2018) is a dataset on machine comprehension using commonsense knowledge.**
It contains questions that require commonsense knowledge for finding the correct answer.
E.3 SYNTHETIC DIAGNOSTICS DATASET GENERATION WITH PERTURBATION RULES
To construct the diagnostics datasets we apply synthetic perturbations on half of the chains from six datasets
(for details see App. E.1 and the summary Table 11). Also, in Table 14 we illustrate these synthetic
perturbations applied on reasoning steps _ri_ of gold reference chains of all the datasets. In there, g[∗]
_{_ _}_
indicates a grammar error, which includes changing verb tense, dropping verb, or random word swap. s[∗]
represents change the semantics of one step in the chain by replacing named entities. To simulate extrinsic
hallucinations, we use random steps from other chains within the same dataset.
22
-----
p p
Table 13: We show instances from five of the Human Judged Datasets used in our work. Only GSM8K and ESNLI
include human labeled explanations.
**Dataset** **Instance** **Reference Answer & Reference Step-by-Step**
**Solution**
**GSM8K** **Question: Tina buys 3 12-packs of soda for a party. Including Tina, 6**
people are at the party. Half of the people at the party have 3 sodas each,
2 of the people have 4, and 1 person has 5. How many sodas are left over
when the party is over?
**CosmosQA** **Context: A woman had topped herself by jumping off the roof of the**
hospital she had just recently been admitted to. She was there because
the first or perhaps latest suicide attempt was unsuccessful. She put her
clothes on, folded the hospital gown and made the bed. She walked
through the unit unimpeded and took the elevator to the top floor
**Question: What would have happened to the woman if the staff at the**
hospital were doing their job properly?
**DROP** **Context: Denver would retake the lead with kicker Matt Prater nailing a**
43-yard field goal, yet Carolina answered as kicker John Kasay ties the
game with a 39-yard field goal. . . . Carolina closed out the half with
Kasay nailing a 44-yard field goal. . . . In the fourth quarter, Carolina
sealed the win with Kasay’s 42-yard field goal.
**Question: Which kicker kicked the most field goals?**
**ESNLI** **Premise: A child in a yellow plastic safety swing is laughing as a dark-**
haired woman in pink and coral pants stands behind her.
**Hypothesis: A young mother is playing with her daughter in a swing.**
**SemEVAL** **Context: Now I am going to set the dining table up for dinner. First I put**
away all the stuff that is not supposed to be on the table. Next I clean the
table with a tissue paper. Then I arrange some of the decorations on the
table. After that I put down the plates and glasses.lastly in the remaining
spaces on the table I put down what we are going to eat but I dont put
down dessert yet. There is one more thing I do before I am finished setting
up the dining table. I call my family down for dinner. I swept a drop of
sweat of my chin and forehead. It was hard hard work but still it was so
much fun. Oh no my family is done with dinner and now I have to bring
them dessert which is fruit I made myself.
**Question: When did they clean the dining table?**
**Hypothesis: After it was set.**
**Answer: 11**
**Step1: Tine buys 3 12-packs of soda for 3*12=36**
sodas
**Step2: 6 people attend the party, so half of them is**
6/2= 3 people
**Step3: Each of those people drinks 3 sodas, so they**
drink 3*3=9 sodas.
**Step4: Two people drink 4 sodas, which means**
they drink 2*4=8 sodas.
**Step5: With 1 person drinking 5, that brings the**
total drank to 5+9+8+3=25 sodas
**Step6: As Tina started off with 36 sodas, that**
means there are 36-25=11 sodas left.
**Answer: The woman would have been stopped**
before she left to take the elevator to the top floor
and she would have lived.
**Answer: John Kasay**
**Answer: neutral**
**Explanation: Child does not imply daughter and**
woman does not imply mother.
**Answer: No**
To construct diagnostic data from math datasets, we introduce four additional perturbations to simulate stepwise explanation errors that might arise in arithmetic reasoning task (Arithmetic error), general knowledge
about relationships and equation construction (Common sense error), and misinformation about object/subject
characteristics (Factuality or Hallucination):
- Shuffle numbers: randomly shuffles all numbers in the chain,
- Shuffle operations: randomly shuffles all math operations in the chain,
- Random number: randomly replaces one number in the chain,
- Random operation: randomly replaces one math operation in the chain.
23
-----
p p
Table 14: Synthetic perturbations and corresponding error types of steps {ri} in reference chains used when constructing
diagnostics datasets. g[∗](·) represents grammar error, s[∗](·) represents semantic change.
**Perturbation Type** **Error Type** **Reference Reasoning Steps** **Hypothesis Reasoning Steps**
Repeat a step Repetition [r1, r2, r3] [r1, r2, r2, r3]
Remove a step Missing step [r1, r2, r3] [r2, r3]
Shuffle steps Self-coherency [r1, r2, r3] [r3, r1, r2]
Swap a step Self-coherency [r1, r2, r3] [r2, r1, r3]
Negate a step Factuality [r1, r2, r3] [r1, _r2, r3]_
_¬_
Hallucination Hallucination [r1, r2, r3] [r1, r2, r3, r4]
Grammar error Grammatical [r1, r2, r3] [r1, r2, g[∗](r3)]
Semantic change Factuality [r1, r2, r3] [r1, s[∗](r2), r3]
24
-----
p p
F HUMAN ANNOTATIONS (CONT. FROM § 5)
To construct Human Judged Datasets, we perform human annotations on five datasets which we summarize
in Table 11 (Type=’Human judged’). These datasets do not include explanations (except GSM8K and ESNLI),
so we construct model generated reasoning steps and label them with reasoning errors. We explain our
generation process in §5 and App. E.2. We used five expert human annotators to collect reasoning error labels
on five datasets. We asked human evaluators to directly rate the generated reasoning errors on overall chain
level using a Likert scale from 1 to 5. We also asked them to mark whether each error type proposed in our
error taxonomy (§3) appeared in each step in step-level evaluations. In Fig. 4 and Fig. 5 we illustrate the
UI used to collect the data. Table 15 summarizes questions that experts were asked. Table 16 reports the
distribution of errors for each dataset. In general, we found that it was hard to get anonymous crowd workers
to annotate our data accurately even when we paid averages of upwards of $30 an hour, hence relying on
expert annotators. For the annotation sessions reported in the text of the paper, we find that it takes an average
of 754 seconds for expert annotators to complete a session of at most 5 examples, or slightly over 2-and-a-half
minutes per example. This highlights the difficulty of obtaining high-quality annotations on these cognitive
challenging tasks.
Figure 4: Screenshot of expert annotation user interface, showing the context for the initial question as well as the
questions regarding the generated response.
Figure 5: Screenshot of expert annotation user interface, showing questions asked for each step, using the question in
Fig 4. The questions are asked of every step generated by the model, with steps separated by sentence-ending periods.
25
-----
p p
Table 15: Evaluation perspectives used to Human Judged the datasets. The perspectives, which we used to ask humans
to label, align with our taxonomy of reasoning errors. (Continued from § 5)
**Level** **Evaluation** **Label** **Details**
**Perspective**
Overall QUAL Overall quality [1-5] Does the generated response answer the question in a well-justified
manner? (1=incomprehensible and wrong, 5=clear and correct)
Overall COH Coherency [1-5] Does the whole generated response make sense? (Ie, does it sound
understandable/non-contradictory/sensical, even if it fails to address
the context?) - (1=sounds like nonsense, 5=easy to parse).
Step MISS Missing Step Y/N Is the reasoning in the generated response incomplete and lacking
required information to produce the correct answer? Specifically, does
this response contains steps that, if added in, would make for a wellsupported chain?
Step GRAM Grammar Y/N Does this step contain faulty, unconventional, or controversial grammar
usage? In other words, does the language in this step sounds unnatural?
Step FACT Factuality Y/N Does this step contain information that contradicts the context while
still largely talking about the same concepts? (Ex. Characteristics of
named objects are wrong, named entities changed.)
Step LOGIC Coherency and Logic Y/N Does this step any logical deduction errors (Ie, makes a conclusion
contradictory to previously stated clauses, including clauses within this
step itself; makes a conclusion while not having enough support to
make the conclusion)
Step HALL Hallucination Y/N Does this step contain information not provided in the problem statement that is irrelevant or wrong?
Step RED Redundancy Y/N Does this step contain information not required to answer the question
asked despite being factual and consistent with the context?
Step REP Repetition Y/N Does this step contain any information, possibly paraphrased, already
mentioned in previous step (and thus could be dropped without impacting correctness)?
Step COMMON Commonsense Y/N Does this step contain any errors in relation to general knowledge about
the world (i.e. how to compute velocity, how many inches in one foot,
etc) not explicitly provided in the context?
Step MATH Arithmetic Y/N Does this step contain math equation errors? Note that you should
consider only current step in isolation, rather than issues propagated
from prior steps.
Table 16: Statistics of types of errors in Human Judged datasets. Each column reports the number of examples where
the specified error type exists. (Continue from § 5)
Error Type **DROP GSM8K ESNLI COSMOS SemEVAL**
Grammar 8 4 5 8 6
Factuality 19 56 15 44 31
Hallucination 4 8 4 9 2
Redundancy 25 13 14 15 19
Repetition 2 2 0 3 3
Missing Step 109 81 40 99 67
Coherency 20 57 17 48 17
Commonsense 3 58 5 18 1
Arithmetic 2 7 1 0 0
26
-----
p p
G SENTENCE EMBEDDING MODEL TRAINING (CONT. FROM §6)
**Model training. We use the train portions of the perturbed diagnostics datasets to finetune the SimCSE**
embeddings model (explained in § 5) and validation portions to select the best embedding model. The test
portions are used to evaluate our metrics against baseline metrics. We randomly select 500,000 samples with
replacement from each dataset to create uniform representation and reduce bias.
The hyperparameters used to finetune SimCSE model are described in Table 17. We use NVIDIA Tesla
V100 Volta GPU instances with 32GB Graphics Card. We perform hyperparameter search, varying batch size
in {32, 64, 256, 512, 1024, 2048}, learning rate in {5e-06, 1e-05, 5e-05, 1e-04}, and max sequence length in
_{64, 128, 512}. Not all combinations of batch size and max sequence length were explored due to memory_
limitations.
Table 17: Hyperparameters used to fine-tune SimCSE model on perturbed datasets.
Parameter Value
Batch size 64
Max sequence length 512
Training epochs 5
Learning rate 5e-6
Temperature 0.05
**Validation. We replace original validation procedure on semantic textual similarity tasks with similarity-**
based validation on perturbed reasoning chains. In particular, during training, we select best checkpoint that
maximizes cosine similarity between positive and minimizes cosine similarity between hard-negative pairs
within the batch of size B as the following:
_N_
_i=1_ [[cos(][s][i][, r][i][)][ −] [cos(][s][i][, h][i][)]]
(3)
2 _B_
P _∗_
Model is evaluated every 100 steps on the development dataset and the best checkpoint is applied at the
inference. Other parameters not described in this section are kept as in the original SimCSE model used for
initialization.
**Inference. We compare ROSCOE scores calculated against three embeddings: finetuned SimCSE model,**
_sup-simcse-roberta-base SimCSE model, and all-mpnet-base-v2 sentence embedding model (Reimers &_
Gurevych, 2019). During inference, we set the random seed to 42. Without this, the embedding-based scores
naturally varied by about 0.01.
27
-----
p p
H ADDITIONAL EXPERIMENTAL RESULTS (CONT. FROM §6)
H.1 CONTROLLED EXPERIMENTS WITH DIAGNOSTICS DATASETS
In this section, we presented Somers’ D correlation of all metrics on all Diagnostics datasets. Table 18 summarizes the evaluations when investigated reference-free. One of the characteristics of our ROSCOE metrics is
that, they can provide judgement of the model generated reasoning steps with and without the human reference
reasoning chains. In the experiments section in §6, we discussed the results of our unsupervised scores in
comparison to baseline scores when measured reference-free. In Table 19, we summarize the correlation
analysis on ROSCOE metrics in comparison to baselines on diagnostic datasets when reference is present for
evaluation. Specifically, each score is measured between the human provided reasoning steps (reference)
and the model generated reasoning steps (hypothesis). We also display fine-grained meta-evaluations of all
metrics on each diagnostics dataset in separate tables. Specifically, Tables 20, 26 for EQASC, Tables 21, 27
for EntailmentBank, Tables 22, 28 for MATH, Tables 23, 29 for ProofWriter, Tables 24, 30 for ASDIV, and
Tables 25, 31 for AQUA.
To understand if designed reference-free scores capture targeted error types we analyze perturbation-level
correlations summarized in Fig. 6. Out of the all considered scores, Info-Chain is able to cover 10 out of 12
of errors, except Remove Step and Semantic error perturbations. In general we can note that ROSCOE fails to
consistently identify missing step error type represented by Remove Step perturbation across different datasets,
while other synthesized error types are covered by at least one score type.
Reference-based scores are covering all synthetic errors, with Semantic Coverage Chain showing strong
correlations with all types of perturbations (Table 19). We also note that along with ROSCOE scores, the
highest correlation among all reference-based scores belong to ROUGE and BERT scores (Tables 26-31).
ROUGE scores consistently outperform on Repetition, Hallucination, Remove Step, Shuffle Steps, Swap
_Steps, Negate Step, and Semantic perturbations, while under performing on Random operation, and Shuffle_
_operations. We attribute this to the fact that ROUGE is an n-gram based score, so it is better in catching errors_
were wording has significantly changed, while failing to catch small changes within steps.
It is worth noting that some scores, especially those among reference-based evaluations, get the highest
possible Somers’ D correlation scores of 1.0. What it means is that in some scenarios, there is a perfect
correlation between the metric and the error type. In other words, for this metric we can find a threshold such
generated chains that have scores greater than the threshold do not have errors of the given type, and in all
generated chains with scores less than the threshold have that error. It is especially evident on referenced-based
metrics that directly compare the reference solution and hypothesis. In this scenario, we build correlation
for two groups: 1) non-perturbed hypothesis: the score is calculated by comparing embedding similarities
of the reference with itself, and we expect to get high scores, 2) perturbed hypothesis: comparing reference
with its perturbed version, where the scores should be lower. In some cases, we are able to perfectly separate
perturbed and non-perturbed chains based on the corresponding metric values by selecting a threshold, in
other cases we cannot due to a number of false-negatives (i.e., a chain gets a high score, although the error is
present). As an example, consider the Semantic Coverage-Chain metric calculated on EQASC dataset using
_all-mpnet-base-v2 sentence embeddings, and Hallucination perturbation (Table 26). Here the Somers’ D_
correlation score is 1.0. Semantic Coverage-Chain is calculated as a normalized cosine distance between the
chain embedding of the reference solution r, and the chain embedding of the hypothesis h : [1+cos(r, h)]/2.
Recall that in our setup, half of the hypothesis chains are perturbed reference chains, and another half is the
same as the reference. While Hallucination perturbation is an insertion of a random step from a dataset, it
is hard to predict how if will affect the embedding of the chain as a whole, but on the unperturbed chains,
where h == r, the Semantic Coverage-Chain should be: [1 + cos(r, r)]/2 = 1.0. Further review confirmed
that in this dataset there are no false-positive instances, i.e., all chains with perturbations had Semantic
_Coverage-Chain score less than 1.0. That means, we can always identify if the chain contains a Hallucination_
28
-----
p p
error or not, by comparing Semantic Coverage-Chain value with 1.0 (threshold value), which is reflected in
perfect Somers’ D score.
Highest correlations among reference-free scores belong to the Repetition-* scores, that exhibit perfect
correlation on EQASC dataset (Tables 20-25). For other datasets, non-perfect correlations can be attributed to
the small number of false-negatives, i.e. they give low Repetition-* scores for chains with non-duplicated
but similar steps, while all chains with duplicates got almost 0 scores (Fig. 7). In EQASC explanations are
created from a set of facts that are not directly related to each other, but are intended to give an answer when
combined together. Among all datasets considered, these steps are most dissimilar, and thus can be separated
with similarity-based scores.
Figure 6: Relative presence of the strong score-perturbation correlation, measured as the number of datasets
where for each score-perturbation pair Somers’ D correlation value is in the 90[th] percentile, normalized by the
total number of datasets where this type of perturbation occurs. Statistics collected over ROSCOE referencefree scores with finetuned SimCSE embeddings. (Continued from §7)
Score: Repetition Step, Perturbation: Repetition Score: Repetition Word, Perturbation: Repetition
0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0
Repetition Step score perturbed Repetition Word score perturbed
original original
perturbed
original
perturbed
original
Figure 7: Box-and-whisker plots of interquartile ranges of scores, for Repetition perturbations and Repetition-*
scores. While all perturbed subsets have 0 or near 0 scores, all datasets except EQASC have some chains that
were also scored as low despite the absence of duplicates.
29
-----
p p
Table 18: Somers’ D correlation of all metrics on six Diagnostics datasets. All metrics are measured reference-free on
(s, h). The highest correlation overall for each dataset is in bold. The second best models are underlined. Correlations
that are not significant (p-value >= 0.05) are omitted when aggregating, and "-" denotes an absence of any significant
correlations. Note that ASDIV is a 1-step equation dataset, so there are no repetition and self-consistency scores as there
are no steps to compare. (Continued from §6, more details in App. H.1.).
**Metric** **EntBank Math AQUA ProofWriter EQASC ASDIV**
ROUGE-1 0.410 0.176 0.257 0.095 0.342 0.305
ROUGE-2 0.391 0.151 0.206 0.090 0.217 -
ROUGE-L 0.365 0.156 0.264 0.106 0.315 0.269
BLEURT 0.257 0.148 0.252 0.024 0.447 -
BERTScore 0.380 0.124 0.220 0.117 0.462 0.322
BARTScore 0.358 0.185 0.317 0.081 0.415 -
BARTScore+ 0.315 0.164 0.251 0.054 0.297 -
BARTScore-P 0.186 0.128 0.215 0.011 0.276 -
PRISM 0.453 0.208 0.191 0.235 0.436 -
CTC Relevancy 0.258 0.188 0.217 0.394 0.485 0.382
CTC Consistency 0.310 0.282 0.157 0.513 0.270 0.396
**ROSCOE Metrics (reference-free metrics only)**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.786 0.362 0.152 0.771 0.785 0.186
Faithfulness-Token 0.581 0.157 0.157 0.436 0.480 0.182
Info-Step 0.638 0.231 - 0.250 0.538 0.198
Repetition-Token 0.913 0.936 0.972 0.596 **1.000** n/a
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.419 0.467 0.214 0.082 0.550 0.280
Repetition-Step 0.909 0.932 0.982 0.631 **1.000** n/a
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.777 0.229 0.224 0.732 0.630 0.266
Faithfulness-Token 0.663 0.200 - 0.517 0.502 0.515
Info-Step 0.560 0.131 0.183 0.226 0.399 0.275
Repetition-Token 0.919 0.939 0.971 0.606 **1.000** n/a
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain 0.524 0.180 0.195 0.045 0.409 0.289
Repetition-Step 0.901 **0.949** **0.991** 0.621 **1.000** n/a
ROSCOE-SA with finetuned sup-simcse-roberta sentence embeddings
Faithfulness-Step 0.538 0.614 0.826 0.763 0.907 **0.879**
Faithfulness-Token 0.519 0.204 0.285 0.499 0.492 0.740
Info-Step 0.599 0.511 0.703 0.317 0.804 **0.879**
Repetition-Token 0.919 0.939 0.971 0.606 **1.000** n/a
ROSCOE-SS with finetuned sup-simcse-roberta sentence embeddings
Info-Chain **0.955** 0.777 0.933 0.462 0.995 0.857
Repetition-Step 0.908 0.924 0.982 0.624 **1.000** n/a
ROSCOE-LI
Self-Consistency 0.782 0.190 0.368 0.204 0.793 n/a
Source-Consistency 0.917 0.341 0.424 0.289 0.778 0.771
ROSCOE-LC
Perplexity-Step 0.213 0.160 0.110 0.178 0.394 0.485
Perplexity-Chain 0.151 0.175 0.229 0.135 0.379 0.485
Grammar 0.604 0.392 0.359 **0.788** 0.859 0.470
30
-----
p p
Table 19: Somers’ D correlation of all reference-based metrics on six Diagnostics datasets. Metrics are measured using
**reference generations on (r, h). The highest correlation overall for each dataset is in bold. The second best models are**
underlined. (Continued from §6, more details in App. H.1.)
**Metric** **EntBank Math AQUA ProofWriter EQASC ASDIV**
ROUGE-1 **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
ROUGE-2 **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
ROUGE-L **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
BLEURT 0.821 0.773 0.946 0.829 **1.000** 0.93
BERTScore **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
BARTScore 0.983 0.989 0.984 0.986 **1.000** 0.954
BARTScore+ 0.988 0.963 **1.000** 0.996 **1.000** **1.000**
BARTScore-P 0.877 0.799 0.905 0.595 0.966 0.83
PRISM 0.939 0.521 **1.000** 0.997 0.996 **1.000**
CTC Relevancy 0.457 0.592 0.409 0.725 0.954 0.398
CTC Consistency 0.814 0.804 0.833 0.635 0.974 0.6
**ROSCOE Metrics (reference-based metrics only)**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Hallucination **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Redundancy **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Semantic Coverage-Step **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Reasoning Alignment **1.000** **1.000** 0.143 **1.000** **1.000** **1.000**
Commonsense 0.438 **1.000** **1.000** 0.379 **1.000** **1.000**
Missing Step 0.993 **1.000** **1.000** 0.876 **1.000** **1.000**
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Hallucination **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Redundancy **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Semantic Coverage-Step **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Reasoning Alignment **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Commonsense 0.433 **1.000** **1.000** 0.415 **1.000** **1.000**
Missing Step 0.999 **1.000** **1.000** 0.874 **1.000** **1.000**
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain **1.000** 0.999 **1.000** **1.000** **1.000** **1.000**
ROSCOE-SA with finetuned sup-simcse-roberta sentence embeddings
Hallucination **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Redundancy **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Semantic Coverage-Step **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Reasoning Alignment **1.000** **1.000** **1.000** **1.000** **1.000** **1.000**
Commonsense 0.445 **1.000** **1.000** 0.404 **1.000** **1.000**
Missing Step 0.999 **1.000** **1.000** 0.873 **1.000** **1.000**
ROSCOE-SS with finetuned sup-simcse-roberta sentence embeddings
Semantic Coverage-Chain **1.000** 0.999 **1.000** **1.000** **1.000** **1.000**
31
-----
p p
Table 20: Somers’ D correlations of all metrics per different perturbation applied on EQASC Diagnostics datasets. All
metrics are measured reference-free on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Negate** **Semantic**
**Perturbations** **Repet.** **Halluc. [Grammar Remove]**
_→_ **Error** **Step** **Step** **Error**
Rouge-1 0.264† 0.342† 0.017 0.227† 0.063 -0.023
Rouge-2 0.071 0.205† 0.106† 0.217† 0.099† -0.002
Rouge-L 0.210† 0.315† 0.057 0.179† 0.083† -0.016
BLEURT 0.366† 0.447† -0.028 0.195† 0.204† -0.108†
BERTScore 0.288† 0.462† 0.153† 0.160† 0.052 0.051
BARTScore -0.127† 0.038 0.047 0.415† 0.019 -0.072
BARTScore+ 0.028 0.212† 0.055 0.297† 0.023 -0.028
BARTScore-P -0.039 0.031 -0.038 0.276† -0.003 -0.023
PRISM -0.327† 0.436† 0.267† 0.077† 0.010 0.123†
CTC-Relevancy 0.141† 0.001 0.082† **0.485†** 0.002 0.220†
CTC-Consistency 0.001 -0.080† 0.095† -0.154† 0.078† 0.270†
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step -0.006 0.785† 0.017 -0.040 0.084† -0.243†
Faithfulness-Token -0.031 0.480† -0.001 0.037 0.008 -0.156†
Info-Step 0.006 0.538† -0.003 0.223† 0.085† -0.191†
Repetition-Token **1.000†** 0.399† -0.028 -1.000† 0.070 0.074
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.189† 0.550† -0.022 0.220† 0.059 -0.132†
Repetition-Step **1.000†** 0.035 -0.034 -1.000† -0.120† 0.030
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step -0.061 0.630† -0.010 0.007 0.308† -0.204†
Faithfulness-Token -0.031 0.502† 0.032 0.045 0.107† -0.168†
Info-Step -0.064 0.399† -0.024 0.241† 0.296† -0.155†
Repetition-Token **1.000†** 0.148† -0.138† -1.000† -0.055 -0.080†
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain -0.025 0.409† -0.028 0.256† 0.379† -0.163†
Repetition-Step **1.000†** 0.001 -0.003 -1.000† -0.465† 0.071
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step -0.044 0.630† 0.293† -0.046 0.907† -0.118†
Faithfulness-Token -0.019 0.485† 0.141† 0.036 0.492† -0.128†
Info-Step -0.041 0.383† 0.266† 0.196† 0.804† -0.068
Repetition-Token **1.000†** 0.148† -0.138† -1.000† -0.055 -0.080†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.995† 0.871† 0.588† 0.103† **0.967†** 0.121†
Repetition-Step **1.000†** 0.037 -0.386† -1.000† -0.953† -0.048
ROSCOE-LI
Source-Consistency -0.020 0.576† 0.112† -0.239† 0.778† 0.268†
Self-Consistency 0.022 0.633† 0.399† -0.713† 0.793† 0.476†
ROSCOE-LC
Perplexity-Chain -0.690† -0.007 0.379† 0.260† 0.118† 0.269†
Perplexity-Step 0.937† **0.965†** 0.352† -0.953† 0.081† 0.225†
Grammar -0.025 0.060 **0.859†** -0.145† 0.139† **0.722†**
32
-----
p p
Table 21: Somers’ D correlations of all metrics per different perturbation applied on Entailment Bank Diagnostics
datasets. All metrics are measured reference-free on (s, h). The highest correlation overall for each dataset is in bold.
The second best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6,
more details in App. H.1.)
**Swap** **Negate** **Semantic**
**Perturbations** **Repet.** **Halluc. [Grammar Remove Shuffle]**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Error**
Rouge-1 0.391† 0.410† 0.074 0.081 0.113 0.065 0.034 -0.017
Rouge-2 0.356† 0.391† 0.160† 0.116 0.109 0.088 0.109 0.091
Rouge-L 0.209† 0.194† 0.112 0.138† 0.365† 0.229† 0.003 0.063
BLEURT 0.025 0.164† 0.060 0.231† 0.092 0.096 0.257† -0.036
BERTScore 0.264† 0.380† 0.211† 0.150† 0.364† 0.205† 0.050 0.087
BARTScore 0.034 0.063 0.106 0.358† 0.248† 0.183† 0.142† 0.164†
BARTScore+ 0.101 0.047 0.036 0.315† 0.184† 0.155† 0.130† 0.173†
BARTScore-P 0.061 0.012 0.025 0.186† 0.041 -0.008 0.098 0.100
PRISM 0.230† 0.453† 0.279† 0.118 0.331† 0.167† 0.081 0.148†
CTC-Relevancy 0.258† 0.057 -0.026 0.080 -0.020 0.035 0.029 0.159†
CTC-Consistency 0.310† 0.159† -0.058 -0.249† -0.162† -0.023 -0.046 0.194†
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.023 0.786† 0.231† -0.005 0.111 -0.000 0.098 0.121
Faithfulness-Token 0.098 0.581† 0.250† 0.087 0.009 -0.020 0.179† 0.170†
Info-Step 0.083 0.638† 0.181† 0.161† 0.025 -0.001 0.216† 0.139†
Repetition-Token 0.913† 0.105 0.042 -0.177† 0.139† 0.038 -0.085 -0.058
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.023 0.786† 0.231† -0.005 0.111 -0.000 0.098 0.121
Faithfulness-Token 0.098 0.581† 0.250† 0.087 0.009 -0.020 0.179† 0.170†
Info-Step 0.083 0.638† 0.181† 0.161† 0.025 -0.001 0.216† 0.139†
Repetition-Token 0.913† 0.105 0.042 -0.177† 0.139† 0.038 -0.085 -0.058
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.039 0.419† 0.083 0.071 0.068 0.025 0.040 0.037
Repetition-Step 0.909† 0.148† -0.061 -0.165† 0.067 -0.068 -0.062 -0.056
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step -0.002 0.777† 0.158† -0.015 0.107 0.009 0.358† 0.110
Faithfulness-Token 0.063 0.663† 0.280† 0.084 0.046 0.024 0.279† 0.159†
Info-Step 0.066 0.560† 0.079 0.148† -0.035 0.007 0.450† 0.117
Repetition-Token **0.919†** 0.137† -0.010 -0.216† 0.171† 0.005 -0.106 -0.099
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain 0.084 0.515† 0.005 0.119 -0.019 0.023 0.524† 0.048
Repetition-Step 0.901† 0.139† 0.006 -0.188† 0.019 -0.066 -0.144† -0.098
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.105 0.392† 0.204† 0.063 0.093 0.029 0.538† -0.115
Faithfulness-Token 0.079 0.519† 0.271† 0.100 0.055 0.029 0.356† 0.041
Info-Step 0.131† 0.364† 0.227† 0.161† 0.053 0.047 0.599† -0.018
Repetition-Token **0.919†** 0.137† -0.010 -0.216† 0.171† 0.005 -0.106 -0.099
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.871† **0.851†** **0.752†** **0.437†** **0.937†** **0.672†** **0.955†** 0.300†
Repetition-Step 0.908† 0.133† -0.013 -0.169† 0.135† -0.043 -0.058 -0.079
ROSCOE-LI
Source-Consistency -0.044 0.289† 0.218† 0.052 0.150† -0.075 0.860† 0.257†
Self-Consistency -0.040 0.403† 0.216† -0.042 0.129† -0.053 0.782† 0.170†
ROSCOE-LC
Perplexity-Chain -0.364† 0.104 0.116 0.151† -0.006 0.085 0.060 0.133†
Perplexity-Step 0.199† 0.168† 0.075 -0.097 0.143† -0.021 -0.019 -0.046
Grammar -0.109 0.076 0.604† 0.075 0.044 -0.063 0.033 **0.365†**
33
-----
p p
Table 22: Somers’ D correlations of all metrics per different perturbation applied on MATH Diagnostics datasets. All
metrics are measured reference-free on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Swap** **Negate** **Random** **Random** **Shuffle** **Shuffle**
**Perturbations** **Repet.** **Halluc. [Grammar Remove Shuffle]**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Number** **Operation Numbers Operations**
Rouge-1 0.176† 0.151† -0.004 0.065† -0.020 0.020 0.028 0.017 0.008 0.022 0.011
Rouge-2 0.107† 0.095† 0.006 0.121† -0.005 0.029 0.010 0.018 -0.011 0.151† -0.002
Rouge-L 0.126† 0.156† 0.004 0.075† 0.008 0.042† 0.023 0.018 0.008 0.082† 0.004
BLEURT 0.143† 0.148† 0.000 -0.023 0.001 0.012 0.049† -0.000 -0.005 -0.036† 0.002
BERTScore 0.124† 0.117† 0.067† 0.089† 0.025 0.029 0.010 0.029 0.016 0.034† 0.017
BARTScore -0.048† -0.066† 0.015 0.185† -0.026 -0.002 -0.030 0.029 0.009 0.075† 0.045†
BARTScore+ 0.015 -0.003 0.049† 0.162† 0.003 0.031 0.002 0.050† 0.047† 0.164† 0.063†
BARTScore-P -0.005 0.002 0.022 0.128† 0.006 0.011 -0.009 0.024 0.002 0.115† 0.059†
PRISM -0.115† 0.208† 0.120† 0.095† 0.029 0.017 -0.003 0.102† 0.069† 0.117† 0.111†
CTC-Relevancy 0.104† 0.041† 0.133† **0.188†** 0.027 0.029 0.018 0.043† 0.052† 0.052† 0.001
CTC-Consistency -0.106† 0.145† 0.282† 0.105† 0.096† 0.035† 0.080† 0.079† 0.033 0.046† -0.017
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.029 0.362† 0.016 0.070† -0.024 0.009 0.025 0.060† 0.048† 0.074† 0.060†
Faithfulness-Token 0.000 0.157† 0.000 0.004 -0.004 -0.003 0.018 0.028 0.005 -0.022 0.003
Info-Step 0.024 0.231† 0.022 0.118† -0.018 0.009 0.033† 0.085† 0.074† 0.109† 0.106†
Repetition-Token 0.936† 0.069† -0.018 -0.078† 0.010 0.043† 0.006 -0.041† -0.018 0.035† 0.006
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.059† 0.467† 0.016 0.172† 0.106† 0.091† 0.063† 0.192† 0.089† 0.162† 0.135†
Repetition-Step 0.932† -0.002 -0.036† -0.114† -0.001 0.013 -0.026 -0.134† -0.045† -0.008 -0.030
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step -0.004 0.229† 0.049† 0.046† 0.007 0.010 0.142† 0.073† 0.033 0.031 -0.010
Faithfulness-Token 0.014 0.200† 0.091† -0.004 -0.001 0.009 0.021 0.033† 0.005 0.021 0.008
Info-Step -0.020 0.115† 0.048† 0.092† 0.009 0.020 0.131† 0.086† 0.029 0.042† -0.015
Repetition-Token 0.939† 0.007 -0.093† -0.073† 0.023 0.037† 0.002 -0.058† -0.035† 0.019 -0.002
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain 0.028 0.114† 0.038† 0.071† 0.034† 0.004 0.180† 0.078† 0.051† 0.041† -0.007
Repetition-Step **0.949†** 0.019 -0.043† -0.094† -0.012 0.000 -0.062† -0.119† 0.014 0.027 -0.001
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.021 0.223† 0.190† 0.038† -0.031 0.005 0.614† 0.376† 0.415† 0.277† 0.428†
Faithfulness-Token 0.016 0.204† 0.106† -0.005 -0.004 0.008 0.084† 0.067† 0.045† 0.047† 0.052†
Info-Step 0.012 0.133† 0.228† 0.099† -0.021 0.020 0.511† 0.451† 0.452† 0.301† 0.430†
Repetition-Token 0.939† 0.007 -0.093† -0.073† 0.023 0.037† 0.002 -0.058† -0.035† 0.019 -0.002
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.679† **0.588†** **0.694†** **0.216†** **0.746†** **0.530†** **0.777†** **0.698†** **0.757†** **0.524†** **0.662†**
Repetition-Step 0.924† 0.024 -0.059† -0.083† -0.019 0.012 -0.052† -0.192† -0.103† -0.011 0.017
ROSCOE-LI
Source-Consistency 0.011 0.071† 0.044† -0.041† -0.002 0.026 0.215† 0.223† 0.123† 0.341† 0.133†
Self-Consistency 0.015 0.069† -0.003 -0.105† -0.014 0.011 0.122† 0.147† 0.068† 0.190† 0.088†
ROSCOE-LC
Perplexity-Chain -0.358† 0.020 0.175† 0.103† 0.100† 0.035† 0.003 0.173† 0.109† 0.154† 0.170†
Perplexity-Step 0.160† 0.127† 0.001 -0.093† -0.016 0.019 0.006 0.003 0.012 0.048† 0.037
Grammar 0.010 0.026 0.392† -0.020 -0.012 0.005 0.112† 0.034† 0.026 0.057† 0.040
34
-----
p p
Table 23: Somers’ D correlations of all metrics per different perturbation applied on ProofWriter Diagnostics datasets.
All metrics are measured reference-free on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Swap** **Negate** **Semantic**
**Perturbations** **Repet.** **Halluc. [Grammar Remove Shuffle]**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Error**
Rouge-1 -0.124† -0.054† 0.023† 0.095† 0.013 -0.001 0.007 0.006
Rouge-2 -0.093† -0.002 0.068† 0.090† 0.018 -0.004 0.037† 0.047†
Rouge-L -0.120† -0.028† 0.029† 0.089† 0.106† 0.020 0.020 0.027†
BLEURT -0.058† -0.001 -0.027† -0.099† 0.016 0.007 0.024† -0.016
BERTScore -0.049† 0.077† 0.117† 0.082† -0.064† -0.054† 0.023† 0.108†
BARTScore -0.059† -0.096† -0.037† 0.081† -0.013 -0.010 -0.015 -0.061†
BARTScore+ -0.055† -0.067† 0.014 0.054† -0.011 -0.021 0.006 0.032†
BARTScore-P -0.046† -0.049† -0.012 0.010 -0.044† -0.032† -0.020 0.011
PRISM -0.159† 0.159† 0.222† 0.097† 0.017 -0.010 0.060† 0.235†
CTC-Relevancy 0.394† 0.392† 0.123† 0.185† -0.131† -0.052† 0.036† 0.077†
CTC-Consistency 0.496† 0.513† 0.182† **0.223†** -0.063† -0.022 0.098† 0.131†
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.002 **0.771†** 0.348† 0.165† 0.011 0.013 0.233† 0.515†
Faithfulness-Token 0.004 0.436† 0.264† 0.055† 0.030† 0.006 0.168† 0.310†
Info-Step -0.004 0.250† 0.121† -0.062† 0.023 0.015 0.108† 0.174†
Repetition-Token 0.596† 0.053† -0.041† 0.101† -0.006 -0.003 -0.055† -0.050†
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain -0.052† 0.083† 0.033† -0.089† 0.014 0.005 -0.001 0.001
Repetition-Step **0.631†** 0.031† -0.027† 0.116† -0.002 0.002 -0.044† -0.042†
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.004 0.732† 0.171† 0.154† 0.013 0.012 0.481† 0.494†
Faithfulness-Token 0.010 0.517† 0.334† 0.086† 0.031† 0.005 0.336† 0.395†
Info-Step -0.008 0.226† 0.047† -0.063† 0.027† 0.014 0.172† 0.160†
Repetition-Token 0.606† 0.036† -0.065† 0.097† -0.002 0.009 -0.070† -0.070†
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain -0.040† 0.004 -0.015 -0.140† -0.024† -0.011 0.045† -0.006
Repetition-Step 0.621† 0.028† -0.008 0.115† -0.011 0.001 -0.050† -0.043†
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.008 0.763† 0.618† 0.180† 0.008 0.004 **0.726†** **0.536†**
Faithfulness-Token 0.012 0.499† 0.475† 0.088† 0.036† 0.000 0.436† 0.403†
Info-Step -0.018 0.243† 0.252† -0.046† 0.016 0.012 0.317† 0.187†
Repetition-Token 0.606† 0.036† -0.065† 0.097† -0.002 0.009 -0.070† -0.070†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.214† 0.248† 0.284† 0.077† **0.330†** **0.246†** 0.462† 0.122†
Repetition-Step 0.624† 0.039† -0.034† 0.116† -0.007 -0.002 -0.076† -0.034†
ROSCOE-LI
Source-Consistency -0.008 0.027† 0.028† 0.010 -0.044† -0.004 0.289† -0.049†
Self-Consistency 0.011 0.204† 0.084† 0.110† -0.022 -0.020 0.036† 0.065†
ROSCOE-LC
Perplexity-Chain -0.165† 0.047† 0.112† -0.064† 0.135† 0.067† 0.012 0.128†
Perplexity-Step 0.178† 0.112† 0.033† 0.082† -0.008 0.005 -0.036† -0.008
Grammar 0.000 0.042† **0.788†** 0.102† 0.007 -0.023 0.007 0.515†
35
-----
p p
Table 24: Somers’ D correlations of all metrics per different perturbation applied on ASDIV Diagnostics datasets. All
metrics are measured reference-free on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Random** **Random** **Shuffle** **Shuffle**
**Perturbations**
_→_ **Number** **Operation Numbers** **Operations**
Rouge-1 0.305† 0.096 0.085 -0.220
Rouge-2 -0.034 -0.014 0.007 -0.038
Rouge-L 0.245† 0.073 0.269† -0.235
BLEURT 0.043 -0.034 0.059 0.015
BERTScore 0.098 0.322† 0.025 0.167
BARTScore 0.107 -0.027 -0.105 0.197
BARTScore+ -0.011 -0.002 -0.075 -0.015
BARTScore-P 0.068 0.043 -0.048 0.121
PRISM -0.009 -0.035 -0.114 0.258
CTC-Relevancy 0.382† 0.155† 0.038 0.000
CTC-Consistency 0.396† 0.189† 0.121 -0.121
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.186† -0.090 0.091 0.091
Faithfulness-Token 0.182† 0.080 0.062 -0.091
Info-Step 0.198† -0.091 0.085 0.167
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.280† 0.005 0.192† 0.091
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.266† 0.082 -0.182† 0.015
Faithfulness-Token 0.273† 0.011 -0.125 0.515†
Info-Step 0.275† 0.125 -0.141† 0.000
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain 0.289† 0.145† -0.084 0.030
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.630† 0.840† 0.670† **0.879†**
Faithfulness-Token 0.576† 0.740† 0.552† 0.545†
Info-Step 0.669† 0.844† 0.683† **0.879†**
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain **0.773†** **0.857†** **0.795†** 0.803†
ROSCOE-LI
Source-Consistency 0.760† 0.771† 0.763† 0.500†
Self-Consistency 0.203† 0.227† 0.206† 0.152
ROSCOE-LC
Perplexity-Chain 0.300† 0.092 0.214† 0.485†
Perplexity-Step 0.300† 0.092 0.214† 0.485†
Grammar 0.170† -0.083 -0.007 0.470†
36
-----
p p
Table 25: Somers’ D correlations of all metrics per different perturbation applied on AQUA Diagnostics datasets. All
metrics are measured reference-free on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Grammar Remove Shuffle** **Swap** **Negate Random** **Random** **Shuffle** **Shuffle**
**Perturbations** **Repet.** **Hallu.**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Number** **Operation Numbers Operations**
Rouge-1 0.134 0.055 -0.085 0.257† 0.074 -0.032 0.109 0.056 -0.006 0.131 -0.073
Rouge-2 0.180† 0.004 -0.021 0.206† 0.100 -0.092 0.044 0.051 0.082 0.196† -0.061
Rouge-L 0.072 0.055 -0.059 0.264† 0.145† -0.008 0.120 0.050 0.004 0.148† -0.037
BLEURT 0.010 0.047 -0.066 0.252† -0.003 -0.047 0.071 -0.010 0.020 0.165† -0.023
BERTScore 0.134 0.111 -0.016 0.166† -0.011 -0.043 0.028 0.007 0.079 0.220† -0.036
BARTScore -0.030 -0.269† -0.025 **0.317†** 0.102 0.051 -0.014 0.060 0.011 0.169† 0.036
BARTScore+ 0.112 -0.037 0.044 0.251† 0.019 -0.053 0.028 0.072 0.066 0.230† -0.009
BARTScore-P 0.082 -0.097 -0.007 0.215† 0.013 -0.027 0.114 0.059 0.042 0.182† 0.028
PRISM -0.119 0.026 0.177† 0.188† 0.071 0.048 0.057 0.191† 0.131 0.182† 0.028
CTC-Relevancy 0.133 -0.016 0.065 0.096 -0.068 0.013 0.024 0.119 -0.017 0.217† 0.072
CTC-Consistency 0.071 -0.050 0.075 0.041 -0.074 0.012 0.004 0.106 -0.006 0.157† 0.024
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.098 0.152† -0.013 0.106 0.035 -0.026 0.080 0.057 -0.023 0.026 -0.123
Faithfulness-Token 0.125 0.009 0.008 0.157† 0.047 -0.001 0.057 -0.014 0.076 0.070 -0.006
Info-Step 0.088 0.030 -0.020 0.114 0.068 -0.007 0.015 0.063 0.025 0.068 -0.088
Repetition-Token 0.972† 0.216† -0.125 -0.226† -0.053 -0.061 -0.041 -0.004 -0.024 0.037 -0.001
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.066 0.214† -0.053 0.146† 0.023 -0.035 0.049 0.079 0.038 0.153† -0.081
Repetition-Step 0.982† 0.143† -0.093 -0.169† 0.006 0.002 -0.031 -0.068 -0.028 0.010 0.050
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.089 0.179† -0.043 0.135 -0.016 -0.009 0.224† 0.122 0.096 0.014 -0.156
Faithfulness-Token 0.080 0.062 -0.047 0.117 0.036 -0.011 0.057 0.035 0.024 0.049 -0.059
Info-Step 0.099 0.064 -0.012 0.162† 0.018 -0.023 0.183† 0.138 0.116 0.099 -0.147
Repetition-Token 0.971† 0.072 -0.112 -0.190† -0.062 -0.054 -0.102 -0.034 0.061 0.026 0.079
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain 0.130 0.046 -0.025 0.178† -0.013 -0.024 0.153† 0.108 0.066 0.195† -0.119
Repetition-Step **0.991†** 0.125 -0.120 -0.234† 0.007 -0.023 -0.063 -0.155† -0.025 -0.010 0.076
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.169† 0.165† 0.280† 0.000 0.018 -0.113 0.826† 0.394† 0.329† 0.210† 0.236†
Faithfulness-Token 0.063 0.132 0.014 0.084 0.030 -0.045 0.285† 0.079 0.077 0.082 -0.037
Info-Step 0.150† 0.038 0.203† 0.107 0.041 -0.081 0.703† 0.450† 0.327† 0.313† 0.259†
Repetition-Token 0.971† 0.072 -0.112 -0.190† -0.062 -0.054 -0.102 -0.034 0.061 0.026 0.079
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.861† **0.465†** **0.399†** 0.025 **0.463†** **0.315†** **0.933†** **0.589†** **0.499†** 0.395† **0.337†**
Repetition-Step 0.982† 0.141† -0.171† -0.149† -0.094 0.022 -0.143 -0.339† -0.155† -0.016 -0.028
ROSCOE-LI
Source-Consistency -0.044 0.140 -0.061 -0.171† -0.087 0.024 0.316† 0.212† 0.151† 0.419† 0.096
Self-Consistency 0.041 0.368† -0.028 -0.227† 0.019 -0.040 0.104 0.226† 0.106 0.167† 0.093
ROSCOE-LC
Perplexity-Chain -0.288† -0.120 0.179† 0.121 0.021 0.101 0.134 0.190† 0.071 0.194† 0.229†
Perplexity-Step 0.171† 0.182† -0.051 -0.105 -0.013 -0.097 -0.068 0.060 0.011 -0.004 0.079
Grammar 0.091 0.200† 0.359† 0.020 -0.043 -0.008 0.223† 0.094 0.046 0.086 0.171
37
-----
p p
Table 26: Somers’ D correlations of all metrics per different perturbation applied on EQASC Diagnostics datasets.
All metrics are measured reference-based on (s, h). The highest correlation overall for each dataset is in bold. The
second best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6,
more details in App. H.1.)
**Grammar Remove Negate Semantic**
**Perturbations** **Repet.** **Hallu.**
_→_ **Error** **Step** **Step** **Error**
Rouge-1 **1.000†** **1.000†** 0.622† **1.000†** **1.000†** 0.946†
Rouge-2 **1.000†** **1.000†** 0.998† **1.000†** **1.000†** 0.946†
Rouge-L **1.000†** **1.000†** 0.998† **1.000†** **1.000†** 0.946†
BLEURT 0.829† 0.999† 0.512† **1.000†** 0.983† 0.703†
BERTScore **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** 0.943†
BARTScore 0.784† 0.925† 0.888† **1.000†** 0.861† 0.897†
BARTScore+ 0.443† 0.725† 0.530† **1.000†** 0.597† 0.885†
BARTScore-P -0.095† 0.284† 0.404† 0.966† 0.476† 0.587†
PRISM 0.879† 0.995† 0.902† 0.996† 0.927† 0.932†
CTC-Relevancy 0.031 -0.124† 0.265† 0.954† 0.321† 0.496†
CTC-Consistency 0.974† 0.965† 0.393† -0.429† 0.711† 0.718†
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Reasoning Alignment 0.010 0.995† 0.999† 0.008 **1.000†** 0.947†
Hallucination -0.000 0.994† 0.998† -0.305† **1.000†** 0.949†
Redundancy -0.004 0.994† 0.998† -0.278† **1.000†** 0.949†
Commonsense -0.006 0.012 0.998† **1.000†** **1.000†** 0.949†
Missing Step -0.010 0.012 0.998† **1.000†** **1.000†** 0.949†
Semantic Coverage-Step **1.000†** **1.000†** 0.997† **1.000†** **1.000†** 0.941†
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain **1.000†** **1.000†** 0.999† **1.000†** **1.000†** 0.944†
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Reasoning Alignment 0.035 0.995† **1.000†** 0.003 **1.000†** 0.950†
Hallucination 0.006 0.994† **1.000†** -0.302† **1.000†** 0.949†
Redundancy 0.007 0.994† **1.000†** -0.273† **1.000†** 0.949†
Commonsense 0.003 0.025 **1.000†** **1.000†** **1.000†** 0.949†
Missing Step 0.003 0.028 **1.000†** **1.000†** **1.000†** 0.949†
Semantic Coverage-Step **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** 0.948†
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** 0.946†
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Reasoning Alignment 0.019 0.995† **1.000†** 0.027 **1.000†** 0.949†
Hallucination 0.009 0.996† **1.000†** -0.273† **1.000†** 0.945†
Redundancy 0.008 0.996† **1.000†** -0.248† **1.000†** 0.946†
Commonsense 0.007 -0.017 **1.000†** **1.000†** **1.000†** 0.945†
Missing Step 0.006 -0.014 **1.000†** **1.000†** **1.000†** 0.946†
Semantic Coverage-Step **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** 0.946†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **0.956†**
38
-----
p p
Table 27: Somers’ D correlations of all metrics per different perturbation applied on Entailment Bank Diagnostics
datasets. All metrics are measured reference-based on (s, h). The highest correlation overall for each dataset is in bold.
The second best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6,
more details in App. H.1.)
**Swap** **Negate Semantic**
**Perturbations** **Repet. Hallu. [Grammar Remove Shuffle]**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Error**
Rouge-1 **1.000† 1.000†** 0.582† **1.000†** 0.000† 0.000† **1.000†** **1.000†**
Rouge-2 **1.000† 1.000†** **1.000†** **1.000†** 0.982† 0.935† **1.000†** **1.000†**
Rouge-L **1.000† 1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
BLEURT 0.197† 0.821† 0.163† 0.786† 0.253† 0.021 0.640† 0.174†
BERTScore **1.000† 1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
BARTScore 0.574† 0.659† 0.574† 0.802† 0.983† 0.864† 0.479† 0.656†
BARTScore+ 0.195† 0.430† 0.214† 0.839† 0.988† 0.813† 0.253† 0.555†
BARTScore-P 0.159† 0.272† 0.337† 0.633† 0.877† 0.656† 0.355† 0.415†
PRISM 0.707† 0.850† 0.612† 0.781† 0.939† 0.833† 0.466† 0.660†
CTC-Relevancy 0.311† 0.250† -0.039 0.457† 0.266† 0.214† 0.036 0.254†
CTC-Consistency 0.768† 0.814† 0.133† -0.022 0.479† 0.413† 0.233† 0.474†
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Reasoning Alignment -0.078 **1.000†** **1.000†** 0.056 -0.034 -0.001 **1.000†** 0.990†
Hallucination 0.002 **1.000†** **1.000†** -0.200† 0.041 0.017 **1.000†** 0.995†
Redundancy -0.072 **1.000†** **1.000†** -0.115† 0.034 0.055 **1.000†** 0.993†
Commonsense 0.025 -0.016 0.438† 0.264† 0.025 -0.012 0.237† 0.327†
Missing Step -0.077 -0.038 0.986† 0.993† 0.040 0.060 0.976† 0.985†
Semantic Coverage-Step **1.000† 1.000†** **1.000†** **1.000†** 0.003 0.036 **1.000†** 0.999†
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain 1.000† 1.000† **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Reasoning Alignment -0.024 **1.000†** **1.000†** 0.039 0.035 0.004 **1.000†** 0.990†
Hallucination -0.011 **1.000†** **1.000†** -0.059 -0.042 -0.008 **1.000†** 0.990†
Redundancy -0.011 **1.000†** **1.000†** -0.023 0.034 0.034 **1.000†** 0.994†
Commonsense -0.021 0.090 0.433† 0.347† -0.049 0.002 0.276† 0.342†
Missing Step -0.011 0.052 0.988† 0.999† 0.034 0.034 0.976† 0.983†
Semantic Coverage-Step **1.000† 1.000†** **1.000†** **1.000†** 0.106† 0.040 **1.000†** 0.994†
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.927† 0.929† 0.922† 0.951† **1.000†** 0.968† 0.940† 0.941†
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Reasoning Alignment 0.040 **1.000†** **1.000†** -0.013 0.007 -0.080 **1.000†** 0.999†
Hallucination -0.012 **1.000†** **1.000†** -0.059 0.134† 0.040 **1.000†** 0.991†
Redundancy 0.025 **1.000†** **1.000†** -0.074 0.041† -0.017 **1.000†** 0.994†
Commonsense -0.022 -0.039 0.368† 0.324† 0.129† 0.039 0.304† 0.445†
Missing Step 0.025 0.008 0.988† 0.999† 0.041† -0.017 0.971† 0.982†
Semantic Coverage-Step **1.000† 1.000†** **1.000†** **1.000†** 0.117† 0.038 **1.000†** 0.994†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.912† 0.934† 0.911† 0.950† **1.000†** 0.980† 0.940† 0.931†
39
-----
p p
Table 28: Somers’ D correlations of all metrics per different perturbation applied on MATH Diagnostics datasets. All
metrics are measured reference-based on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Swap** **Negate Random** **Random** **Shuffle** **Shuffle**
**Perturbations** **Repet.** **Hallu. [Grammar Remove Shuffle]**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Number** **Operation Numbers Operations**
Rouge-1 **0.994†** 0.961† 0.451† **0.996†** 0.000† 0.000† **1.000†** **1.000†** 0.000† 0.433† 0.000†
Rouge-2 **0.994†** 0.961† 0.767† **0.996†** 0.990† 0.953† **1.000†** **1.000†** 0.000† 0.999† 0.000†
Rouge-L **0.994†** 0.961† 0.769† **0.996†** 0.998† **0.997†** **1.000†** **1.000†** 0.000† **1.000†** 0.000†
BLEURT 0.363† 0.561† 0.041† 0.773† 0.103† 0.059† 0.156† 0.121† 0.069† 0.028 0.020
BERTScore 0.972† 0.942† 0.978† 0.978† **1.000†** 0.993† 0.964† **1.000†** 0.999† 0.998† 0.998†
BARTScore 0.413† 0.531† 0.812† 0.837† 0.966† 0.803† 0.429† 0.938† 0.921† 0.989† 0.932†
BARTScore+ 0.022 0.208† 0.539† 0.829† 0.611† 0.437† 0.153† 0.804† 0.466† 0.963† 0.606†
BARTScore-P -0.084† 0.041† 0.248† 0.474† 0.369† 0.170† 0.015 0.529† 0.367† 0.799† 0.536†
PRISM 0.337† 0.465† 0.386† 0.506† 0.412† 0.392† 0.266† 0.437† 0.347† 0.521† 0.285†
CTC-Relevancy -0.045† 0.168† 0.295† 0.592† 0.253† 0.110† 0.141† 0.140† 0.048† 0.169† 0.041
CTC-Consistency 0.673† 0.804† 0.491† 0.091† 0.393† 0.240† 0.506† 0.372† 0.112† 0.408† 0.148†
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Reasoning Alignment -0.012 0.998† 0.835† 0.005 -0.017 -0.013 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination -0.007 0.999† 0.854† -0.038† -0.019 -0.007 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy -0.007 0.999† 0.853† -0.040† -0.019 -0.005 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense -0.030† -0.004 0.852† 0.991† -0.022 -0.005 0.999† **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step -0.030† -0.003 0.850† 0.991† -0.021 -0.004 0.999† **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step 0.908† **1.000†** 0.852† **1.000†** -0.005 -0.008 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain 0.940† 0.946† 0.799† 0.944† **1.000†** 0.980† 0.933† **1.000†** 0.999† 0.998† 0.992†
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Reasoning Alignment -0.002 0.998† 0.990† -0.004 -0.003 0.018 0.998† 0.997† 0.997† **1.000†** **1.000†**
Hallucination -0.006 0.999† **0.993†** -0.045† 0.012† 0.003 0.998† 0.996† 0.997† **1.000†** **1.000†**
Redundancy -0.006 0.999† **0.993†** -0.044† 0.012† 0.003 0.998† 0.996† 0.997† **1.000†** **1.000†**
Commonsense -0.006 0.002 0.983† 0.989† 0.011 0.003 0.995† 0.996† 0.997† **1.000†** **1.000†**
Missing Step -0.006 0.001 0.983† 0.989† 0.011 0.003 0.995† 0.996† 0.997† **1.000†** **1.000†**
Semantic Coverage-Step 0.914† **1.000†** **0.993†** **1.000†** 0.038† 0.017† 0.998† 0.997† 0.997† **1.000†** **1.000†**
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.674† 0.773† 0.762† 0.763† 0.999† 0.868† 0.743† 0.985† 0.958† 0.975† 0.886†
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Reasoning Alignment 0.011 0.999† 0.992† 0.019 -0.001 0.025 0.998† 0.998† 0.998† **1.000†** **1.000†**
Hallucination -0.008 0.999† **0.993†** -0.032† 0.011 0.015† 0.998† 0.997† 0.997† **1.000†** **1.000†**
Redundancy -0.008 0.999† **0.993†** -0.031† 0.010 0.016† 0.998† 0.997† 0.997† **1.000†** **1.000†**
Commonsense -0.008 0.000 0.983† 0.991† 0.010 0.015† 0.995† 0.997† 0.997† **1.000†** **1.000†**
Missing Step -0.008 0.001 0.983† 0.990† 0.010 0.016† 0.995† 0.997† 0.997† **1.000†** **1.000†**
Semantic Coverage-Step 0.913† **1.000†** **0.993†** **1.000†** 0.036† 0.016† 0.998† 0.997† 0.997† **1.000†** **1.000†**
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.674† 0.770† 0.769† 0.775† 0.999† 0.865† 0.741† 0.985† 0.957† 0.973† 0.897†
40
-----
p p
Table 29: Somers’ D correlations of all metrics per different perturbation applied on ProofWriter Diagnostics datasets.
All metrics are measured reference-based on (s, h). The highest correlation overall for each dataset is in bold. The
second best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6,
more details in App. H.1.)
**Swap** **Negate Semantic**
**Perturbations** **Repet. Hallu. [Grammar Remove Shuffle]**
_→_ **Error** **Step** **Steps** **Steps** **Step** **Error**
Rouge-1 **1.000† 1.000†** 0.663† **1.000†** 0.000† 0.000† **1.000†** **1.000†**
Rouge-2 **1.000† 1.000†** 0.993† **1.000†** 0.932† 0.812† **1.000†** **1.000†**
Rouge-L **1.000† 1.000†** 0.993† **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
BLEURT 0.349† 0.829† 0.221† 0.695† 0.076† 0.042† 0.788† 0.597†
BERTScore **1.000† 1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
BARTScore 0.334† 0.430† 0.303† 0.417† 0.986† 0.748† 0.343† 0.391†
BARTScore+ 0.245† 0.391† 0.251† 0.598† 0.996† 0.775† 0.278† 0.515†
BARTScore-P 0.020 0.100† 0.124† 0.260† 0.595† 0.318† 0.114† 0.260†
PRISM 0.829† 0.956† 0.822† 0.924† 0.997† 0.970† 0.871† 0.947†
CTC-Relevancy 0.376† 0.409† 0.354† 0.725† 0.621† 0.396† 0.391† 0.419†
CTC-Consistency 0.537† 0.635† 0.199† 0.009 0.608† 0.412† 0.404† 0.376†
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Reasoning Alignment -0.017 0.997† 0.994† -0.049† -0.007 -0.000 **1.000†** 0.990†
Hallucination 0.015 0.997† 0.990† 0.012 -0.001 -0.004 **1.000†** 0.939†
Redundancy 0.007 0.997† 0.994† 0.048† -0.000 0.010 **1.000†** 0.992†
Commonsense 0.012 0.000 0.187† 0.379† -0.002 -0.010 0.180† 0.169†
Missing Step 0.003 -0.001 0.872† 0.843† 0.004 0.011 0.876† 0.870†
Semantic Coverage-Step 0.793† 1.000† 0.994† **1.000†** -0.008 0.021† **1.000†** 0.984†
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain 1.000† 1.000† 0.994† **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Reasoning Alignment 0.012 0.998† **1.000†** -0.041† 0.000 0.001 **1.000†** **1.000†**
Hallucination 0.016 0.998† 0.998† 0.031† -0.005 0.002 **1.000†** **1.000†**
Redundancy 0.010 0.998† **1.000†** 0.037† 0.015† 0.003 **1.000†** **1.000†**
Commonsense 0.017 0.006 0.242† 0.415† -0.005 0.002 0.238† 0.234†
Missing Step 0.010 0.004 0.871† 0.839† 0.016† 0.004 0.874† 0.866†
Semantic Coverage-Step 0.818† 1.000† **1.000†** **1.000†** 0.059† 0.021† **1.000†** **1.000†**
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.995† 0.998† 0.995† 0.997† **1.000†** 0.999† 0.998† 0.997†
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Reasoning Alignment 0.016 0.996† **1.000†** -0.030† 0.000 0.005 **1.000†** **1.000†**
Hallucination 0.003 0.997† 0.998† 0.016 0.005 -0.010 **1.000†** **1.000†**
Redundancy 0.020† 0.997† **1.000†** 0.050† 0.019† 0.003 **1.000†** **1.000†**
Commonsense 0.001 -0.009 0.243† 0.404† 0.010 -0.009 0.253† 0.235†
Missing Step 0.015† -0.007 0.871† 0.839† 0.020† 0.003 0.873† 0.866†
Semantic Coverage-Step 0.817† 1.000† 0.999† **1.000†** 0.060† 0.022† **1.000†** **1.000†**
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.994† 0.998† 0.995† 0.996† **1.000†** 0.999† 0.998† 0.996†
41
-----
p p
Table 30: Somers’ D correlations of all metrics per different perturbation applied on ASDIV Diagnostics datasets. All
metrics are measured reference-based on (s, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Random** **Random** **Shuffle** **Shuffle**
**Perturbations**
_→_ **Number** **Operation Numbers Operations**
Rouge-1 **1.000†** 0.000† 0.027† 0.000†
Rouge-2 **1.000†** 0.000† **1.000†** 0.000†
Rouge-L **1.000†** 0.000† **1.000†** 0.000†
BLEURT 0.876† 0.930† 0.125 -0.212
BERTScore **1.000†** **1.000†** **1.000†** **1.000†**
BARTScore 0.871† 0.823† 0.954† 0.924†
BARTScore+ 0.948† 0.839† **1.000†** 0.955†
BARTScore-P 0.738† 0.642† 0.830† 0.712†
PRISM 0.998† 0.989† **1.000†** **1.000†**
CTC-Relevancy 0.398† 0.188† 0.160† -0.136
CTC-Consistency 0.600† 0.131† 0.294† -0.106
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Reasoning Alignment **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Reasoning Alignment **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Reasoning Alignment **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain **1.000†** **1.000†** **1.000†** **1.000†**
42
-----
p p
Table 31: Somers’ D correlations of all metrics per different perturbation applied on AQUA Diagnostics datasets. All
metrics are measured reference-based on (r, h). The highest correlation overall for each dataset is in bold. The second
best models are underlined. Correlation scores with p-value < 0.05 are marked with †. (Continued from §6, more details
in App. H.1.)
**Swap** **Negate Random** **Random** **Shuffle** **Shuffle**
**Perturbations** **Repet. Halluc. [Grammar Remove Shuffle]**
_→_ **error** **step** **steps** **steps** **step** **number** **operation numbers operations**
Rouge-1 **1.000†** **1.000†** 0.394† **1.000†** 0.000† 0.000† **1.000†** **1.000†** 0.000† 0.000† 0.000†
Rouge-2 **1.000†** **1.000†** 0.866† **1.000†** 0.984† 0.967† **1.000†** **1.000†** 0.000† **1.000†** 0.000†
Rouge-L **1.000†** **1.000†** 0.866† **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** 0.000† **1.000†** 0.000†
BLEURT 0.193† 0.803† 0.129 0.946† 0.076 0.028 0.640† 0.627† 0.385† -0.041 -0.113
BERTScore **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
BARTScore 0.568† 0.596† 0.688† 0.870† 0.958† 0.883† 0.520† 0.930† 0.834† 0.984† 0.849†
BARTScore+ 0.283† 0.597† 0.613† 0.953† 0.960† 0.901† 0.368† 0.980† 0.521† 0.995† 0.814†
BARTScore-P 0.045 0.025 0.400† 0.679† 0.650† 0.547† 0.148 0.846† 0.559† 0.905† 0.726†
PRISM 0.815† 0.900† 0.724† 0.888† 0.941† 0.901† 0.689† 0.942† 0.772† 0.970† 0.870†
CTC-Relevancy 0.054 -0.044 0.102 0.409† 0.116 0.141 0.062 0.290† 0.030 0.409† 0.023
CTC-Consistency 0.833† 0.766† 0.106 -0.150† 0.270† 0.257† 0.256† 0.774† 0.232† 0.741† 0.063
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Reasoning Alignment 0.030 0.997† 0.985† -0.014 0.054 -0.076 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination 0.029 0.992† 0.992† -0.084 0.024 -0.030 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy 0.032 0.992† 0.992† -0.088 0.023 -0.040 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense 0.029 0.020 0.992† **1.000†** 0.024 -0.038 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step 0.032 0.023 0.992† **1.000†** 0.023 -0.047 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step 0.947† **1.000†** 0.987† **1.000†** 0.006 -0.009 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Semantic Coverage-Chain 1.000† **1.000†** 0.995† **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Reasoning Alignment -0.061 **1.000†** **1.000†** 0.018 0.049 0.006 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination -0.053 **1.000†** **1.000†** -0.069 -0.017 -0.041 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy -0.054 **1.000†** **1.000†** -0.063 -0.016 -0.041 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense -0.053 0.065 0.992† **1.000†** -0.017 -0.041 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step -0.054 0.064 0.992† **1.000†** -0.016 -0.041 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step 0.966† **1.000†** **1.000†** **1.000†** 0.041† 0.008 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.965† 0.965† 0.944† 0.988† **1.000†** **1.000†** 0.953† **1.000†** **1.000†** **1.000†** 0.973†
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Reasoning Alignment -0.089 **1.000†** **1.000†** 0.085 0.054 0.036 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Hallucination -0.039 **1.000†** **1.000†** 0.003 -0.024 -0.025 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Redundancy -0.043 **1.000†** **1.000†** 0.006 -0.016 -0.016 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Commonsense -0.039 0.083 0.992† **1.000†** -0.024 -0.025 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Missing Step -0.043 0.089 0.992† **1.000†** -0.016 -0.016 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
Semantic Coverage-Step 0.967† **1.000†** **1.000†** **1.000†** 0.041† 0.008 **1.000†** **1.000†** **1.000†** **1.000†** **1.000†**
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Semantic Coverage-Chain 0.957† 0.957† 0.959† 0.980† **1.000†** **1.000†** 0.971† **1.000†** **1.000†** **1.000†** 0.972†
43
-----
p p
H.2 EXPERIMENTS WITH HUMAN JUDGEMENT DATASETS
In this section, we present Somers’ D correlation of all metrics on all Human Judged datasets in separate
tables. Specifically, Table 32 summarizes meta-evaluations for ROSCOE metrics in comparison to baselines
on all human judged datasets. Fine-grained evaluations are presented in Table 33 for DROP, Table 34, 38 for
GSM8K, Table 35, 39 for ESNLI, Table 36 for CosmosQA, and Table 37 for SemEVAL. Human evaluation
perspectives used in evaluations are described in App. Table 15.
Looking at how errors are captured by ROSCOE reference-free scores (Fig. 8), we observe strongest correlations between Redundancy error and Repetition-*, Self-Consistency scores. Repetition error is not present in
this analysis as it has at most 3 occurrences per dataset. Out of the all considered scores, Self-Consistency is
able to cover 6 out of 7 evaluation perspectives, except Missing Step.
Figure 8: Relative presence of the strong score-error correlation, measured as the number of datasets
where for each score and evaluation perspective pair Somers’ D correlation value is in the 90[th] percentile,
normalized by the total number of datasets where this type of perturbation occurs. Statistics collected over
ROSCOE reference-free scores with finetuned SimCSE embeddings, and evaluation perspectives where at
least 10 errors are present in a dataset.
We further look at specific human annotated examples where our ROSCOE gives highest and lowest scores to
understand strength and weaknesses of the proposed approach. Results are summarized in Table 40. Similar
analysis for diagnostic datasets is summarized in Table 41.
44
-----
p p
Table 32: Somers’ D correlations of all metrics on five human judged datasets. All metrics are measured reference-free
on (s, h). The highest correlation overall for each dataset is in bold and second best is underlined. Correlations that are
not significant (p-value ≥ 0.05) are omitted when aggregating, and "-" denotes an absence of any significant correlations.
(Continued from §6, more details in App. H.2.)
**DROP GSM8K ESNLI COSMOS SemEVAL**
Rouge-1 0.239 0.180 0.559 -0.264 -0.520
Rouge-2 0.320 - 0.502 0.180 -
Rouge-L 0.278 0.252 0.557 -0.441 -0.478
BLEURT 0.328 0.256 0.541 0.218 -0.356
BERTScore 0.275 0.235 0.590 -0.420 -0.295
BARTScore -0.835 -0.546 0.549 -0.544 -
BARTScore+ -0.665 - 0.482 -0.186 -
BARTScore-P -0.642 - 0.255 -0.207 -
PRISM -0.733 -0.455 0.580 -0.376 -
CTC-Relevance 0.333 -0.371 0.334 - -0.349
CTC-Consistency 0.462 -0.174 0.647 0.275 -0.301
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.496 - 0.403 - -
Faithfulness-Token 0.417 - 0.521 -0.320 -
Info-Step 0.500 0.178 0.493 - -
Repetition-Token 0.578 0.392 0.441 0.555 0.337
ROSCOE-SA with all-mpnet-base-v2 sentence embeddings
Faithfulness-Step 0.297 - 0.423 -0.424 0.330
Faithfulness-Token 0.290 -0.443 0.524 -0.515 0.186
Info-Step 0.301 - 0.542 -0.429 -
Repetition-Token 0.790 0.500 **0.799** 0.638 0.485
ROSCOE-SA with sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.477 -0.192 0.502 -0.381 -
Faithfulness-Token 0.454 - 0.540 -0.420 -
Info-Step 0.510 - 0.599 -0.409 -0.321
Repetition-Token 0.578 0.392 0.441 0.555 0.337
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.446 0.385 0.310 - -
Repetition-Step 0.824 0.514 0.530 0.593 0.411
ROSCOE-SS with all-mpnet-base-v2 sentence embeddings
Info-Chain 0.406 - 0.507 -0.198 0.367
Repetition-Step 0.791 0.471 0.487 0.642 0.508
ROSCOE-SS with sup-simcse-roberta-base sentence embeddings
Info-Chain 0.271 - 0.531 -0.367 -
Repetition-Step **0.799** **0.638** 0.484 **0.658** **0.535**
ROSCOE-LI
Source-Consistency 0.390 0.172 0.425 0.444 -
Self-Consistency 0.584 0.345 0.531 0.417 0.372
ROSCOE-LC
Grammar - -0.184 0.255 - 0.517
Perplexity-Step 0.205 -0.307 0.345 - -
Perplexity-Chain -0.611 -0.273 0.447 -0.212 -0.373
45
-----
p p
Table 33: Somers’ D correlation of all metrics on DROP human judged dataset analyzing step-by-step reasoning on
overall chain and step-level perspectives. All metrics are measured reference-free on (s, h). The highest correlation
overall for each aspect on each dataset is in bold, second best are underlined. Correlation scores with p-value < 0.05
are marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC MATH GRAM MISS
Rouge-1 0.157† -0.160 0.436 0.239† -0.335 -0.731† -0.736 0.030 0.702 0.196 0.173†
Rouge-2 0.137† -0.146 0.488 0.320† -0.284 -0.716† -0.442 0.027 0.584 -0.035 0.129
Rouge-L 0.131† -0.201† 0.465 0.278† -0.345 -0.749† -0.815 -0.012 0.745 0.027 0.146
BLEURT 0.121† -0.101 0.256 0.328† -0.333 -0.725† -0.370 0.078 0.514 0.101 0.087
BERTScore 0.133† -0.115 0.494 0.275† -0.177 -0.647† -0.043 -0.003 0.707 0.098 0.142
BARTScore -0.088 -0.392† 0.575 0.161 -0.454 -0.835† -0.894 -0.225 0.038 0.150 -0.134
BARTScore+ 0.039 -0.159 0.536 0.192 -0.553 -0.665† -0.841 -0.141 0.303 0.261 0.066
BARTScore-P -0.007 -0.152 0.546 0.169 -0.473 -0.642† -0.894 -0.039 0.380 0.265 0.012
PRISM 0.129† -0.081 0.465 0.207 -0.379 -0.733† -0.361 -0.071 0.668 0.165 0.048
CTC-Relevance -0.027 -0.100 -0.072 0.333† -0.041 -0.622† -0.087 -0.091 0.394 0.134 -0.056
CTC-Consistency 0.030 -0.133 0.243 0.462† -0.148 -0.657† -0.106 -0.041 0.769 0.106 -0.002
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.096 -0.095 0.572 0.496† 0.143 -0.566† -0.322 0.278 0.760 -0.022 0.042
Faithfulness-Token 0.177† -0.016 0.472 0.417† -0.189 -0.525† -0.038 0.131 0.678 0.002 0.130
Info-Step 0.142† -0.069 **0.643** 0.500† 0.216 -0.581† -0.284 **0.287†** **0.846** 0.024 0.125
ROSCOE-SARepetition-Token with all-mpnet-base-v20.055 0.210 sentence embeddings† 0.018 0.170 0.340 0.578† **0.952** 0.026 -0.822 -0.140 0.062
Faithfulness-Step 0.054 -0.156 0.153 0.297† -0.073 -0.578† -0.111 0.116 -0.221 0.001 -0.042
Faithfulness-Token 0.156† -0.042 0.362 0.290† -0.388 -0.504† -0.216 -0.058 0.288 0.063 0.085
Info-Step 0.090 -0.116 0.308 0.301† -0.153 -0.619† -0.043 0.021 0.250 0.078 0.020
ROSCOE-SARepetition-Token with sup-simcse-roberta-base0.130† 0.370† sentence embeddings0.027 0.087 0.313 0.790† 0.519 0.114 -0.822 -0.181 0.053
Faithfulness-Step 0.084 -0.128 0.195 0.477† -0.061 -0.643† -0.596 0.085 0.236 -0.061 -0.051
Faithfulness-Token 0.186† -0.031 0.414 0.454† -0.245 -0.574† -0.139 0.098 0.514 0.047 0.093
Info-Step 0.141† -0.095 0.443 **0.510†** -0.121 -0.692† -0.471 0.040 0.731 0.066 0.010
Repetition-Token 0.055 0.210† 0.018 0.170 0.340 0.578† **0.952** 0.026 -0.822 -0.140 0.062
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.108 -0.128 0.462 0.222 -0.393 -0.446† -0.375 0.025 0.207 0.446† 0.114
ROSCOE-SSRepetition-Step with all-mpnet-base-v20.036 **0.400 sentence embeddings†** -0.543 -0.258† 0.165 **0.824†** 0.909 0.052 -0.822 0.192 -0.016
Info-Chain 0.165† 0.020 0.395 0.240† -0.027 -0.480† -0.135 0.122 0.394 0.406† 0.106
ROSCOE-SSRepetition-Step with sup-simcse-roberta-base0.052 0.358† sentence embeddings-0.111 -0.115 0.260 0.791† 0.856 0.105 -0.822 0.066 0.015
Info-Chain 0.138† -0.090 0.504 0.271† -0.158 -0.664† -0.130 -0.089 0.611 0.381 0.093
Repetition-Step 0.073 0.357† -0.143 -0.085 0.451 0.799† 0.918 0.128 -0.822 0.047 0.021
ROSCOE-LI
Source-Consistency 0.200† 0.243† 0.462 0.390† -0.004 0.085 0.697 0.009 0.365 0.420 0.184†
Self-Consistency 0.032 0.295† -0.076 0.198 0.201 0.584† 0.139 0.187 -0.707 0.344 -0.075
ROSCOE-LC
Grammar **0.220†** 0.141 0.250 0.260 **0.536** 0.001 -0.553 0.169 0.260 **0.450** 0.111
Perplexity-Step 0.185† 0.034 0.214 -0.002 -0.112 -0.320† -0.827 -0.043 0.332 0.259 **0.205†**
Perplexity-Chain 0.087 -0.152 0.185 -0.104 -0.515 -0.611† -0.952 -0.178 0.663 0.103 0.117
46
-----
p p
Table 34: Somers’ D correlation of all metrics on GSM8K human judged dataset analyzing step-by-step reasoning on
overall chain and step-level perspectives. All metrics are measured reference-free on (s, h). The highest correlation
overall for each aspect on each dataset is in bold, second best is underlined. Correlation scores with p-value < 0.05 are
marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC MATH GRAM MISS
Rouge-1 0.122 0.202† 0.071 0.156 0.275 0.037 0.222 0.073 0.233 0.173 0.180†
Rouge-2 0.089 0.139 0.060 0.102 0.175 -0.016 0.561 0.030 0.113 0.051 0.148
Rouge-L 0.176† 0.268† 0.195† 0.169 0.180 0.054 0.558 0.120 0.212 -0.125 0.252†
BLEURT 0.160† 0.248† 0.134 0.256† 0.099 0.075 0.227 0.124 0.057 0.077 0.210†
BERTScore 0.173† 0.220† 0.112 0.116 0.168 0.095 0.955 0.138 0.113 0.054 0.235†
BARTScore 0.009 0.035 -0.047 0.044 -0.246 -0.261 0.424 -0.002 -0.546† -0.321 -0.003
BARTScore+ 0.064 0.132 0.078 0.054 -0.003 -0.039 0.879 0.085 -0.298 -0.281 0.090
BARTScore-P 0.037 0.059 0.042 0.061 -0.116 -0.102 0.561 -0.002 -0.322 -0.097 0.019
PRISM -0.112 -0.075 -0.099 -0.037 -0.385 -0.455† -0.086 -0.171 -0.341 0.130 -0.093
CTC-Relevance -0.086 -0.148† -0.077 -0.111 0.009 -0.371† 0.566 -0.106 -0.093 -0.061 -0.088
CTC-Consistency -0.157† -0.203† -0.206† -0.129 -0.013 -0.318† 0.556 -0.174† -0.019 -0.056 -0.204†
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.012 0.083 0.097 0.014 -0.138 -0.019 0.182 -0.024 -0.101 0.286 0.027
Faithfulness-Token -0.012 0.036 -0.052 -0.067 -0.099 -0.304 0.788 -0.042 -0.408 0.204 0.016
Info-Step 0.059 0.137 0.178† 0.082 -0.025 0.059 0.364 0.031 -0.303 0.329 0.108
ROSCOE-SARepetition-Token with all-mpnet-base-v20.200† 0.193 sentence embeddings† 0.186† 0.075 0.224 0.392† 0.788 0.183† 0.443 0.173 0.270†
Faithfulness-Step -0.039 -0.016 0.002 -0.027 0.012 -0.376 0.581 -0.173 -0.127 0.102 -0.021
Faithfulness-Token -0.078 -0.002 -0.090 -0.079 -0.141 -0.443† 0.485 -0.137 -0.623† 0.074 -0.068
Info-Step 0.095 0.121 0.099 0.139 0.112 -0.065 0.662 -0.016 -0.056 0.092 0.148
ROSCOE-SARepetition-Token with sup-simcse-roberta-base0.214† 0.208† sentence embeddings0.184† 0.081 **0.500†** 0.238 0.747 0.208† 0.498† **0.339** 0.306†
Faithfulness-Step -0.061 -0.034 -0.006 0.002 -0.158 -0.142 0.237 -0.192† -0.221 0.168 -0.045
Faithfulness-Token -0.051 -0.008 -0.087 -0.084 -0.168 -0.352 0.732 -0.098 -0.424 0.145 -0.034
Info-Step 0.059 0.102 0.106 0.124 0.040 0.046 0.545 -0.053 -0.236 0.099 0.131
Repetition-Token 0.200† 0.193† 0.186† 0.075 0.224 0.392† 0.788 0.183† 0.443 0.173 0.270†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.097 0.064 0.059 0.080 0.035 0.385† 0.722 0.103 0.121 -0.110 0.130
ROSCOE-SSRepetition-Step with all-mpnet-base-v20.199† 0.166 sentence embeddings† 0.168 0.145 0.254 0.514† 0.869 0.152 0.230 0.176 0.222†
Info-Chain 0.059 0.071 0.020 0.043 0.039 0.128 0.288 0.004 0.260 -0.026 0.098
ROSCOE-SSRepetition-Step with sup-simcse-roberta-base0.218† 0.161† sentence embeddings0.167 0.227† 0.309 0.471† 0.323 0.158 0.486 -0.301 0.245†
Info-Chain 0.042 0.051 0.043 0.057 0.240 0.041 0.808 -0.007 -0.211 0.038 0.130
Repetition-Step **0.322†** **0.299†** **0.275†** 0.227† 0.466 **0.638†** 0.879 0.192† **0.563†** -0.311 **0.354†**
ROSCOE-LI
Source-Consistency 0.108 0.037 -0.019 0.172† 0.121 0.030 0.551 0.082 -0.005 0.219 0.097
Self-Consistency 0.283† 0.267† 0.177 **0.345†** 0.207 0.354 **0.980** **0.219†** 0.087 -0.230 0.223†
ROSCOE-LC
Grammar -0.134† -0.159† -0.260† -0.081 -0.234 -0.246 0.056 -0.264† -0.207 0.298 -0.184†
Perplexity-Step -0.297† -0.278† -0.366† -0.307† -0.608† -0.591† -0.136 -0.334† -0.514† 0.148 -0.331†
Perplexity-Chain -0.332† -0.336† -0.322† -0.273† -0.695† -0.682† -0.556 -0.354† -0.697† -0.084 -0.408†
47
-----
p p
Table 35: Somers’ D correlation of all metrics on ESNLI human judged dataset analyzing step-by-step reasoning on
overall chain and step-level perspectives. All metrics are measured reference-free on (s, h). The highest correlation
overall for each aspect on each dataset is in bold and second best is underlined. Correlation scores with p-value < 0.05
are marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC MATH GRAM MISS
Rouge-1 0.213† -0.121 -0.373 0.152 -0.060 -0.284 - -0.276† 0.113 -0.403 0.559†
Rouge-2 0.216† -0.075 -0.336 0.169 -0.097 -0.240 - -0.255† -0.080 -0.432 0.502†
Rouge-L 0.222† -0.041 -0.296 0.201 -0.092 -0.245 - -0.259 0.053 -0.429 0.557†
BLEURT 0.154† -0.202 -0.271 0.112 -0.041 -0.230 - -0.322† 0.267 -0.641† 0.541†
BERTScore 0.255† 0.019 -0.205 0.222 0.153 -0.179 - -0.141 -0.360 -0.252 0.590†
BARTScore 0.189† -0.096 -0.260 0.079 0.071 -0.259 - -0.200 0.533 -0.688† 0.549†
BARTScore+ 0.209† -0.059 -0.192 0.166 0.204 -0.147 - -0.168 -0.160 -0.578† 0.482†
BARTScore-P 0.092 -0.096 -0.208 0.213 0.092 -0.209 - -0.192 -0.587 -0.370 0.255†
PRISM 0.264† 0.089 -0.151 0.093 0.224 -0.235 - -0.042 -0.240 -0.329 0.580†
CTC-Relevance 0.071 -0.051 -0.074 0.063 -0.020 -0.098 - -0.093 **0.720** -0.205 0.334†
CTC-Consistency 0.029 0.218 **0.647†** -0.009 0.483 0.021 - 0.350† 0.533 -0.071 -0.060
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.236† -0.163 -0.027 0.096 -0.146 -0.139 - -0.293† 0.560 -0.386 0.403†
Faithfulness-Token 0.279† 0.010 -0.145 0.232 0.201 -0.279 - -0.119 0.067 -0.293 0.521†
Info-Step 0.222† -0.127 -0.140 0.143 -0.068 -0.220 - -0.333† 0.400 -0.310 0.493†
ROSCOE-SARepetition-Token with all-mpnet-base-v2-0.037 -0.020 sentence embeddings0.403 0.244 0.330 0.441† - 0.153 0.240 -0.059 -0.436†
Faithfulness-Step 0.261† -0.117 -0.058 0.242 -0.425 -0.296† - -0.248† 0.480 -0.244 0.423†
Faithfulness-Token 0.270† -0.083 -0.216 0.136 0.313 -0.248 - -0.097 0.347 -0.433 0.524†
Info-Step 0.290† -0.006 -0.049 0.286† -0.102 -0.245 - -0.246† 0.373 -0.342 0.542†
ROSCOE-SARepetition-Token with sup-simcse-roberta-base-0.040 0.129 sentence embeddings-0.189 0.169 **0.799†** **0.617†** - 0.151 -0.267 0.084 -0.445†
Faithfulness-Step 0.262† -0.083 -0.244 0.257 -0.163 -0.372† - -0.283† 0.413 -0.375 0.502†
Faithfulness-Token 0.275† 0.004 -0.230 0.206 0.214 -0.303† - -0.140 0.347 -0.321 0.540†
Info-Step **0.295†** 0.046 -0.126 0.299† -0.071 -0.306† - -0.208 0.333 -0.397† **0.599†**
Repetition-Token -0.037 -0.020 0.403 0.244 0.330 0.441† - 0.153 0.240 -0.059 -0.436†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.083 -0.197 -0.466 -0.225 -0.510 -0.259 - -0.067 0.267 0.074 0.310†
ROSCOE-SSRepetition-Step with all-mpnet-base-v2-0.021 0.103 sentence embeddings-0.063 0.249 0.310 0.530† - 0.178 -0.107 0.053 -0.433†
Info-Chain 0.191† -0.233 -0.438 0.052 -0.153 -0.253 - -0.368† 0.533 -0.452 0.507†
ROSCOE-SSRepetition-Step with sup-simcse-roberta-base0.007 0.280 sentence embeddings0.173 0.365† 0.694 0.487† - 0.225 0.213 0.193 -0.472†
Info-Chain 0.211† -0.083 -0.192 0.212 0.024 -0.319† - -0.257† 0.400 -0.455 0.531†
Repetition-Step -0.015 0.180 0.397 0.332 0.367 0.484† - 0.115 0.013 0.125 -0.501†
ROSCOE-LI
Source-Consistency 0.012 -0.028 0.334 **0.425†** 0.299 -0.055 - 0.112 0.600 -0.258 -0.222†
Self-Consistency -0.028 **0.354†** 0.156 0.087 0.561 0.370† - **0.531†** -0.333 0.222 -0.351†
ROSCOE-LC
Grammar 0.063 0.069 -0.411 -0.003 0.122 -0.286 - 0.255† -0.027 **0.411** -0.072
Perplexity-Step 0.084 0.087 -0.655† -0.153 0.255 -0.254 - -0.148 -0.147 0.090 0.345†
Perplexity-Chain 0.027 0.081 -0.616† -0.289 0.075 -0.447† - -0.155 -0.533 0.249 0.447†
48
-----
p p
Table 36: Somers’ D correlation of all metrics on COSMOS human judged dataset analyzing step-by-step reasoning
on overall chain and step-level perspectives. All metrics are measured reference-free on (s, h). The highest correlation
overall for each aspect on each dataset is in bold and second best is underlined. Correlation scores with p-value < 0.05
are marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC GRAM MISS
Rouge-1 -0.011 -0.007 -0.182 -0.077 0.292 -0.576† -0.807 -0.264† -0.644† 0.113
Rouge-2 0.021 0.028 -0.075 -0.131 0.239 -0.561† -0.174 -0.108 -0.314 0.180†
Rouge-L 0.011 -0.013 -0.094 -0.044 0.252 -0.637† -0.436 -0.114 -0.441† 0.141
BLEURT 0.098 0.088 0.019 -0.054 0.097 -0.686† -0.617 -0.181† -0.522† **0.218†**
BERTScore 0.095 0.113 0.059 -0.055 0.234 -0.478† -0.492 -0.058 -0.420† 0.114
BARTScore 0.009 0.024 0.159 -0.026 0.001 -0.544† -0.208 -0.122 -0.420 0.124
BARTScore+ 0.048 0.061 -0.102 -0.004 0.159 -0.507† -0.602 -0.186† -0.499† 0.159
BARTScore-P 0.009 0.021 -0.149 0.010 0.267 -0.385† -0.508 -0.207† -0.453† 0.142
PRISM 0.058 0.091 -0.046 -0.156 0.311 -0.446† -0.428 -0.036 -0.376† 0.157
CTC-Relevance 0.070 0.035 0.246 0.155 0.233 -0.294 -0.780 -0.001 -0.349 0.016
CTC-Consistency 0.093 0.097 **0.275†** 0.084 0.140 -0.032 -0.201 0.051 0.064 -0.006
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.089 0.109 -0.008 0.149 0.256 -0.322 -0.216 -0.019 0.034 -0.011
Faithfulness-Token 0.038 0.039 -0.032 -0.093 0.285 -0.472† -0.220 -0.026 -0.320† 0.012
Info-Step 0.109 0.082 -0.011 0.119 0.302 -0.340 -0.811 -0.039 -0.135 0.057
ROSCOE-SARepetition-Token with all-mpnet-base-v20.050 0.120 sentence embeddings0.187 0.103 0.103 0.555† 0.231 0.192† 0.458† -0.233†
Faithfulness-Step -0.011 0.044 -0.008 -0.012 0.127 -0.424† -0.235 -0.061 -0.268 -0.096
Faithfulness-Token 0.035 0.078 -0.124 -0.128 **0.323** -0.515† -0.667 -0.036 -0.230 -0.008
Info-Step 0.011 0.043 -0.047 -0.023 0.186 -0.429† -0.481 -0.081 -0.268 -0.037
ROSCOE-SARepetition-Token with sup-simcse-roberta-base0.045 0.092 sentence embeddings0.275† 0.122 0.298 0.638† 0.398 0.252† 0.386 -0.184†
Faithfulness-Step 0.036 0.022 0.072 0.111 0.184 -0.381† -0.318 -0.076 -0.237 -0.053
Faithfulness-Token 0.036 0.030 -0.003 -0.058 0.205 -0.486† -0.333 -0.056 -0.420† -0.020
Info-Step 0.026 -0.025 0.069 0.079 0.248 -0.409† -0.720 -0.088 -0.292 0.000
Repetition-Token 0.050 0.120 0.187 0.103 0.103 0.555† 0.231 0.192† 0.458† -0.233†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain 0.013 0.089 0.159 0.048 0.139 -0.219 -0.932 0.113 -0.390 -0.117
ROSCOE-SSRepetition-Step with all-mpnet-base-v20.011 0.050 sentence embeddings0.178 0.132 -0.046 0.593† **0.670** 0.295† 0.330 -0.244†
Info-Chain 0.073 0.091 -0.174 0.033 0.145 -0.215 -0.409 -0.198† -0.411† 0.103
ROSCOE-SSRepetition-Step with sup-simcse-roberta-base0.047 0.127 sentence embeddings0.124 0.060 0.153 0.642† 0.617 **0.346†** **0.563†** -0.184†
Info-Chain 0.114 0.055 0.034 -0.005 0.218 -0.367† -0.879 -0.046 -0.222 0.153
Repetition-Step 0.061 0.127 0.095 0.076 0.246 **0.658†** 0.500 0.256† 0.496† -0.145
ROSCOE-LI
Source-Consistency **0.184†** 0.183† 0.150 **0.285†** 0.241 0.444† 0.091 0.111 0.303 0.011
Self-Consistency 0.048 0.080 0.190 0.173 -0.021 0.417† 0.610 0.192† 0.401† -0.252†
ROSCOE-LC
Grammar 0.093 **0.189†** -0.065 0.084 0.022 -0.013 0.356 -0.013 0.386 0.051
Perplexity-Step 0.122† 0.157† -0.208 -0.021 0.034 0.028 -0.140 -0.113 -0.295 0.064
Perplexity-Chain 0.083 0.047 -0.193 0.001 -0.073 -0.311† -0.561 -0.212† -0.542† 0.130
49
-----
p p
Table 37: Somers’ D correlation of all metrics on SemEVAL human judged dataset analyzing step-by-step reasoning
on overall chain and step-level perspectives. All metrics are measured reference-free on (s, h). The highest correlation
overall for each aspect on each dataset is in bold and second best is underlined. Correlation scores with p-value < 0.05
are marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC GRAM MISS
Rouge-1 -0.199† -0.208† -0.246 -0.118 -0.053 -0.520† -0.398 -0.206 -0.230 0.108
Rouge-2 -0.086 0.060 -0.478 -0.063 0.561 -0.232 -0.083 -0.073 0.141 0.090
Rouge-L -0.198† -0.209† -0.498 -0.104 0.090 -0.478† -0.396 -0.115 -0.156 0.058
BLEURT -0.313† -0.383† -0.372 -0.208 -0.034 -0.482† -0.383 -0.356† -0.104 -0.074
BERTScore -0.051 0.064 -0.517 -0.035 0.524 -0.002 -0.218 -0.295† 0.266 0.063
BARTScore -0.084 -0.059 -0.140 -0.137 -0.369 -0.054 0.209 -0.284 -0.040 -0.056
BARTScore+ -0.046 -0.098 0.652 -0.033 -0.073 0.056 0.204 -0.204 0.048 -0.016
BARTScore-P -0.075 -0.168 0.633 -0.080 0.107 0.096 0.277 -0.230 0.054 -0.082
PRISM -0.082 -0.040 -0.469 -0.115 0.073 -0.174 -0.354 -0.134 -0.079 0.075
CTC-Relevance -0.146† -0.219† 0.256 -0.047 0.442 -0.071 -0.121 -0.349† 0.101 -0.145
CTC-Consistency -0.178† -0.241† 0.101 0.009 0.583 -0.301† -0.296 -0.335† 0.208 -0.142
**ROSCOE Metrics (reference-free on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Faithfulness-Step 0.157† 0.198† 0.275 0.189 0.694 0.233 -0.262 -0.115 0.092 0.115
Faithfulness-Token 0.030 0.182† -0.150 0.009 0.714 0.116 -0.442 -0.101 0.262 0.085
Info-Step 0.068 0.113 0.111 0.055 0.835 0.003 -0.252 -0.187 0.196 0.129
ROSCOE-SARepetition-Token with all-mpnet-base-v20.062 sentence embeddings0.150 0.401 0.021 -0.078 0.337† 0.670 -0.074 -0.007 -0.131
Faithfulness-Step **0.197†** 0.221† 0.005 0.201 0.461 0.330† -0.160 -0.019 0.182 0.108
Faithfulness-Token 0.030 0.161 -0.208 0.013 0.597 -0.044 -0.495 -0.111 0.063 **0.186†**
Info-Step 0.111 0.127 -0.111 0.125 0.544 0.161 -0.073 -0.176 0.210 0.076
ROSCOE-SARepetition-Token with sup-simcse-roberta-base0.134† 0.178† sentence embeddings0.662 0.066 0.364 0.485† 0.772 -0.004 0.157 -0.115
Faithfulness-Step 0.028 0.123 -0.459 0.112 **0.908** 0.174 -0.199 -0.162 0.021 0.059
Faithfulness-Token -0.021 0.133 -0.227 0.021 0.752 0.084 -0.398 -0.134 0.119 0.052
Info-Step -0.040 -0.003 -0.362 0.024 0.777 0.015 -0.296 -0.321† 0.038 0.039
Repetition-Token 0.062 0.150 0.401 0.021 -0.078 0.337† 0.670 -0.074 -0.007 -0.131
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
Info-Chain -0.040 0.008 -0.082 -0.005 0.539 -0.164 -0.015 -0.070 0.086 -0.082
ROSCOE-SSRepetition-Step with all-mpnet-base-v20.051 sentence embeddings0.143 0.546 0.043 -0.024 0.411† 0.723 **0.038** -0.018 -0.195†
Info-Chain 0.076 0.128 -0.700 0.122 -0.005 0.367† 0.117 -0.140 0.068 -0.018
ROSCOE-SSRepetition-Step with sup-simcse-roberta-base0.077 0.117 sentence embeddings0.633 0.036 0.141 0.508† 0.684 -0.026 0.025 -0.155
Info-Chain -0.115 -0.038 -0.169 0.059 0.539 -0.260 -0.476 -0.169 -0.081 -0.009
Repetition-Step 0.104 0.132 0.787 0.042 0.136 **0.535†** **0.811** 0.006 0.069 -0.169†
ROSCOE-LI
Source-Consistency 0.059 0.016 0.546 **0.206** -0.029 0.006 0.010 -0.241 -0.139 -0.063
Self-Consistency 0.162† **0.250†** 0.536 0.104 0.383 0.372† 0.223 -0.091 0.061 -0.075
ROSCOE-LC
Grammar -0.076 0.014 0.101 -0.223† -0.335 0.104 0.393 -0.215 **0.517†** -0.144
Perplexity-Step -0.026 -0.053 **0.797** 0.037 -0.607 0.020 -0.019 -0.071 -0.330 -0.039
Perplexity-Chain -0.141† -0.237† 0.324 -0.126 -0.650 -0.373† -0.481 -0.151 -0.284 0.039
50
-----
p p
Table 38: Somers’ D correlation of all metrics on GSM8K human judged dataset analyzing step-by-step reasoning on
overall chain and step-level perspectives. All metrics are measured reference-based on (s, h). The highest correlation
overall for each aspect on each dataset is in bold, second best is underlined. Correlation scores with p-value < 0.05 are
marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC MATH GRAM MISS
Rouge-1 0.572† 0.506† 0.590† 0.513† 0.532† 0.498† 0.694 0.533† 0.582† 0.430 0.686†
Rouge-2 0.551† 0.520† 0.566† 0.511† 0.467† 0.439† 0.758 0.555† 0.617† 0.480 0.713†
Rouge-L 0.591† 0.542† 0.605† 0.559† 0.452† **0.613†** 0.662 0.575† 0.715† 0.457 0.730†
BLEURT 0.487† 0.391† 0.502† 0.389† 0.392 0.222 0.682 0.404† 0.233 0.283 0.597†
BERTScore 0.505† 0.425† 0.488† 0.475† 0.477† 0.585† 0.763 0.483† 0.677† 0.347 0.627†
BARTScore 0.429† 0.352† 0.522† 0.317† 0.315 0.128 0.692 0.436† 0.279† 0.237 0.555†
BARTScore+ 0.531† 0.455† 0.579† 0.460† 0.273 0.303 0.722 0.501† 0.577† 0.334 0.679†
BARTScore-P 0.343† 0.280† 0.376† 0.268† 0.210 0.062 0.621 0.298† 0.207 0.227 0.441†
PRISM 0.579† 0.511† 0.593† 0.531† 0.363† 0.392† 0.707 0.557† 0.540† 0.423 0.728†
CTC-Relevance -0.047 -0.093 -0.065 -0.065 -0.134 -0.286 0.460 -0.055 0.056 0.242 -0.102
CTC-Consistency -0.272† -0.279† -0.259† -0.264† -0.182 -0.385† -0.399 -0.192† -0.233 0.028 -0.320†
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Hallucination 0.466† 0.460† 0.398† 0.445† 0.371† 0.462† 0.091 0.439† 0.624† 0.319 0.519†
Redundancy 0.464† 0.463† 0.396† 0.455† 0.379† 0.552† 0.242 0.459† 0.699† 0.472 0.538†
Semantic Coverage-Step -0.030 0.002 0.017 -0.154 -0.237 -0.084 -0.298 0.023 0.335 0.064 0.002
Missing Step 0.484† 0.419† 0.509† 0.364† 0.357† 0.325 0.333 0.438† 0.685† 0.207 0.590†
Reasoning Alignment 0.613† 0.590† 0.570† 0.573† 0.464† 0.529† 0.460 0.587† **0.725†** 0.446 0.711†
ROSCOE-SACommonsense with all-mpnet-base-v20.411 sentence embeddings† 0.348† 0.424† 0.289† 0.264 0.246 0.404 0.358† 0.677† 0.367 0.494†
Hallucination 0.702† 0.621† 0.704† 0.630† **0.629†** 0.422† 0.818 0.644† 0.340 0.064 0.846†
Redundancy 0.453† 0.411† 0.451† 0.417† 0.406† 0.353† 0.505 0.468† 0.195 0.204 0.570†
Semantic Coverage-Step -0.066 -0.062 0.005 -0.042 -0.148 0.329† -0.576 0.006 0.232 0.291 -0.085
Missing Step 0.501† 0.449† 0.520† 0.450† 0.402† 0.224 0.566 0.488† 0.503† 0.186 0.572†
Reasoning Alignment 0.583† 0.523† 0.559† 0.558† 0.380† 0.381† 0.571 0.569† 0.418† 0.296 0.716†
ROSCOE-SACommonsense with sup-simcse-roberta-base0.626† 0.529 sentence embeddings† 0.657† 0.594† 0.578† 0.305 **0.828** 0.574† 0.421† -0.125 0.758†
Hallucination **0.806†** **0.707†** **0.731†** **0.729†** 0.625† 0.357† 0.611 **0.712†** 0.575† -0.105 **0.896†**
Redundancy 0.734† 0.661† 0.665† 0.699† 0.598† 0.463† 0.717 0.664† 0.517† -0.061 0.840†
Semantic Coverage-Step 0.063 0.063 0.127 0.057 0.216 0.300 -0.111 0.161 -0.205 **0.492** 0.023
Missing Step 0.722† 0.631† 0.685† 0.691† 0.517† 0.353† 0.808 0.655† 0.482† 0.003 0.821†
Reasoning Alignment 0.712† 0.628† 0.654† 0.711† 0.487† 0.435† 0.717 0.655† 0.603† 0.161 0.848†
Commonsense 0.780† 0.659† 0.721† 0.727† 0.486† 0.324 0.813 0.691† 0.510† -0.217 0.887†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
ROSCOE-SSSemantic Coverage-Chain with all-mpnet-base-v20.404 sentence embeddings† 0.364† 0.452† 0.324† 0.214 0.381† 0.601 0.350† 0.172 -0.196 0.444†
ROSCOE-SSSemantic Coverage-Chain with sup-simcse-roberta-base0.170† 0.171 sentence embeddings† 0.189† 0.203† -0.055 0.205 0.237 0.186† -0.014 0.041 0.243†
Semantic Coverage-Chain 0.411† 0.375† 0.422† 0.460† 0.301 0.439† 0.742 0.381† 0.275 0.148 0.506†
51
-----
p p
Table 39: Somers’ D correlation of all metrics on ESNLI human judged dataset analyzing step-by-step reasoning on
overall chain and step-level perspectives. All metrics are measured reference-based on (s, h). The highest correlation
overall for each aspect on each dataset is in bold, second best is underlined. Correlation scores with p-value < 0.05 are
marked with †. (Continued from § 6, more details in App. H.2)
QUAL COH COMMON FACT HALL RED REP LOGIC MATH GRAM MISS
Rouge-1 0.255† -0.234 -0.506† 0.072 0.359 -0.078 - -0.310† -0.054 -0.487 0.662†
Rouge-2 0.223† -0.189 -0.629† 0.103 0.429 -0.081 - -0.344† -0.149 -0.493 0.568†
Rouge-L 0.227† -0.177 -0.647† 0.084 0.383 -0.089 - -0.351† -0.500 -0.493 0.628†
BLEURT 0.221† -0.170 -0.197 0.078 -0.090 -0.108 - -0.202 0.216 -0.447 0.611†
BERTScore 0.362† -0.036 -0.281 0.182 0.269 0.143 - -0.153 0.068 -0.369 0.661†
BARTScore 0.121 -0.304 -0.200 0.003 0.007 -0.339† - -0.322† 0.338 -0.478 0.513†
BARTScore+ 0.129 -0.113 -0.306 0.164 0.048 -0.036 - -0.367† -0.095 -0.378 0.341†
BARTScore-P 0.096 -0.037 -0.244 0.188 -0.017 0.028 - -0.222 -0.946 -0.244 0.173
PRISM 0.314† 0.032 -0.386 0.086 **0.438** 0.063 - -0.131 -0.378 -0.328 **0.684†**
CTC-Relevance 0.072 **0.331†** 0.383 0.034 0.352 -0.001 - **0.400†** 0.405 **0.231** 0.033
CTC-Consistency -0.051 0.150 -0.078 -0.130 0.083 0.105 - 0.085 0.676 0.056 0.006
**ROSCOE Metrics (reference-based on (s, h))**
ROSCOE-SA with finetuned sup-simcse-roberta-base sentence embeddings
Hallucination 0.156† 0.152 0.461 -0.011 -0.021 **0.160** - 0.132 0.743 -0.275 0.170
Redundancy 0.142 0.234 **0.553†** 0.046 0.283 0.119 - 0.159 0.500 -0.253 0.145
Semantic Coverage-Step 0.153† -0.094 0.172 -0.192 -0.241 -0.065 - -0.020 -0.730 0.086 0.327†
Missing Step 0.234† -0.197 -0.375 0.239 -0.010 -0.275 - -0.226 0.527 -0.400 0.558†
Reasoning Alignment 0.278† -0.062 -0.003 0.072 0.100 -0.049 - -0.047 -0.108 -0.433 0.495†
ROSCOE-SACommonsense with all-mpnet-base-v20.142 sentence embeddings-0.094 -0.353 0.213 0.207 -0.184 - -0.148 0.676 -0.447 0.368†
Hallucination 0.174† 0.094 0.350 0.066 -0.210 -0.021 - 0.181 0.554 -0.089 0.141
Redundancy 0.219† 0.227 0.531 0.153 0.031 0.116 - 0.293 0.405 -0.133 0.133
Semantic Coverage-Step 0.185† -0.108 -0.039 0.099 -0.490 -0.229 - -0.185 -0.122 0.083 0.350†
Missing Step 0.303† -0.173 -0.603† -0.017 0.159 -0.031 - -0.180 0.689 -0.650† 0.679†
Reasoning Alignment **0.428†** -0.028 -0.128 0.294† 0.121 0.095 - -0.066 -0.351 -0.547† 0.657†
ROSCOE-SACommonsense with sup-simcse-roberta-base0.211† -0.029 sentence embeddings-0.567† 0.062 0.324 -0.016 - -0.071 **0.946** -0.564 0.489†
Hallucination 0.190† 0.086 -0.311 0.171 -0.138 -0.050 - 0.001 -0.149 -0.322 0.272†
Redundancy 0.166† 0.177 -0.208 0.249 -0.038 -0.007 - 0.023 -0.405 -0.331 0.235†
Semantic Coverage-Step 0.196† -0.235 -0.058 0.167 -0.341 -0.187 - -0.153 -0.338 -0.078 0.425†
Missing Step 0.307† -0.165 -0.508† 0.143 0.124 -0.105 - -0.139 0.405 -0.558† 0.623†
Reasoning Alignment 0.374† -0.049 -0.406 **0.317†** 0.114 0.024 - -0.148 -0.608 -0.603† 0.642†
Commonsense 0.197† 0.004 -0.467 0.087 0.352 0.021 - -0.047 0.919 -0.464 0.389†
ROSCOE-SS with finetuned sup-simcse-roberta-base sentence embeddings
ROSCOE-SSSemantic Coverage-Chain with all-mpnet-base-v20.152 sentence embeddings† -0.185 -0.356 0.266 -0.021 -0.311 - -0.255† 0.500 0.022 0.287†
ROSCOE-SSSemantic Coverage-Chain with sup-simcse-roberta-base0.320† -0.213 sentence embeddings-0.203 0.045 -0.066 0.013 - -0.309† 0.446 -0.517 0.679†
Semantic Coverage-Chain 0.339† -0.196 -0.378 0.269 0.072 -0.088 - -0.148 0.324 -0.539† 0.643†
52
-----
p p
Table 40: ROSCOE performance analysis on examples from Human Judged datasets. Errors are highlighted in red.
(Cont. from $ 6)
**Dataset** **Context** **Reasoning chain** **Score type:**
**Score value;**
**Errors**
**Comment**
Common Sense Error: Model should
subtract smaller
value (6,000) from
bigger (20,000) to
answer the question. Arithmetic
error: 6,000 minus
20,000 is -14,000.
DROP Over the next year, however, the Polish forces
were subject to attrition, as the Sejm again
refused to raise taxes and pay the army, resulting in mass desertions of unpaid soldiery. The
Polish problems were further aggravated by
the incompetent leadership of hetman Michał
Kazimierz Pac, who obstructed Sobieski’s
leadership, while the Ottomans continued to
receive reinforcements. Nonetheless in 1674
the Commonwealth resumed the offensive,
taking advantage of a new Muscovy-Ottoman
conflict that year, and the Polish-Ottoman
war remained undecided. Sobieski’s force
of 6,000 defeated 20,000 Turks and Tatars under Ibrahim Shyshman in the battle of Lwow
in August 1675. Even after the Battle of
Trembowla, the Sejm still refused his pleas
for more funds and a larger army. In 1676,
after Sobieski’s 16,000 withstood the twoweek siege of Zurawno, by 100,000 men un-[˙]
der Ibrahim Pasha, a new peace treaty was
signed, the Treaty of Zurawno. The peace[˙]
treaty partially reversing those from Buczacz:
the Ottomans kept approximately two thirds
of the territories they gained in 1672, and
the Commonwealth no longer was obliged to
pay any kind of tribute to the Empire; a large
number of Polish prisoners were released by
the Ottomans. How many was the difference
beween Sobieski’s force and the Turks and
Tatars? Claim: 14000. Is the Claim supported
by the Situation?
DROP The first Azov campaign began in the spring
of 1695. Peter the Great ordered his army to
advance towards Azov. The army comprised
crack regiments and the Don Cossacks and
was divided into three units under the command of Franz Lefort, Patrick Gordon and
Avtonom Golovin. Supplies were shipped
down the Don from Voronezh.In 1693 the Ottoman garrison of the fortress was 3,656, of
whom 2,272 were Janissaries.Between June
27-July 5 the Russians blocked Azov from
land but could not control the river and prevent resupply. After two unsuccessful attacks
on August 5 and September 25, the siege was
lifted on October 1. Another Russian army
under the command of Boris Sheremetev set
out for the lower reaches of the Dnieper to
take the Ottoman forts there. The main fort
at Gazi-Kerman was taken when its powder
magazine blew up, as well as Islam-Kerman,
Tagan and Tavan, but the Russians were not
able to hold the area and withdrew most of
their forces. By the Treaty of Constantinople
the remaining Russians were withdrawn and
the lower Dnieper was declared a demilitarized zone. What happened first: Russians
blocked Azov or Treaty of Constantinople?
Claim: Russians blocked Azov. Is the Claim
supported by the Situation?
6000-20000=14000 The answer is yes. Info-Step:
0.514;
COMMON,
MATH
The first Azov campaign began in the
spring of 1695. Peter the Great ordered
his army to advance towards Azov. The
army comprised crack regiments and
the Don Cossacks and was divided into
three units under the command of Franz
Lefort, Patrick Gordon and Avtonom
Golovin. Supplies were shipped down
the Don from Voronezh.In 1693 the
Ottoman garrison of the fortress was
3,656, of whom 2,272 were Janissaries.Between June 27-July 5 the Russians blocked Azov from land but could
not control the river and prevent resupply. After two unsuccessful attacks on
August 5 and September 25, the siege
was lifted on October 1. Another Russian army under the command of Boris
Sheremetev set out for the lower reaches
of the Dnieper to take the Ottoman forts
there. The main fort at Gazi-Kerman
was taken when its powder magazine
blew up, as well as Islam-Kerman.
53
Info-Step: Redundancy:
0.966, RED Model repeats
sentences from
context that are not
directly relevant to
the question and
can be dropped.
Info-Step score is
however high, because it measures
two-way similarity
between context
and reasoning
chain, thus being
not informative.
-----
p p
**Dataset** **Context** **Reasoning chain** **Score type:**
**Score value;**
**Errors**
**Comment**
There are only 2
_Repetition_ errors in
the dataset, both have
_Self-Consistency score_
below 0.362. Most
of the other chains
have higher scores,
thus Somers’ D correlation score for
_Self-Consistency_ and
_Repetition is 0.98 on_
GSM8K dataset.
This is an example of a
chain that was scored as
low as chains that actually have repetitions, because its steps do sound
similar. Two chains
with actual repetition errors were scored below
0.001. In total, 3 out
of 198 chains without
repetition error where
scored as low as chains
with repetition error, resulting in high 0.869
Somers’ D score.
There was no correlation (Somers’ D
score 0.014) found
between FACT error
and _Faithfulness-Step_
score. The range of
scores where the error
was and was not present
is the same (0.60, 0.81).
We provide an example
of the chain with FACT
error that was scored
among the highest.
Although there is a
clear contradiction
with a context, this
chains gets high
_Source-Consistency_
score. Overall, there
is a large number of
false-positives (with
scores as high as 0.98),
and false-negatives
(with scores as low
as 0.0) resulting in
moderate correlation
score of 0.425.
GSM8K John has 3 boxes. Each box is 5 inches by
6 inches by 4 inches. The walls are 1 inch
thick. What is the total inner volume of all
3 boxes?
GSM8K Mary is an avid gardener. Yesterday, she
received 18 new potted plants from her favorite plant nursery. She already has 2 potted plants on each of the 40 window ledges
of her large country home. Feeling generous, she has decided that she will give 1
potted plant from each ledge to friends and
family tomorrow. How many potted plants
will Mary remain with?
GSM8K Rory orders 2 subs for $7.50 each, 2 bags of
chips for $1.50 each and 2 cookies for $1.00
each for delivery. There’s a 20% delivery
fee added at check out and she wants to add
a $5.00 tip. What will her delivery order
cost?
ESNLI People standing in line of a beer and curry
line. Claim: People standing in line. Is the
Claim supported by the Situation?
Each box is 5*6*4=«5*6*4=120»120
cubic inches. So they have a total of 120*3=«120*3=360»360 cubic inches. So the total volume is 360*1=«360*1=360»360 cubic inches. Answer: 360
Mary has 2 potted plants on each of
the 40 window ledges, for a total of
2*40 = «2*40=80»80 potted plants.
She has 18 new potted plants, for a
total of 80+18 = «80+18=98»98 potted plants. She is giving away 1 potted plant from each of the 40 window ledges, for a total of 1*40 =
«1*40=40»40 plants. She will be remaining with 98-40 = «98-40=58»58
potted plants. Answer: 58
The order is $7.50 for the sub, $1.50
for chips and $1.00 for cookies so
the total order is 7.50+1.50+1.00
= $«7.50+1.50+1.00=10.0»10.00.
There’s a 20% delivery fee added
at check out so that’s 10*.20 =
$«10*.20=2.0»2.00. The order is
$10.00 and there’s a $2.00 delivery
fee so 10+2 = $«10+2=12.00»12.00.
She also wants to add a $5.00 tip
which will make the order 12+5 =
$«12+5=17.00»17.00. Answer: 17
The people are standing in line for
beer and curry, not just standing in
line. The answer is No.
54
SelfConsistency:
0.361, REP
RepetitionStep: 0.001;
No error
FaithfulnessToken:
0.801; FACT
SourceConsistency:
0.976; FACT
-----
p p
**Dataset** **Context** **Reasoning chain** **Score type:**
**Score value;**
**Errors**
**Comment**
_Self-Consistency mea-_
sures if steps contradict each other. LOGIC
identifies if there is
a logical deduction error within one of the
steps. Nevertheless,
_Self-Consistency is able_
to catch some logical
deduction errors resulting in moderate correlation, since contradictions within a step often
lead to logical errors between steps.
This chain was given a
score lower that 25% of
_Faithfulness-Step scores_
within this dataset. Second step is misaligned
with the context, containing irrelevant information, that resulted in
score punishment.
_Faithfulness-Step score_
is low, although the
reasoning is correct.
Model failed to produce
higher score because
cosine similarity-based
alignment scores are
low.
ESNLI Two women are embracing while holding to
go packages. Claim: The men are fighting
outside a deli. Is the Claim supported by
the Situation?
SemEval I spray some Pam into a frying pan. I chop
up small pieces of ham and put them into
the frying pan. I turn the stove on low heat.
I crack two eggs into a bowl. I add a little
bit of water and whisk the eggs until they
are completely mixed up. I pour the egg
mixture into the pan. I turn the heat up to
medium. I put a lid on top of the pan and
continue to cook the mixture. After a few
minutes, I check to see if all the liquid on
top of the egg has cooked so it isn’t runny
anymore. If it is completely cooked, I add
a little bit of shredded cheese on top of the
egg mixture. I cook for a few minutes until
the cheese melts. I get a spatula and flip half
of the egg on top of the other half. Then my
omelet is cooked! Who all ate the breakfast? Claim: The family.
SemEval I had bought my dad a tie for father’s day.
I’m sure I could have given it to him like
that, but I thought a nice touch would have
been to wrap it. I couldn’t wrap the tie like
that, so I bought a small cardboard box. It
was a bit narrow and the perfect size for a
tie. I put the tie in the box and taped it shut.
I had bought some blue and white wrapping
paper and some tape, so I set that besides
the box. I lay the box on top of the paper
and folded it over. I cut the excess of the
paper off and taped the paper to the box. I
folded all the sides of the paper to tuck it in
to a neat package. When I was done, I put
a nametag on it. I topped off the gift with a
small blue ribbon bow. What is the reason
for the present? Claim: It was Christmas.
Two women are embracing while
holding to go packages means they
are friends or family. They are not
fighting. The answer is No.
The answer is incorrect, because there
is no mention of a family. The only
people mentioned are the person cooking the omelet and the nurse.
SelfConsistency:
0.079;
LOGIC
_Faithfulness-_
_Step: 0.776;_
HALL
The answer is incorrect, because it _Faithfulness-_
was Father’s Day. _Step: 0.635;_
No error
55
-----
p p
Table 41: ROSCOE performance analysis on examples from Diagnostic datasets. Perturbations are highlighted in red.
(Cont. from $ 6)
**Dataset** **Context** **Reasoning chain** **Score** **type:**
**Score value;**
**Perturbation**
**Comment**
_Faithfulness-Step_ scores vary
from 0.5 to 1.0. Metric is able to
catch a hallucination and gives
the chain a low score.
_Faithfulness-Step_ scores for
chains without perturbations
vary from 0.740 to 0.990. Even
though first two sentences are
aligned with the context, the
last sentence in the chain lowers
overall average.
In this subset, Grammar scores
vary between 0.258 and 0.996. To
score grammatical correctness we
used pre-trained model, that also
punishes chains that sound unnatural, like in this example. As a result, we see moderate correlations
between Grammar scores and Semantic Error perturbation.
The Grammar score correctly
identifies grammatical issue. Although no perturbation was not
added in this chain, the reference
itself contained an error.
In this subset, Info-Chain scores
calculated using finetuned sup_simcse-roberta-base belong to the_
interval (0.530; 0.999), with 92%
of perturbed instances having
scores less then 0.92, and 95% of
non-perturbed instance that were
scored higher. Even though Info_Chain embeds chain as a whole, it_
turned out to be very sensitive to
negation perturbations, especially
when the model is finetuned, resulting in high Somers’ D correlation score of 0.955.
ProofWriter Charlie is furry. Erin is furry. Erin is
green. Erin is round. Fiona is blue. Fiona
is furry. Gary is smart. White things are
round. If something is furry and blue then
it is white. If something is blue and round
then it is red. All furry things are blue. If
Charlie is red then Charlie is green. Is the
following True, False, or Unknown? Erin
is round.
ProofWriter Anne is cold. Anne is not young. Bob is
green. Fiona is cold. Fiona is rough. Gary
is red. Gary is young. Green, rough things
are red. If Gary is cold and Gary is red
then Gary is green. If something is nice
then it is rough. Cold, rough things are
kind. If something is cold and nice then
it is young. If something is kind and cold
then it is green. All red things are nice. Is
the following True, False, or Unknown?
Gary is not nice.
EQASC What insect goes through a brood process
which are the larval and pupal steps combined. Answer: An ant has a brood which
is the larval and pupal process of the ant
after hatching
EQASC What uses a beak for catching prey? Answer: a beak is used for catching prey by
eagles.
EntailmentBank A different moon phase occurs approximately once per week. The first quarter
phase of the moon occurs after the new
moon. For a science activity, Mia had to
draw a picture of the Moon each night. If
Mia drew a new moon, what type of moon
did Mia draw about one week later? Answer: first quarter
The bear visits the cow. _Faithfulness-_
Erin is round. _Step:_ 0.602;
Halucination
Gary is red. All red
things are nice. Gary
is nice.
Most pupal steps go
through larval stages
after hatching. Brood
are the larval and pupal
stages of the ant.
A beak is used for
catching prey by some
birds. Eagles are a big
bird.
A different moon phase
doesn’t occur approximately once per week.
The first quarter phase
of the moon occurs
after the new moon.
Therefore the first quarter phase of the moon
will occur one week after the new moon
_Faithfulness-_
_Step:_ 0.740;
No perturbation
_Grammar:_
0.258 ; Semantic Error
_Grammar:_
0.529 ; No
perturbation
_Info-Chain:_
0.542; Negate
step
56
-----
p p
**Dataset** **Context** **Reasoning chain** **Score** **type:**
**Score value;**
**Perturbation**
**Comment**
In this case, a negation occurs in
the last step of the chain, which
represents a conclusion and does
not overlap in wording with the
context, and goes undetected by
the metric.
Pre-trained model, used to detect grammatical errors, gives low
scores for simple chains. In this
subset, Grammar scores belong to
the interval (0.053; 0.990).
This is an example of a chain
where the classifier model failed
to catch the grammar error
(dropped verb "find"). Even
though the model is able to
identify some grammatical errors,
_Grammar score distributions over_
perturbed and unperturbed data
overlap significantly, with mean
0.909 and variance 0.005 for unperturbed subset, and 0.854 mean
and 0.014 for perturbed. As a result, the correlation score is 0.392,
which is quite low.
In this subset, Info-Step varies between 0.005 and 0.796, flagging
most of the chains with shuffled
steps. Scores are distributed with
mean 0.356 and variance 0.056
for perturbed, and 0.705 and variance 0.004 for unperturbed chains.
Resulting Somers’ D correlation
score is 0.879.
This is an example of a chain
where perturbation in operations
got higher scores by the model.
This is an example of a chain that
was scored on a lower spectrum by
the Info-Step spectrum, although
it had no perturbation or other
type of error included.
EntailmentBank Hunting decreases animal populations. If Hunting decreases ana population decreases to zero then that imal populations. If a
organism is extinct. An animal is a kind population decreases to
of organism. Many animals are still being zero then that organism
hunted for their fur. Because of this, many is extinct. An animal
of these animals are in danger of. Answer: is a kind of organism.
extinction. Therefore if hunting de
creases the animal population to zero, then the
animal will not be extinct.
MATH What is 1[(2][235423523][)]? One raise to any power
be one, so our answer
be 1
_Info-Chain:_
0.983; Negate
step
_Grammar:_
0.053 ; Grammatical Error
_Grammar:_
0.982; Grammatical Error
_Info-Step:_
0.005; Shuffle
Operations
MATH Two fair eight-sided dice have their faces
numbered from 1 to 8. What is the expected value of the sum of the rolls of
both dice?
ASDIV The puppet show begins at 2:00 p.m. It
lasts 1 hour and 30 minutes. What time
does it end?
To _ the expected value
of a double roll, we
can simply add the expected values of the individual rolls, giving
4.5 + 4.5 = 9 .
(2 (hour) + 60
(min+hour) / 1 (hour)
/ 60 (min*hour) / 30
(min)) * 60 (min/hour)
= 3 r30; hh:mm = 3:30
p.m.
ASDIV I have 28 cents in my pocket. How coins 28/25=1 r3; 3+1=3; _Info-Step:_
could I have on minimization? 1/3=4 0.705; Shuffle
Operations
ASDIV Mrs. Hilt has $10. She spends $3 on a toy 10-(3+2)=5 _Info-Step:_
truck and $2 on a pencil case. How much 0.561; No
money does she have left? perturbation
57
-----
| [
"Olga, Golovneva",
"Spencer, Poff",
"Martin, Corredor",
"Luke, Zettlemoyer",
"Maryam, Fazel-Zarandi",
"Asli, Celikyilmaz",
"Moya, Chen"
] | 2023-09-12T00:00:00 | ICLR 2023 | true | 97 | 7 | null | http://arxiv.org/abs/2212.07919 | https://arxiv.org/abs/2212.07919 | https://www.semanticscholar.org/paper/391246ce9c59d61c94cca3f8bef56c95542a4708 |
Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension | Reading comprehension models have been successfully applied to extractive text answers, but it is unclear how best to generalize these models to abstractive numerical answers. We enable a BERT-based reading comprehension model to perform lightweight numerical reasoning. We augment the model with a predefined set of executable ‘programs’ which encompass simple arithmetic as well as extraction. Rather than having to learn to manipulate numbers directly, the model can pick a program and execute it. On the recent Discrete Reasoning Over Passages (DROP) dataset, designed to challenge reading comprehension models, we show a 33% absolute improvement by adding shallow programs. The model can learn to predict new operations when appropriate in a math word problem setting (Roy and Roth, 2015) with very few training examples. | This work enables a BERT-based reading comprehension model to perform lightweight numerical reasoning by augmenting the model with a predefined set of executable ‘programs’ which encompass simple arithmetic as well as extraction. | # Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension
**Daniel Andor, Luheng He, Kenton Lee, Emily Pitler**
Google Research
_{andor, luheng, kentonl, epitler}@google.com_
**Abstract**
Reading comprehension models have been
successfully applied to extractive text answers,
but it is unclear how best to generalize these
models to abstractive numerical answers. We
enable a BERT-based reading comprehension
model to perform lightweight numerical reasoning. We augment the model with a predefined set of executable ‘programs’ which encompass simple arithmetic as well as extraction. Rather than having to learn to manipulate numbers directly, the model can pick a
program and execute it. On the recent Discrete
Reasoning Over Passages (DROP) dataset, designed to challenge reading comprehension
models, we show a 33% absolute improvement
by adding shallow programs. The model can
learn to predict new operations when appropriate in a math word problem setting (Roy and
Roth, 2015) with very few training examples.
**1** **Introduction**
End-to-end reading comprehension models have
been increasingly successful at extractive question answering. For example, performance on the
SQuAD 2.0 (Rajpurkar et al., 2018) benchmark
has improved from 66.3 F1 to 89.5[1] in a single
year. However, the Discrete Reasoning Over Passages (DROP) (Dua et al., 2019) dataset demonstrates that as long as there is quantitative reasoning involved, there are plenty of relatively straightforward questions that current extractive QA systems find difficult to answer. Other recent work
has shown that even state-of-the-art neural models
struggle with numerical operations and quantitative reasoning when trained in an end-to-end manner (Saxton et al., 2019; Ravichander et al., 2019).
In other words, even BERT (Devlin et al., 2019) is
not very good at doing simple calculations.
[1https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
_How many more Chinese nationals are there than Euro-_
_pean nationals?_
The city of Bangkok has a population of 8,280,925 ...the
census showed that it is home to 81,570 Japanese and
**55,893 Chinese nationals, as well as 117,071 expatriates**
from other Asian countries, 48,341 from Europe, 23,418
from the Americas,...
**NAQANet: −55893**
**Ours: Diff(55893, 48341) = 7552**
Table 1: Example from the DROP development set.
The correct answer is not explicitly stated in the passage and instead must be computed. The NAQANet
model[2](Dua et al., 2019) predicts a negative number of
people, whereas our model predicts that an operation
Diff should be taken and identifies the two arguments.
In this work, we extend an extractive QA
system with numerical reasoning abilities. We
do so by asking the neural network to synthesize small programs that can be executed. The
model picks among simple programs of the form
Operation(args, ...), where the possible operations include span extraction, answering yes or
no, and arithmetic. For math operations, the arguments are pointers to numbers in the text and, in
the case of composition, other operations. In this
way, the burden of actually doing the computation
is offloaded from the neural network to a calculator tool. The program additionally provides a thin
layer of interpretability that mirrors some of the
reasoning required for the answer. For example,
in Table 1, the model predicts subtraction (Diff)
over two numbers in the passage, and executes it
to produce the final answer.
We start with a simple extractive question answering model based on BERT (Devlin et al.,
2019), and show the following:
1. Predicting unary and binary math operations
[2https://demo.allennlp.org/reading-comprehension/](https://demo.allennlp.org/reading-comprehension/NzQwNjg1)
[NzQwNjg1](https://demo.allennlp.org/reading-comprehension/NzQwNjg1)
5947
-----
with arguments resulted in significant improvements on the DROP dataset.
2. Our model can smoothly handle more traditional reading comprehension inputs as well
as math problems with new operations. Cotraining with the CoQA (Reddy et al., 2018)
dataset improved performance on DROP. The
DROP+CoQA trained model had never seen
multiplication or division examples, but can
learn to predict these two ops when appropriate in a math word problem setting (Roy and
Roth, 2015) with very few training examples.
**2** **Background and Related Work**
**Discrete Reasoning over Paragraphs (DROP)**
(Dua et al., 2019) is a reading comprehension task
that requires discrete reasoning. Inspired by semantic parsing tasks where models need to produce executable ‘programs’, it keeps the opendomain nature of reading comprehension tasks
such as SQuAD 2.0 (Rajpurkar et al., 2018). As
shown in Table 1, the system needs to perform
fuzzy matching between “from Europe” and “Eu_ropean nationals” in order to identify the argu-_
ments.
**Numerically-aware QANet (NAQANet)** (Dua
et al., 2019) is the current state-of-the-art[3] system for DROP. It extends the QANet model (Yu
et al., 2018) with predictions for numbers (0–9)
and summation operations. For the latter, it performs a 3-way classification (plus, minus, and
zero) on all the numbers in the passage.
While certain binary operations are expressible
efficiently with flat sign prediction, it is difficult
to generalize the architecture. Moreover, each
number is tagged independently, which can cause
global inconsistencies; for instance, in Table 1 it
assigns a single minus label and no plus labels,
leading to a prediction of negative people.
**Mathematical Word Problems** have been addressed with a wide variety of datasets and approaches; see Zhang et al. (2018) for an overview.
One such dataset of arithmetic problems is the Illinois dataset (Roy and Roth, 2015). The problems
are posed in simple natural language that has a
specific, narrow domain, For example: “If there
_are 7 bottle caps in a box and Linda puts 7 more_
_bottle caps inside, how many bottle caps are in_
[3https://leaderboard.allenai.org/drop/submissions/public](https://leaderboard.allenai.org/drop/submissions/public)
_the box?”. Unlike DROP, the problems are typi-_
cally 1–3 sentences long and do not require reading complex passages. Instead, the main challenge
is mathematical reasoning. According to Zhang
et al. (2018), the current state of the art uses syntactic parses and deterministic rules to convert the
input to logical forms (Liang et al., 2016).
**3** **Model**
We extend a BERT-based extractive reading comprehension model with a lightweight extraction
and composition layer. For details of the BERT
architecture see Devlin et al. (2019). We only rely
on the representation of individual tokens that are
jointly conditioned on the given question Q and
passage P . Our model predicts an answer by selecting the top-scoring derivation (i.e. program)
and executing it.
**Derivations** We define the space of possible
derivations D as follows:
_• Literals: {YES, NO, UNKNOWN, 0, . . . 9}._
_• Numerical operations:_ including various
types of numerical compositions of numbers[4], such as Sum or Diff.
_• Text spans: composition of tokens into text_
spans up to a pre-specified length.
_• Composition of compositions: we only con-_
sider two-step compositions, including merging text spans and nested summations.
The full set of operations are listed in Table 2.
For example, Sum is a numerical operation that
adds two numbers and produces a new number.
While we could recursively search for compositions with deep derivations, here we are guided by
what is required in the DROP data and simplify
inference by heavily restricting multi-step composition. Specifically, spans can be composed into
a pair of merged spans (Merge), and the sum of
two numbers (Sum) can subsequently be summed
with a third (Sum3). The results in Table 3 show
the dev set oracle performance using these shallow
derivations, by answer type.
**Representation and Scoring** For each derivation d ∈D, we compute a vector representation
**hd and a scalar score ρ(d, P, Q) using the BERT**
output vectors. The scores ρ are used for computing the probability P (d | P, Q) as well as for
pruning. For brevity, we will drop the dependence
on P and Q in this section.
4Numbers are heuristically extracted from the text.
5948
-----
**Derivations** **Example Question** **Answer Derivation**
_Literals_ YES, NO, UNKNOWN, 0, 1 ..., 9 How many field goals did Stover kick? 4
_Numerical_ Diff100 : n0 → 100 − _n1_ How many percent of the national populationdoes not live in Bangkok? 100 − 12.6 = 87.4
Sumas well as: : n0, n Diff1 →,n Mul0 + n, Div1 How many from the census were in Ungheni andCahul? 3261591, 828+28, 763 =
_Text spans_ Span : i, j → _s_ Does Bangkok have more Japanese or Chinese “Japanese”
nationals?
_Compositions_ Merge : s0, s1 →{s0, s1} What languages are spoken by more than 1%, butfewer than 2% of Richmond’s residents? “Hmong-Mien lan-guages”, “Laotian”
Sum3n1) + : n n2 0, n1, n2 → (n0 + How many residents, in terms of percentage,speak either English, Spanish, or Tagalog? Sum2.11 = 89(64.56.8, 23.13)+
Table 2: Operations supported by the model. s, n refer to arguments of type span and number, respectively. i, j
are the start and end indices of span s. The omitted definitions of Diff, Mul, and Div are analogous to Sum.
_Literals are scored as ρ(d) = wd[⊺][MLP][lit][(][h][CLS][)][,]_
where hCLS is the output vector at the [CLS] token of the BERT model (Devlin et al., 2019).
_Numeric operations use the vector representa-_
tions hi of the first token of each numeric argument. Binary operations are represented as
**hd = MLPbinary(hi, hj, hi ◦** **hj)** (1)
and scored as ρ(d) = wop[⊺] **[h]d[, where][ h]d** [represents]
the binary arguments and op is the operation type.
_◦_ is the Hadamard product. Unary operations such
as Diff100 are scored as wop[⊺] [MLP]unary[(][h]i[)][.]
_Text spans are scored as if they were another_
binary operation taking as arguments the start and
end indices i and j of the span (Lee et al., 2017):
**hd = MLPspan(hi, hj)** (2)
and scored as ρ(d) = wspan[⊺] **[h]d[.]**
_Compositions of compositions are scored with_
the vector representations of its children. For example, the ternary Sum3, comprising a Sum and a
number, is scored with wSum3[⊺] [MLP][Sum3][(][h][d][0][,][ h][k][)][,]
where hd0 corresponds to the representation from
the first Sum, and hk is the representation of the
third number. The composition of two spans is
scored as wMerge[⊺] [MLP][Merge][(][h]d0[,][ h]d1[,][ h]d0 _[◦][h]d1[)][,]_
where hd0 and hd1 are span representations from
(2). The intuition for including hd0 ◦ **hd1 is that**
it encodes span similarity, and spans with similar
types are more likely to be merged.
This strategy differs from the NAQANet baseline in a few ways. One straightforward difference
is that we use BERT as the base encoder rather
than QANet. A more meaningful difference is that
we model all derivations in the unified op scoring
framework described above, which allows generalizing to new operations, whereas NAQANet
would require more large-scale changes to go beyond addition and subtraction. Generalizing the
model to new ops is a case of extending the
derivations and scoring functions. In Section 4,
we will show the impact of incrementally adding
Diff100, Sum3, and Merge.
**3.1** **Training**
We used exhaustive pre-computed oracle derivations D[∗] following Dua et al. (2019). We
marginalized out all derivations d[∗] that lead to the
answer[5] and minimized:
_J (P, Q, D[∗]) = −_ log _P_ (d[∗] _| P, Q)_
_d[∗]X∈D[∗]_
exp ρ(d, P, Q)
_P_ (d | P, Q) = _d[′][ exp][ ρ][(][d][′][, P, Q][)]_
P
If no derivation lead to the gold answer (D[∗] is
empty), we skipped the example.
**Pruning** During inference, the Merge and
Sum3 operations are composed from the results
of Span and Sum operations, respectively. The
space of possible results of Merge is quadratic in
the number |S| of possible spans. With |S| ∼ 10[4],
the complete set of Merge instances becomes
overwhelming. Similarly, with |N| ∼ 100 numbers in each passage, there are millions of possible
Sum3 derivations. To do training and inference efficiently, we kept only the top 128 Span and Sum
results when computing Merge and Sum3.[6]
5In practice we capped the number of derivations at 64,
which covers 98.7% of the training examples.
6During training, the pruned arguments had recall of 80–
90% after 1 epoch and plateaued at 95–98%.
5949
-----
_Oracle_ Overall Dev Overall Test Date (1.6%) Number (62%) Span (32%) Spans (4.4%)
Dev EM EM F1 EM F1 EM F1 EM F1 EM F1 EM F1
NAQANet 46.75 50.39 44.24 47.77 32.0 39.6 44.9 45.0 58.2 64.8 0.0 27.3
Our basic[7] _80.03_ 66.50 69.91 - - 57.0 65.1 65.8 66.1 78.0 82.6 0.0 35.7
+Diff100 _88.75_ 75.52 78.82 - - 53.6 61.3 80.3 80.5 78.4 82.8 0.0 35.8
+Sum3 _90.16_ 76.70 80.06 - - 58.0 64.6 81.9 82.1 78.9 83.4 0.0 36.0
+Merge _93.01_ 76.95 80.48 - - 58.1 61.8 82.0 82.1 78.8 83.4 5.1 45.0
+CoQA _93.01_ **78.00** **81.56** **76.93** **80.47** **59.5** **66.4** **83.0** **83.2** **79.8** **84.2** **5.8** **46.8**
+Ensemble _93.01_ **78.95** **82.54** **78.15** **81.78** **59.7** **67.7** **83.9** **84.1** **81.2** **85.5** 5.4 46.5
_Oracle_ _93.01_ _71.6_ _94.5_ _95.8_ _60.5_
Table 3: Accuracies on the DROP dev and test set in terms of exact match (EM) and token-level F1. The righthand columns show the performance breakdown with different answer types on the development set. The largest
improvements come from Date, Number, and Spans (answers with multiple spans). Oracle rows and columns
indicate the performance that could be achieved by perfect selections of derivations. The ensemble used 6 models.
**Spurious ambiguities** Of the answers for which
we could find at least one oracle derivation, 36%
had two or more alternatives. During training,
the model became effective at resolving many of
these ambiguities. We monitored the entropy of
_P_ (d[∗] _| P, Q) for the ambiguous examples as train-_
ing progressed. At the start, the entropy was 2.5
bits, which matches the average ambiguous oracle length of ∼ 6 alternatives. By the end of 4
epochs, the average entropy had dropped to < 0.2
bits, comparable to a typical certainty of 95–99%
that one of the derivations is the correct one.
**4** **Experiments**
Our main experiments pertain to DROP (Dua
et al., 2019), using DROP and, optionally, CoQA
(Reddy et al., 2018) data for training. Preprocessing and hyperparameter details are given
in the supplementary material. In addition to full
DROP results, we performed ablation experiments
for the incremental addition of the Diff100,
Sum3, and Merge operations, and finally the
CoQA training data. We ran on the CoQA dev set,
to show that the model co-trained on CoQA can
still perform traditional reading comprehension.
To investigate our model’s ability to do symbolic
reasoning at the other extreme, we performed fewshot learning experiments on the Illinois dataset of
math problems (Roy and Roth, 2015).
**4.1** **DROP Results**
As shown in Table 3, our model achieves over
50% relative improvement (over 33% absolute)
over the previous state-of-the-art NAQANet system. The ablations indicate that the improvements
due to the addition of extra ops (Diff100, Sum3,
Merge) are roughly consistent with their proportion in the data. Specifically, the Diff100 and
Sum3 derivations increase the oracle performance
by 8.7% and 1.4% respectively, corresponding to
model improvements of roughly 9% and 1.1%, respectively. Answers requiring two spans occur
about 2.8% of the time, which is a 60.4% proportion of the Spans answer type. Merge only improves the Spans answer type by 9%, which we
think is due to the significant 11:1 class imbalance
between competing single and multiple spans. As
a result, multiple spans are under-predicted, leaving considerable headroom there.
Pre-training on CoQA then fine-tuning on
DROP lead to our best results on DROP, reported
in Table 3. After fine-tuning on DROP, the model
forgot how to do CoQA, with an overall F1 score
of 52.2 on the CoQA dev set. If one prefers a
model competent in both types of input, then the
forgetting can be prevented by fine-tuning on both
CoQA and DROP datasets simultaneously. This
resulted in dev set F1 scores of 82.2 on CoQA and
81.1 on DROP. The CoQA performance is decent
and compares well with the pre-trained model performance of 82.5. The 0.5% drop in DROP performance is likely attributable to the difference between pre-training versus fine-tuning on CoQA.
We ensembled 6 models (3 seeds × 2 learning
rates) for an additional 1% improvement.
**4.2** **Results on Math Word Problems**
We trained our model on the Illinois math word
problems dataset (Roy and Roth, 2015), which
contains answers requiring multiplication and
7The “basic” model includes all Ddirect, all S, and the simple binary operations Sum and Diff.
5950
-----
Google Research Language team.
**References**
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language understanding. In NAACL.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In
_NAACL._
Kenton Lee, Tom Kwiatkowski, Ankur P. Parikh, and
Dipanjan Das. 2017. Learning recurrent span representations for extractive question answering. CoRR,
abs/1611.01436.
Chao-Chun Liang, Kuang-Yi Hsu, Chien-Tsung
Huang, Chung-Min Li, Shen-Yu Miao, and Keh-Yih
Su. 2016. A tag-based statistical english math word
problem solver with understanding, reasoning and
explanation. In IJCAI.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you dont know: Unanswerable questions for SQuAD. In Proceedings of the 56th An_nual Meeting of the Association for Computational_
_Linguistics (Volume 2: Short Papers), pages 784–_
789.
Abhilasha Ravichander, Aakanksha Naik, Carolyn Penstein Ros´e, and Eduard H. Hovy. 2019.
Equate: A benchmark evaluation framework for
quantitative reasoning in natural language inference.
_CoRR, abs/1901.03735._
Siva Reddy, Danqi Chen, and Christopher D. Manning.
2018. CoQA: A conversational question answering
challenge. CoRR, abs/1808.07042.
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In EMNLP.
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transac_tions of the Association for Computational Linguis-_
_tics, 3:1–13._
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. _CoRR,_
abs/1904.01557.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018. MathDQN: Solving arithmetic word problems via deep
reinforcement learning. In AAAI.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui
Zhao, Kai Chen, Mohammad Norouzi, and Quoc V
Le. 2018. QANet: Combining local convolution
with global self-attention for reading comprehension. Proceedings of ICLR.
Roy et al. (2015) 73.9
Liang et al. (2016) **80.1**
Wang et al. (2018) 73.3
Our basic: IL data 48.6 ± 5.3
+ Mul and Div 74.0 ± 6.0
+ DROP data **83.2 ± 6.0**
Table 4: Accuracy on the Illinois (IL) dataset[8]of
562 single-step word problems, using the five crossvalidation folds of Roy and Roth (2015). Standard deviations were computed from the five folds. Roughly
half the questions require the use of Sum and Diff,
and half require Mul and Div.
division—operations not present in DROP—as
well as addition and subtraction, in roughly equal
proportion. Given the small (N = 562) dataset
size, training and evaluation is done with five-fold
cross-validation on a standardized set of splits.
As shown in Table 4, when we added Mul and
Div to our basic DROP operations, the model
was able to learn to use them. Transferring from
the DROP dataset further improved performance
beyond that of Liang et al. (2016), a model specific to math word problems that uses rules over
dependency trees. Compared to other more general systems, our model outperforms the deep reinforcement learning based approach of Wang et al.
(2018).
**5** **Conclusions and Future Work**
We proposed using BERT for reading comprehension combined with lightweight neural modules for computation in order to smoothly handle both traditional factoid question answering and
questions requiring symbolic reasoning in a single unified model. On the DROP dataset, which
includes a mix of reading comprehension and numerical reasoning, our model achieves a 33% absolute improvement over the previous best. The
same model can also do standard reading comprehension on CoQA, and focused numerical reasoning on math word problems. We plan to generalize
this model to more complex and compositional answers, with better searching and pruning strategies
of the derivations.
**Acknowledgements**
We would like to thank Chris Alberti and Livio
Baldini Soares for tremendously helpful discussions, and we are grateful to all members of the
[8https://cogcomp.org/page/resource view/98](https://cogcomp.org/page/resource_view/98)
5951
-----
Dongxiang Zhang, Lei Wang, Nuo Xu, Bing Tian Dai,
and Heng Tao Shen. 2018. The gap of semantic
parsing: A survey on automatic math word problem
solvers. IEEE transactions on pattern analysis and
_machine intelligence._
5952
-----
| [
"Daniel, Andor",
"Kenton, Lee",
"Emily, Pitler",
"Luheng, He"
] | 2019-01-01T00:00:00 | EMNLP 2019 Main | true | 96 | 10 | null | https://www.aclweb.org/anthology/D19-1609 | null | https://www.semanticscholar.org/paper/52fa450740913a6cdcb4d9395b45e203f46cab79 |
Scaling Relationship on Learning Mathematical Reasoning with Large Language Models | Mathematical reasoning is a challenging task for large language models (LLMs), while the scaling relationship of it with respect to LLM capacity is under-explored. In this paper, we investigate how the pre-training loss, supervised data amount, and augmented data amount influence the reasoning performances of a supervised LLM. We find that pre-training loss is a better indicator of the model's performance than the model's parameter count. We apply supervised fine-tuning (SFT) with different amounts of supervised data and empirically find a log-linear relation between data amount and model performance, and we find better models improve less with enlarged supervised datasets. To augment more data samples for improving model performances without any human effort, we propose to apply Rejection sampling Fine-Tuning (RFT). RFT uses supervised models to generate and collect correct reasoning paths as augmented fine-tuning datasets. We find with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs. We also find RFT brings more improvement for less performant LLMs. Furthermore, we combine rejection samples from multiple models which push LLaMA-7B to an accuracy of 49.3\% on GSM8K which outperforms the supervised fine-tuning (SFT) accuracy of 35.9\% significantly. | It is found that pre-training loss is a better indicator of the model's performance than the model’s parameter count and that with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs. | ## SCALING RELATIONSHIP ON LEARNING MATHEMATI### CAL REASONING WITH LARGE LANGUAGE MODELS
**Zheng Yuan[∗], Hongyi Yuan[∗†], Chengpeng Li[†], Guanting Dong[†], Keming Lu**
**Chuanqi Tan, Chang Zhou, Jingren Zhou**
Alibaba DAMO Academy
_{yuanzheng.yuanzhen,yuanhongyi.yhy}@alibaba-inc.com_
_{lichengpeng.lcp,dongguanting.dgt,lukeming.lkm}@alibaba-inc.com_
_{chuanqi.tcq,ericzhou.zc,jingren.zhou}@alibaba-inc.com_
ABSTRACT
Mathematical reasoning is a challenging task for large language models (LLMs),
while the scaling relationship of it with respect to LLM capacity is under-explored.
In this paper, we investigate how the pre-training loss, supervised data amount,
and augmented data amount influence the reasoning performances of a supervised
LLM. We find that pre-training loss is a better indicator of the model’s performance than the model’s parameter count. We apply supervised fine-tuning (SFT)
with different amounts of supervised data and empirically find a log-linear relation between data amount and model performance, and we find better models
improve less with enlarged supervised datasets. To augment more data samples
for improving model performances without any human effort, we propose to apply Rejection sampling Fine-Tuning (RFT). RFT uses supervised models to generate and collect correct reasoning paths as augmented fine-tuning datasets. We
find with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs. We also find RFT
brings more improvement for less performant LLMs. Furthermore, we combine
rejection samples from multiple models which push LLaMA-7B to an accuracy of
49.3% on GSM8K which outperforms the supervised fine-tuning (SFT) accuracy
of 35.9% significantly. We release our codes and rejection sampling augmented
[data in https://github.com/OFA-Sys/gsm8k-ScRel.](https://github.com/OFA-Sys/gsm8k-ScRel)
1 INTRODUCTION
Large language models (LLMs) (Anil et al., 2023; Touvron et al., 2023b; OpenAI, 2023) have shown
considerable abilities in various math reasoning tasks (Saxton et al., 2019; Cobbe et al., 2021; Lightman et al., 2023). It is of interest to understand, predict, and improve an LLM’s math reasoning
ability based on different pre-trained LLMs and supervised datasets. With this knowledge, we can
better decide the effort we put into improving the LLM or augmenting the dataset. Many recent
works are focusing on using different prompts (Wei et al., 2022b; Yao et al., 2023) or ensembling /
reranking multiple times of inferences (Cobbe et al., 2021; Uesato et al., 2022; Wang et al., 2023;
Lightman et al., 2023) to improve models’ reasoning performances. While in-context learning (ICL)
and performing multiple inferences can improve performance, it is computationally expensive and
not suitable for online deployment scenarios. Therefore, we focus on the performance of the supervised LLMs with inference only once which is a setting closer to online deployment.
To this end, we empirically investigate the scaling relationship of factors that influence the math
reasoning abilities of a supervised LLM, including pre-training losses, the amount of supervised
data, and the amount of augmented data. Firstly, we analyze the supervised fine-tuning (SFT) and
ICL performance of LLMs. We observe that the pre-training loss is approximately negatively linear
correlated to the SFT and ICL accuracy in a given interval which is a better performance indicator
than pre-trained model sizes or pre-trained token counts. Secondly, we analyze the relationship
_∗Contributed Equally._
_†Work done during internships at Alibaba DAMO Academy._
-----
Figure 1: The key findings of scaling relationship on learning math reasoning ability with LLMs.
between SFT and different amounts of supervised data. We observe that the model performance
has a log-linear relation versus the supervised data amount while the increase diminishes with the
better pre-trained model. Thirdly, we want to leverage the model itself to generate more supervised
data to reinforce its reasoning ability and analyze the scaling relationship of the augmented data
amount. We apply rejection sampling on SFT models to sample and select correct reasoning paths
as augmented dataset (Uesato et al., 2022; Zhu et al., 2023). We use these augmented datasets to
fine-tune base LLMs which would achieve better performances compared to SFT and we denote it
as rejection sampling fine-tuning (RFT). We find the key factor influencing RFT performance is the
distinct reasoning path amount which can be increased by sampling more times or combing samples
from multiple models. We apply RFT on several pre-trained LLMs and show larger improvement
on less performant models. We discuss the reason RFT works is it provides multiple reasoning paths
which makes LLMs have better reasoning generalization. We also discuss that RFT is much cheaper
than pre-training in computational resources while training an LLM with lower pre-training loss is
the fundamental solution.
The key findings of this paper are shown in Figure 1 and are summarized here:
- When the pre-training loss gets smaller (i.e. the pre-trained model gets better), the model
reasoning performances of SFT and ICL increase linearly within a range. The SFT performance improves slower than ICL.
- SFT improves in a log-linear manner with the increase of supervised data amount. The
benefits of increasing data amount diminish as the pre-trained model gets better.
- The model performance for RFT improves as the distinct reasoning path amount increases.
The RFT performance improves slower than SFT.
- The combination of rejection sampling samples from multiple models further enhances the
RFT performance, resulting in an accuracy of 49.3 for LLaMA-7B (+13.4 compared to
SFT), 50.3 for LLaMA2-7B (+8.7 compared to SFT), 52.1 for LLaMA-13B (+9.1 compared to SFT), and 55.4 for LLaMA2-13B (+5.4 compared to SFT).
2 RELATED WORKS
**Learning Math Reasoning with LLMs** Recent research on LLMs has discovered the emergent
ability to solve reasoning tasks beyond a certain model scale (Wei et al., 2022a). Such reasoning
abilities in LLMs can be elicited by fine-tuning, few-shot prompting, or zero-shot prompting (Cobbe
et al., 2021; Wei et al., 2021; Nye et al., 2021; Wei et al., 2022b; Kojima et al., 2022). A large
-----
amount of research focuses on the reasoning tasks of math word problems (MWP), and methods are
evaluated on the benchmarks spanning different levels of MWPs (Koncel-Kedziorski et al. (2016);
Patel et al. (2021); Lan et al. (2021); Cobbe et al. (2021); Jie et al. (2022); Yuan et al. (2023a); Fu
et al. (2023a), inter alia). The core idea of improving the mathematical reasoning ability of LLMs
is to aggregate various sampled reasoning paths during either fine-tuning or inference. Cobbe et al.
(2021) trained and devised a reasoning path verifier to select the correct results during inference.
Wang et al. (2023) proposed to sample various reasoning paths during inference and then derive the
final result by majority voting on the answers or through verifiers (Li et al., 2023). Several works
applied the idea of rejection sampling along with other techniques to filter the diverse sampled
reasoning paths for fine-tuning data augmentation (Huang et al., 2022; Zelikman et al., 2022; Ni
et al., 2023; Zhu et al., 2023). Rejection sampling is a simple-yet-effective fine-tuning augmentation
technique and is also used for LLM alignment with human preference (Bai et al., 2022; Yuan et al.,
2023b; Dong et al., 2023; Touvron et al., 2023b; Song et al., 2023). Uesato et al. (2022) explored
to use of reinforcement learning methods for improving the mathematical reasoning abilities of
LLMs and they further discussed the difference between outcome-based and process-based reward
modeling. Followed by Lightman et al. (2023), they collected large-scale process-based supervision
signals through human annotation and verified that LLMs can benefit more from process-based
reward modeling with human-annotated supervision than outcome-based reward modeling. There is
also prior research that distilled the emergent reasoning ability of LLMs to small language models
(Fu et al., 2023b; Shridhar et al., 2023). Compared to previous works (Zelikman et al., 2022; Uesato
et al., 2022; Zhu et al., 2023; Ni et al., 2023), we are using a simpler way of generating augmented
samples without any trained process-level reward models and we are focusing on researching the
scaling relationship between LLMs and math reasoning ability.
**Scaling Laws of Large Language Models** It is important to understand and predict the performance gain as the language model scales up. Kaplan et al. (2020) first investigated and derived a
predictable relationship on how the number of model parameters and data sizes contribute to the
loss over many orders of magnitudes. Hoffmann et al. (2022) refined the scaling laws in (Kaplan
et al., 2020) and found the scaling laws for computation-optimal training. Muennighoff et al. (2023)
explored and extended the scaling laws under a data-constrained scenario. Besides investigating the
scaling performance for pre-training, Gao et al. (2022) discussed the scaling laws for overparameterized reward models for alignment with human preference, and Hernandez et al. (2021) developed
scaling laws for transferring performance from pre-trained models to downstream tasks. Henighan
et al. (2020); Caballero et al. (2022) investigated scaling laws of math problems. In this paper, we
are investigating the scaling relationships of large language models on learning math word problems
with pre-training losses, supervised data amount, and augmented data amount.
3 THE FACTORS OF MATH REASONING ABILITY IN SUPERVISED LLM
The target of this paper is to try to understand the performances of supervised LLMs in math reasoning. We expect a pre-trained LLM ρ to learn reasoning ability from a supervised reasoning dataset
. The dataset is defined by = _qi, ri, ai_ _i, where q is a question, r is a chain-of-thought reason-_
_D_ _D_ _{_ _}_
ing path, and a is a numerical answer. We perform supervised fine-tuning on dataset D to obtain an
SFT model π. We use π to generate reasoning paths and answers in the test set by greedy decoding
and report the accuracy (i.e. acc or maj1@1) as our metric here.
3.1 MODEL ACCURACY VS. PRE-TRAINING LOSS
Previous works state that the larger LLM shows better reasoning ability across the same series of
models (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a;b), and we find LLaMA
outperforms GPT-3 which shows the model parameter counts should not be the only indicator of
reasoning ability. While LLMs have different architectures, model parameters, and pre-training
token numbers, we find the pre-training loss is a stable performance indicator of the math reasoning
ability and we use it to represent the model instead of using their model parameters and pre-training
token numbers.
We analyze the SFT and ICL (8-shot) performance of GPT-3 (Brown et al., 2020), LLaMA (Touvron
et al., 2023a), LLaMA2 (Touvron et al., 2023b), and GPT-4 (OpenAI, 2023). The pre-training losses
-----
Figure 2: The performance of SFT (blue lines) and ICL (red lines) settings on GSM8K. GPT-4 states
they use some part of the GSM8K data in pre-training, and suggest others consider its performance
between SFT and ICL.
of these models are observed in their paper, we should notice that pre-training losses correspond to
different pre-training datasets and different tokenizers which means they could not be compared
strictly (and we cannot use it to do any sort of regression directly) while the tendency among these
losses is still enlightening. We use the results of GPT-3 fine-tuning from (Cobbe et al., 2021) and
we fine-tune LLaMA and LLaMA2 on the GSM8K training set (detailed in Appendix A.1). For
in-context learning, we use the results from LLaMA (Touvron et al., 2023a) and LLaMA2 (Touvron
et al., 2023b) paper.
In Figure 2, we can find that:
- The pre-training losses are approximately negatively linear correlated to the SFT and ICL
accuracy during the given pre-training loss interval.
- SFT outperforms ICL consistently, while the improvements diminish when the pre-training
loss is lower.
The linear relation of SFT and ICL accuracy may only work in the given interval. The reasons are
(1) the slope of ICL is steeper than SFT, while the SFT performance should be greater than ICL
performance; (2) the accuracy can not bigger than 1 or smaller than 0. It should be using − log(acc)
instead of acc as the dependent variable theoretically while we find an apparent linear relationship
among pre-training loss and acc and use acc as the dependent variable. LLaMA-2 7B(13B) can be
viewed as an approximation of continue-training of LLaMA 7B(13B). As it trains longer, its ICL
and SFT performance both improve without changing the parameter count. From the observations,
one effective way to improve reasoning ability is to train a better base model with lower pre-training
loss (Pre-training is all you need!). The models with lower pre-training loss improve less from the
fine-tuning which may be due to the models having already obtained more reasoning abilities during
pre-training and the supervised data can provide less signal to supervise them.
-----
Figure 3: The performance of SFT with different amounts of supervised data on GSM8K.
3.2 MODEL ACCURACY VS. SUPERVISED DATA COUNT
Supervised fine-tuning does improve LLMs’ reasoning ability, we want to know how the supervised data amount influences the model’s improvement. We fine-tune LLaMA and LLaMA2 with
_{1, 1/2, 1/4, 1/8, 1/16, 1/32} amount of the training set from GSM8K (detailed in Appendix A.2)._
We want to use this experiment to extrapolate the model performances if we have more supervised
data. In Figure 3, we plot the results of training with different amounts of supervised data. From
this figure, we can observe that:
- The model performance has a log-linear relation versus data amount. When the data amount
doubles, the performance increases by a unit.
- Better model needs more amount of data to outperform its ICL performance.
- Better model benefits less when supervised data amount doubles.
The log-linear relation is stable during {1, 1/2, 1/4, 1/8} amount of the training data. From the observation, it is straightforward to enlarge the training dataset to improve the performance, especially
for worse models. For better models, it benefits less which echoes that better models have learned
more reasoning ability during pre-training.
3.3 MODEL ACCURACY VS. AUGMENTED DATA COUNT
Increasing the amount of math reasoning labeled data is difficult, especially proposing a new question. It is easy for a well-educated student to solve hundreds of math word problems per day, but
it is very hard to come up with diverse and educational math problems. So our direction changes
to augment new data using existing resources. We have tried augmenting new queries (detailed in
Appendix D.1) and augmenting revisions (detailed in Appendix D.2). These approaches have none
to marginal improvements compared to SFT. We find a simplified version of rejection sampling (Zhu
et al., 2023) is a naive and effective way to augment new reasoning paths and can improve the model
performance. And we find the key factor influences fine-tuning on rejection sampling (RFT) augmented data is distinct reasoning path amount. Combining rejection sampling samples from multiple
-----
|Setting|7B 7B-2 13B 13B-2 33B|
|---|---|
|Pretrain loss|1.8 1.75 1.73 1.68 1.62|
|ICL SFT|11.0/18.1 14.6/- 17.8/29.3 28.7/- 35.6/53.1 35.9/48.7 41.6/55.4 43.0/55.2 50.0/61.7 54.6/-|
|RFT k = 100 Correct paths per question Distinct paths per question|41.7/52.7 47.5/58.7 49.1/59.9 54.8/65.4 54.5/- 53.3 60.8 62.5 71.6 88.7 5.25 5.19 5.26 5.29 2.78|
Table 1: The performance of RFT with k = 100 on GSM8K compared with SFT and ICL. Distinct
path amount means distinct equation list amount here.
models, we can further fine-tune a LLaMA-7B model to an accuracy of 49.3 (compared with SFT
35.9) and a LLaMA-13B model to an accuracy of 52.1 (compared with SFT 43.0).
**Rejection Sampling Fine-tuning** The SFT model π obtains the ability to perform zero-shot chainof-thought reasoning, and we use π to generate more correct reasoning paths rij to supply the
training dataset. For each qi, we generate k candidate reasoning paths and answers r, a with a
temperature of 0.7 following (Cobbe et al., 2021). We first filter out reasoning paths with wrong
answers a ̸= ai or wrong calculations based on Python evaluation. Each reasoning path contains
a list of equations ej, and we select one reasoning path rij for each distinct equation list as the
augmented data and remove other reasoning paths with the same list of equations to deduplicate
similar reasoning paths. Different order of elements (e.g. 3 + 4 = 7 and 4 + 3 = 7) or different
order of equations (e.g. 1 + 2 = 3, 3 + 4 = 7 and 1 + 4 = 5, 2 + 5 = 7) are considered different.
It is helpful for models to know these orders can be exchanged and is hard for models to learn this
with only one reasoning path each problem. We define _π_ [=][ D ∪{][q][i][, r][ij][, a][i][}][i,j] [as the augmented]
_D[′]_
dataset. We fine-tune D[′] on pre-trained LLM ρ to πRFT as RFT, and we detail how we apply RFT
in Appendix A.3. We list the results of RFT with sampling k = 100 candidate reasoning paths
on LLaMA and LLaMA-2 in Table 1. For ICL, SFT, and RFT, we list the maj1@1 (accuracy) and
maj1@100 (sample 100 times and calculate accuracy based on majority voting) as metrics.
In the case of 7B and 13B models, RFT yields an approximate increase of 5 to 6 points in maj1@1
and about 4 points increase in maj1@100. For 33B models, RFT does not improve performance
compared to SFT. The main reason comes from the augmented samples from rejection sampling.
We can find that better models generate more correct reasoning paths per question. For LLaMA33B-SFT, it can generate an average of 88.7 correct paths per question. However, it overfits the
training set and has difficulty generating more diverse paths on the training set questions. Rejection
sampling with 33B is very time-consuming and we do not conduct a temperate grid search, we have
tried using a larger temperate 1.0 for decoding LLaMA-33B-SFT models, it generates 82.4 correct
paths and 4.77 distinct paths per question which is more diverse than using temperate 0.7 but still
less diverse than 7B and 13B models. We admit there should be a temperate (or generation config)
that can produce more distinct paths and generate good results for RFT in 33B and even larger
models while it does need more computation resources for inference compared to sampling using
7B and 13B models. We will show we can use 7B and 13B models only for rejection sampling to
improve the 33B model.
**Model Accuracy vs Rejection Sampling Data Count** To understand the performance of RFT,
we vary k among 1, 3, 6, 12, 25, 50, 100 and apply RFT. We also have another setting of k = 100
while not removing any reasoning paths denoted as no dedup. We list the RFT results with different
_k on Figure 4. Comparing using RFT with k = 100 and no dedup, the performance is similar and_
shows that it is better to estimate RFT performance based on distinct reasoning path amount instead
of RFT augmented sample counts. Furthermore, using deduplication has better performances for 3
of 4 models and needs much less training time.
When using k = 3, RFT outperforms SFT by 2 points stably. For most data points, using larger
_k leads to better performances. However, the merits of RFT are decreasing when doubling k. We_
calculate different paths per question for different k in Table 2. We can see that the amount of
different reasoning paths is not growing quickly along k growing. In Figure 3, we know doubling
training samples can have a linear performance improvement. Doubling reasoning paths should
-----
Figure 4: The performance of RFT with different amounts of sampling count k on GSM8K.
|k|7B 7B-2 13B 13B-2 33B|
|---|---|
|1 3 6 12 25 50 100|1.17 1.19 1.15 1.18 1.06 1.44 1.47 1.41 1.45 1.16 1.74 1.78 1.69 1.76 1.28 2.20 2.23 2.11 2.21 1.46 2.93 2.93 2.88 2.94 1.77 3.94 3.91 3.90 3.94 2.19 5.25 5.19 5.26 5.29 2.78|
|400 (U13B) 500 (U33B)|12.84 13.65|
Table 2: Different reasoning paths per question generated by different SFT models with different k.
improve less than doubling training samples since obtaining different reasoning paths does not obtain
any new questions. Therefore, doubling k leads to diminished performance improvements.
**Combining rejection sampling samples from multiple models** The experiment results above
demonstrate performance boosts in mathematical reasoning, benefitting from rejection sampling.
Through case studies in 4.1, we show that rejection sampling can augment training data with reasoning paths of diverse calculation processes. However, the reasoning paths sampled from one single
SFT model can be logically non-diverse. Therefore, we expect to further improve the mathematical
reasoning performance by leveraging rejection sampled reasoning paths aggregated from different
models. We denote two final datasets as DU13B[′] [and][ D]U33B[′] [, which are aggregated from rejection sam-]
means models under a certain size, 7B/13B/33B means LLaMA-7B/13B/33B and 7B2/13B2 meanspling different models DU13B[′] [=][ D]7B[′] _[⊕D]7B2[′]_ _[⊕D]13B[′]_ _[⊕D]13B2[′]_ [and][ D]U33B[′] [=][ D]U13B[′] _[⊕D]33B[′]_ [, where U]
LLaMA2-7B/13B. ⊕ means an aggregation process in which all the reasoning paths from different
sets are first combined and then Algorithm 1 is applied to deduplicate the reasoning paths with the
same calculation process regarding the equation forms and orders.
We can see, through the results visualized in Figure 5, that using the aggregated dataset DU13B[′] [and]
_DU33B[′]_ [can lead to uniformly better performance than fine-tuning with datasets from a single model]
across different model sizes. RFT on these two augmented datasets DU13B[′] [and][ D]U33B[′] [decreases the]
performance gaps among the same size models in SFT and RFT k = 100 which mean the combined
augmented datasets provide enough reasoning supervision to fulfill the pre-training gap. We can
assume with sufficient supervised data amounts, the performance indicator should be the model size
but not the pre-training losses.
-----
Figure 5: The performance of RFT with rejection sampling samples from multiple models.
We have stated that it is expensive to apply RFT k = 100 on 33B models and it needs a temperate
grid search to achieve an improvement compared to SFT. However fine-tuning on DU13B[′] [has similar]
rejection sampling computational cost compared with sampling 100 times on 33B and achieve better
performance.
Another phenomenon is including D33B[′] [in aggregation barely influences the performance. To give]
a more comprehensive analysis of the results, we calculate the average reasoning path number per
question in Table 2 and depict a Venn diagram to visualize the source of different reasoning paths
shown in Figure 6. In Table 2, the average reasoning path numbers of DU13B[′] [and][ D]U33B[′] [surpass]
those of a single model by large amounts, while DU33B[′] [only have slightly more reasoning paths than]
_DU13B[′]_ [by 0.81. In the meanwhile, as shown in Figure 6, the models under and including the size of]
13B can contribute unique reasoning paths of similar proportion in DU33B[′] [around 15%. However,]
only 6.5% of the reasoning paths can be exclusively acquired from LLaMA-33B-SFT model. This
shows that the SFT model of 33B can provide limited reasoning diversity when sampling the training
questions. This finding is consistent with the results above in Table 1, indicating the 33B model (and
possibly 65B and 70B models) can well memorize the human-annotated reasoning paths.
For 65B models, we find using DU13B[′] [does not improve the performance compared to SFT. The]
reason can be better models benefit less from the supervised sample amounts while it has learnt
more reasoning ability during pre-training.
Overall, we can come to the conclusion that (1) RFT improves the mathematical reasoning performance of (worse) LLMs through diverse reasoning paths from rejection sampling of the SFT
models, and aggregating more diverse reasoning paths can improve the performance further. (2)
Different SFT models can contribute reasoning paths with different calculation processes from rejection sampling, leading to more diverse training data for RFT, and LLMs of larger parameter sizes
may degrade in generating diversified reasoning paths as a result of overfitting the training questions. There may be a generation config or training config for large enough LMs not to overfit on
the training dataset while it is not trivial to find them.
**Comparing to other baselines** We compare our RFT results of training on DU13B[′] [to several base-]
lines and the results are detailed in Table 3. Although LLaMA and LLaMA2 are top-tier opensourced LLMs [1], their mathematical reasoning performances still lag behind the current proprietary
LLMs which are of larger parameter scales, such as GPT-4 and PaLM2. Compared to results on
[1https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-----
Figure 6: The Venn diagram of the proportions of the reasoning calculation paths that each model
provide to DU33B[′] [. For example, 15.5% (in the yellow part) of the reasoning calculation paths in]
_DU33B[′]_ [can only be exclusively found in the rejection sampling results from LLaMA2-13B-SFT.]
open-resourced models, our results on LLaMA present better performance than two recent stateof-the-art reasoning augmentation methods. Our RFT method is simpler compared to CoRE, since
RFT does not require training verifier models and decoding with Monte Carlo Tree Search (MCTS).
Compared to other open-sourced aligned language models, we can find that 7B models struggle at
a level of 35 scores which are very similar to SFT performances of LLaMA-7B. We guess they use
GSM8K during their pre-training phase following (OpenAI, 2023) or human alignment fine-tuning
phase following (Qingyi et al., 2023). Using our augmented dataset DU13B[′] [to replace the original]
GSM8K can significantly boost their 7B models’ performances.
4 DISCUSSION
4.1 DIFFERENT DISTRIBUTION OF REASONING PATHS
In the aforementioned analysis of RFT training data, we observe that rejection sampling can augment
the training question with diverse reasoning calculation paths. In this section, we investigate whether
RFT models can learn to generate different reasoning paths to reach the correct answers. We finetune LLaMA and LLaMA2 of 7B and 13B on DU13B[′] [. During inference, we sample 100 different]
reasoning paths from each trained model for each test set question with a temperature of 0.7. For
each question, we compute the number of different calculation processes presented in 100 sampled
reasoning paths that lead to the correct answer and draw histograms with respect to test set questions.
SFT and RFT models on self-sampled datasets (RFT k=100) are included for comparison.
As shown in Figure 7, the models trained by RFT on DU13B[′] [exhibit more question counts than the]
models trained by RFT k=100 and SFT on the larger numbers of unique calculation processes. There
are more question counts for SFT models where all the sampled reasoning paths only correspond to
one single calculation process and SFT models can barely generate more than 8 different calculation
-----
|Base Model Training|maj1@1 maj1@K*|
|---|---|
|Proprietary LLMs GPT-4 (OpenAI, 2023) 5-shot ICL GPT-3-175B (Brown et al., 2020) SFT PaLM2 (Anil et al., 2023) 8-shot ICL PaLM-540B (Chowdhery et al., 2022) 8-shot ICL Chinchilla-70B (Uesato et al., 2022) 5-shot ICL Chinchilla-70B SFT|92.0 - 34.0 - 80.7 91.0@K=40 56.5 74.4@K=40 43.7 58.6@K=96 58.9 77.7@K=96|
|Open-sourced LLMs GPT-Neo-2.7B (Black et al., 2021) FCS + PCS (Ni et al., 2023) GPT-J-6B (Wang & Komatsuzaki, 2021) CoRE (Zhu et al., 2023) ChatGLM2-6B (Zeng et al., 2022) 8-shot ICL ChatGLM2-6B Human Alignment ChatGLM2-12B 8-shot ICL ChatGLM2-12B Human Alignment InternLM-7B (Team, 2023) 4-shot ICL InternLM-7B Human Alignment LLaMA-7B SFT|19.5 41.4 34.9 63.2@K=40 32.4 - 28.1 - 40.9 - 38.1 - 31.2 - 34.5 35.9 48.7|
|Our RFT on open-sourced LLMs LLaMA-7B RFT-U13B LLaMA2-7B RFT-U13B LLaMA-13B RFT-U13B LLaMA2-13B RFT-U13B|49.3 61.8 50.3 65.6 52.1 66.2 55.4 69.1|
Table 3: Compare GSM8K results with other baselines. RFT-U13B means models fine-tuned on
_DU13B[′]_ [. FCS and PCS represent fully-correct solutions and partially-correct solutions respectively.]
*K=100 if not specified.
Figure 7: The histograms of question numbers solved with different numbers of unique reasoning
calculation paths. We show the difference in question counts between SFT and RFT U13B in two
cases where the numbers of unique reasoning calculation paths are 1 or more than 10.
processes for a question. This analysis demonstrates that diverse reasoning calculation paths in
training data can equip the LLMs with finding diverse reasoning logic for solving math problems.
-----
|Model size|7B 7B-2 13B 13B-2 33B 65B 70B|
|---|---|
|Pre-train FLOPs SFT FLOPs RFT Inference FLOPs RFT-U33B FLOPs|4.2 × 1022 8.4 × 1022 7.8 × 1022 1.6 × 1023 2.7 × 1023 5.5 × 1023 8.4 × 1023 1.7 × 1017 3.3 × 1017 7.7 × 1017 1.3 × 1018 1.7 × 1018 1.4 × 1018 2.6 × 1018 6.9 × 1018 1.4 × 1019 1.8 × 1019 3.0 × 1018 5.7 × 1018 1.3 × 1019 2.2 × 1019 3.0 × 1019|
|Pre-train GPU hrs SFT GPU hrs RFT Inference GPU hrs RFT-U33B GPU hrs|82k 184k 135k 368k 530k 1022k 1720k 0.6 4 40 74 80 10 0.1k 0.1k 4.3k 4.5k 9 62 0.6k 1k 1.2k|
|ICL Accuracy SFT Accuracy RFT-U33B Accuracy|11.0 14.6 17.8 28.7 35.6 50.9 56.8 35.9 41.6 43.0 50.0 54.6 59.3 63.2 49.1 51.2 51.4 55.3 57.9 - -|
Table 4: The statistics of FLOPs and GPU hours required for pre-training, SFT, RFT inference, and
RFT. We take the pre-training GPU hours from Touvron et al. (2023a;b). The GPU hours for RFT
inference are calculated for 7,473 train set questions and 100 samples per question. To make the best
of GPUs and properly fit models into the GPU memory, we tune the inference batch size. For 33B,
65B, and 70B models, we use DeepSpeed ZeRO3 (Rasley et al., 2020) for distributed training. All
the GPU hours are based on NVIDIA A100 80GB GPU. Note we use non-embedding parameters to
compute FLOPs in our experiments.
4.2 TOWARDS EXCELSIOR MATHEMATICAL REASONING
From our findings, there are two main factors that can improve mathematical reasoning abilities
given a preset amount of human-annotated samples, including: (1) Pre-training the LLMs to lower
losses; (2) Augmenting fine-tuning with rejection sampling. Through extensive experiments, we empirically verify the scaling relationships between the mathematical reasoning performance of LLM
with both factors respectively. Out of the consideration of sustainable NLP, in this section, we investigate the possible computational resources required to extrapolate the mathematical performance of
LLMs by both factors and discuss how to improve the performance more efficiently.
We estimate the pre-training, SFT, RFT inference, and RFT FLOPs following Kaplan et al. (2020)
and GPU times in Table 4 which is detailed in Appendix E. We can find that the cost times of SFT
(∼ 1 × 10[−][5]) and RFT (∼ 1 × 10[−][4]) are negligible compared to pre-training. One can always use
SFT and RFT to improve models’ performance. However, it could be hard to use RFT to further
boost performance. Since we need much more sampling counts (at an exponential level) to increase
distinct reasoning paths and there exists an upper bound of distinct reasoning path amount for a
given math reasoning question.
We assume that performance follows RFT>SFT>ICL, from the findings in this paper we know
the improvement speed follows RFT<SFT<ICL. And if we have an omnipotent language model
which has a pre-training loss that is the same as the corpus randomness, it could have RFT = SFT
= ICL = 100. Thus when you pre-train a better language model (i.e. smaller pre-training loss),
your model’s performance still follows RFT>SFT>ICL but their performance gaps are diminishing.
Since you can obtain an RFT model without too much effort (compared to pre-training), then the
most important thing we should do is to decrease the model’s pre-training loss. From LLaMA-7B to
LLaMA2-7B, it needs to add 4.2×10[22] FLOPs to obtain a 2.1 improvement in the RFT-U33B setting
with a 0.05 pre-training loss decrease. From LLaMA-7B to LLaMA-13B, it adds 3.6 × 10[22] FLOPs
to obtain a 2.3 improvement in the RFT-U33B setting with a 0.07 pre-training loss decrease. While
minimizing pre-training loss is expensive compared to SFT and RFT, we believe other abilities may
follow a similar pattern and better pre-training can benefit all other tasks.
5 CONCLUSIONS
In this paper, we are investigating the scaling relationship in supervising math reasoning abilities
with large language models. We find the relationship between math performance and pre-training
-----
losses, supervised data amount, and distinct reasoning paths. We find that better language models
benefit less with SFT and RFT, and the most important thing is to pre-train a better language model
towards excellent math reasoning abilities.
6 ACKNOWLEDGEMENT
We would like to express our sincere appreciation to Tianhang Zhu, Runji Lin, Kai Dang, Keming
Lu, Wei Wang, and Junyang Lin for their valuable insights and contributions to this paper.
7 LIMITATIONS
In this paper, we miss the following parts which are very important for building math reasoning
abilities for LLMs and should be discussed in the revised version of this paper or future works.
- RFT for 65B and 70B LLaMA models.
- Pre-training on the math-related corpus. This is obviously useful shown in Lewkowycz
et al. (2022). While the pre-training loss obtained here cannot align with general domain
pre-trained models’ losses.
- We do not regress any scaling laws in this paper since many numbers are estimated and
pre-training losses, ICL prompts and SFT settings of various models may not be aligned.
REFERENCES
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder,
Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan,
and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Cur[ran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/](https://proceedings.neurips.cc/paper_files/paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf)
[paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf.](https://proceedings.neurips.cc/paper_files/paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf)
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report.
_arXiv preprint arXiv:2305.10403, 2023._
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli TranJohnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse,
Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna
Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario
Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional ai:
Harmlessness from ai feedback, 2022.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Au[toregressive Language Modeling with Mesh-Tensorflow, March 2021. URL https://doi.](https://doi.org/10.5281/zenodo.5297715)
[org/10.5281/zenodo.5297715.](https://doi.org/10.5281/zenodo.5297715)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh,
Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler,
Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot
[learners. ArXiv, abs/2005.14165, 2020. URL https://api.semanticscholar.org/](https://api.semanticscholar.org/CorpusID:218971783)
[CorpusID:218971783.](https://api.semanticscholar.org/CorpusID:218971783)
-----
Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken neural scaling laws. arXiv
_preprint arXiv:2210.14891, 2022._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh,
Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam
Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James
Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin
Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret
Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick,
Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica
Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas
Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways,
2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum,
and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment,
2023.
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Chain-of-thought hub: A
continuous effort to measure large language models’ reasoning performance, 2023a.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language
models towards multi-step reasoning. arXiv preprint arXiv:2301.12726, 2023b.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization, 2022.
Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo
Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generative
modeling. arXiv preprint arXiv:2010.14701, 2020.
Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. Scaling laws for transfer,
2021.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy,
Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre.
Training compute-optimal large language models, 2022.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei
Han. Large language models can self-improve, 2022.
Zhanming Jie, Jierui Li, and Wei Lu. Learning to reason deductively: Math word problem solving
as complex relation extraction. In Proceedings of the 60th Annual Meeting of the Association
_for Computational Linguistics (Volume 1: Long Papers), pp. 5944–5955, Dublin, Ireland, May_
2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.410. URL
[https://aclanthology.org/2022.acl-long.410.](https://aclanthology.org/2022.acl-long.410)
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language
[models. CoRR, abs/2001.08361, 2020. URL https://arxiv.org/abs/2001.08361.](https://arxiv.org/abs/2001.08361)
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave,
and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL
[https://openreview.net/forum?id=e2TBb5y0yFf.](https://openreview.net/forum?id=e2TBb5y0yFf)
-----
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North
_American Chapter of the Association for Computational Linguistics: Human Language Technolo-_
_gies, pp. 1152–1157, San Diego, California, June 2016. Association for Computational Linguis-_
[tics. doi: 10.18653/v1/N16-1136. URL https://aclanthology.org/N16-1136.](https://aclanthology.org/N16-1136)
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan, Bing Tian Dai, Yan Wang, Dongxiang Zhang,
and Ee-Peng Lim. Mwptoolkit: An open-source framework for deep learning-based math word
problem solvers, 2021.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making
language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–_
[5333, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https:](https://aclanthology.org/2023.acl-long.291)
[//aclanthology.org/2023.acl-long.291.](https://aclanthology.org/2023.acl-long.291)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language
models, 2023.
Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Alex Polozov, Christopher Meek, Dragomir
Radev, and Jianfeng Gao. Learning math reasoning from self-sampled correct and partiallycorrect solutions. In The Eleventh International Conference on Learning Representations, 2023.
[URL https://openreview.net/forum?id=4D4TSJE6-K.](https://openreview.net/forum?id=4D4TSJE6-K)
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David
Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models,
2021.
OpenAI. Gpt-4 technical report, 2023.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple
math word problems? In Proceedings of the 2021 Conference of the North American Chapter
_of the Association for Computational Linguistics: Human Language Technologies, pp. 2080–_
2094, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.
[naacl-main.168. URL https://aclanthology.org/2021.naacl-main.168.](https://aclanthology.org/2021.naacl-main.168)
Si Qingyi, Wang Tong, Gu Naibin, Liu Rui, and Lin Zheng. Alpaca-cot: An instruction-tuning
platform with unified interface of instruction collection, parameter-efficient methods, and large
[language models. https://github.com/PhoebusSi/alpaca-CoT, 2023.](https://github.com/PhoebusSi/alpaca-CoT)
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings
_of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,_
KDD ’20, pp. 3505–3506, New York, NY, USA, 2020. Association for Computing Machin[ery. ISBN 9781450379984. doi: 10.1145/3394486.3406703. URL https://doi.org/10.](https://doi.org/10.1145/3394486.3406703)
[1145/3394486.3406703.](https://doi.org/10.1145/3394486.3406703)
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models, 2019.
-----
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. Distilling reasoning capabilities into
smaller language models. In Findings of the Association for Computational Linguistics: ACL
_2023, pp. 7059–7073, Toronto, Canada, July 2023. Association for Computational Linguistics._
[URL https://aclanthology.org/2023.findings-acl.441.](https://aclanthology.org/2023.findings-acl.441)
Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang.
Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023.
InternLM Team. Internlm: A multilingual language model with progressively enhanced capabilities.
[https://github.com/InternLM/InternLM, 2023.](https://github.com/InternLM/InternLM)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee
Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation
language models, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher,
Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy
Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models,
2023b.
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia
Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and
outcome-based feedback, 2022.
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language
[Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.](https://github.com/kingoflolz/mesh-transformer-jax)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2023. URL
[https://openreview.net/forum?id=1PL1NIMMrw.](https://openreview.net/forum?id=1PL1NIMMrw)
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan
Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. _ArXiv, abs/2109.01652, 2021._ [URL https://api.semanticscholar.org/](https://api.semanticscholar.org/CorpusID:237416585)
[CorpusID:237416585.](https://api.semanticscholar.org/CorpusID:237416585)
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori Hashimoto,
Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language
[models. Trans. Mach. Learn. Res., 2022, 2022a. URL https://api.semanticscholar.](https://api.semanticscholar.org/CorpusID:249674500)
[org/CorpusID:249674500.](https://api.semanticscholar.org/CorpusID:249674500)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. _ArXiv, abs/2201.11903, 2022b._ [URL https://api.semanticscholar.org/](https://api.semanticscholar.org/CorpusID:246411621)
[CorpusID:246411621.](https://api.semanticscholar.org/CorpusID:246411621)
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large
language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023a.
-----
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank
responses to align language models with human feedback without tears, 2023b.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. STar: Bootstrapping reasoning with
reasoning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Ad_[vances in Neural Information Processing Systems, 2022. URL https://openreview.net/](https://openreview.net/forum?id=_3ELRdg2sgI)_
[forum?id=_3ELRdg2sgI.](https://openreview.net/forum?id=_3ELRdg2sgI)
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint
_arXiv:2210.02414, 2022._
Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E. Gonzalez. The wisdom of
hindsight makes language models better instruction followers, 2023.
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang,
and Yujiu Yang. Solving math word problems via cooperative reasoning induced language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
_(Volume 1: Long Papers), pp. 4471–4485, Toronto, Canada, July 2023. Association for Compu-_
[tational Linguistics. URL https://aclanthology.org/2023.acl-long.245.](https://aclanthology.org/2023.acl-long.245)
A DETAILED EXPERIMENT SETTING
A.1 SFT ON GSM8K
We fine-tune GSM8K with 3 epochs and a batch size of 128 on NVIDIA A100 GPUs. We use 8
GPUs for 7B and 13B models, 16 GPUs for 33B models, and 32 GPUs for 65B and 70B models
during fine-tuning. We use a peak learning rate of 2e-5 with a 3% learning rate warmup. We
evaluate the results on the final epoch. We use greedy decode to calculate maj1@1 and decode with
temperature 0.7 to calculate maj1@100.
A.2 SFT ON DOWNSAMPLED GSM8K
We random downsample GSM8K dataset for fine-tuning. We find that using 3 epochs for little
data will result in very poor results which are listed in Table 5. We search training epoch among
_{3,_ _data fraction3_ _[}][ and evaluate the latest epoch. We report better test results among these two different]_
epoch settings.
A.3 REJECTION SAMPLING FINE-TUNING ON GSM8K
We use an SFT model π to sample on training dataset for k = 100 times with a temperature of
0.7. We extract the equation list in generated reasoning paths by finding <<equation>> first,
removing all white spaces, and joining the equation string list by a special symbol to a string (called
-----
it get equation in our algorithm) for deduplication. We select the reasoning paths by this algorithm:
**Algorithm 1: Reasoning Path Selection**
**Data: Reasoning paths for question q, Rq**
**Result: Selected reasoning paths for question q,** _q_
_R[s]_
**1 Initialize selected reasoning paths, Rq[s]** [=][ list][()]
**2 Initialize appeared equation set, Eq[s]** [=][ set][()]
**3 for r in Rq do**
**4** **if get equation(r) /∈Eq[s]** **[then]**
**5** _Rq[s][.append(][r][);]_
**6** _Eq[s][.update([][get equation(][r][)][])]_
**7** **end**
**8** **else**
**9** find r[s] _∈Rq[s]_ [s.t.][ get equation(][r][s][)][ =][ get equation(][r][)][;]
**10** **if** _i:ri[s][∈E]q[s][,r]i[s][̸][=][r][s][ Levenstein dist(][r][,][ r]i[s][)][ >][ P]i:ri[s][∈E]q[s][,r]i[s][̸][=][r][s][ Levenstein dist(][r][s][,][ r]i[s][)][ then]_
**11** _r[s]_ = r;
**12** **end[P]**
**13** **end**
**14 end**
We are trying to find the most dissimilar reasoning paths based on Levenstein distances. The idea
comes from we want diverse reasoning paths for better generalization.
B DETAILED RESULTS OF SFT AND RFT
We list detailed results of SFT and RFT in Table 5 and 6.
|Model|Data Epoch|7B 7B-2 13B 13B-2 33B 65B 70B-2|
|---|---|---|
|ICL-8shot|0 0|11.0 14.6 17.8 28.7 35.6 50.9 56.8|
|SFT SFT SFT SFT SFT|1/32 96 1/16 48 1/8 24 1/4 12 1/2 6|9.5 10.1 8.6 17.1 18.6 25.2 27.4 14.3 15.5 14.2 23.9 25.9 28.9 33.6 17.9 20.8 18.4 28.5 31.6 35.8 38.9 21.6 27.7 26.7 36.3 38.4 45.6 46.9 29.0 33.1 35.2 43.7 48.6 50.5 57.5|
|SFT SFT SFT SFT SFT SFT|1/32 3 1/16 3 1/8 3 1/4 3 1/2 3 7.4K 3|7.8 14.2 0.0 5.9 25.3 28.9 15.8 12.7 16.2 7.4 27.7 29.2 39.5 52.8 16.5 21.8 19.5 33.4 39.3 46.0 57.8 22.7 28.1 27.4 37.5 44.6 50.4 57.8 30.9 34.6 36.1 45.3 50.8 55.6 61.0 35.9 41.6 43.0 50.0 54.6 59.3 63.2|
|RFT no dedup RFT no dedup RFT no dedup RFT no dedup RFT no dedup RFT no dedup|1/32 3 1/16 3 1/8 3 1/4 3 1/2 3 400K 3|37.5 - - - - - - 38.3 - - - - - - 41.1 - - - - - - 41.2 - - - - - - 43.9 - - - - - - 43.6 46.7 46.9 53.7 - - -|
|RFT k=1 RFT k=3 RFT k=6 RFT k=12 RFT k=25 RFT k=50 RFT k=100|∼12K 3 ∼15K 3 ∼18K 3 ∼22K 3 ∼28K 3 ∼35K 3 ∼47K 3|37.6 43.4 42.7 52.1 - - - 39.0 45.3 45.2 51.9 - - - 39.5 45.6 46.8 52.2 - - - 41.6 45.3 48.0 53.1 - - - 40.9 46.5 46.0 52.6 - - - 40.7 47.0 49.4 54.5 - - - 41.7 47.5 49.1 54.8 54.5 - -|
|RFT-U13B RFT-U33B|104K 3 110K 3|49.3 50.3 52.1 55.4 56.5 59.0 62.3 49.1 51.2 51.4 55.3 57.9 59.7 64.8|
Table 5: Detailed numerical results in this paper, some experiments are still under running. We
report maj1@1 (accuracy) in this table.
-----
|Setting|7B 7B-2 13B 13B-2 33B 65B 70B-2|
|---|---|
|ICL-8shot SFT RFT k=100 RFT-U13B RFT-U33B|11.0/18.1 14.6/- 17.8/29.3 28.7/- 35.6/53.1 50.9/69.7 56.8/- 35.9/48.7 41.6/55.4 43.0/55.2 50.0/61.7 54.6/72.6 59.3/69.7 63.2/73.5 41.7/52.7 47.5/58.7 49.1/59.9 54.8/65.4 54.5/- - - 49.3/61.8 50.3/65.6 52.1/66.2 55.4/69.1 56.5/- 59.0/- 62.3/- 49.1/61.6 51.2/64.1 51.4/66.3 55.3/69.1 57.9/71.3 59.7/- 64.8/-|
Table 6: Detailed numerical results in this paper, some experiments are still under running. We
report maj1@100 in this table.
|Query A|Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?|
|---|---|
|Path 1|Weng earned 12/60 = $〈〈12/60=0.2〉〉0.2 an hour for the 50 minutes that she worked yes- terday. Therefore, she earned $0.2 x 50 = $〈〈0.2*50=10〉〉10 for the 50 minutes that she worked yesterday.#### 10|
|Path 2|Weng earns 12 / 60 = $〈〈12/60=0.2〉〉0.2 per minute. So, Weng earned 50 * 0.2 = $〈〈50*0.2=10〉〉10 for the 50 minutes of babysitting. #### 10|
|Path 3|Since there are 60 minutes in an hour, then Weng did 50/60 = 〈〈50/60=0.8333〉〉0.8333 of an hour of babysitting. So, she earned $12 x 0.8333 = $〈〈12*0.8333=10〉〉10. #### 10|
|Path 4|She earns 12 * 50 / 60 = $〈〈12*50/60=10〉〉10. #### 10|
|Path 5|Weng earns 12 * (50 / 60) = $〈〈12*(50/60)=10〉〉10. #### 10|
|Query B|A local college is offering German lessons and currently has 8 students enrolled. Through advertising, 8 more became interested but a fourth of these dropped out within a day. 2 more got frustrated and left. The class then rallied to show how simple the course actually is and increased enrollment by 5 times the amount of students already enrolled in the class, but 2 had to drop it because of scheduling conflicts. After one last rally, 6 more people enrolled. As the days passed, half of the class eventually dropped, and half of the remaining students graduated. How many are still enrolled?|
|Path 1|8+8=〈〈8+8=16〉〉16 students are enrolled in the German class. 8*.25=〈〈8*.25=2〉〉2 dropped out from the new recruits. 2 more left, so the class had 16-2-2=〈〈16-2-2=12〉〉12 students left. There were 12*5+12=〈〈12*5+12=72〉〉72 total students in the class after the rally. Two people had to drop the class, so there were 72-2=〈〈72-2=70〉〉70 left. 70+6=〈〈70+6=76〉〉76 were enrolled after the final rally. 76/2=〈〈76/2=38〉〉38 are left after half the class dropped. 38/2=〈〈38/2=19〉〉19 are left after half the class graduated. #### 19|
|Path 2|8+8=〈〈8+8=16〉〉16 students are enrolled after the first rally. 16/4=〈〈16/4=4〉〉4 dropped out. 16-4=〈〈16-4=12〉〉12 remain. 12*5+12=〈〈12*5+12=72〉〉72 after the second rally. 72-2=〈〈72-2=70〉〉70 after 2 more dropped. 70+6=〈〈70+6=76〉〉76 after 6 more enrolled. 76/2=〈〈76/2=38〉〉38 remain after half the class dropped. 38/2=〈〈38/2=19〉〉19 remain after half the class graduated. #### 19|
|Path 3|Advertising brought in 8+8=〈〈8+8=16〉〉16 students. Of these, 16/4=〈〈16/4=4〉〉4 dropped out. 16-4-2=〈〈16-4-2=10〉〉10 remained. Then 10*5=〈〈10*5=50〉〉50 more enrolled. This brought the class to 50+10=〈〈50+10=60〉〉60 students. Then 60+6=〈〈60+6=66〉〉66 enrolled. So there were 66+10=〈〈66+10=76〉〉76 students. Then 76/2=〈〈76/2=38〉〉38 dropped. So 76-38=〈〈76-38=38〉〉38 remained. Then 38/2=〈〈38/2=19〉〉19 graduated. So 38-19=〈〈38-19=19〉〉19 were left. #### 19|
Table 7: Cases of generated reasoning paths with different reasoning complexity from rejection
sampling for RFT. The calculations are highlighted in red.
C CASE STUDY OF RFT
In this section, we present the cases of the training samples from rejection sampling. The case studies
would shed light on how RFT potentially improves the mathematical reasoning performance of
LLMs. The cases are shown in Table 7. As aforementioned, RFT considers the reasoning paths with
different calculation processes regarding equation forms or orders, leading to the correct answers.
In the cases from Table 7, all the reasoning paths from RFT result in the correct answer of 10, while
the calculation processes of reasoning are diverse. Path 1 and 2, as well as Path 4 and 5, are different
in the equation forms as highlighted in red. Path 1 and 2 present a two-step calculation reasoning
process while Path 4 and 5 alter to a one-step calculation reasoning process. The case demonstrates
-----
that rejection sampling can potentially provide more supervision signals that improve mathematical
reasoning performance. The filtered reasoning paths sampled from LLMs themselves are of similar
quality to the reasoning demonstrations from human annotations.
D PRELIMINARY EXPERIMENTS
D.1 SELF QUERY AUGMENTATION
Through our preliminary experiments and case studies, the errors made by the fine-tuned LLMs are
partly attributed to the incorrect reasoning chains where LLMs mistakenly understand the context
information or fail to consider all the information in the queries. Although such incorrect reasoning
chains lead to wrong answers to the original queries, the reasoning chains themselves represent
reasonable logic. For example, for the query Josh decides to try flipping a house. He buys a house
_for $80,000 and then puts in $50,000 in repairs. This increased the value of the house by 150%. How_
_much profit did he make?, a fine-tuned LLaMA model predicts The value of the house increased_
_by 80,000*.15=$12,000. So the house was worth 80,000+12,000=$92,000. So he made a profit_
_of 92,000-80,000-50,000=$42,000 where the model erroneously interprets 150% as 15%, but the_
reasoning chain is reasonable if we ignore the error.
Therefore, such wrong predictions made by the LLMs may be correct under other queries (if we
change 150% to 15% in the above example). We conduct experiments to generate queries for the
predicted reasoning chains. This is a similar idea to the hindsight experience replay (Andrychowicz
et al., 2017) in reinforcement learning where the method is designed to deal with the sparse reward
problems by changing the original objectives for the failed samples to form samples with positive
rewards. Such an idea was recently adopted by HIR (Zhang et al., 2023) to better align LLMs with
instructions.
Concretely, we reformat GSM8K reversely by predicting the query given the corresponding groundtrue reasoning result and then we fine-tune a LLaMA model on the reversed task. We use this model
to generate queries on the predicted reasoning chains by a normally fine-tuned LLaMA model on
the training set of GSM8K, formalizing a training sample for augmentation. We experiment on the
LLaMA 7B model and fine-tune models on the data mixing original and generated samples or solely
on generated samples.
The results are shown in the left subfigure in Figure 8. We can see that fine-tuning with self query
augmentation data leads to the worst results, and the performance of mixing the original data with
self query augmented data still falls short of that of the original data. The fine-tuned performance for
mathematical reasoning does not benefit from the naive idea of self query augmentation. Through
several case studies of generated data, we find that there are two major defects in the generated
data. The first one is some reasoning chains themselves are not logically reasonable, for example,
there may be some calculation errors in the reasoning chains. The second one is that the generated
query may not be suitable for a reasoning chain. The query generation model may still erroneously
interpret the information in the reasoning chains. Both defects attribute to a mediocre augmented
data quality, hence can be possible reasons for the failure of this data augmentation procedure.
D.2 SELF REVISING AUGMENTATION
We also explore improving the mathematical reasoning abilities of LLMs through revising augmentation. To equip LLaMA with revising abilities, we generate a revising dataset by first sampling
_K reasoning paths from a fine-tuned LLaMA model, then concatenating the query with one of the_
sampled reasoning paths using a template, and finally pairing with the ground-true reasoning path
to form a training sample. We use a sampling temperature of 0.7 for generating reasoning paths.
During inference, we use the fine-tuned revising model to revise the prediction from the normally
fine-tuned model.
The results are shown in the middle subfigure of Figure 8. We can see that with K = 1 the revising
model improves the final accuracy marginally comparing 36.09% to 35.90%. Surprisingly, as we
increase K, the performances degrade. The possible defect of the revising model is that generated
samples on the training set for revising training suffer from a distribution discrepancy with generated
samples on the test set for revising inference. The sampled reasoning paths on the training set may
-----
Figure 8: Results for different methods of self data augmentation. GSM. and H. represent GSM8K
and Hindsight respectively. The red dotted lines in the middle and right figures represent the results
of vanilla fine-tuning on GSM8K.
have a larger lexical similarity to the ground true reasoning paths compared to those on the test set.
Therefore we try two different procedures to alleviate such an issue.
1. We use the sampled reasoning path with the largest Levenstein distance out of K sampled paths
with respect to the ground true path to form a training sample.
2. We split the train set to N folds, and fine-tune a model on each N − 1 folds and sampling
reasoning path on the left fold.
The results are shown in the middle and right subfigures in Figure 8, we can see that when leveraging
Levenstein distance for reasoning path selection, the fine-tuned revising model enjoys a performance
boost, harvesting uniformly better performance than the fine-tuning baseline across different K’s.
The results demonstrate that for the revising performance, the lexical diversity of reasoning paths
matters when constructing training samples. However, the revising performance does not benefit
from the N -fold procedure.
E ESTIMATING FLOPS OF SFT AND RFT
We mainly follow the notations of (Kaplan et al., 2020) here.
**Training FLOPs** For each input sample of length nctx in GSM8K dataset, we can split it into two
parts:
_nctx = nQ + nR_ (1)
where nQ, nR denotes the length of question and generated reasoning path and answers respectively.
_Ctrain ≈_ 6NnctxNs (2)
where Ns denotes the numbers of samples.
**Inference FLOPs** We roughly computed the FLOPs of each token during the forward pass:
_Cforward(nctx) = 2N + 2nlayernctxdmodel_ (3)
To ensure the results were more accurate and reliable, we also took into account the Key-Value (KV)
cache during the decoding procedure.
_KVcache_ 4nlayerd[2]model (4)
_≈_
-----
Therefore, we obtain the FLOPs per token during the forward pass considering the KV cache.
_′_
_Cforward[(][n][ctx][) = 2][N][ + 2][n][layer][n][ctx][d][model]_ _[−]_ _[KV][cache]_ (5)
= 24nlayerd[2]model [+ 2][n][layer][n][ctx][d][model] _[−]_ [4][n][layer][d]model[2] (6)
= 20nlayerd[2]model [+ 2][n][layer][n][ctx][d][model] (7)
_≈_ 1.66N + 2nlayernctxdmodel (8)
The total inference FLOPs are computed as follows:
_Ctotal = Ns · [nqCforward(nq) +_
_nq+nr_
_′_
_i · Cforward[(][i][)]]_ (9)
_i=nq_
X
where Ns denotes the numbers of samples. nq, nr denotes the average length (tokens) of the user
query and generated response respectively. In GSM8K dataset, nq 66 and nr 130.
_≈_ _≈_
-----
| [
"Chengpeng, Li",
"Zheng, Yuan",
"Hongyi, Yuan",
"Guanting, Dong",
"Keming, Lu",
"Chuanqi, Tan",
"Chang, Zhou",
"Jingren, Zhou"
] | 2023-09-12T00:00:00 | null | false | 96 | 7 | null | http://arxiv.org/abs/2308.01825 | https://arxiv.org/abs/2308.01825 | https://www.semanticscholar.org/paper/91206346edbe28abb606d7b3425cd455d4019d4f |
How well do Large Language Models perform in Arithmetic tasks? | Large language models have emerged abilities including chain-of-thought to answer math word problems step by step. Solving math word problems not only requires abilities to disassemble problems via chain-of-thought but also needs to calculate arithmetic expressions correctly for each step. To the best of our knowledge, there is no work to focus on evaluating the arithmetic ability of large language models. In this work, we propose an arithmetic dataset MATH 401 to test the latest large language models including GPT-4, ChatGPT, InstrctGPT, Galactica, and LLaMA with various arithmetic expressions and provide a detailed analysis of the ability of large language models. MATH 401 and evaluation codes are released at \url{https://github.com/GanjinZero/math401-llm}. | This work proposes an arithmetic dataset MATH 401 to test the latest large language models including GPT-4, ChatG PT, InstrctGPT, Galactica, and LLaMA with various arithmetic expressions and provides a detailed analysis of the ability of largelanguage models. | ## How well do Large Language Models perform in Arithmetic tasks?
**Zheng Yuan[1]** **Hongyi Yuan[12]** **Chuanqi Tan[1]** **Wei Wang[1]** **Songfang Huang[1]**
1Alibaba Group 2Tsinghua University
{yuanzheng.yuanzhen,chuanqi.tcq,hebian.ww,songfang.hsf}@alibaba-inc.com
[email protected]
**Abstract**
Large language models have emerged abilities including chain-of-thought to answer math
word problems step by step (Wei et al., 2022b).
Solving math word problems not only requires
abilities to disassemble problems via chain-ofthought but also needs to calculate arithmetic
expressions correctly for each step. To the
best of our knowledge, there is no work to
focus on evaluating the arithmetic ability of
large language models. In this work, we propose an arithmetic dataset MATH 401 to test
latest large language models including GPT-4,
ChatGPT, InstrctGPT, Galactica, and LLaMA
with various arithmetic expressions and provide a detailed analysis of the ability of large
language models. MATH 401 and evaluation
[codes are released at https://github.com/](https://github.com/GanjinZero/math401-llm)
[GanjinZero/math401-llm.](https://github.com/GanjinZero/math401-llm) [1]
contained in this dataset including addition (+),
subtraction (−), multiplication (×), division
(÷), exponentiation (∧), trigonometry functions
(sin, cos, tan), and logarithm functions (log, ln) of
integers, decimals, and irrational numbers (π, e).
Long arithmetic expressions with brackets are also
included which are common in complex math word
problems. Results in Table 1 show detailed evaluations on OpenAI’s GPTs including GPT-4 (OpenAI, 2023), ChatGPT[2], GPT-3.5 (Ouyang et al.,
2022) and other open-sourced LLMs. We find that
GPT-4 and ChatGPT outperform other models by
a large margin in all kinds of arithmetic abilities.
InstructGPT (Ouyang et al., 2022) and Galactica
(Taylor et al., 2022) do have some arithmetic abilities. We analyze factors affecting LLMs’ arithmetic
ability systematically including tokenization (§4.2),
pre-training (§4.3), prompts (§4.4), interpolation
and extrapolation (§4.5), scaling laws (§4.6), COT
(§4.7), and ICL (§4.8).
One may say that the ability to solve arithmetic
tasks is not necessary for a large language model.
LLMs can use the calculator API when they need
to decode an answer (Schick et al., 2023). Arithmetic ability evaluation can be a gauge for general
intelligence since mastering arithmetic serves as a
fundamental requirement for performing intricate
mathematical tasks including symbolic math reasoning (Noorbakhsh et al., 2021; Gaur and Saunshi,
2022) and automatic theorem proofing (Polu and
Sutskever, 2020; Wu et al., 2022).
**2** **Related Works**
**1** **Introduction**
Emergent abilities show in sufficiently large language models (LLMs) (Wei et al., 2022a) like
chain-of-thought reasoning (COT) (Wei et al.,
2022b). Chain-of-thought reasoning requires
LLMs to solve a question by thinking questions
step by step which performs well in school math
word problems (Wei et al., 2022b; Kojima et al.,
2022). Recent LLMs are further fine-tuned with
instruction tuning (Sanh et al., 2021; Chung et al.,
2022; Ouyang et al., 2022) which demonstrates
improved COT ability compared to only selfsupervised pre-training. To solve a math word
problem, COT disassembles the problem into simple steps. For each step, LLMs have to compute
correctly based on arithmetic expressions. Thus,
evaluating the arithmetic ability of LLMs is necessary since it is the upper bound of LLMs’ ability
for solving math word problems.
To this end, we propose an arithmetic dataset
named MATH 401. Different difficulties are
**Evaluate Math Ability of LLMs** To show the
math reasoning ability of LLMs, Wang and Komatsuzaki (2021); Chung et al. (2022); Thoppilan
et al. (2022) evaluate their models on various math
word problems benchmark (Saxton et al., 2019;
Hendrycks et al., 2021; Cobbe et al., 2021; Shi
[2https://openai.com/blog/](https://openai.com/blog/introducing-chatgpt-and-whisper-apis)
[introducing-chatgpt-and-whisper-apis](https://openai.com/blog/introducing-chatgpt-and-whisper-apis)
1This project is working in progress.
-----
|Model Size|E +− × ÷ ∧ Tri log|Dec Neg Irr Big Long|Easy Hard All|
|---|---|---|---|
|GPT-4 ? ChatGPT ? InstructGPT 175B CodeX 175B Galactica 120B LLaMA 65B OPT 175B GPT-Neox 20B GLM 130B BloomZ 176B Bloom 176B T0++ 11B Flan-T5 11B|✓ 99 67 100 50 68 76 ✓ 97 65 80 50 44 56 × 83 59 80 36 8 16 ✓ 36 27 8 10 8 0 ✓ 69 43 24 44 16 0 ✓ 44 35 8 22 8 0 ✓ 33 35 4 12 0 4 ✓ 51 48 4 40 4 0 ✓ 39 31 8 22 0 0 × 23 37 12 30 8 0 × 21 37 12 30 0 0 × 6 3 0 6 8 0 × 1 13 4 0 0 0|67 67 100 48 96 67 67 64 40 68 64 64 36 4 24 25 25 12 0 0 57 57 28 0 24 41 41 20 0 4 25 25 8 0 0 43 43 20 0 8 29 29 24 0 8 43 43 20 0 8 37 37 16 0 0 3 3 4 0 0 11 11 8 0 0|100 67 84 100 49 74 92 22 57 40 4 22 78 12 45 52 5 28 41 2 22 66 4 35 46 5 26 39 6 22 37 4 20 7 2 4 6 2 4|
Table 1: Arithmetic ability for LLMs measured by accuracy, we only list models with largest parameter counts. E
= Euler, Dec = Decimal, Neg = Negative, Irr = Irrational, Big = Big Numbers, Long = Long Expressions.
et al., 2022). For newly released LLM ChatGPT,
Shakarian et al. (2023); Frieder et al. (2023) evaluate its mathematical ability independently. To
notice, our paper evaluates ChatGPT using gpt3.5-turbo-0301 version and GPT-4 using chat UI
on March 16th which may have different performances compared to their reported results and future analysis.
**Evaluate** **Arithmetic** **Ability** **of** **LLMs**
Nogueira et al. (2021); Wang et al. (2021)
evaluate pretrained language models on simple
arithmetic expressions including addition (+)
and subtraction (−). Muffo et al. (2022) have
further tested the multiplication (×) of language
models. They found tokenization (Nogueira et al.,
2021; Kim et al., 2021) and token frequency
(Razeghi et al., 2022) are two important factors for
language model arithmetic ability. Compared to
previous work, we focus on evaluating Large LMs
(with instruction fine-tuning) on comprehensive
arithmetic abilities with different types of operators
and numbers.
**3** **Evaluation Settings**
**3.1** **Arithmetic Expression Settings**
We construct 401 arithmetic expressions to test
large language models which include Euler equation (e[iπ] +1 = 0) as group 0 and 25 problems each
for group 1∼16. If not otherwise mentioned, used
numbers are positive integers.
- Euler Equation.
- Add & Subtract of two integers within 10.
- Add & Subtract of two integers within 100.
- Add & Subtract of two integers within 1,000.
- Add & Subtract of two integers within
1,000,000,000,000.
- Add & Subtract of two integers within 10∼10.
- Add & Subtract of two decimal numbers
within -100∼100.
- Multiply two integers within 100.
- Multiply two decimal numbers within 10.
- Multiply two integers within 100,000.
- Division of two integers within 100.
- Exponentiation of with integer base within 10
and integer exponent within 2∼4.
- Exponentiation of with a decimal number
within 10 as the base and a decimal number
within 2∼4 as the exponent.
- Add, Subtract & Multiply with one integer
within 10 and a common irrational number
(i.e. e or π).
- Long arithmetic expressions with brackets, involved integers are all within 100 and operators contain add, subtract, multiply, and division.
- Trigonometry functions including sin, cos,
and tan. Inputs can be in the format of degrees and radians (π can also appear in the
inputs).
- Logarithm of integers within 1000 of different
bases: 2, e, 10.
-----
|Model Prompt|Acc ↑ RE ↓ NNR ↓|
|---|---|
|gpt-4 Cal*4 gpt-3.5-turbo-0301 Cal* text-davinci-003 Cal code-davinci-002 Eqa galactica-120b Eqa galactica-30b Eqa llama-65b Eqa opt-175b Cal gpt-neox-20b Eqa glm-130b $ bloomz-176b $$ bloom-176b $ T0++-11b Cal flan-t5-xxl-11b Eqa flan-t5-xl-3b $|83.54 0.07 0.00 75.06 0.14 0.50 56.61 0.76 2.99 21.7 2.39 11.47 45.14 1.30 3.99 45.14 0.69 1.75 28.43 1.61 4.74 21.70 3.18 21.70 35.41 1.19 4.49 25.94 1.27 2.74 22.44 1.50 4.74 20.20 2.60 18.45 4.24 3.34 9.48 3.74 5.78 43.89 7.48 3.34 25.19|
Table 2: Evaluation on MATH 401 with different LLMs. Prompts are selected via best accuracy. Cal means “Calculate:” and Eqa means “\begin{equation}”. - means providing an additional
system-level message.
(Scao et al., 2022; Muennighoff et al., 2022), T0++
(Sanh et al., 2021), GLM (Zeng et al., 2022) and
Flan-T5 (Chung et al., 2022). We also test the
smaller versions of the above models.
We test following prompts: ∅ (i.e. no prompt),
“Calculate:”, “$”, “$$”, and “\begin{equation}”.
The latest three prompts are inspired by that LLMs
may be pretrained with LATE[X sources. We provide]
three versions of input formats: math texts (π),
plain texts (pi), LATE[X texts (\pi). When we use]
LATE[X-related prompts, we provide the model with]
LATE[X texts. When we use other prompts, we will]
provide math texts if their tokenizers can encode
them. Otherwise, we will provide plain text. For
ChatGPT (gpt-3.5-turbo-0301), we test different
system-level prompts as instructions: ∅ (i.e. no
prompt), “You are an accurate calculator.”, and
“You are an accurate calculator, please calculate
provided equation to four decimal places.”. For
GPT-4, we only test prompt “You are an accurate
calculator, please calculate provided equation to
four decimal places.”.
We use default decode settings for OpenAI APIs,
and we use greedy decoding for all other LLMs.
**4** **Results and Analysis**
**4.1** **Results**
**Overall Results** Table 1, 2, and 3 show results
of different LLMs on MATH 401. We find GPT4 and ChatGPT outperform all other models by a
These groups cover mathematical operators used
in elementary mathematics. We consider groups
1,2,3,5,6,7,8,11 as Easy queries and all others as
**Hard queries.** We calculate the results of all
arithmetic expressions using built-in functions of
Python and round to four decimal places. Examples
of expressions are listed in Appendix A.
**3.2** **Metrics**
Since LLMs can decode arbitrary contents (which
may contain their step-by-step calculation steps),
we first ignore decoded numbers in parentheses
and preserve the last number decoded by LLMs. If
the decoded number is a fraction, we will convert
it to decimal for evaluation except for group 10
which requires calculating division. To measure the
arithmetic ability of LLMs, we use the following
metrics to measure their outputs.
**Accuracy** If the difference between the decoded
number and the target number is less than 1e − 3,
we consider it a correct prediction. Accuracy is
calculated based on correct prediction counts.
**Relative error** We denote decoded number is ˆy
and target is y. We calculate relative error by:
_yˆ_ _y_
_RE = min(10,_ _∥_ _−_ _∥_ (1)
max( _y_ _, 1)_ [)]
_∥_ _∥_
If LLM does not decode any number, we consider
_RE = 10. We truncate the relative error to 10 to_
prevent that one big mistake dominate the average
relative error.
**Non-number ratio** If decoded content does not
contain any numbers, we consider it a failure. We
calculate the non-number ratio based on it.
**3.3** **Evaluation Details**
We test GPT-4 by their official chat UI[3]. Since GPT4 has limited request counts, we only query GPT-4
with groups that ChatGPT cannot answer correctly.
We test GPT-3.5 (including davinci (CodeX, InstructGPT) and turbo (ChatGPT) series models)
(Ouyang et al., 2022; Chen et al., 2021) via OpenAI APIs. We also test following open-sourced
LLMs including Galactica (Taylor et al., 2022),
GPT from EleutherAI (Wang and Komatsuzaki,
2021; Black et al., 2022), LLaMA (Touvron et al.,
2023), OPT (with instruction learning) (Zhang
et al., 2022), Bloom (with instruction learning)
[3https://chat.openai.com/chat?model=gpt-4](https://chat.openai.com/chat?model=gpt-4)
-----
|Model Prompt|Acc ↑ RE ↓ NNR ↓|
|---|---|
|gpt-4 Cal*4 gpt-3.5-turbo-0301 Cal* text-davinci-003 Cal text-davinci-002 Cal text-curie-001 Cal text-babbage-001 Eqa code-davinci-002 Eqa|83.54 0.07 0.00 75.06 0.14 0.50 56.61 0.76 2.99 42.89 2.13 15.96 11.47 1.92 6.48 5.24 2.59 5.74 21.70 2.39 11.47|
|galactica-120b Eqa galactica-30b Eqa galactica-6.7b Cal|45.14 1.30 3.99 45.14 0.69 1.75 34.41 2.61 8.73|
|llama-65b Eqa llama-30b Eqa llama-13b $ llama-7b $$|28.43 1.61 4.74 30.17 1.72 3.74 27.68 2.40 9.73 21.95 2.11 7.48|
|opt-175b Cal opt-66b ∅ opt-iml-max-30b Cal opt-30b ∅ opt-13b ∅ opt-6.7b Cal|21.70 3.18 21.70 20.70 2.66 18.70 17.46 1.52 6.23 15.96 2.28 11.22 15.21 2.19 10.97 14.46 1.46 4.24|
|gpt-neox-20b Eqa gpt-j-6b Cal|35.41 1.19 4.49 27.18 1.55 8.98|
|bloomz-176b $$ bloom-176b $ bloomz-7b1 $ bloom-7b1 Cal bloomz-3b $$ bloom-3b Cal bloomz-1b7 Eqa bloom-1b7 Cal T0++-11b Cal|22.44 1.50 4.74 20.2 2.60 18.45 12.72 2.56 15.46 7.23 2.41 6.48 7.98 2.63 12.47 4.24 2.41 8.73 4.74 4.28 31.17 5.24 2.54 11.22 4.24 3.34 9.48|
|glm-130b $ glm-10b Cal|25.94 1.27 2.74 14.96 2.30 3.74|
|flan-t5-xxl-11b Eqa flan-t5-xl-3b $ flan-t5-large-780m Cal flan-t5-base-250m Eqa|3.74 5.78 43.89 7.48 3.34 25.19 3.74 2.31 2.49 2.49 3.18 14.21|
Table 3: Full evaluation on MATH 401 with different
LLMs. Prompts are selected via best accuracy.
large margin[4]. GPT-4 surpasses ChatGPT with accuracy of 10 points and reduce relative error half.
InstructGPT performs third measured by accuracy
and Galactica-30B performs third measured by relative error. Compared to models proposed before
InstructGPT (text-davinci-003), GPT-series applies
Reinforcement Learning from Human Feedback
(RLHF) which may enhance their arithmetic ability
significantly. Galactica is pre-trained with massive
LATE[X source codes which could be the reason why]
Galactica performs well in arithmetics.
**Grouped Results** To clearly understand the
arithmetic ability of LLMs, we show grouped accuracy in Table 1. GPT-4 obtains first places and ChatGPT obtains second places for all groups. Most
LLMs are only capable of doing addition and subtraction and have some ability for multiplication.
4OpenAI states they improve the math of ChatGPT since
version Jan 30, and we cannot evaluate any previous version.
Division, exponentiation, trigonometry functions,
and logarithm functions are hard for most LLMs.
LLMs have some abilities dealing with decimal,
negative, and irrational numbers. Only GPT-4 and
ChatGPT have the ability to deal with big numbers
(> 1e12) and complex long queries which proves
their generalization and reasoning abilities. GPT-4
shows extremely good ability in long arithmetic
expressions.
**When will ChatGPT fail?** Though ChatGPT
obtains such a good performance, we will check
when ChatGPT fails to answer. For multiplication (×), ChatGPT passes all queries in Group
7 and 8 and get wrong answers for all queries
in Group 9. An example is ChatGPT predicts
71786 × 21638 = 1, 551, 402, 068, while the true
answer is 1, 553, 305, 468. ChatGPT gives a very
close estimation with the correct head and tail,
which proves that ChatGPT does not use a calculator API for math calculation.
For division in Group 11, ChatGPT sometimes
gives correct answers to two decimal places which
will be considered incorrect in our metric. We
can see in Table 5, requiring ChatGPT to output
four decimal places will improve its accuracy in
multiplication and division.
For exponentiation (∧), ChatGPT correctly answers all queries in Group 10 which contain only
integers as bases. It is too hard for any language
model (even ChatGPT) correctly estimate the exponentiation of a decimal number as the base and
a decimal number as the exponent. It seems that
ChatGPT treats ∗∗ as multiplication sometimes, for
example, ChatGPT estimates 5.5507 ∗∗ 2.0434 =
10.31554 which is close to 5.5507 × 2.0434 =
11.3423 and far from answer 33.1895.
For calculating trigonometry functions, ChatGPT understands degrees and radians correctly
and generates exact answers for special inputs
like cos( 210[◦]) = _√23_ (we omit explanation
_−_ _−_
generated by ChatGPT here). However, ChatGPT may generate wrong explanations which mislead itself. An example is: “We know that the
sine function is periodic with a period of 2π,
which means that sin(x + 2π) = sin(x) for any
value of x. Therefore, we can subtract multiples of 2π from −3.75π until we get a value between 0 and 2π: −3.75π = −3π − 0.75π =
_−9.42477 −_ 2.35619 = −11.78096. Adding 2π,
we get: −11.78096+2π = -9.42477 etc.” Any mistake in explanations may result in a wrong answer.
-----
For logarithm functions, we find that ChatGPT
is capable of using change of base formula and
predicting answers within two decimal places.
For long expressions, ChatGPT can understand
the operators’ priorities. ChatGPT sometimes generates answers step by step and sometimes generates answers directly. It is very likely to generate
wrong answers when it decodes answers directly.
**What about GPT-4?** For big number multiplication (×) in group 9, GPT-4 also fails in all cases
with similar problems occurring in ChatGPT.
For exponentiation (∧), GPT-4 will not consider
_∗∗_ as × anymore and give better estimations.
For calculating expressions with irrational numbers, GPT-4 will consider e as natural logarithm
correctly.
For logarithm functions, GPT-4 calculates logarithm base e and 10 by “using a calculator” (this is
a message generated by GPT-4). GPT-4 calculates
logarithm base 2 by change of base formula and
generates approximate results.
For long equations, GPT-4 solves all equations
step by step and obtains a much higher accuracy.
We compare and summarize how GPT-4 outperforms ChatGPT here:
- Better division ability.
- Better trigonometry ability.
- Understand irrational numbers properly.
- Always calculate long expressions step by
step.
**4.2** **Tokenization**
Arithmetic expressions have special tokens including π, ×, ÷, _[◦]_ which are not within T5 series models (i.e. T0++ and Flan-T5). T0++-11B (Acc 4.24
and RE 3.34) and Flan-T5-xxl-11B (Acc 3.74 and
RE 5.78) perform badly on arithmetic tasks compared to other similar-size models: Opt-13B (Acc
15.21 and RE 2.19) and LLaMA-13B (Acc 27.68
and RE 2.4).
We notice that Galactica and LLaMA split numbers into individual tokens. For example 123.456
is converted into 1 2 3 . 4 5 6. Razeghi et al.
(2022) show that arithmetic ability is related to
pre-training term frequencies. For tokens that appear more in pre-training, LLMs can have better
accuracy in answering arithmetic expressions about
them. Number tokens with more digits (e.g. 23)
apparently appear less than single digit token (e.g.
2 and 3). Splitting numbers into individual tokens
neglects all number tokens with more digits and
makes all single digit tokens (mainly 0 ∼ 9) appear in the pre-training corpus in the same order
of magnitude. Galactica-30B and LLaMA-30B obtain 45.14 and 30.17 in terms of accuracy (list in
Table 3) that outperforms OPT-30B (15.96), Bloom176B (20.2), and GLM-130B (25.94), which show
superiority of digit-level tokenization.
**4.3** **Training**
**Self-supervised** **Pre-training** While pretraining, code corpus and LATE[X-sources are]
possible to relate to arithmetic ability since they
all contain arithmetic operators and numbers.
Code-davinci-002 is pretrained with code corpus. Code-davinci-002 performs well on many
reasoning-related tasks (Zhou et al., 2022), however, it performs not good compared to other LLMs
in arithmetics. This proves that mathematical
reasoning ability is different from arithmetic
ability which needs to understand numbers
deeply. Galactica with numerous LATE[X-sources]
outperforms other LLMs except for InstructGPT
and ChatGPT which show LATE[X is useful.]
**Instruction Tuning** is also very important in
arithmetic ability. Comparing Opt-30B (Acc 15.96
RE 2.28 NNR 11.22) with Opt-Iml-Max-30B (Acc
17.46 RE 1.52 NNR 6.23), Bloom (Acc 20.2 RE
2.6 NNR 18.45) with BloomZ (Acc 22.44 RE 1.5
NNR 4.74), and code-davinci-002 (Acc 21.7) with
text-davinci-002 (Acc 42.89) in Table 3 show that
instruction tuning can boost the performance in
all metrics. Text-davinci-003 (RLHF) outperforms
text-davinci-002 (SFT) in arithmetic tasks which
shows RLHF is important for building arithmetic
ability.
**4.4** **Prompts**
**Input Prompts** We find the best prompts are different across LLMs. We list the best and worst
prompts for LLMs in Table 8. We find models are
sensitive to input prompts and not using prompts
is the worst option for most LLMs. For InstructGPT and ChatGPT, using “Calculate” as a prompt
perform best. For other LLMs, using LATE[X-related]
prompts perform best.
**System Prompts** For ChatGPT, we can also provide system-level messages as instruction prompts.
Table 5 shows providing system-level messages improves ChatGPT’s accuracy and reduces relative
-----
|Model|Best Acc Worst Acc|
|---|---|
|gpt-3.5-turbo-0301 text-davinci-003 galactica-120b llama-65b opt-175b gpt-neox-20b glm-130b bloomz-176b|Cal* 75.06 $$ 64.59 Cal 56.61 Eqa 43.64 Eqa 45.14 ∅ 38.9 Eqa 28.43 Cal 4.74 Cal 21.7 ∅ 15.21 Eqa 35.41 ∅ 26.93 $ 25.94 ∅ 22.44 $$ 22.44 ∅ 11.72|
Table 4: Best and worst prompts for different LLMs.
error significantly. The most different groups are
group 13 irrational numbers and group 16 logarithm functions. Without a system-level message,
ChatGPT thinks e can be Euler’s number or a variable and cannot give an answer. For logarithm
functions, ChatGPT tries to explain how it calculates which may mislead our provided parser. We
notice that if we require ChatGPT to output results
to four decimal places, it will have a zero nonnumber ratio. To conclude, ChatGPT will try to
explain the calculation procedure without a systemlevel prompt and will only provide answers with a
system-level prompt.
|Group|Cal Cal* Cal*4 Acc RE Acc RE Acc RE|
|---|---|
|0 Euler 1 ∼6 +− 7 ∼10 ×÷ 11 ∼12 ∧ 13 Irr. 14 Long 15 Tri. 16 Log|100 .00 100 .00 100 .00 97 .00 96 .00 93 .01 69 .20 69 .01 71 .01 50 .24 50 .32 50 .27 64 1.73 72 .56 84 .11 68 .19 64 .46 60 .59 44 1.21 48 .96 44 1.40 56 .80 60 .04 56 .01|
|Overall|74 .33 75 .14 74 .14|
Table 5: Comparing different system prompts in ChatGPT on MATH 401. Cal means no system prompt. *
= “You are an accurate calculator.” 4 = “Calculating to
four decimal places.”
**4.5** **Interpolation and Extrapolation**
LLMs have strong abilities to fit on in-domain data.
If pretraining corpora contain arithmetic expressions, it is easy for LLMs to memorize them. For
out-of-domain data, LLMs need to extrapolate how
to calculate them. We do not know what are indomain data and out-of-domain data for models
(especially ChatGPT), so it is hard to test their interpolation and extrapolation abilities. We use the
easy group and the hard group to estimate the interpolation and extrapolation abilities. The easy
group queries have possibilities that appear in the
pretraining corpora or instruct-tuning, while the
hard group queries contain big numbers / decimal
Figure 1: Performances of MATH 401 on LLMs with
different sizes. We do not know the parameter count of
ChatGPT. We list InstructGPT results with SFT setting
(text-davinci-002) only for a fair comparison.
numbers / long expressions which are very unlikely
to be covered by pretraining corpora or instructions. Thus answering easy queries may examine
the interpolation ability of models and answering
hard queries must examine the extrapolation ability
of the models. We find ChatGPT performs best
on hard queries, and all other models have limited
performance on hard queries which show limited
extrapolation.
**4.6** **Scaling Laws**
To understand how parameter counts influence
arithmetic ability, we plot the results with differentsize LLMs in Figure 1. We do not plot text-davinci003, gpt-3.5-turbo-0301 and gpt-4 since they do
not have smaller versions with the same setting.
We find that LLMs have better abilities with larger
parameter counts. An interesting phenomenon we
found is model over 30B does not improve significantly compared with 30B models, especially in
Galactica where the 120B model performs the same
as the 30B model. We hypothesize that 30B may be
enough for arithmetic ability. ChatGPT may be a
model smaller than 175B which outperforms other
175B models a lot, thus larger parameter count does
not guarantee better arithmetic ability. For GPT-4,
we cannot have any possible guess. Considering its
much slower inference speed, we guess it has larger
-----
|Expression (88 - 42) * (33 * 41) - (13 - 27) = 62252|Calculate: Zero-shot COT First, we need to simplify the expression inside the First, we simplify the parentheses: (88 - 42) * (33 * parentheses: 88 - 42 = 46 33 * 41 = 1353 13 - 27 = 41) - (13 - 27) = 46 * 1353 - (-14) Next, we simplify -14 Now we can substitute these values into the orig- the second set of parentheses by changing the sub- inal expression: (46) * (1353) - (-14) Multiplying 46 traction into addition of the opposite: = 46 * 1353 + and 1353 gives: 62,238 Remember that subtracting 14 We can then multiply 46 and 1353: = 62338 + 14 a negative number is the same as adding a positive Finally, we add: = 62352 Therefore, (88 - 42) * (33 number, so we can rewrite the expression as: 62,238 * 41) - (13 - 27) = 62352. + 14 Adding 14 gives: 62,252 Therefore, the answer is: 62,252|
|---|---|
Table 6: Example of Group 14 decoded by ChatGPT with Calculate and COT prompts.
|Model|Naive ICL Acc RE Acc RE|
|---|---|
|galactica-120b galactica-6.7b flan-t5-xxl flan-t5-base|45.14 1.3 45.14 0.42 34.41 2.61 32.67 0.65 3.74 5.78 0.0 10.0 2.49 3.18 0.0 10.0|
Table 8: In-context learning on MATH 401.
the query) for each query. We test whether ICL
can improve the well-behaved model (Galactica)
and the underperforming model (Flan-T5). For
Galactica, it does not improve accuracy but reduces
relative error significantly. For small-sized Flan
(smaller than 3B) it cannot generate any number
under the setting of in-context-learning.
**5** **Conclusion**
In this paper, we propose MATH 401 to evaluate
the arithmetic ability of LLMs. We find that tokenization, pre-training corpus, prompts, and model
parameter counts are important for their arithmetic
ability. The reason ChatGPT performs so well in
arithmetic still has some mystery, i.e. the parameter counts and instruction datasets of ChatGPT. We
hope this paper can help readers improve LLMs
with better arithmetic ability. This paper is only
focused on arithmetic, testing LLMs on other math
topics including symbolic mathematics, solving (ordinary differential, partial differential) equations,
calculus, algebra, geometry, probability theory, and
graph theory are also interesting topics.
**References**
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael
Pieler, USVSN Sai Prashanth, Shivanshu Purohit,
Laria Reynolds, Jonathan Tow, Ben Wang, and
[Samuel Weinbach. 2022. GPT-NeoX-20B: An open-](https://arxiv.org/abs/2204.06745)
[source autoregressive language model. In Proceed-](https://arxiv.org/abs/2204.06745)
_ings of the ACL Workshop on Challenges & Perspec-_
_tives in Creating Large Language Models._
parameter counts than ChatGPT and obtain better
reasoning ability (i.e. long arithmetic expression).
**4.7** **Chain-of-Thought**
LLMs can leverage chain-of-thought to better answer math word problems (Wei et al., 2022b). We
test on ChatGPT whether chain-of-thought will improve arithmetic calculations. We use the prompt
“Let us solve this equation step by step” to instruct
ChatGPT for zero-shot COT (Kojima et al., 2022).
We compare the results of zero-shot COT using
“Calculate:” in Table 7. Surprisingly, we find that
COT does not improve the performance of any
group even in group 14 with long arithmetic expressions. To understand the reason for this phenomenon, we check decoded results for these two
prompts in Table 6. We find using “Calculate:”
as the prompt can automatically generate chainof-thoughts for long arithmetic expressions and
generate answers directly for easy questions.
|Group|Cal 0 COT Acc RE Acc RE|
|---|---|
|0 Euler 1 ∼6 +− 7 ∼10 ×÷ 11 ∼12 ∧ 13 Irr. 14 Long 15 Tri. 16 Log|100 .00 100 .00 97 .00 94 .02 69 .20 61 .66 50 .24 48 .56 64 1.73 28 4.89 68 .19 64 .46 44 1.21 40 1.14 56 .80 28 5.37|
|Overall|74 .33 66 .98|
Table 7: Comparing zero-shot COT and Calculate using ChatGPT on MATH 401.
**4.8** **In-context Learning**
In-context learning (ICL) provides related questionanswer pairs to improve LLMs (Brown et al., 2020;
Wei et al., 2022b). In our task, we can provide
similar arithmetic expressions before the queries to
help model understanding the arithmetic operator
as done in Smith et al. (2022). We provide 8 similar
cases (we promise these cases are different from
-----
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, T. J. Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeff Wu,
Clemens Winter, Christopher Hesse, Mark Chen,
Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario
Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde, Jared Kaplan, Harrison Edwards,
Yura Burda, Nicholas Joseph, Greg Brockman, Alex
Ray, Raul Puri, Gretchen Krueger, Michael Petrov,
Heidy Khlaaf, Girish Sastry, Pamela Mishkin,
Brooke Chan, Scott Gray, Nick Ryder, Mikhail
Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias
Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel
Herbert-Voss, William H. Guss, Alex Nichol, Igor
Babuschkin, S. Arun Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra,
Evan Morikawa, Alec Radford, Matthew M. Knight,
Miles Brundage, Mira Murati, Katie Mayer, Peter
Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba.
2021. Evaluating large language models trained on
code. ArXiv, abs/2107.03374.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz,
Philipp Christian Petersen, Alexis Chevalier, and J J
Berner. 2023. Mathematical capabilities of chatgpt.
_ArXiv, abs/2301.13867._
Vedant Gaur and Nikunj Saunshi. 2022. [Symbolic](https://doi.org/10.1109/URTC56832.2022.10002218)
[math reasoning with language models.](https://doi.org/10.1109/URTC56832.2022.10002218) In 2022
_IEEE MIT Undergraduate Research Technology_
_Conference (URTC), pages 1–5._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. _arXiv_
_preprint arXiv:2103.03874._
Jeonghwan Kim, Giwon Hong, Kyung min Kim,
Junmo Kang, and Sung-Hyon Myaeng. 2021. Have
you seen that number? investigating extrapolation in
question answering models. In Conference on Em_pirical Methods in Natural Language Processing._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. _ArXiv,_
abs/2205.11916.
Niklas Muennighoff, Thomas Wang, Lintang Sutawika,
Adam Roberts, Stella Biderman, Teven Le Scao,
M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie,
Zaid Alyafeai, Albert Webson, Edward Raff, and
Colin Raffel. 2022. [Crosslingual generalization](http://arxiv.org/abs/2211.01786)
[through multitask finetuning.](http://arxiv.org/abs/2211.01786)
Matteo Muffo, Aldo Cocco, and Enrico Bertino. 2022.
Evaluating transformer language models on arithmetic operations using number decomposition. In
_International Conference on Language Resources_
_and Evaluation._
Rodrigo Nogueira, Zhiying Jiang, and Jimmy J. Li.
2021. Investigating the limitations of the transformers with simple arithmetic tasks. _ArXiv,_
abs/2102.13019.
Kimia Noorbakhsh, Modar Sulaiman, Mahdi Sharifi,
Kallol Roy, and Pooyan Jamshidi. 2021. Pretrained
language models are symbolic mathematics solvers
too! arXiv preprint arXiv:2110.03501.
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/arXiv:2303.08774)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _arXiv preprint_
_arXiv:2203.02155._
Stanislas Polu and Ilya Sutskever. 2020. Generative
language modeling for automated theorem proving.
_ArXiv, abs/2009.03393._
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot reasoning. ArXiv,
abs/2202.07206.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja,
Manan Dey, M Saiful Bari, Canwen Xu, Urmish
Thakker, Shanya Sharma Sharma, Eliza Szczechla,
Taewoon Kim, Gunjan Chhablani, Nihal Nayak,
Debajyoti Datta, Jonathan Chang, Mike Tian-Jian
Jiang, Han Wang, Matteo Manica, Sheng Shen,
Zheng Xin Yong, Harshit Pandey, Rachel Bawden,
Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo
Gao, Tali Bers, Thomas Wolf, and Alexander M.
Rush. 2021. [Multitask prompted training enables](http://arxiv.org/abs/2110.08207)
[zero-shot task generalization.](http://arxiv.org/abs/2110.08207)
-----
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. _arXiv preprint_
_arXiv:1904.01557._
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François
Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model.
_arXiv preprint arXiv:2211.05100._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì,
Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to
use tools. ArXiv, abs/2302.04761.
Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu,
and Lakshmivihari Mareedu. 2023. An independent
evaluation of chatgpt on mathematical word problems (mwp).
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi
Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won
Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. 2022. Language models
are multilingual chain-of-thought reasoners. ArXiv,
abs/2210.03057.
Shaden Smith, Mostofa Patwary, Brandon Norick,
Patrick LeGresley, Samyam Rajbhandari, Jared
Casper, Zhun Liu, Shrimai Prabhumoye, George
Zerveas, Vijay Anand Korthikanti, Elton Zhang,
Rewon Child, Reza Yazdani Aminabadi, Julie
Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong
He, Michael Houston, Saurabh Tiwary, and Bryan
Catanzaro. 2022. Using deepspeed and megatron to
train megatron-turing nlg 530b, a large-scale generative language model. ArXiv, abs/2201.11990.
Ross Taylor, Marcin Kardas, Guillem Cucurull,
Thomas Scialom, Anthony Hartshorn, Elvis Saravia,
Andrew Poulton, Viktor Kerkez, and Robert Stojnic.
2022. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085.
Romal Thoppilan, Daniel De Freitas, Jamie Hall,
Noam M. Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker,
Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Zheng,
Amin Ghafouri, Marcelo Menegali, Yanping Huang,
Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao
Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts,
Maarten Bosma, Yanqi Zhou, Chung-Ching Chang,
I. A. Krivokon, Willard James Rusch, Marc Pickett, Kathleen S. Meier-Hellstern, Meredith Ringel
Morris, Tulsee Doshi, Renelito Delos Santos, Toju
Duke, Johnny Hartz Søraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Díaz, Ben Hutchinson,
Kristen Olson, Alejandra Molina, Erin HoffmanJohn, Josh Lee, Lora Aroyo, Ravindran Rajakumar,
Alena Butryna, Matthew Lamm, V. O. Kuzmina,
Joseph Fenton, Aaron Cohen, Rachel Bernstein, Ray
Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Huai hsin Chi, and Quoc Le. 2022.
Lamda: Language models for dialog applications.
_ArXiv, abs/2201.08239._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro,
Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Lan[guage Model. https://github.com/kingoflolz/](https://github.com/kingoflolz/mesh-transformer-jax)
[mesh-transformer-jax.](https://github.com/kingoflolz/mesh-transformer-jax)
Cunxiang Wang, Boyuan Zheng, Yuchen Niu, and Yue
Zhang. 2021. Exploring generalization ability of
pretrained language models on arithmetic and logical reasoning. In Natural Language Processing and
_Chinese Computing._
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022a. Emergent abilities of large language models.
_arXiv preprint arXiv:2206.07682._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Huai hsin Chi, Quoc Le, and Denny
Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. _ArXiv,_
abs/2201.11903.
Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li,
Markus N. Rabe, Charles Staats, Mateja Jamnik, and
Christian Szegedy. 2022. Autoformalization with
large language models. ArXiv, abs/2205.12615.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan
Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng
Zhang, Yuxiao Dong, and Jie Tang. 2022. Glm130b: An open bilingual pre-trained model. arXiv
_preprint arXiv:2210.02414._
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al.
2022. Opt: Open pre-trained transformer language
models. arXiv preprint arXiv:2205.01068.
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Huai hsin
Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. _ArXiv,_
abs/2205.10625.
**A** **Examples from MATH 401**
We list examples for each group from MATH 401.
- e[iπ] + 1 = 0
-----
- 5 + 9 = 14
- 21 + 97 = 118
- 721 − 847 = −126
- 714637232158 _−_ 667119914538
47517317620
- −1 + (−6) = −7
- −0.038 + 0.0092 = −0.0288
- 78 × 64 = 4992
- 5.0 × 0.09 = 0.045
- 45960 × 59693 = 2743490280
- 70 ÷ 61 = 1.1475
- 7[4] = 2401
- 2.242[3][.][7342] = 20.3865
- e + π = 5.8598
- (4 × 64) × (39 + 12) = 13056
- sin(−3.75π) = 0.7071
- log10(797) = 2.9015
-----
| [
"Zheng, Yuan",
"Hongyi, Yuan",
"Songfang, Huang",
"Chuanqi, Tan",
"Wei, Wang"
] | 2023-03-16T00:00:00 | null | false | 91 | 5 | null | http://arxiv.org/abs/2304.02015 | https://arxiv.org/abs/2304.02015 | https://www.semanticscholar.org/paper/99bd07e888476904c6dd77ca154fd48629ac6dce |
NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks | Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple calculations is an important skill of AI systems. While many datasets and models have been developed to this end, state-of-the-art AI systems are brittle; failing to perform the underlying mathematical reasoning when they appear in a slightly different scenario. Drawing inspiration from GLUE that was proposed in the context of natural language understanding, we propose NumGLUE, a multi-task benchmark that evaluates the performance of AI systems on eight different tasks, that at their core require simple arithmetic understanding. We show that this benchmark is far from being solved with neural models including state-of-the-art large-scale language models performing significantly worse than humans (lower by 46.4 %). Further, NumGLUE promotes sharing knowledge across tasks, especially those with limited training data as evidenced by the superior performance (average gain of 3.4 % on each task) when a model is jointly trained on all the tasks as opposed to task-specific modeling. Finally, we hope that NumGLUE will encourage systems that perform robust and general arithmetic reasoning within language, a first step towards being able to perform more complex mathematical reasoning. | NumGLUE is proposed, a multi-task benchmark that evaluates the performance of AI systems on eight different tasks, that at their core require simple arithmetic understanding and it is shown that this benchmark is far from being solved with neural models including state-of-the-art large-scale language models performing significantly worse than humans. | ## NUMGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks
**Swaroop Mishra[1]** **Arindam Mitra[2]** **Neeraj Varshney[1]** **Bhavdeep Sachdeva[1]**
**Peter Clark[3]** **Chitta Baral[1]** **Ashwin Kalyan[3]**
1Arizona State University 2Microsoft Research 3Allen Institute for AI
**Abstract**
Given the ubiquitous nature of numbers in
text, reasoning with numbers to perform simple calculations is an important skill of AI
systems. While many datasets and models
have been developed to this end, state-ofthe-art AI systems are brittle; failing to perform the underlying mathematical reasoning
when they appear in a slightly different scenario. Drawing inspiration from GLUE (Wang
et al., 2018) that was proposed in the context of natural language understanding, we
propose NUMGLUE, a multi-task benchmark
that evaluates the performance of AI systems
on eight different tasks, that at their core require simple arithmetic understanding. We
show that this benchmark is far from being
solved with neural models including state-ofthe-art large-scale language models performing significantly worse than humans (lower
by 46.4%). Further, NUMGLUE promotes
sharing knowledge across tasks, especially
those with limited training data as evidenced
by the superior performance (average gain of
3.4% on each task) when a model is jointly
trained on all the tasks as opposed to taskspecific modeling. Finally, we hope that
NUMGLUE will encourage systems that perform robust and general arithmetic reasoning
within language, a first step towards being able
to perform more complex mathematical reasoning[1].
Original Word Problem
_John had 5 apples. He gave 3 to Peter. How_
_many apples does John have now?_
Fill In The Blanks Format
John had 5 apples. He gave 3 to Peter. John has
apples now.
NLI Format
Premise: John had 5 apples. He gave 3 apples to
Peter. Hypothesis: John has 2 apples now. Does
the hypothesis entail, contradict or is neutral to
the premise?
Comparison Format
John had 5 apples. He gave 3 to Peter. Who has
more apples?
Figure 1: A system that can robustly perform numeric
reasoning over language should be able to solve problems such as the above, regardless of how the problem
is posed. However, we observe existing systems are
brittle; producing inconsistent solutions to such minor
stylistic variations.
systems are brittle and fail when problems involving similar mathematical reasoning is posed in a
slightly different manner. For instance, presenting
a word problem in a different manner as shown in
fig. 1, while hardly affecting human performance,
is sufficient to confuse state-of-the-art AI systems[2].
This brittleness in reasoning indicates that the
models latch on to spurious signals in the specific
dataset resulting in “solving” the dataset while
not truly understanding the underlying reasoning
skill of simple arithmetic. Further, we believe that
building AI systems that can truly understand and
apply simple arithmetic reasoning is a mandatory
first step towards successfully tackling complex
2The recently released GPT3-Instruct, a fine-tuned model
with 175B parameters produces inconsistent answers for these
questions. See supplementary material: GPT3-Instruct’s Response for more details.
**1** **Introduction**
Reasoning with numbers is an important skill that
occurs in various day-to-day scenarios and not
surprisingly, numbers are ubiquitous in textual data.
To train AI reasoning systems that can perform
simple mathematical reasoning, many tasks have
been proposed (Dua et al., 2019b; Ravichander
et al., 2019; Koncel-Kedziorski et al., 2016).
Despite these efforts, current state-of-the-art AI
[1https://allenai.org/data/numglue](https://allenai.org/data/numglue)
-----
mathematical reasoning skills (Saxton et al., 2019;
Hendrycks et al., 2020, 2021).
**NumGLUE. To this end, we propose NUMGLUE,**
a multi-task benchmark consisting of eight
different tasks that at their core test for arithmetic
reasoning skills. For example, as discussed in fig. 1,
tasks can involve word problems presented in a
slightly different manner or can involve additional
reasoning strategies like commonsense reasoning
or reading comprehension to be combined with the
core skill of simple arithmetic. Our benchmark
consists of four new tasks in addition to four
existing ones; with „100K problems spread
across eight differet tasks. The motivation behind
NUMGLUE is similar to GLUE (Wang et al.,
2018, 2019), a multi-task benchmark that aimed
at models that demonstrated superior language
understanding by learning the underlying linguistic
features. NUMGLUE is designed with goal of
progressing towards AI systems that are capable
of performing arithmetic reasoning in a general
setting; achieving superior performance on our
benchmark requires the ability to correctly identify
and perform the underlying arithmetic reasoning
without relying on task or dataset-specific signals.
Finally, we hope that NUMGLUE will encourage
systems that perform robust and general numeric
reasoning within language, a first step towards
being able to perform more complex mathematical
reasoning.
**Contributions.**
1. We introduce NUMGLUE– a multi-task benchmark consisting of eight different tasks, including 4 new ones, whose solution at its core requires an understanding of simple arithmetic.
2. We demonstrate that NUMGLUE is a challenging benchmark even for state-of-the-art large
scale language models, obtaining poor scores
not only in zero or few shot settings but also after
fine-tuning. This indicates a fundamental barrier
for AI systems; one that needs to be breached
before complex mathematical challenges can be
successfully tackled.
3. Finally, we propose a memory-augmented neural model to demonstrate the utility of such a
multi-task meta dataset. Our proposed model
when trained on the entirety of NUMGLUE obtains an average improvement of 3.4% on each
task as opposed to task-specific training – in
dicating that joint training leads to beneficial
transfer owing to the common theme of arithmetic reasoning.
**2** **Related Work**
**Datasets for Numerical reasoning. Quantitative**
reasoning has been a challenging problem for a
long time. Small question answering datasets were
proposed to understand the quantitative aspect
of natural language such as the template-based
dataset which solved questions with equations
as parameters (Kushman et al., 2014), additionsubtraction dataset (Hosseini et al., 2014) and
arithmetic problems dataset (Koncel-Kedziorski
et al., 2015). Difficulty of questions were increased
in subsequent datasets (Roy and Roth, 2016),
(Upadhyay et al., 2016). Later, larger datasets
were created to facilitate deep learning research
(Ling et al., 2017; Dua et al., 2019b). Several other
maths datasets have been proposed to improve
explainability (Amini et al., 2019), diversity
(Miao et al., 2020), scale information in language
embeddings (Zhang et al.) and hardness of math
questions (Hendrycks et al., 2021).
One of the motivations behind creating this
benchmark is to test for simple arithmetic reasoning independent of the context or the presentation
style of the problem. Further, To the best of
our knowledge, our work is the first to consider
multiple tasks in the numerical reasoning space.
**Multi-Task** **Benchmarks.** With increased
success of deep learning based models on individual tasks, there has been a significant push
both in the NLP community and in the broader
AI community towards general purpose models
that excel at multiple tasks. Naturally, various
benchmarks and challenges that test for such
understanding have been proposed. For instance,
the BAbI dataset (Weston et al., 2015), GLUE
(Wang et al., 2019) and the subsequent harder
SuperGLUE (Wang et al., 2019) were proposed
to both evaluate and drive progress in language
understanding via shared linguistic knowledge
across tasks. McCann et al. (2018) build a multitask dataset via a novel approach – formatting each
task as that of question-answering. In the more
restricted setting of reading comprehension, Dua
et al. (2019a) and Downey and Rumshisky build
a meta-dataset that spans multiple domains and
-----
reasoning skills.
**Multi-task Models.** With the growing interest towards models that go beyond specific
datasets, various neural models that can perform
mutliple tasks have been proposed. When the
underlying reasoning is similar – eg. commonsense
reasoning, problem decomposition or linguistic
understanding – it has been found that training on
multi-task datasets yields more robust and accurate
models. For instance, the Multi-task Question
Answering Network (McCann et al., 2018), T5
(Raffel et al., 2019), GPT3 (Brown et al., 2020)
and GPT3-Instruct models aim to build general
purpose language models that are capable of
transferring linguistic understanding across tasks.
A similar approach is taken by Khashabi et al.
(2020) in the setting of question-answering and
Lourie et al. (2021) in the scope of commonsense
reasoning. Further, Muppet (Aghajanyan et al.,
2021) adds an additional step of pre-finetuning
between pretraining and finetuning that improves
generalization to multiple tasks.
**3** **NUMGLUE**
As mentioned previously, our NUMGLUE benchmark consists of both new and already existing
arithmetic reasoning tasks. We first begin by
introducing the novel datasets curated by us before
providing a brief overview of existing tasks that
are part of NUMGLUE. Finally, in this section, we
provide an analysis of the datasets demonstrating
that it contains interesting and diverse linguistic
and mathematical properties.
**NUMGLUE** **Benchmark.** Our proposed
NUMGLUE benchmark is a collection of eight
different tasks that together include „100K
questions. The tasks may either be self-contained
or require additional background knowledge
(e.g.commonsense reasoning) to arrive at the
final solution; however, all the tasks, at their
core, involve arithmetic reasoning. Table 1
shows an example question belonging to each
task along with indicating the total number
of data points associated with each task. It is
important to note that tasks are imbalanced with
only „400 examples for Task 1 and nearly 50K
questions under Task 5. While we could have
under-sampled the questions to create a balanced
suite, we retain the imbalanced dataset in order
to mimic the real world – for instance, arithmetic
word problems are more abundant as opposed
to word problems that may require commonsense reasoning in addition to arithmetic reasoning.
**Data Partition and Evaluation.** We randomly partition data in each task into training
(70%), development (10%) and test (20%) sets . In
the case of reading comprehension tasks (Task 5
and 6), we assign all questions corresponding to a
passage to the same split – we do this in order to
discourage any data leakage and thereby, allowing
models to potentially rely on memorization to
arrive at the correct answer.
For each task, we report the F1 measure
and as an aggregate measure of performance on
the NUMGLUE benchmark similar to Dua et al.
(2019b), we report the (unweighted) average of the
F1 scores corresponding to each task.
**3.1** **Novel Datasets**
The novel tasks proposed as part of NUMGLUE are
a combination of both freshly collected data and
intelligent modifications of already existing
datasets. The four novel arithmetic reasoning tasks
introduced are as follows [3]:
**Task 1:** **Commonsense + Arithmetic Rea-**
**soning. Consider the following question – How**
_many faces do 10 dice have? Answering this not_
only requires simple arithmetic i.e.multiplying the
number of faces in a die by ten but also requires
knowing that a standard die has six faces. We
collect this dataset by first asking the annotator
to write down a numerical commonsense fact
(e.g.a human has 2 hands, a day has 24 hours etc.)
and then use frame a question that requires using
this numerical fact as part of a simple arithmetic
calculation.
**Task 2: Domain Specific + Arithmetic Reason-**
**ing. How many units of hydrogen are required**
_to produce 10 units of water?_ This question,
similar to the previously introduced task of
arithmetic reasoning questions, requires additional
domain-specific knowledge – specifically, that each
unit of water contains two units of hydrogen. We
3We annotate the datasets manually. We provide the exact
flow used to generate questions of each task in the supplementary materials: Construction of NUMGLUE.
-----
Task Question Setting Size Example
Question: A man can lift one box in each of his hands. How many
TASK 1 Commonsense + Arithmetic 404
boxes can a group of 5 people hold in total? Answer: 10
Question: How many units of H2 are required to react with 2 units
TASK 2 Domain specific + Arithmetic 1620
of C2H4 to form 2 units of C2H6? Answer: 2
Question: A person wants to get shopping done quickly. They
know that they can get through the check-out at big store in 5
minutes whereas it can take 20 minutes at small store. The store
they go to finish quickly is? (A) big store (B) small store? Answer:
big store
TASK 3 Commonsense + Quantitative 807
Question: Joan found 70 seashells on the beach. She gave Sam
TASK 4 Fill-in-the-blanks 1100 some of her seashells. She has 27 seasshells left. She gave _____
seashells to Sam? Answer: 43
TASK 5 RC + Explicit Numerical Reasoning 54212 [Passage: <>. Question: How many counties were added in 1887?]
Answer: 2
TASK 6 RC + Implicit Numerical Reasoning 32724 [Passage: <>. Question: Which player kicked the shortest field]
goal? Answer: David Akers
Statement 1: James took a 3 - hour bike ride, Statement 2: James
TASK 7 Quantitative NLI 9702 took a more than 1 - hour bike ride, Options: Entailment or contradiction or neutral?, Answer: Entailment
Question: Joe had 50 toy cars. If he gives away 12 cars, how many
TASK 8 Arithmetic word problems 1266
cars will he have remaining?, Answer: 38
Table 1: Size and example of each task in the NumGLUE benchmark. RC: Reading Comprehension
curate a dataset of such questions that require both
domain-specific knowledge and arithmetic reasoning motivated by the finding that QA systems
perform poorly on the ARC dataset Clark et al.
(2018) consisting of grade-school level science
questions. Specifically, the dataset collected by us
requires understanding of a small set of chemistry
(conservation of mass in chemical reactions) and
physics principles (speed “ _[distance]{time)._
**Task 3: Commonsense + Quantitative Compar-**
**ison. A golf ball weighs 40g and a baseball weighs**
_150g. Which has a higher gravitational force?_
Answering this question requires both knowing
that mass is directly proportional to gravitational
force and a numerical comparison via subtraction.
We collect such quantitative comparison questions
by using the QuaRel dataset (Tafjord et al., 2019)
containing questions from diverse fields such as
physics and economics as the starting point. The
annotator chooses a subset of these questions that
involve numerically comparable quantities (for
instance, in this example, mass of the objects
involved) to create the required task of quantitative
comparison questions.
**Task 4: Fill-in-the-blanks Format. Unlike the**
previously proposed tasks that require external in
formation (e.g.commonsense knowledge) in addition to simple arithmetic reasoning, this task is selfcontained but a stylistic variant of existing math
word problems. We source word problems from
the Arithmetic Word Problem repository (Roy and
Roth, 2016, 2017, 2018) and convert them into the
fill-in-the-blanks format. For an example of such a
conversion, refer to fig. 1.
**3.2** **Existing Datasets**
We now review existing datasets while discussing
any modifications made when including them
in NUMGLUE. In general, for all the datasets
included, we perform a filtering step to clean
and control for the quality of the data points
being included. This step includes – a) discarding
questions that do not have answer annotations b)
eliminating questions with high lexical overlap
with the remainder of the dataset and c) fixing
any type mismatches present in the data (e.g.“7.0
students” Ñ “7 students”).
**Task 5:** **Reading Comprehension (RC) +**
**Explicit Numerical Reasoning.** We select a
subset from the DROP (Dua et al., 2019b) dataset
to create this task. Specifically, the selected
questions involve reading comprehension and
numerical reasoning but importantly, the required
-----
**4** **Experiments**
In this section, we establish multiple baselines on
our benchmark and discuss their performance.
**4.1** **Baselines**
We evaluate several baselines on our benchmark
– (i) Heuristic, (ii) Zero-shot, (iii) Few-shot, (iv)
Fine-tuning and (v) Human. We use two kinds
of model architectures (i) Neuro-symbolic, a
memory augmented novel architecture that extends
Numnet+v2 (Ran et al., 2019) and (ii) End-to-end,
GPT3 (Brown et al., 2020).
**Architectures. In the multi-task setting where the**
same model is trained on all the NUMGLUE tasks,
we use Reading Comprehension (RC) as the
common format – converting each task to RC
format via a set of hand-coded rules [4]. In addition
to being capable of faithfully representing all
the constituent tasks, the RC format also allows
us to inject additional context in the IR setting
without affecting the rest of the pipeline [5]. On
the other hand, GPT3 being a generative model
does not require such modifications. Importantly,
note that both models are inputted the exact same
information for the multi-task experiments.
**Heuristic Baselines with Task Oracle.** For
this baseline, we assume a task oracle that knows
the task a particular question belongs (in a multitask setting) – we use this to make our heuristic
baselines more competitive. The first heuristic
baseline is random: we randomly select one of the
options in case the question has multiple options
(task 3 and 7), a number between 0 to 100 for
questions having a numerical answer and a random
entity present in the passage for questions having a
text segment from the passage as the answer. In
the majority baseline, we select the most frequent
answer for each task such as "Entailment" for
NLI questions and similarly, the most frequent
number for questions having numerical answer
and the major entity present in the passage for
questions having span based answer. As the task
information is known, we include these baselines
under task-specific baselines when discussing
results.
4More details in the supplementary material: Ex-NumNet
5Henceforth we will be calling our extension to Numnet+v2 as Ex-NumNet
answer is also a number.
**Task 6:** **Reading Comprehension (RC) +**
**Implicit Numerical Reasoning.** Consider the
following question based on a relevant passage –
_Which state has the highest income tax rate? Here,_
while the final answer is a name, arriving at it
requires performing comparison (i.e.subtraction).
We classify such questions in the DROP dataset as
a separate task in NUMGLUE.
**Task 7: Quantitative NLI EQUATE (Ravichan-**
der et al., 2019) introduces quantitative NLI
questions that require simple arithmetic calculations to be performed in order to accurately classify
the relationship between the provided premise and
the hypothesis. As noted in fig. 1, many word
problems can also be easily converted to this
format and is therefore, a diverse and interesting
task for evaluating arithmetic reasoning skills of
AI systems.
**Task 8: Arithmetic Word Problems Finally, we**
arrive at one of the earliest and extensively studied
class of arithmetic reasoning problems i.e.word
problems. The specific dataset included as part of
our NUMGLUEbenchmark is a combination of
multiple datasets proposed by Koncel-Kedziorski
et al. (2016), (Koncel-Kedziorski et al., 2015) and
Kushman et al. (2014). Further, to ensure that the
benchmark as a whole is diverse, we eliminate
questions that have a high sentence similarity with
questions from the fill-in-the-blanks task.
**3.3** **Data Quality Analysis:**
In order to ensure a high-quality test set, three independent annotators evaluate each question in the
test set across all tasks. A tiny porton of the data
marked as invalid or with disagreement between
the annotators was excluded, resulting in a verified,
high-quality NUMGLUE evaluation suite. We also
perform a variety of analysis and find that the novel
question tasks we created (task 1-4) have higher
quality than the existing question tasks since they
have higher average vocabulary (number of unique
words per number of samples), higher number of
unique nouns, verbs and other POS tags and have
less semantic textual similarity among each other
(indicating lower repetition). Detailed analysis can
be found in the supplementary material: Data Quality Analysis of NUMGLUE.
-----
Figure 2: Performance of zeroshot, fewshot and finetuning baselines (Section 4) across NumGLUE. There is
a signficant gap between the highest performing model and the human baseline. ZS: Zeroshot, GPT3I: GPT3Instruct, MT: Multi-task, TS: Task-specific, QO: Question Only, CO: Context Only, EXNN: Ex-NumNet,FS: Fewshot, OS: Oversampling, IR: Information Retrieval, CIR: Conditional Information Retrieval.
|they were|playing with?|
|---|---|
|Passage: A group of boys decided to play a game of poker and kept 8||
Question: Find the count of cards
Reading
they were playing with?
Comprehension
Convertor Passage: A group of boys decided
A group of boys decided to play a to play a game of poker and kept 8 Extended 44
game of poker and kept 8 cards cards away. NumNet+v2
away. Find the count of cards they
_Answer_
were playing with? MATHKB
There are 52 cards in a card deck.
Retrieval
_Original Question_
_Retrieved Fact_
Figure 3: Our proposed memory-augmented model that detects the type of task (1-8), uses Information Retrieval
from MATH KB and append the information that gets fed to Ex-NumNet
**Zeroshot and Fewshot Baselines.** We use
GPT3 (Brown et al., 2020) and the more recent
GPT3-Instruct[6]. We have two types of few shot
baseline (i) task specific and (ii) multi task. In
case of task specific fewshot baseline, instances
of the same task are used as in-context examples
(Brown et al., 2020) whereas in case of multitask
few shot baseline, instances from all tasks are
used to condition the model. Multitask fewshot is
naturally a harder setting as it is task-agnostic. We
use default parameters in GPT3 and GPT3-Instruct.
In few-shot setting, we experiment after feeding as
many examples as it can fit within the tokensize.
For few shot experiments, we randomly select
6newly released by OpenAI as part of the GPT3 finetuned
series
examples and averaged the results over 5 runs.
**Fine-tuning** **Baselines.** We first consider
variations of the fine-tuning baselines in the
context of our neuro-symbolic model, Ex-NumNet.
We use it as bias-checking baseline – to ensure
that solving the benchmark correctly requires
considering all of the information presented to
it. To this end, we evaluate the performance of
our model when finetuned only on the question
(Q-only) or the context (C-only). Next, we present
task-specific and multi-task baselines where
Ex-NumNet is fine-tuned on individual tasks and
the entire NUMGLUE benchmark respectively.
With the goal of addressing the data imbalance
across the tasks, we include an oversampling
-----
Learning Baseline Baseline Task 1 Task 2 Task 3 Task 4 Task 5 Task 6 Task 7 Task 8 NumGLUE
category name Score
Task-specific Random 0 0.3 46.9 0 0.5 3.4 33 0.4 10.6
HEURISTIC
Task-specific Majority 1.2 13.9 50 0.5 7.4 3.8 36.5 1.2 14.3
- GPT3 0 1 11 2 0 17 6 2 4.9
ZERO-SHOT
- GPT3-Instruct 2 1 7 3 3 29 17 3 8.1
Task-specific GPT3 **44** **42** 46 40 10 42 35 40 37.4
Task-specific GPT3-Instruct 40 39 51 33 13 43 35 33 35.9
Multi-task GPT3 0 3 27 1 7 28 30 4 12.5
Multi-task GPT3-Instruct 1 2 37 2 6 35 31 7 15.1
FEW-SHOT
FINE-TUNING Multi-task GPT3-13B 21.5 40.7 **71.2** 11.1 6.3 48.2 48.0 14.2 32.7
Multi-task (Q-only) Ex-NumNet 1.2 13.2 25.1 0.5 6.1 25.1 32.8 2.4 13.3
Multi-task (C-only) Ex-NumNet 1.2 14.2 22.8 19.1 0.6 3 0 9.5 8.8
Single-task Ex-NumNet 0 37.8 50.8 22.2 66.6 **71.6** 85.9 12.2 43.4
Multi-task Ex-NumNet 0 37.5 58 31.4 68.2 70.2 85.7 23.2 46.8
Multi-task + IR Ex-NumNet 5.6 37.5 46.6 36.4 68.6 69.6 **85.9** 22.4 46.6
Multi-task + CIR Ex-NumNet 7.4 38.8 58 **36.8** **69.2** 70.8 85.8 **23.6** **48.8**
Multi-task + OS Ex-NumNet 7.4 38.8 47.8 35.9 44.3 53.7 85.4 22.4 42.0
- Human 94.4 94.5 97.8 95 94.7 96.1 96.5 92.8 95.2
FINE-TUNING
Table 2: F1 performance of various baselines on the NumGLUE test set across various tasks 1-8. Human performance was calculated on 100 samples of each task (81 of Task 1) [*IR = Information Retrieval, CIR=Conditional
Information Retrieval, OS=Oversampling, Q. Only: Question Only, C. Only: Context Only ].
provided by OpenAI [7]) in the multi-task setting i.e.
the desired setting of the NUMGLUE benchmark.
**Human Baseline.** Human baseline was calculated on 100 test set samples of each task (81 of
Task 1) by averaging the scores of four annotators.
**5** **Results and Discussion**
Table 2 shows the performance of various baseline
models on the test set of our benchmark. Note
that the performance of all baseline models is
significantly lesser than the human baseline (Figure
2). We now discuss various insights based on these
results.
**Does the benchmark contain bias that a**
**model can exploit?** A challenging dataset
requires the model to ideally consider all the
information provided to it before arriving at an
answer. To ensure that this is indeed the case, we
perform ablations where only one portion of the
input is provided i.e. either the question or the
context. Both these “bias-checking” baselines
perform poorly even in task-specific setting –
indicating that both the benchmark and constituent
tasks are challenging.
**Which Tasks are Hard to Solve?** Our results show that task 1 which requires numerical
commonsense knowledge, is the hardest task to
solve. Similarly, tasks 2, 4 and 8 appear to be
7https://beta.openai.com/docs/guides/fine-tuning
baseline that oversamples data from tasks with
limited data so as to ensure that the model sees the
same number of examples from each constituent
task.
In addition, we propose a new architectural
modification to Ex-NumNet. Noting that our baseline model Ex-NumNet does not take into account
external knowledge, we create a new enhanced
architecture in the form of a memory-augmented
model that does Information Retrieval (IR) (Khot
et al., 2019) with respect to a knowledge base
we create, MATH KB to identify the needed
knowledge. This is inspired by the observation
that formula book and mathematical knowledge
make the task easier for humans while solving
math questions of various types. We then use this
knowledge in the Ex-NumNet setting. Figure 3
illustrates our approach which leverages our newly
created knowledge base MATH KB. Conditional
IR model is different from the regular IR model in
the sense that, IR is performed only for questions
of task 1, 2 and 4, since they require external
knowledge to get answered. More details about the
model and the IR process can be found in supplementary material: Proposed Memory-Augmented
Model (A.5 and A.6).
Finally, we discuss fine-tuning baselines in
the context of end-to-end models, specifically
GPT3. We finetune the GPT3-13B model (for
which the finetuning capability has been recently
-----
comparatively harder from the rest. One pattern
among these tasks is that all of them expect the
answer to be numeric. Numeric answer requires
accurate calculation. So, models might have
difficulty in learning the task directly from data.
This hypothesis is also justified from the slight
drop in human performance in these tasks..
On the other hand, task 7 has the best performance
among all. Further, we see that performance on
task 6 is slightly better than task 5 – although
both tasks are sourced from the same dataset, we
observe that models answer span based questions
better as compared to numeric answers. Relatively
higher performance for task 3 suggests that models
find it easier to answer in an MCQ setting.
**Does IR Help?** Results show that knowledge help in improving performance of tasks 1,
2 and 4 – where indeed, external knowledge like
commonsense or domain-specific knowledge is
needed in addition to arithmetic reasoning to arrive
at the correct answer. However, task 3 is an exception to this trend and in fact registers a drop in the
score when provided with (unnecessary) additional
information; we find that this shortcoming is
fixed when using conditional information retrieval
(CIR) which in fact leads to the strongest baseline
presented in this work.
**Does** **Oversampling** **help** **overcome** **data**
**imbalance across tasks? Even though oversam-**
pling results in higher performance in certain
tasks (in comparison with the multitask baseline),
specifically the ones with smaller training data, it
results in significant drop in performance in the
other extreme, i.e tasks with bigger training data.
Also, it never performs better than the Conditional
IR module in multitask setting.
**5.1** **Error Analysis**
We now present an analysis of the errors made
by our baselines to indicate potential avenues for
future research.
We analyze errors associated with 50 samples each of the 8 tasks and find that there are
mainly 4 categories of error models make: (1)
producing invalid output (e.g. answering text
where the answer is supposed to be a number,
answering a text different from the classes allowed
in a classification problem), (2) copying a number
Error Ex-NumNet GPT3
Invalid output 16 % 7%
Copy number 5 % 3%
Incorrect calculation 71 % 56%
Redundant text 8 % 34%
Table 3: Error analysis for the best Ex-NumNet Multitask+CIR and GPT3 Task-specific model
from the question instead of calculating the answer,
(3) incorrect calculation – this can be due to
multiple reasons including (i) using an incorrect
operation e.g. subtraction in place of addition,
(ii) incorrect parsing of numbers or (iii) incorrect
knowledge of numerical commonsense facts. (4)
producing redundant text after producing correct
answer. Based on error distribution in Table 3,
we observe that the majority of errors come from
incorrect calculation. Further, GPT3 is better than
Ex NumNet+v2 in producing valid outputs, but it
produces more redundant text.
**Future** **Directions:** **Bigger** **model,** **more**
**data or . . . ?** Table 2 shows that fine-tuned
GPT3-13B outperforms other baselines on task 1,
2 and 3. Recall that these tasks require external
knowledge and perhaps, this is the reason why
GPT3, already pre-trained on a diverse web-scale
text corpus has an edge over other baselines on
these tasks. In case of the smaller Ex-NumNet, it is
interesting that multitask baselines are higher than
the single task baselines by 3.4% on average and
that information retrieval helps in tasks that require
external knowledge. Also notice that, GPT-3 is
better on smaller datasets and NumNet is better
on large datasets. This may indicate that GPT-3
is a better few-shot learner but not necessarily a
better many-shot learner. This non-overlapping
performance of GPT-3 and Ex-numnet, end-to-end
and neuro-symbolic models respectively, indicates
that a potential future direction for research is to
combine the best of both the models.
**6** **Conclusion**
We propose NUMGLUE, a multi-task benchmark
to test for arithmetic understanding. Our benchmark consists of eight tasks including four new
ones. While some of the tasks require external
knowledge like commonsense or domain-specific
information in addition to arithmetic reasoning,
some are self-contained e.g. arithmetic word problems. Further, we demonstrate that our benchmark
-----
is far from being solved – with state-of-the-art large
scale models achieving considerably lower performance than humans. This indicates that current
AI systems are incapable of performing simple
arithmetic reasoning in a general setting – indicating a fundamental hurdle towards AI systems that
understand complex mathematical concepts like
differential equations or combinatorics. Finally,
we present various baselines including a novel architecture (memory augmented Ex-NumNet) that
demonstrate the advantages of various modeling
choices (e.g. end-to-end vs neuro-symbolic models). Specifically, we show that training in the
multi-task setting leads to meaningful sharing of
knowledge across tasks as evidenced by an average
gain of 3.4% on tasks compared to task-specific
modeling. Finally, we hope that our benchmark
not only leads to AI systems that are capable of
performing simple arithmetic reasoning in a fairly
general setting but also results in progress towards
more complex mathematical reasoning capability.
**Acknowledgements**
We thank OpenAI for providing academic access
to the GPT3 API, the Aristo team at AI2 for helpful input, the Beaker team for their support with
experiments and the anonymous reviewers for their
insightful feedback. The support of DARPA SAILON, DARPA CHESS program is gratefully acknowledged.
**Ethical Considerations**
We have verified that all licenses of source datasets
used in this paper allow for their use, modification, and redistribution in a research context. The
dataset will be distributed in a manner similar to
SuperGLUE (Wang et al., 2019) i.e. give full credit
assignment to the original data and task creators.
**References**
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal
Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. _arXiv preprint_
_arXiv:2101.11038._
Aida Amini, Saadia Gabriel, Peter Lin, Rik KoncelKedziorski, Yejin Choi, and Hannaneh Hajishirzi.
2019. Mathqa: Towards interpretable math word
problem solving with operation-based formalisms.
_arXiv preprint arXiv:1905.13319._
Anjana Arunkumar, Swaroop Mishra, Bhavdeep
Sachdeva, Chitta Baral, and Chris Bryan. 2020.
Real-time visual feedback for educative benchmark
creation: A human-and-metric-in-the-loop workflow.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry,
Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon
Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu,
Clemens Winter, Chris Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In
_Advances in Neural Information Processing Systems,_
volume 33, pages 1877–1901. Curran Associates,
Inc.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv
_preprint arXiv:1803.05457._
Anna Rogers Olga Kovaleva Matthew Downey and
Anna Rumshisky. Getting closer to ai complete
question answering: A set of prerequisite real tasks.
Dheeru Dua, Ananth Gottumukkala, Alon Talmor,
Sameer Singh, and Matt Gardner. 2019a. Orb: An
open reading benchmark for comprehensive evaluation of machine reading comprehension. _arXiv_
_preprint arXiv:1912.12598._
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019b.
Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv
_preprint arXiv:1903.00161._
Suchin Gururangan, Swabha Swayamdipta, Omer
Levy, Roy Schwartz, Samuel R Bowman, and
Noah A Smith. 2018. Annotation artifacts in
natural language inference data. _arXiv preprint_
_arXiv:1803.02324._
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. In International Conference
_on Learning Representations._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. _arXiv_
_preprint arXiv:2103.03874._
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In In Conference on Empirical Methods in
_Natural Language Processing (EMNLP._
-----
Daniel Khashabi, Tushar Khot, Ashish Sabharwal,
Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single qa system. _arXiv preprint_
_arXiv:2005.00700._
Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019.
What’s missing: A knowledge gap guided approach
for multi-hop question answering. _arXiv preprint_
_arXiv:1909.09253._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In Proceedings of
_the 2016 Conference of the North American Chap-_
_ter of the Association for Computational Linguistics:_
_Human Language Technologies, pages 1152–1157._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. arXiv preprint arXiv:1705.04146.
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula,
and Yejin Choi. 2021. Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark. In Proceedings of the AAAI Con_ference on Artificial Intelligence, volume 35, pages_
13480–13488.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong,
and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering.
_arXiv preprint arXiv:1806.08730._
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and develop-](https://doi.org/10.18653/v1/2020.acl-main.92)
[ing English math word problem solvers. In Proceed-](https://doi.org/10.18653/v1/2020.acl-main.92)
_ings of the 58th Annual Meeting of the Association_
_for Computational Linguistics, pages 975–984, On-_
line. Association for Computational Linguistics.
Swaroop Mishra, Anjana Arunkumar, Chris Bryan, and
Chitta Baral. 2020a. Our evaluation metric needs an
update to encourage generalization. arXiv preprint
_arXiv:2007.06898._
Swaroop Mishra, Anjana Arunkumar, Bhavdeep
Sachdeva, Chris Bryan, and Chitta Baral. 2020b.
Dqi: Measuring data quality in nlp. arXiv preprint
_arXiv:2005.00816._
Swaroop Mishra and Bhavdeep Singh Sachdeva. 2020.
[Do we need to create big datasets to learn a task? In](https://doi.org/10.18653/v1/2020.sustainlp-1.23)
_Proceedings of SustaiNLP: Workshop on Simple and_
_Efficient Natural Language Processing, pages 169–_
173, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2019. Exploring the limits
of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan
Liu. 2019. Numnet: Machine reading comprehension with numerical reasoning. arXiv preprint
_arXiv:1910.06701._
Abhilasha Ravichander, Aakanksha Naik, Carolyn
Rose, and Eduard Hovy. 2019. Equate: A benchmark evaluation framework for quantitative reasoning in natural language inference. _arXiv preprint_
_arXiv:1901.03735._
Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. _arXiv preprint_
_arXiv:1608.01413._
Subhro Roy and Dan Roth. 2017. Unit dependency
graph and its application to arithmetic word problem
solving. In Thirty-First AAAI Conference on Artifi_cial Intelligence._
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transac_tions of the Association for Computational Linguis-_
_tics, 6:159–172._
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. _arXiv preprint_
_arXiv:1904.01557._
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie,
Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith,
and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics.
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 9275–9293.
Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau
Yih, and Ashish Sabharwal. 2019. Quarel: A dataset
and models for answering questions about qualitative relationships. In Proceedings of the AAAI Con_ference on Artificial Intelligence, volume 33, pages_
7063–7071.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang,
and Wen-tau Yih. 2016. Learning from explicit and
implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on
_Empirical Methods in Natural Language Processing,_
pages 297–306.
-----
Alex Wang, Yada Pruksachatkun, Nikita Nangia,
Amanpreet Singh, Julian Michael, Felix Hill, Omer
Levy, and Samuel Bowman. 2019. Superglue: A
stickier benchmark for general-purpose language understanding systems. In Advances in Neural Infor_mation Processing Systems, pages 3261–3275._
Alex Wang, Amanpreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R Bowman. 2018.
Glue: A multi-task benchmark and analysis platform
for natural language understanding. arXiv preprint
_arXiv:1804.07461._
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin,
and Tomas Mikolov. 2015. Towards ai-complete
question answering: A set of prerequisite toy tasks.
_arXiv preprint arXiv:1502.05698._
Xikun Zhang, Deepak Ramachandran, Ian Tenney,
Yanai Elazar, and Dan Roth. Do language embeddings capture scales?
-----
**A** **Supplemental Material**
**A.1** **NUMGLUE vs Other Datasets:**
As figure 4 shows, we select each task from one of
the clusters of numerical reasoning datasets (except
the multi-model reasoning cluster since we wanted
to limit our dataset to text only).
**A.2** **Construction of NUMGLUE :**
Figure 5 and 6 illustrate detailed data creation process for task 1, task 2, task 3 and task 4 questions
with the help of an example for each task. We follow the same procedure for creating other examples
within the task.
**A.3** **GPT3-Instruct’s Response**
We used GPT3-Instruct on various forms of a simple arithmetic question. An expert did tuning of
various parameteres such as temperature, stop condition, presence penalty, engine, maximum token
size. However, GPT3-Instruct still could not solve
the basic aritmetic questions reliabily (Figures 7
11).
**A.4** **Data Quality Analysis of NumGLUE**
In this section, we discuss various linguistic and
statistical properties of our benchmark; ones
that we believe result in the quality, diversity
and challenging nature (Gururangan et al., 2018;
Mishra et al., 2020b; Mishra and Sachdeva,
2020; Swayamdipta et al., 2020; Mishra et al.,
2020a; Arunkumar et al., 2020) of the proposed
NUMGLUE benchmark.
**Vocabulary Size.** First, we calculate vocabulary size of each task by finding the number
of unique words across all questions. Since our
dataset is unbalanced in terms of question task,
we find the average vocabulary size by dividing
vocabulary size with number of data in that task.
_Which_ _Data_ _has_ _Higher_ _Average_ _Vocabu-_
_lary? As illustrated in Figure 12a, most of the_
tasks belonging to the novel dataset category
have relatively better average vocabulary size.
This implies questions in those tasks have less
repetitiveness. Furthermore, we expand our
vocabulary analysis to understand Figure 12a
better. We dive deep to analyze different parts
of speech. Figure 12b summarises our analysis.
Most of the novel datasets have more average
number of nouns, verbs and adjectives implying
there are more varieties of entities, actions and
attributes. This further means that datasets belonging to the novel category are more diverse in nature.
**Sentence** **Similarity** **Analysis** We extend
our analysis to reinforce our inference from the
word vocabulary analysis. We find Semantic
Textual Similarity (STS) of a sentence with every
other sentence.
_Which_ _Data_ _Consists_ _of_ _Most_ _Dissimilar_
_Sentences? As depicted by Figure 12c-12f, most_
questions in QuaRel have high similarity value
with other questions indicating the repetitiveness
of data. Same is true for majority of EQUATE
data. DROP also has high similarity among
questions. However, similarity among questions in
our dataset is significantly less. Some similarity
boxes can be seen in the chart. They are mostly
due to task 2 data, and partly due to task 3 data.
Lesser similarity implies that our dataset is far
less repetitive than others. Also, the repetition in
our dataset is sparse and is not equally distributed
among the whole dataset unlike others. This way,
our dataset is more diverse.
Note that question in Task 2 have lower vocabulary and further, a higher similarity as well.
As a small set of chemistry and physics principles
are used to generate questions, the result is a fairly
templated or uniform-looking dataset – leading to
the observed reversal of trends in this particular
task.
**A.5** **Ex-NumNet**
Figure 13 illustrates our baseline model: ExNumNet. This contains a Reading Comprehension Converter module which converts each task of
question to reading comprehension format. Figure
14 illustrates various examples of how each task
of questions get converted to the reading comprehension format. We add a task converter module
to detect task of a question. We design task converter heuristically based on the features associated
with questions (e.g. NLI contains "Sentence 1"
and "Sentence 2" whereas completion contains a
blank). We convert each of the tasks to RC format.
For NLI questions, we use the premise sentence
as passage, hypothesis as the question and append
-----
_Complex Reasoning_ _College Physics_
_+ Complex Math_ _(Hendrycks et al. 2021)_
_Math_ _Complex_
_Multimodal Numerical_ _Numbersense (Zhang et al. Machine_ _(Mitra & Puzzles Baral_ _Complex Reasoning_ _(Hendrycks et al. MATH 2021)_ _Theorem Math Problems_
_Reasoning_ _2020)_ _2015)_ _Proving_
_(Polu and_
_Geometry_ _Mathematics_
_Sutskever._
_(Seo et al._ _(Saxton et al._
_2020)_
_2015)_ T5 T6 _2019_ _)_
T8
_MATHQA_ T7 _Math23K_
_(Amini et_ T2 T4 _(Wang et al._ _HMWP (Qin_
_Domain-_ _al. 2019)_ _NumGLUE_ _2017)_ _et al. 2020)_
_specific Reasoning_ _ARC (Clark et al 2018)_ T3 T1 _(Huang et Dolphin_ _Word Math Problems_
_al. 2016)_
_Numersense_
_MCTaco_ _(Lin et al._
_(Zhou et al_ _Scales_ _2020)_
_2019)_
_Commonsense_ _(Zhang et al._ _Commonsense +_
_Reasoning_ _2020)_ _Knoweldge of Facts_
_Reading_
_Question Answering_ _Classification_ _Comprehension_
_NLI_ _Probes_ _Fill in the Blank_
Figure 4: Our dataset NUMGLUE (center in the yellow circle) has been positioned with respect to existing datasets.
T1-T8 represents 8 tasks. Note that, NUMGLUE contains the feature of being format invariant unlike other
datasets. Position of datasets within clusters is done based on their semantic category, for example T1 Numerical Commonsense QA is closer to the cluster of Commonsense Reasoning + Knowledge of Facts; its position
reflects the same
Figure 5: Step by step data creation process for task 1, 2 and 4 questions
the string “Entailment, contradiction or neutral?”
to the question so that it has a span based answer.
For other questions, we tokenize the question string
into its constituent sentences and use a heuristic approach to split the question string into passage and
question. Furthermore, for option based questions,
we append all the options at the end of the question.
**A.6** **Proposed Memory-Augmented Model**
Figure 13 illustrates our baseline model ExNumNet. We add an IR mechanism as described
-----
3 of the main paper illustrates our approach.
**Algorithm 1: Our Information Retrieval**
Approach
**Input: Dataset D, MATH KB K**
**Hyper-Parameters: Z, v, th, b**
**Output: v Knowledge sentences**
**1 forall s P D do**
**2** Concat Question and Answer ;
**3** Generate Query by retaining only verbs,
adjectives and adverbs;
**4** **forall j P K do**
**5** Create Index using Elastic Search ;
**6** Retrieve top Z sentences from MATH KB.
**7** **end**
**8** **while size(Z)ąv do**
**9** **forall k P Z do**
**10** **forall u P k ´ 1 do**
**11** **if STS(Z(u),Z(k))ąth then**
**12** Delete k;
**13** **end**
**14** **end**
**15** **end**
**16** th=th-b;
**17** **end**
**18 end**
**A.7** **Hyper Parameters Used**
All the experiments were ran with the following
hyper parameters, batch size was kept at 16 where
as the eval batch size was 5. The maximum number
of epoch ran for the experiments were 5 with the
warm-up kept at 0.06. The learning rate used was
1.5e-5 and the weight decay was 0.01.
All above hyper parameters were selected using
a grid search; we kept rest of the hyper parameters
unaltered. All the experiments were performed on
"TeslaV100-SXM2-16GB", with which the model
takes 24hrs to train on nearly 100k samples.
**A.8** **Additional Examples**
We provide additional examples of task 1, 2, 3 and 4
questions here to better illustrate the novel datasets
we have created as part of our NUMGLUE.
Figure 6: Step by step data creation process for task 3
questions
in Algorithm 1 and illustrated in Figure 3 of the
main paper. As mentioned in the ‘Baselines’ subsection (Experiments section) of the main paper,
we convert each task to RC format in our baseline
and append the knowledge retrieved using IR from
_MATH KB at the end of the passage. In our exper-_
iments, we use the following hyperparameters in
the IR process: Z “ 50, v “ 10, th “ 0.75 and
_b “ 0.1._
**Formalization Let D represents dataset, s rep-**
resents sample, K represent the MATH KB, v represents the number of knowledge statements retrieved
for each sample, th is the cut off STS (Semantic
Textual Similarity) value above which knowledge
statements are treated redundant and removed, b is
the reduction we do iteratively on th until v statements remain.
We create a knowledge base, MATH KB by
accumulating all tasks of external knowledge
which are needed to solve questions of various
tasks (e.g. human has 2 hands, cow has 4 legs,
there are 24 hours in a day etc..). We also add
math formulae required to solve questions in our
benchmark (e.g. the formula of speed in terms
of distance and time). We add alll these in the
form of plain text separated by new line. We
use Elasticsearch to retrieve relevant knowledge
sentences. We further filter them using a heuristic
threshold of relevance. We append this knowledge
in the beginning of the passage so that continuity is
not broken between passage and question. Figure
-----
Figure 7: GPT3-Instruct’s response to a simple numerical reasoning question.
Figure 8: GPT3-Instruct’s response to a simple numerical reasoning question expressed in fill in the blanks format.
-----
Figure 9: GPT3-Instruct’s response to a simple numerical reasoning question expressed in fill in the blanks format
where numbers are changed.
Figure 10: GPT3-Instruct’s response to a simple numerical reasoning question expressed in comparison format.
-----
Figure 11: GPT3-Instruct’s response to a simple numerical reasoning question expressed in NLI format.
ing a game that re- faces **Question** **Knowledge Required** **Answer**
|Question|Knowledge Required|Answer|
|---|---|---|
|Find the mass percent- age of H in C6H6|Mass of C is 12 units and mass of H is 1 unit|7.69|
|How many units of H2 are required to re- act with 2 units of C2H4 to form 2 units of C2H6|H2 + C2H4 = C2H6|2|
|A car covers 912 me- ters in 19 seconds. If bike’s speed is one fourth of the car. Find the distance covered by the bike in 4 sec- onds.|distance trav- elled = speed * time|48|
Table 5: Example questions where domain knowledge
is required to answer a question.
|Question|Knowledge Required|Answer|
|---|---|---|
|Ella and Lily are play- ing a game that re- quires 10 die. Find out the total number of faces in 10 die.|A die has 6 faces|60|
|Jacob and Lillian are running a km long race. Jacob finished the race when Lil- lian was 190 meters from the finish line. How many meters did Lillian cover till that time?|1000 meters make a km|810|
|A man can lift one box in each of his hands. How many boxes can a group of 5 people hold in total?|A human being has 2 hands|10|
Table 4: Example questions where numerical knowledge required to answer is not explicitly provided in
the question.
-----
(b) Average number of unique Part of Speech (POS) tags is
higher for task 1 and task 4 in the novel datasets in contrast to
other tasks.
(a) Average vocabulary represents the average number of
unique words across various tasks. On an average, novel
datasets (task 1-4) have higher vocabulary.
(c) STS plot for the QuaReL
dataset shows significant repetition across samples
(d) STS plot for the EQUATE
dataset shows considerable repetition across samples.
(e) STS plot for the DROP
dataset shows repetitions for
most part of the data.
(f) STS plot for the novel
datasets show relatively lower
repetition than other datasets
Figure 12: Data quality analysis of NUMGLUE across various tasks of data. On an average, novel datasets have
higher quality than the others since they have higher average vocabulary, higher average POS tag numbers and
lower Semantic Textual Similarity (STS) among each other. X-axis and Y-axis represents samples ordered in the
same way, an ideal high quality dataset would have a bright line in the diagonal and rest of the places it should be
dark signifying lower repetition across instances.
Question: Find the count of cards
A group of boys decided to play a Reading they were playing with?
game of poker and kept 8 cards Comprehension Extended 44
away. Find the count of cards they Convertor Passage: A group of boys decided NumNet+v2
were playing with? to play a game of poker and kept 8 _Answer_
cards away.
_Original Question_
Figure 13: Architecture of Ex-NumNet
Figure 14: Conversion of various tasks to reading comprehension format
-----
|QuaRel Question|Transformed Question|
|---|---|
|A person wants to get shopping done quickly. They know that they can get through the checkout at big store faster than they can at small store. The store they go to to finish quickly is (A) big store (B) small store|A person wants to get shopping done quickly. They know that they can get through the checkout at big store in 5 min- utes whereas it can take 20 mintues at small store. The store they go to to fin- ish quickly is (A) big store (B) small store|
|Tina is racing her two dogs. Her greyhound is slim, her rottweiler is heavy. The dog that gets faster more quickly is the (A) rottweiler (B) grey- hound|Tina is racing her two dogs. Her greyhound weighs 88 lbs and her rot- tweiler weighs 79 lbs. The dog that gets faster more quickly is the (A) rottweiler (B) grey- hound|
|A golf ball has a smaller mass then a baseball. Which item has a weaker gravitational field? (A) golf ball (B) baseball|A golf ball has a mass of 78 grams and a baseball has a mass of 0.159 Kg. Which item has a weaker gravitational field? (A) golf ball (B) baseball|
Table 6: Examples showing conversion of QuaRel questions to quantitative comparison questions
|Arithmetic Word Problem|Transformed Question|
|---|---|
|Joan found 70 seashells on the beach. She gave Sam some of her seashells. She has 27 seashell left. How many seashells did she give to Sam ? 43|Joan found 70 seashells on the beach . She gave Sam some of her seashells . She has 27 seashells left. She gave seashells to Sam. 43|
|Last week Tom had 74 dol- lars. He washed cars over the weekend and now has 86 dollars. How much money did he make wash- ing cars ? 12|Last week Tom had 74 dol- lars. He washed cars over the weekend and made an- other 86 dollars. Tom has dollars now . 160|
**Arithmetic Word Problem** **Transformed Question**
Joan found 70 seashells Joan found 70 seashells
on the beach. She gave on the beach . She gave
Sam some of her seashells. Sam some of her seashells
She has 27 seashell left. . She has 27 seashells left.
How many seashells did She gave seashells
she give to Sam ? 43 to Sam. 43
Last week Tom had 74 dol- Last week Tom had 74 dollars. He washed cars over lars. He washed cars over
the weekend and now has the weekend and made an86 dollars. How much other 86 dollars. Tom has
money did he make wash- dollars now . 160
ing cars ? 12
Table 7: Examples showing MAWPS questions and corresponding questions in Completion format
-----
| [
"Arindam, Mitra",
"Swaroop, Mishra",
"Neeraj, Varshney",
"Bhavdeep, Sachdeva",
"Chitta, Baral",
"Peter, Clark",
"Ashwin, Kalyan"
] | 2022-04-12T00:00:00 | ACL 2022 Long Papers | true | 91 | 11 | null | http://arxiv.org/abs/2204.05660 | https://arxiv.org/abs/2204.05660 | https://www.semanticscholar.org/paper/39238a92de090c104936a4f78375b95600e42ce5 |
Teaching Algorithmic Reasoning via In-context Learning | Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines. | This work shows that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which it is referred to as algorithmic prompting, and evaluates the approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrates significant boosts in performance. | ## Teaching Algorithmic Reasoning via In-context Learning
Hattie Zhou[*1], Azade Nova[2], Hugo Larochelle[2], Aaron Courville[1], Behnam Neyshabur[†2], and Hanie Sedghi[†][2]
1Mila, Universit´e de Montreal
2Google Research
**Abstract**
Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model
and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing
a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. (2022)
showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and
study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills,
(2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs
via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting
techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of
approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines.
### 1 Introduction
Large language models (LLMs) have shown impressive progress in recent years, driven by the scaling up of models and training data sizes (Kaplan et al., 2020; Wei et al., 2022a; Hoffmann et al., 2022) that has led to improved
performance and sample efficiency (Brown et al., 2020; Chen et al., 2021; Chowdhery et al., 2022). One area with significant room for improvement is the ability of LLMs to perform complex reasoning tasks. In this realm, mathematical
reasoning (Saxton et al., 2019) provides a unique challenge as a domain. It requires the ability to parse, to logically
deconstruct a problem into sub-problems and recombine them, and to apply knowledge of rules, transformations,
processes, and axioms.
The idea of providing a rationale with the final answer was first proposed by Ling et al. (2017) and recently
revived for LLMs in the form of scratchpad (Nye et al., 2021) and chain-of-thought (Wei et al., 2022b). It has led to
improvements in performance on multi-step reasoning problems (Wang et al., 2019) such as arithmetic, commonsense,
and symbolic reasoning tasks (Nye et al., 2021; Wei et al., 2022b; Lewkowycz et al., 2022a; Wang et al., 2022a,b;
Anil et al., 2022; Zhou et al., 2022). However, despite significant progress, these models still struggle with out-ofdistribution (OOD) generalization on reasoning tasks (Nogueira et al., 2021; Kim et al., 2021; Anil et al., 2022).
To successfully generalize out-of-distribution on many of these reasoning tasks, the model needs to learn the
underlying algorithm for solving a task. We refer to this behavior as algorithmic reasoning (Kaiser and Sutskever,
2015; Veliˇckovi´c and Blundell, 2021). While following an algorithm can be seen as a form of instruction following,
algorithms are generally more complex with a larger number of steps, though each step of the algorithm may be
simpler and more concise than typical instructions. The benefit of being able to learn algorithms is that since they are
input independent by nature, they are immune to OOD performance degradation when executed properly. Moreover,
algorithms can be specified without ambiguity and hence provide a good test bed to probe model capabilities.
*Work done while interning at Google Research
†Equal advising
-----
Teaching an Algorithm as a Skill Skill Accumulation Skill Composition Using Skills as Tools
```
Q: 128 + 367 = ? Q: 128 + 367 = ? Q: 128 + 367 = ? Q: 128 + 367 = ?
A: [AP] A: [AP] A: [AP] A: [AP]
Q: 265 + 91 = ? Q: 673 – 453 = ? Q: 128 + 367 + 6000 = ? Q: Jack has 5 apples and
A: [AP] A: [AP] A: The subproblems are 4 oranges. How many
… … 128+367=ANS1, fruits do I have?
ANS1+6000=ANS2. A: Apple and orange are
Q: 349524 + 7920260 = ? Q: 8530939 – 238692 = ? [AP] both fruits so 5 + 4=<Q:
Q: 3 * 11 5 + 4 = ? A: [AP]>9.
A: The question can be …
converted into 11+11+11.
[AP] Q: Tommy has 3 toy cars.
… His neighbor, Jessie,
```
[AP]: A detailed non-ambiguous description `has 3 cars too. Jessie’s`
of the algorithm execution based on our `Q: 453 * 9754 = ?` `older brother has 5 more`
proposed Algorithmic Prompt (AP) approach `cars than Tommy and`
```
Jessie. How many cars do
the three of them have
altogether?
```
Figure 1: The four learning stages investigated in this work (from left to right): (i) Teaching an algorithm as a skill (Section 3)
(ii) Skill Accumulation, i.e., teaching multiple skills simultaneously (Section 4) (iii) Skill Composition, i.e. the ability to learn a
complex skill through building upon simpler ones (Section 5) (iv) Using Skills as Tools to solve problems (Section 6). We teach
these algorithms in-context using our proposed algorithmic prompting approach, which does not involve any further training of the
underlying model.
One surprising capability of LLMs is in-context learning (Brown et al., 2020), which refers to the ability to learn
a task from a few examples being presented within a prompt. In-context learning does not require any weight updates,
and provides a powerful platform for specialized skill acquisition without losing the generality of the underlying
model. Moreover, various prompting strategies have shown significant potential in solving certain types of reasoning
problems (Jung et al., 2022; Zhou et al., 2022; Wei et al., 2022b; Kojima et al., 2022). Nonetheless, Anil et al.
(2022) considered two algorithmic reasoning tasks and showed that while rationale-based prompting allow LLMs to
generalize to longer problem instances, they are still far from solving simple algorithmic tasks such as parity.
In this work, we investigate how to teach algorithms and compositions of algorithms to LLMs via in-context
learning. This setup is reminiscent of how similar skills are taught to children in school. We identify and explore four
key stages for teaching algorithms as skills to LLMs (Figure 1). We begin by studying the shortcomings of existing
approaches and proposing ways to alleviate them. We focus on arithmetic algorithms such as addition, subtraction
and multiplication as they have been widely benchmarked (Saxton et al., 2019; Hendrycks et al., 2021) and famously
fail at out-of-distribution generalization even for the best performing models on the MATH benchmark (Lewkowycz
et al., 2022b). While one can avoid learning these algorithms by using external tools such as a calculator (Cobbe et al.,
2021), such approach cannot scale to higher levels of abstraction where a model needs to use “soft algorithms” and
certain steps must be flexibly applied in different situations.
**Contributions:** Our main contributions are as follows:
- We introduce Algorithmic Prompting, which involves providing a detailed description of the algorithm execution
on running examples, and using explicit explanation and natural language instruction to remove ambiguity. For
a comparison of algorithmic prompting to existing prompting techniques, see Section 2 and Table 1.
- We demonstrate that algorithmic prompting significantly outperforms existing prompting techniques on several
algorithmic tasks. In particular, for long parity, addition, multiplication and subtraction, we achieve an error
reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines (Section 3
and Table 2).
- Our ablation studies reveal the impact of non-ambiguous explanations, and show that unlike other prompting
approaches, errors in the algorithmic examples affect performance significantly (Section 3.1).
- We study the model’s ability to simultaneously learn multiple algorithms via a single prompt, as well as its
ability to compose the learned algorithms in order to solve more complex tasks (Sections 4 and 5).
-----
- We explore various approaches to leverage a learned algorithm as a tool to solve math word problems. We show
that while it is possible to improve the performance in settings that require complex calculations, the model’s
general reasoning capability reduces due to the phenomenon of interference (Section 6).
### 2 Algorithmic prompting
Nye et al. (2021) proposed the idea of getting the model to show its work, i.e, breaking the problem down and asking
the model to output the intermediate steps used in solving the task. The authors show that by finetuning LLMs on such
data – which they refer to as scratchpad – they can greatly improve performance on multi-step computation problems.
This was taken further by Wei et al. (2022b) to the in-context learning setting, where they showed that providing
rationales in the prompts significantly increases the model’s ability to solve multi-step reasoning problems. They refer
to this approach as chain-of-thought. The main intuition behind the scratchpad approach is that by having intermediate
computations in the output, the model can refer to them directly instead of relying on its internal representation space
for those calculations. For chain-of-thought, one hypothesis is that the rationales loosely provide the model with a
“thinking pattern” that it can reference when tackling a problem. By encouraging the model to output an explanation
along with the answer, we steer it towards solving problems by breaking them into steps that logically follow from
each other. Inspired by these perspectives, we hypothesize that if we increase the specificity and applicability of these
thinking patterns, we can also increase the amount by which the model adheres to these patterns in its problem solving.
As we will illustrate, this approach leverages both the scratchpad ideas of showing intermediate computations and the
chain-of-thought ideas of providing an explanation for each step.
As a motivating example, consider the standard addition algorithm. This method right-aligns the two numbers
being added and calculates the sum of pairs of single digits from each number, going from right to left. For every pair
of digits, there is a possible carry that needs to be added to the next digit sum. If we use a scratchpad-style illustration,
then for a question like 182 + 376, the model would see that the digit-sum 2 + 6 generates a carry of 0, while 8 + 7
generates a carry of 1. However, the rules of carry is highly ambiguous from just this example. Ideally, we expect
the model to conclude that when a + b > 9, it generates a carry of 1, and when a + b ≤ 9, it generates a carry of
0. But from the scratchpad-style example the model could have concluded that the carry is 1 whenever we add two
even digits together and 0 otherwise, or that the first digit-pair generates a carry of 1, the second digit-pair generates
a carry of 0, and so on. In order for the model to extrapolate the correct pattern, it must be biased in such a way that
the general and correct rule is the default interpretation. Such alignment, however, can not be reliably expected from
current models.
We hypothesize that existing prompting methods fail to sufficiently constrain the model’s interpretation of the
prompting information, and result in unexpected and undesirable model behaviors on tasks that require precise algorithmic reasoning. We push the limits of rationale-based prompting by drastically increasing the amount of detail
included in the rationales, while specifying the steps of an algorithm within this information. We refer to this strategy as Algorithmic Prompting, and contrast this approach with other types in Table 1. We show that it can achieve
significant systematic generalization on several algorithmic reasoning tasks, and ground our exploration in the four
capabilities identified in Figure 1.
**2.1** **Experimental setup**
**Baselines:** We compare the proposed algorithmic prompt to few-shot and chain-of-thought baselines in our experiments. The few-shot baseline refers to the simple approach of presenting examples of question and answer pairs
with no additional explanation. The chain-of-thought baseline provides a rationale along with the final answer in the
few-shot examples. In order to generate the rationale for various tasks, we follow the method introduced in Kojima
et al. (2022) and use the phrase ”let’s think step by step” to get a model-generated rationale for the few-shot examples.
**Evaluation metric:** We measure both in-distribution and OOD performance in all experiments. For the in-context
learning setting considered in this work, the data distribution is determined by the answer lengths of the prompting
examples. Thus, questions with answer lengths that fall within those seen in the prompt are considered in-distribution,
and those with longer lengths are considered out-of-distribution. The choice of length is natural given that it is a
-----
Table 1: Comparison of different prompting strategies studied in this work. The number of ⋆ indicates the level to which each
strategy exhibits the given characteristic. In this work, we refer to the basic approach of presenting only input-target pairs with no
additional explanation as few-shot, and we refer to prompts that provide explicit instructions but no running examples of a task as
_instruction-only. We see that algorithmic prompt includes both qualities of natural language explanation and explicit intermediate_
computations.
|Prompt strategy|Input-target pairs|Natural language rationale|Intermediate com- putations|Rationale diversity|Col6|
|---|---|---|---|---|---|
Few-shot ⋆⋆⋆ - - -
Chain-of-thought ⋆⋆⋆ ⋆⋆⋆ ⋆ ⋆⋆⋆
Scratchpad ⋆⋆⋆ - ⋆⋆ -
Instruction-only - ⋆⋆⋆ - ⋆⋆⋆
Algorithmic ⋆⋆⋆ ⋆⋆ ⋆⋆⋆ ⋆
measure of complexity in the tasks we consider, and length generalization has a rich history as a measure of systematic
generalization (Csord´as et al., 2021; Anil et al., 2022). Thus, length generalization provides a good indication for
whether the model has learned the underlying algorithm.
**Experimental setting:** For all the experiments in the paper, we use the Codex model code-davinci-002 from
OpenAI (Chen et al., 2021). This model has a maximum context length of 8000 tokens. Task examples are sampled uniformly at each length. All results are sampled once using a temperature of 0 and default settings for other
hyperparameters. See Section A.2 for task details.
### 3 Teaching algorithms as skills
**3.1** **Two-number addition**
We begin our analysis by studying the two-number addition task and explore the effectiveness of various prompting
strategies with differing levels of ambiguity. The addition problem takes the form a + b = c where a, b, and c are
positive integers.
We present an algorithmic prompt for addition and compare its performance against the few-shot, chain-of-thought,
instruction-only, and scratchpad methods. An illustration of these prompting strategies for addition is shown in Figure 10, and the prompts can be found in Section B.1. For all addition experiments, we use 3 prompt examples and
restrict these examples to having answers of up to 5 digits in length. We then evaluate on questions up to 19 digits
in length. The length of 19 is chosen because this is the level after which the model begins to run out of context. A
similar choice is used for all algorithms considered in Section 3.
Figure 2(a) shows the performance of algorithmic prompting against existing methods on addition problems. These
results demonstrate that algorithmic prompt achieves near perfect performance and OOD generalization on addition,
while few-shot, chain-of-thought, and instruction-only have decreasing performance as the length of the answer increases. These results illustrate the benefit of incorporating algorithmic steps, unambiguous explanations, and demonstrations on running examples in our prompt. In Section A.3, we provide a detailed error analysis for the algorithmic
prompt. We observe that most of the errors occur in the early steps of an algorithm, where there are more remaining
digits to process, rather than later steps, where the model needs to extrapolate to longer lengths.
**Impact of unambiguous explanations: Figure 2(b) compares the performance of using scratchpad and detailed**
scratchpad as prompts. With detailed scratchpad, we add more intermediate steps to illustrate how the values of the
answer (A) and the carry (C) is derived (see Figure 10). We further include an additional version that converts the
numbers from space-delimited to comma-delimited, as we observed that comma is a more effective deliminator for
Codex. We find that the scratchpad template performs extremely poorly as a prompt[1], but including additional details
leads to a significant boost in performance. We conjecture that the abysmal performance of scratchpad as a few-shot
1The original paper by Nye et al. (2021) performs finetuning using the scratchpad format, whereas we directly use it as a prompt.
-----
1.0
0.8
0.6
1.0
0.8
0.6
0.4
0.2
0.0
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||Al|gorith|mic||
|||||||Fe Ch|w-shot ain-of-|thoug|ht|
|||||||In|structi|on-only||
|Col1|Col2|Col3|Alg Scr|orithmi atchpa|c d|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
||||Det Det|ailed S ailed S|cratch cratch|pad pad w/|Comm|a-deli|m|
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
4 6 8 10 12 14 16 18
Algorithmic
Few-shot
Chain-of-thought
Instruction-only
Number of Digits in Answer
(a) Various prompting strategies on addition
4 6 8 10 12 14 16 18
Algorithmic
Scratchpad
Detailed Scratchpad
Detailed Scratchpad w/ Comma-delim
Number of Digits in Answer
(b) Variants of scratchpad prompting on addition
Figure 2: Accuracy on addition questions of increasing length for different prompting methods. Addition questions are of the form
_a + b = c, where a, b, and c are positive integers. The number of digits in answer plotted in the x-axis refers to the length of c._
Accuracy is measured over 2000 total examples sampled uniformly over the length of c. The max length for examples in the prompt
is 5. Left: We see that algorithmic prompt shows near-perfect length generalization even on extremely long addition questions, and
significantly outperforms its simple few-shot and chain-of-thought counterpart. Right: using scratchpad-style output as a prompt
leads to abysmal performance, but adding a few extra details to the scratchpad format leads to non-trivial generalization.
prompt is due to the structure of the solution format being sufficiently regimented to move the model away from its
memorized solutions, but not clear enough for the model to extract the true underlying rules and adapt them to new
examples.
We also compare the algorithmic prompt to two less-detailed variants. One version (nonexplicit calculation) omits
the explicit equation showing how the carry value is derived. This shares the same intuition as the original motivating
example. The second version (uncommon operation) requires the model to index the correct digit for a given step.
The indexing of a digit at a variable position is a more uncommon operation than the indexing of the digit at the same
position each time. In our final addition prompt, we introduce a mechanism that allows the model to avoid the indexing
operation by copying the unprocessed digits over to each step and always taking the last digit. Figure 3(a) illustrates
the relative gains that come from the disambiguation of these two aspects of the algorithm. Prompts used for the
ambiguity ablation studies can be found in Section B.2. In Section A.3 we study the role of natural language within
the algorithmic prompt, and find that including natural language descriptions leads to clear performance improvements
over using only intermediate computations.
**Is the model actually learning the algorithm through in-context learning? Min et al. (2022) have shown that**
it is not necessary to provide the correct question-answer pairings in the few-shot prompt, suggesting that the model
does not rely on the demonstrations themselves to figure out the right way to solve the given task. However, in order to
claim that we are teaching algorithms in-context, we would like to understand whether the model is actually following
the algorithm as it is prescribed in the prompt. To do so, we validate that 1) mistakes in the intermediate output steps
lead to mistakes in the final answer, and 2) errors in the prompt significantly impact performance.
We first look at the errors that the model makes. We find that for every addition question where the final answer was
correct, all intermediate steps were also correct. Next, we analyze the performance of the model when we introduce
errors into the algorithmic steps in the prompt. We introduce errors into the second digit of the calculation step
(digit1 + digit2 + carry = answer), and keep all other elements the same as before. We consider two types of errors:
_irregular errors where only a subset of the steps contain an error, and systematic errors where all of the steps presented_
in the prompt contain an error. With irregular errors (prompt shown in Section B.2.3), the model still has a chance of
extrapolating the correct rule based on the unchanged steps. With systematic errors (prompt shown in Section B.2.4),
the model should not derive the correct rule if it was truly learning from context, rather than simply mapping to the
output format and overriding the individual steps with what it has learned from its pretraining. Figure 3(b) shows that
there is a small degradation in performance with irregular errors, while the accuracy drops to near 0% with systematic
errors, thus confirming the expected behavior of a model that is actually learning in-context. This is in contrast to
the findings in which providing shuffled targets (Min et al., 2022) or wrong patterns in chain-of-thought (Madaan and
-----
1.0
0.8
0.6
1.0
0.8
0.6
0.4
0.2
0.0
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
||Algori|thmic|||||||
||Algori Algori|thmic thmic|w/ Non w/ Unc|explici ommo|t Calcu n Oper|lation ation|||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||Algor Algor Algor|ithmic ithmic ithmic|w/ Irre w/ Sys|gular temat|Errors ic Erro|rs|
|||||||||||
|||||||||||
|||||||||||
|||||||||||
4 6 8 10 12 14 16 18
Algorithmic
Algorithmic w/ Nonexplicit Calculation
Algorithmic w/ Uncommon Operation
Number of Digits in Answer
(a) Algorithmic prompts with varying ambiguity
4 6 8 10 12 14 16 18
Algorithmic
Algorithmic w/ Irregular Errors
Algorithmic w/ Systematic Errors
Number of Digits in Answer
(b) Algorithmic prompts with errors
Figure 3: Accuracy on addition questions of increasing length for variants on the algorithmic prompt. Left: Two examples of
rule ambiguity that we address in the final addition prompt are non-explicit carry calculation (Nonexplicit Calculation) and digit
indexing (Uncommon Operation). We observe a significant difference in performance before and after reducing the ambiguity of
these operations. Right: Errors are introduced to the algorithmic prompt in the digit value of the second number in the equation.
_Irregular errors are introduced to a minority subset of steps in the algorithmic examples, while systematic errors are introduced to_
all steps of the examples. We see that irregular errors have a minor impact on the performance, and systematic errors completely
destroy the model’s ability to solve this task. This suggests that the model is following the algorithm as it is specified in-context,
rather than loosely mimicking the format of the algorithm.
Yazdanbakhsh, 2022) do not materially impact model’s performance. Thus, algorithmic prompting differs from other
approaches and constrains the model’s behavior towards what is actually being taught in-context.
**3.2** **Teaching other algorithms using algorithmic prompting**
To validate that the performance of algorithmic prompting is not specific to two-number addition, we evaluate model
performance on three other algorithms: subtraction, multiplication, and parity. Similar to addition, the maximum
length evaluated in this section is based on the length that can fit into context for algorithmic prompts.
**Subtraction: We follow a similar strategy as addition. We discuss the peculiarities of the subtraction algorithm in**
more detail in Section 4, where we combine both addition and subtraction problems. The performance at length 14 is
summarized in Table 2. We see that algorithmic prompting significantly outperforms the few-shot baseline.
**Multiplication: For multiplication, we consider questions in the form of a × b = c. Multiplication requires**
_O(n[2]) steps if we use a strategy similar to the addition algorithm which takes O(n) steps. Inspired by this compli-_
cation, we explore whether the model’s existing zero-shot or few-shot capabilities can be leveraged in conjunction
with algorithmic prompting to reduce the complexity of the required instructions. Therefore, instead of using singledigit multiplication in each step, we perform direct calculations for 1-digit × n-digit numbers. Instead of doing n[2]
single-digit calculations for two n-digit numbers, we now only need to perform n steps of 1 × n-digit multiplication.
To choose a reasonable value of n for this experiment, we evaluate the model’s zero-shot accuracy in 1 × n-digit
multiplication (shown in Figure 13). We see that after n = 3, the zero-shot performance deteriorates drastically. Thus,
we restrict to n ≤ 3. If a number has more than 3 digits, we break it down into groups of ≤ 3 digits and add the
resulting sub-components appropriately. For simplicity, we consider the problems where at least one of a and b is
less than 1000, so that we only need to perform the group splitting on one of the two numbers. More details can be
found in Section A.4. Performance at length 7 is shown in Table 2, and performance across different lengths is shown
in Figure 4. We see that the multiplication algorithmic prompt performs well compared to its few-shot and chain-ofthought counterparts, thus illustrating the potential of utilizing a model’s inherent abilities within the scaffolding of
more structured algorithmic instructions.
**Parity: We consider the problem of calculating parity of a given binary list. This task has been studied extensively**
in Anil et al. (2022), and despite the intrinsic simplicity of the algorithm, it is far from being solved. Performance
at length 20 is shown in Table 2. We see that algorithmic prompt significantly outperforms random chance on this
-----
task, which is even greater than the few-shot performance reported in Anil et al. (2022). More details can be found in
Figure 14 and Section A.4.
Table 2: Performance on addition, subtraction, multiplication, and parity tasks. For addition we use the few-shot baseline and
evaluate at length 19. For subtraction we use the few-shot baseline and evaluate at length 14. For multiplication we use the chainof-thought baseline and evaluate at length 7. These lengths are chosen based on the maximum task length that could fit into context
for the algorithmic prompt. For parity we evaluate at length 20 which is the longest instance reported in Anil et al. (2022).
**Method** **Addition** **Subtraction** **Multiplication** **Parity**
Algorithmic prompt 90.5% 65.6% 79.7% 95.0%
Best available baseline 9.5% 16.7% 5.5% 50.0%
### 4 Skill Accumulation
So far we have demonstrated the ability to teach single-algorithms through in-context learning. In this section, we
study the model’s ability to simultaneously learn multiple algorithms and choose the applicable one when solving
problems, which we refer to as skill accumulation. To do so, we use the addition-subtraction task. We expand on the
addition problem to allow for both positive and negative numbers. Thus, the problems now have four possibilities:
_a + b, −a + b, −a −_ _b, a −_ _b. We refer to questions of the form a + b as addition-only questions, and the rest as_
_subtraction-only questions. For subtraction questions, the ordering of the two numbers matter. To see this, consider_
the examples 43 − 250 = −207 and 543 − 250 = 293. When we process the digits from right to left, the answer
depends on whether the first number is greater than or less than the second number in absolute value, not just on the
values of the two digits. Thus, subtraction requires a different – albeit similar – algorithm to addition. For a sense
of the relative complexity of the two settings, note that the subtraction algorithm we use runs in 2n steps, while the
addition algorithm runs in n steps.
To succeed at this task, the model needs to demonstrate the ability to follow different processing paths when the
question is addition or subtraction. Figure 5 shows the performance of the combined addition-subtraction prompt,
with the accuracy broken down by question type. We see that the model is able to effectively execute the correct
algorithm based on the individual questions. The model exhibits lower accuracy on subtraction questions compared to
addition-only questions, reflecting the increased complexity of the subtraction algorithm. Comparing the performance
on addition-only questions to the addition prompt from Section 3.1, we see that there is minimal change in performance
despite having an extra other algorithm present in the prompt. Nonetheless, we note that the prompt development for
this task is non-trivial, and the best performance required adding all combinations of positive and negative numbers.
Thus, scaling to larger number of algorithms may call for more efficient strategies.
To further study the effects of teaching addition alongside subtraction, we evaluate two subtraction-only prompts.
The first one removes the addition-only prompt examples from the combined addition-subtraction prompt. In the combined prompt, 6 examples are provided, with 2 of them being addition-only examples. After removing the additiononly examples, we are left with 4 subtraction-only examples in the prompt. The second subtraction-only prompt
matches the number of shots as the original combined prompt, but includes only subtraction-only examples for all 6
shots. The results are shown in Figure 6(a). We see that using only the 4 subtraction-only prompt examples (Com_bined Algo, Sub examples-only) results in a significant decrease in performance compared to the combined algorithmic_
prompt. However, when we are able to match the same number of shots (6) as the combined prompt (Sub-only Algo),
we can recover the original performance. This demonstrates the synergy and positive transfer when simultaneously
learning algorithms that share similarities. As a control experiment, we also observe in Figure 6(b) that adding more
shots to an addition-only prompt does not improve performance beyond the original prompt, which supports the conclusion that addition-only performance using the combined prompt is not harmed by having other algorithms in the
same prompt.
-----
1.0
0.8
0.6
0.4
0.2
0.0
|Single Algo - Add-only Comb Few-shot - Add-only Comb CoT - Add-only|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|Single Algo - Add-only Comb Few-shot - Add-only Comb CoT - Add-only Comb Algo - Add-only Comb Few-shot - Sub-only Comb CoT - Sub-only Comb Algo - Sub-only 1.0||||||||
|0.8 0.6 0.4 0.2||||||||
|||||||||
|||||||||
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|Alg Few|orithmic -shot|||||
Chain-of-thought
6 8 10 12 14
Number of Digits in Answer
Number of Digits in Answer
Figure 4: Performance of algorithmic prompt on multiplication questions, where at least one of the two numbers in the
question is less than 1000. We use 2 shots of up to 6-digits in
answer length in all the prompts. The algorithmic prompt for
multiplication leverages direct 1 × n-digit calculations by the
model to simplify the number of algorithmic steps required,
showcasing an ability to utilize the pretrained model’s zero
or few-shot capabilities within a larger algorithmic scaffolding. Algorithmic prompt shows superior length generalization
compared to the baselines.
Figure 5: Accuracy on addition and subtraction questions using a combined prompt. We use 6 shots of up to 5-digits in
answer length in the prompt (2 shots of addition and 4 shots
of subtraction examples). Performance is split into additiononly (add-only) questions and subtraction-only (sub-only) questions. “Comb Algo” refers to the combined algorithmic prompt
with both addition and subtraction examples, while “single algo”
refers to the algorithmic prompt for addition in Section 3.1.
There is minimal degradation in performance for addition-only
questions using the combined prompt compared to the single algorithm addition-only prompt.
### 5 Skill Composition
In this section, we explore the model’s ability to learn multiple algorithms that build on top of each other. This is a
desirable property because it enables the model to learn a more complex algorithm without having to relearn simpler
sub-components of that algorithm and enables modularization of complex algorithms. To establish a framework for
skill composition, we explore two extensions to the addition algorithm: 1) adding multiple numbers together, and 2)
solving multiplication by turning it into an addition problem (e.g. by converting 3 ∗ 7 into 7 + 7 + 7). The ability
to add multiple numbers builds on top of the ability to add two numbers together. Solving multiplication as addition
further builds on the addition of multiple numbers. An illustration can be found in Figure 16. The evaluation dataset
contains 1000 examples sampled uniformly by length of answer.
The performance on composite tasks are shown in Figure 7. We teach these algorithms in-context by creating a
composite prompt that includes 2 examples from the 2-number addition prompt, 1 example of addition of 3 numbers,
and 1 example of converting multiplication into addition. This forms a simple composition strategy (Algo - (Simple
_Comp)). This prompt can be found in Section B.7. We also consider two ablations of the composite algorithmic_
prompt. The algorithm for n-number addition involves wrapping 2-number additions within a larger loop of n − 1
addition problems. Thus, we could provide even more information by converting the 2-number addition prompt
examples into the same loop format as the 3-number addition example. This version (Algo - (Augmented Comp))
provides an upper estimate on multi-number addition and multiplication-as-addition. The second ablation (Algo - (No
_Comp)) only presents the example that illustrates the extended skill. This has no composition and provides a lower_
estimate on the performance of the two extended skills, and illustrates the improvement that comes from having first
learned the component algorithms. See Figure 17 for an illustration of the different composition strategies.
In-context skill composition is limited by the context length of current models. Unlike the previous experimental
results, these composition tasks include a number of questions that were incomplete for the algorithmic prompt. To
separate out the issue of context length from the ability of the model to follow an algorithm, in Figure 7 we report
performance on only the questions for which the algorithmic prompt could fit into context. This subset is also used
for all baselines. Figure 7 shows that the algorithmic prompt significantly outperforms few-shot and chain-of-thought
baselines. Moreover, we observe that there is minimal difference between the simple composition and augmented
composition strategies, and that the ”no composition” approach performs much worse than its composed counterparts.
-----
1.0
0.8
0.6
0.4
0.2
0.0
1.0
0.8
0.6
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||Additi Additi|on Alg on Alg|o 3-sh o 6-sh|ot ot|||||
4 6 8 10 12 14 16 18
Number of Digits in Answer
(b) Addition-only prompt on addition
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
|C|||||||
||ombined|Algo - Sub|-only||||
|S C|ub-only Al ombined|go - Sub- Algo, Sub|only example|s-only - S|ub-only||
4 6 8 10 12 14
Number of Digits in Answer
(a) Subtraction-only prompts on subtraction
Figure 6: Left: Performance on subtraction-only questions. ”Combined Algo” refers to using 4 examples of subtraction questions
and 2 examples of addition questions within the prompt. ”Sub-only Algo” refers to using 6 examples of subtraction questions
within the prompt. ”Combined Algo, Sub examples-only” refers to using 4 examples of subtraction questions within the prompt.
We see that removing addition-only examples from the prompt significantly harms performance on subtraction-only questions, thus
showing the positive transfer that comes from having addition examples. Right: Performance on addition-only questions using
addition-only prompts with different number of shots. We see that performance of the original addition prompt on the addition task
is already saturated on the number of shots.
In order to move past context length limitations, we experiment with two strategies. First, we introduce a secondpass strategy where we keep only the last completed algorithmic step in the model’s output, and perform a second
inference pass using the original prompt and the last output step. This simple approach benefits from the fact that all
relevant state variables are outputted in each step of the algorithm. We report performance on the entire dataset using
the second pass strategy in Figure 18, and show that a significant portion of the incomplete questions can be corrected
using this approach. Second, we leverage a dialogue-like approach where models loaded with different prompts call
on each other to perform sub-components of an algorithm, so that the outputs of these sub-components do not need to
persist inside a model’s context once the answer is derived. We describe this approach in more detail in Section 6 and
Section A.6, and the performance is shown in Figure 20. This approach allows us to achieve performance comparable
to those in Figure 7 on the full dataset.
### 6 Using skills as tools
In this section, we study the behavior of the model when using a given algorithm as a step in solving a larger mathematical reasoning problem. Such problems (e.g GSM8k benchmark (Cobbe et al., 2021)) usually consist of two
components: 1) the informal mathematical reasoning component which requires the model to come up with the correct solution steps to arrive at the answer based on the information provided in the question and 2) the calculation of
arithmetic operations used in the solution steps. Prior works have focused on improving the informal mathematical
reasoning component (Wei et al., 2022b; Wang et al., 2022b; Zelikman et al., 2022; Kojima et al., 2022), and have
opted to increase calculation accuracy through the use of an external calculator (Cobbe et al., 2021) or indirectly
through improved pretraining of the LLM itself (Lewkowycz et al., 2022b). In this paper, we study how the model
can leverage a learned algorithm to improve the quality of the second component, i.e., arithmetic operations inside a
broader reasoning process. Although an external calculator can be used in this case, this will not be possible in general
for more abstract skills such as simplifying mathematical equations.
**Dataset: We consider the following two math word problem datasets: GSM8k and GSM8k-Hard. GSM8k (Cobbe**
et al., 2021) consists of high-quality mathematical reasoning problems presented as natural language questions. Figure 8 shows an example question and answer pair from GSM8k with chain-of-thought rationale. In order to study the
ability to use the addition algorithm while solving GSM8k questions, we simplify the task by filtering for a subset of
GSM8k whose solutions consist of only addition steps. The filtering procedure results in 108 pure-addition GSM8k
questions. To further illustrate the potential of leveraging skills as a form of tool use, we create a hard dataset called
-----
1.0
0.8
0.6
0.4
0.2
0.0
1.0
0.8
0.6
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||||||||
||Algo ( Algo ( Few-sh|Simple Augme ot|Comp) nted Co|mp)||||||
||Chain- Algo (|of-thou No Com|ght p)|||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||||||||
||Algo ( Algo ( Few-s|Simple Augme hot|Comp) nted Co|mp)||||||
||Chain- Algo (|of-thou No Com|ght p)|||||||
10
10
Number of Digits in Answer
(a) Multi-number addition
Number of Digits in Answer
(b) Multiplication-as-addition
Figure 7: Performance on compositions of skills. ”Algo” indicates algorithmic prompting. ”Simple Comp” refers to a simple
composition strategy where previously taught algorithms are transferred as is. ”Augmented Comp” adjusts the previously taught
algorithm to match the format of the new task. This simulates a version where the full prompt specializes to the new task. ”No
Comp” uses only the part of the ”Simple Comp” prompt that describes the new task. This simulates the comparison to learning a
new skill from scratch without first learning its stepping stones. We observe that the composed algorithmic templates demonstrate
better generalization than the baseline methods. Note that for multiplication, we evaluate the algorithmic methods on a harder task
than the few-shot baselines, since we force the model to convert the question into the addition of a number n times, while for other
baselines we simply perform 1 × n-digit multiplication directly.
GSM8k-Hard, which consists of 50 examples from the pure-addition subset of the GSM8k. In this dataset, we increase
the numerical values used in the questions, thus making the task more difficult for the model. The number of digits in
the answer range from 3 to 12, with an average length of 7.2. In the original GSM8k addition-only subset, the number
of digits range from 1 to 5 with an average length of 2.4. An example is presented in Figure 9(b).
We first evaluate how augmenting the algorithmic prompt into the chain-of-thought would affect the performance.
We then show how algorithmic prompt can be used as tool use (Parisi et al., 2022), where a model queries another
source for a particular type of information.
**Q: Tommy has 3 toy cars. His neighbor, Jessie, has 3 cars too. Jessie’s older brother has 5 more cars than Tommy and**
Jessie. How many cars do the three of them have altogether?
**A: Tommy and Jessie have 3+3=6 cars. Jessie’s brother has 5+6=11 cars. Altogether, they have 6+11=17 cars. The**
answer is 17.
Figure 8: An example question and answer pair from GSM8k with chain-of-thought rationale.
**6.1** **Augmenting informal mathematical reasoning**
In this section, we evaluate whether the chain-of-thought prompt can be augmented with the algorithmic prompt for
the addition operation. To do so, we use a single prompt to illustrate both the informal mathematical reasoning skill
and the addition skill. Specifically, we embed the addition algorithm within the chain-of-thought solutions whenever
the solution calls for the summing of numbers. There are two challenges in augmenting algorithmic prompt to the
chain-of-thought prompt: 1) since there are many instances of addition in the chain-of-thought examples, this prompt
would take up a large number of tokens, and 2) we have seen previously that combining similar skills like addition
and subtraction within the same prompt did not result in any interference (with evidence of positive transfer), but since
informal mathematical reasoning and arithmetic operations are very different skills, this may no longer be the case.
To address the first challenge (lengthy prompt), we only embed the addition algorithm in a subset of the prompt
examples, and we indicate these augmented examples through the <ALGO> flag while the remaining examples use the
<NONALGO> flag. These flags allow us to control whether the model should perform addition using the algorithm or
by direct calculation. Thus, for each setting, we run two experiments by appending the <ALGO> or <NONALGO> flag
to the test question. For more details about this approach, see Section A.7.
For the second challenge (interference), we hypothesize that explicitly presenting the summary of the solution may
10
-----
0.90
0.85
0.80
0.75
0.70
0.65
0.60
0.55
0.50
|Col1|Col2|Col3|Col4|>|
|---|---|---|---|---|
||||<ALGO|>|
||||<NONA|LGO>|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
(a) Performance on GSM8k addition-only subset (b) Example of tool use for GSM8k-Hard.
Figure 9: Addition algorithm as tool use in solving GSM8k questions. Left: Ablation study with or without algorithmic output in
the prompt or output. ”W/ algo” indicates that algorithmic output is embedded within prompt examples, and ”no algo” indicates
that only chain-of-thought rationale is included in the prompt. <ALGO> flag indicates that algorithmic reasoning is encouraged in
the output, while <NONALGO> flag indicates that calculations are done directly by the model. ”Plan” indicates a chain-of-thought
strategy that summarizes the solution plan before executing individual reasoning steps. We see that having algorithmic output
within context leads to significant interference with the model’s informal mathematical reasoning abilities. This is alleviated by
using a summary before the algorithmic output, but not fully. Right: An example question-answer pair from the GSM8k-Hard
help to disentangle the two skills (i.e. informal mathematical reasoning and arithmetic operation). Thus we explore
No plan
No algo
<NONALGO>
No plan
W/ algo
(a) Performance on GSM8k addition-only subset
that only chain-of-thought rationale is included in the prompt.
using a summary before the algorithmic output, but not fully.
W/ plan
W/ algo
addition dataset, which includes GSM8k-like questions with large numerical values.
W/ plan
No algo
a version of chain-of-thought where the answer begins with an overall plan/summary of the solution steps, before the
individual steps are explained. We refer to this version as “with plan”, and refer to the baseline version without a
summary as “no plan”. The actual prompt is shown in Section B.13.
Figure 9(a) shows the results using this approach. First, we evaluate the impact of including algorithmic output in
the prompt by comparing the chain-of-thought baseline with (“no plan no algo”) and without (“no plan w/ algo”) algorithmic output for addition questions. We find that including algorithmic output in the examples significantly disrupts
the model’s informal mathematical reasoning abilities in the <ALGO> experiment, but leaves the <NONALGO> performance relatively unchanged. This demonstrates the existence of interference between the two skills. We conjecture
that this occurs when we mix highly different skills within the same context. The informal mathematical reasoning
component relies on the model’s pretraining knowledge, while the algorithmic component is regimented and requires
the model to follow specific instructions, and the different nature and format of these two skills appears to interfere
with their performance. Next, we evaluate the impact of having a solution plan at the beginning of the output. Comparing the performance of “w/ plan w/ algo” and “no plan w/ algo”, we see that the solution plan alleviates some of
the interference seen in the <ALGO> experiment. Nonetheless, the performance is still much worse than the same
version without algorithmic output (“w/ plan no algo”). In summary, we identify an interference phenomenon which
may occur when combining skills of different kind within the same context, and find that using flags in the prompt can
be a simple way of directing a model’s attention as <NONALGO> experiments do not suffer from interference in the
way that <ALGO> experiments do.
**6.2** **Algorithmic prompt as tool use**
Motivated by context length limitations and the interference issue that we have identified, we propose a way to alleviate
these problems through a dialogue-like interaction between models loaded with different prompts. In this approach, we
utilize one model for performing the informal mathematical reasoning steps and a separate model for doing algorithmic
addition calculations. To enable a dialogue-like interaction, we teach the first model to output specific tokens to
indicate when a separate model should be consulted. See Figure 9(b) for an example of how these tokens are used.
We then extract the addition question using these tokens and send it to the second model loaded with the addition
algorithmic prompt, which executes the addition algorithm and returns the answer back to the first model. The first
11
-----
model would then continue with the rest of the answer without needing to keep the algorithmic output in its context.
Creswell and Shanahan (2022) uses a similar multi-model and multi-prompt strategy in order to separate out selection
from inference in reasoning problems. This approach can be considered a form of tool use (Parisi et al., 2022), where
a model queries another source for a particular type of information.
The performance on the GSM8k-Hard dataset is shown in Table 3. Logical accuracy refers to the correctness of the
solution setup, while addition accuracy refers to the correctness of the calculations steps within the solution setup. We
see that despite removing the algorithm output from the context of the first model, we still observe interference coming
from the use of specific tokens in the informal natural language solution steps. Nonetheless, the method that leverages
algorithmic tool use still achieves double the accuracy as the baseline chain-of-thought method without algorithmic
prompting. Lastly, this result illustrates the ability of dialogue-based tool use to bypass context length limitations, as
a single model would not have fit all the output within its context. In Section A.6, we showcase the possibility of
leveraging this dialogue-based tool use in the skill composition setting from Section 5, and demonstrate the model’s
ability to call on previously learned algorithms as subroutines inside more complex algorithms while also resolving
context length limitations.
Table 3: Performance on GSM8k-Hard addition dataset with or without algorithmic tool use. We see that the overall performance is
doubled when we call on a second model loaded with the algorithmic addition prompt to perform addition calculations, demonstrating the potential of leveraging in-context algorithmic skills as a form of tool-use. Moreover, we observe that this performance gain
comes directly from more accurate addition accuracy, and the model that performs informal mathematical reasoning still suffers
from interference due to the use of specific tokens in the logical reasoning output, as shown in the decreased logical accuracy.
**Method** **Overall Accuracy** **Logical Accuracy** **Addition Accuracy**
Chain-of-thought w/ Algo call 55.8% 57.7% 98.4%
Chain-of-thought wo/ Algo call 27.4% 70.6% 61.9%
### 7 Conclusion and Future Work
Motivated by the potential of in-context learning as a general mechanism for compositional skill acquisition in LLMs,
we studied teaching algorithmic reasoning via in context learning. We identified and studied the fundamental building
blocks towards this goal and investigated four settings: teaching an algorithm as a skill, skill accumulation, skill composition and using skills as tools. We investigated the shortcomings of existing approaches and proposed algorithmic
prompt to alleviate them, showing that it leads to significant performance boost in various algorithmic reasoning tasks.
Our work suggests that it may be possible to convert longer context length to better reasoning performance by providing more thorough solution examples. This highlights the ability to leverage long contexts (either through increasing
context length or other means such as implementing recurrence or an external memory) and generate more informative
rationales as promising research directions.
We identified the interference phenomenon for tool use application and investigated different ways to reduce its
effect. Our observations about interference suggest that teaching the model the ability to retrieve or selectively attend
to specific instructions when solving the particular problem is an important future direction. Moreover, given that
there are ongoing efforts in the community to increase the context length of LLMs, it is of interest to design more
challenging tasks for each of the four introduced settings and investigate what capabilities can be taught to LLMs
when having access to extremely large context length.
**Acknowledgments**
This work was done during Hattie Zhou’s internship at Google Research. We thank Guy Gur-Ari, Ethan Dyer, Yuhuai
(Tony) Wu and Jason Yosinski for fruitful discussions.
12
-----
### References
Anil, C., Wu, Y., Andreassen, A., Lewkowycz, A., Misra, V., Ramasesh, V., Slone, A., Gur-Ari, G., Dyer, E., and
[Neyshabur, B. (2022). Exploring length generalization in large language models. arXiv preprint arXiv:2207.04901.](http://arxiv.org/abs/2207.04901)
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. (2020). Language models are few-shot learners. Advances in neural information processing
_systems, 33:1877–1901._
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman,
[G., et al. (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.](http://arxiv.org/abs/2107.03374)
Chen, X., Liang, C., Yu, A. W., Song, D., and Zhou, D. (2020). Compositional generalization via neural-symbolic
stack machines. Advances in Neural Information Processing Systems, 33:1690–1701.
Chiang, T.-R. and Chen, Y.-N. (2018). Semantically-aligned equation generation for solving and reasoning math word
[problems. arXiv preprint arXiv:1811.00720.](http://arxiv.org/abs/1811.00720)
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C.,
[Gehrmann, S., et al. (2022). Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.](http://arxiv.org/abs/2204.02311)
Cobbe, K., Kosaraju, V., Bavarian, M., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. (2021). Training verifiers
[to solve math word problems. arXiv preprint arXiv:2110.14168.](http://arxiv.org/abs/2110.14168)
Creswell, A. and Shanahan, M. (2022). Faithful reasoning using large language models.
_https://arxiv.org/abs/2208.14271._
Csord´as, R., Irie, K., and Schmidhuber, J. (2021). The devil is in the detail: Simple tricks improve systematic generalization of transformers. In ACL.
Faldu, K., Sheth, A., Kikani, P., Gaur, M., and Avasthi, A. (2021). Towards tractable mathematical reasoning: Chal[lenges, strategies, and opportunities for solving math word problems. arXiv preprint arXiv:2111.05364.](http://arxiv.org/abs/2111.05364)
Gordon, J., Lopez-Paz, D., Baroni, M., and Bouchacourt, D. (2019). Permutation equivariant models for compositional
generalization in language. In International Conference on Learning Representations.
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. (2021). Measuring
[mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874.](http://arxiv.org/abs/2103.03874)
Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. d. L., Hendricks,
L. A., Welbl, J., Clark, A., et al. (2022). Training compute-optimal large language models. _arXiv preprint_
_[arXiv:2203.15556.](http://arxiv.org/abs/2203.15556)_
Jiang, A. Q., Li, W., Tworkowski, S., Czechowski, K., Odrzyg´o´zd´z, T., Miło´s, P., Wu, Y., and Jamnik, M. (2022). Thor:
[Wielding hammers to integrate language models and automated theorem provers. arXiv preprint arXiv:2205.10893.](http://arxiv.org/abs/2205.10893)
Jones, E. and Steinhardt, J. (2022). Capturing failures of large language models via human cognitive biases. arXiv
_[preprint arXiv:2202.12299.](http://arxiv.org/abs/2202.12299)_
Jung, J., Qin, L., Welleck, S., Brahman, F., Bhagavatula, C., Bras, R. L., and Choi, Y. (2022). Maieutic prompting:
Logically consistent reasoning with recursive explanations.
[Kaiser, Ł. and Sutskever, I. (2015). Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228.](http://arxiv.org/abs/1511.08228)
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and
[Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.](http://arxiv.org/abs/2001.08361)
13
-----
Kim, J., Hong, G., Kim, K.-m., Kang, J., and Myaeng, S.-H. (2021). Have you seen that number? investigating
extrapolation in question answering models. In Proceedings of the 2021 Conference on Empirical Methods in
_Natural Language Processing, pages 7031–7037._
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. (2022). Large language models are zero-shot reasoners.
Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E., Michalewski, H., Ramasesh, V., Slone, A., Anil, C., Schlag,
I., Gutman-Solo, T., et al. (2022a). Solving quantitative reasoning problems with language models. arXiv preprint
_[arXiv:2206.14858.](http://arxiv.org/abs/2206.14858)_
Lewkowycz, A., Andreassen, A., Dohan, D. M., Dyer, E. S., Michalewski, H., Ramasesh, V., Slone, A., Anil, C.,
Schlag, I., Gutman-Solo, T., Wu, Y., Neyshabur, B., Gur-Ari, G., and Misra, V. (2022b). Solving quantitative
reasoning problems with language models.
Li, W., Yu, L., Wu, Y., and Paulson, L. C. (2020). Isarstep: a benchmark for high-level mathematical reasoning. arXiv
_[preprint arXiv:2006.09265.](http://arxiv.org/abs/2006.09265)_
Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. (2017). Program induction by rationale generation: Learning to
solve and explain algebraic word problems. In ACL.
Madaan, A. and Yazdanbakhsh, A. (2022). Text and patterns: For effective chain of thought, it takes two to tango.
_[arXiv preprint arXiv:2209.07686.](http://arxiv.org/abs/2209.07686)_
Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., and Zettlemoyer, L. (2022). Rethinking the
[role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837.](http://arxiv.org/abs/2202.12837)
Nogueira, R., Jiang, Z., and Lin, J. (2021). Investigating the limitations of transformers with simple arithmetic tasks.
_[arXiv preprint arXiv:2102.13019.](http://arxiv.org/abs/2102.13019)_
Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma,
M., Luan, D., et al. (2021). Show your work: Scratchpads for intermediate computation with language models.
_[arXiv preprint arXiv:2112.00114.](http://arxiv.org/abs/2112.00114)_
[Parisi, A., Zhao, Y., and Fiedel, N. (2022). Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255.](http://arxiv.org/abs/2205.12255)
Polu, S. and Sutskever, I. (2020). Generative language modeling for automated theorem proving. arXiv preprint
_[arXiv:2009.03393.](http://arxiv.org/abs/2009.03393)_
Rabe, M. N., Lee, D., Bansal, K., and Szegedy, C. (2020). Mathematical reasoning via self-supervised skip-tree
[training. arXiv preprint arXiv:2006.04757.](http://arxiv.org/abs/2006.04757)
Razeghi, Y., Logan IV, R. L., Gardner, M., and Singh, S. (2022). Impact of pretraining term frequencies on few-shot
[reasoning. arXiv preprint arXiv:2202.07206.](http://arxiv.org/abs/2202.07206)
Saxton, D., Grefenstette, E., Hill, F., and Kohli, P. (2019). Analysing mathematical reasoning abilities of neural
[models. arXiv preprint arXiv:1904.01557.](http://arxiv.org/abs/1904.01557)
Thawani, A., Pujara, J., Szekely, P. A., and Ilievski, F. (2021). Representing numbers in nlp: a survey and a vision.
_[arXiv preprint arXiv:2103.13136.](http://arxiv.org/abs/2103.13136)_
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017).
Attention is all you need. Advances in neural information processing systems, 30.
Veliˇckovi´c, P. and Blundell, C. (2021). Neural algorithmic reasoning. Patterns, 2(7):100273.
Wang, H., Yu, M., Guo, X., Das, R., Xiong, W., and Gao, T. (2019). Do multi-hop readers dream of reasoning chains?
_[arXiv preprint arXiv:1910.14520.](http://arxiv.org/abs/1910.14520)_
14
-----
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., and Zhou, D. (2022a). Rationale-augmented ensembles in language
[models. arXiv preprint arXiv:2207.00747.](http://arxiv.org/abs/2207.00747)
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., and Zhou, D. (2022b). Self-consistency improves chain of thought
[reasoning in language models. arXiv preprint arXiv:2203.11171.](http://arxiv.org/abs/2203.11171)
Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D.,
[et al. (2022a). Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.](http://arxiv.org/abs/2206.07682)
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., and Zhou, D. (2022b). Chain of thought prompting
[elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.](http://arxiv.org/abs/2201.11903)
Welleck, S., Liu, J., Bras, R. L., Hajishirzi, H., Choi, Y., and Cho, K. (2021). Naturalproofs: Mathematical theorem
[proving in natural language. arXiv preprint arXiv:2104.01112.](http://arxiv.org/abs/2104.01112)
Wu, Y., Jiang, A. Q., Li, W., Rabe, M. N., Staats, C., Jamnik, M., and Szegedy, C. (2022). Autoformalization with
[large language models. arXiv preprint arXiv:2205.12615.](http://arxiv.org/abs/2205.12615)
Xhonneux, L.-P., Deac, A.-I., Veliˇckovi´c, P., and Tang, J. (2021). How to transfer algorithmic reasoning knowledge to
learn new algorithms? Advances in Neural Information Processing Systems, 34:19500–19512.
Xu, K., Li, J., Zhang, M., Du, S. S., Kawarabayashi, K.-i., and Jegelka, S. (2019). What can neural networks reason
[about? arXiv preprint arXiv:1905.13211.](http://arxiv.org/abs/1905.13211)
Yan, Y., Swersky, K., Koutra, D., Ranganathan, P., and Hashemi, M. (2020). Neural execution engines: Learning to
execute subroutines. Advances in Neural Information Processing Systems, 33:17298–17308.
Zelikman, E., Wu, Y., and Goodman, N. D. (2022). Star: Bootstrapping reasoning with reasoning. arXiv preprint
_[arXiv:2203.14465.](http://arxiv.org/abs/2203.14465)_
Zhou, D., Sch¨arli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Bousquet, O., Le, Q., and Chi, E. (2022).
[Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625.](http://arxiv.org/abs/2205.10625)
### A Appendix
**A.1** **Additional Related work**
Mathematical reasoning (Chiang and Chen, 2018; Saxton et al., 2019) has been subject of interest for a long time.
Faldu et al. (2021) summarizes the mathematical reasoning benchmarks that are in the form of math-word problems.
In addition to this class of benchmarks, formal mathematics in the form of theorem-proofs (Rabe et al., 2020; Li
et al., 2020; Polu and Sutskever, 2020; Welleck et al., 2021; Jiang et al., 2022; Wu et al., 2022) has been considered
extensively. In this work we focus on algorithmic reasoning for arithmetic tasks and solving GSM8k (Cobbe et al.,
2021) problems.
Algorithmic reasoning is typically approached via using structured architectures such as graph neural networks(GNNs)
and modifying the architecture align to the algorithms under consideration (Kaiser and Sutskever, 2015; Chiang and
Chen, 2018; Xu et al., 2019; Gordon et al., 2019; Yan et al., 2020; Chen et al., 2020; Xhonneux et al., 2021; Veliˇckovi´c
and Blundell, 2021) or to the input format (Thawani et al., 2021). However, in this work we focus on teaching algorithmic reasoning to general purpose transformer-based (Vaswani et al., 2017) models.
There has been some recent works investigating in-context learning phenomena. Razeghi et al. (2022) showed
that the performance of LLMs on mathematical calculations correlates with term frequency in the training data. Min
et al. (2022) investigate which parts of the (input, output) pairs in the prompt play a role in model’s performance on
12 NLP tasks. Madaan and Yazdanbakhsh (2022) investigate this for chain-of-thought prompts and conclude that the
combination of text and patterns together play a role. Jones and Steinhardt (2022) compares failure modes of LLMs
to human biases in the context of few-shot prompts.
15
-----
**A.2** **Additional information on experimental setup**
In Table 4, we provide a summary of experimental settings for all arithmetic and parity experiments in this paper.
Table 4: Evaluation setting for arithmetic and parity tasks. Questions are sampled uniformly based on answer lengths, with an
average of 100 samples per length. *For composition tasks of multi-number addition and multiplication-as-addition, the number of
shots indicate the number of prompt examples that illustrate the particular composition skill, but more examples of other skills may
be included in the prompt. See the corresponding sections for more details.
**Task** **Answer lengths used in evaluation** **No. of shots in prompt** **Max answer length in prompt**
Two-number addition 2 - 19 3 5
Subtraction (combined) 2 - 14 6 (2 addition, 4 subtraction) 5
Multiplication 2 - 7 2 6
Parity 2 - 30 2 8
Multi-number addition 2 - 10 1* 4
Multiplication-as-addition 2 - 10 1* 2
**A.3** **Additional results on two-number addition**
This section includes additional details and results for Section 3. In Figure 10, we provide an illustration of different
prompting strategies for two-number addition with differing levels of detail in the explanation.
Figure 10: Examples of the two-number addition prompt using different techniques.
**The role of natural language within algorithmic prompt:** Since the algorithmic prompt leverages both natural
language descriptions and intermediate computations, we disentangle the two components and study the role that
natural language plays in the algorithmic prompt. To do so, we consider the following ablations: 1) a symbols-only
16
-----
version of the original algorithmic prompt for addition, where we strip away most of the natural language descriptions,
but still retain the use of certain keywords such as Len and Max (Section B.2.5), 2) a symbols-only version where
keywords Len and Max are replaced with random words VBZ and UXO (Section B.2.6), and 3) a symbols-only version
where keywords are replaced with adversarial words Str and Min, which are associated with known other operations
in the pretraining distribution. The results of the ablations are shown in Figure 11. We see that there is a small but clear
drop in performance when we move from the original prompt to the symbols-only prompt. We observe a further drop
when certain keywords are replaced by uninformative symbols. These results point to the usefulness of leveraging
the natural language understanding of LLMs in specifying aspects of the algorithm. Moreover, we see that using
misleading symbols leads to a significant drop in performance, which further illustrates the model’s reliance on its
pretraining when interpreting the algorithmic instructions.
1.0
0.8
0.6
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||||||||
|||Algo Algo|Origina w/ Sym|l bols O|nly|||||
|||Algo Algo|w/ Unin w/ Misl|forma eading|tive Sy Symb|mbols ols||||
|||||||||||
8 10 12 14 16 18
Algo Original
Algo w/ Symbols Only
Algo w/ Uninformative Symbols
Algo w/ Misleading Symbols
Number of Digits in Answer
Figure 11: Performance of various symbols-only algorithmic prompts on the two-number addition task. Symbols-only prompt
strips natural language from the original prompt, but keeps the use of keywords such as Len and Max. Uninformative symbols
replaces Len and Max with random words VBZ and UXO. Misleading symbols replaces Len and Max with other known words
Str and Min. We see that the symbols-only prompt performs worse than the original algorithmic prompt, and that removing the
use of known keywords and replacing them with uninformative symbols results in a further drop in performance. This illustrates
the usefulness of natural language descriptions in the prompt. Using misleading symbols leads to a significant drop in performance,
further demonstrating the model’s reliance on its pretraining when interpreting the algorithmic instructions.
**Error analysis:** We perform an error analysis for the results of using algorithmic prompting for two-number addition. Details of various error categories are found in Table 5. We see that the model can reliably perform single-step
operations, such as identifying the max number of digits, calculating two-digit sums (with carry), and copying the
previous carry value to the next step. However, the model struggles with multi-step operations such as separating
digits by comma and copying all digits within a list from the previous step.
We also see that most of the errors happen in the earlier steps of solving the problem. This is illustrated in Figure 12.
The first steps have the most number of unprocessed digits, which may explain why they are the most error prone as
the model struggles to copy the lists of digits from step to step.
**A.4** **Additional results on teaching other algorithms**
This section includes additional details and figures for Section 3.2.
**Multiplication:** The prompt used for this experiment is displayed in Section B.4.2. We use 2 shots of up to 6-digits
in answer length in the prompt. The zero-shot performance of Codex on 1-digit × n-digit multiplication is shown in
17
-----
Table 5: Error analysis of two-number addition algorithmic prompt results. We see that the most error-prone steps are faithfully
copying a list from a previous step, followed by counting and separating out digits into list format.
**Error Category** **Overall Accuracy** **Wrong Questions Only**
Count of first number digits 99.55% 88.46%
Count of second number digits 99.04% 75.64%
Identify max number of digits between first and second number 100.0% 100.0%
Convert first number to list format 99.6% 89.74%
Convert second number to list format 99.19% 79.49%
Copy unprocessed digits from first number 99.55% 88.46%
Copy unprocessed digits from second number 97.88% 46.15%
Extract last digit from unprocessed first number digits 99.9% 97.44%
Extract last digit from unprocessed second number digits 99.85% 96.15%
Copy previous carry value in two-digit calculation step 100.0% 100.0%
Sum of two digits calculation 100.0% 100.0%
Calculate new carry value from two-digit calculation step 99.8% 94.87%
Copy previously accumulated answer digits 99.14% 78.21%
Insert new value from two-digit calculation result into answer 99.65% 91.03%
Figure 13. Based on the zero-shot performance, we restrict the direct multiplication by the model to questions with
3 or fewer digits. As seen in the prompt, we explain how to break large numbers into groups of 3 or fewer digits in
natural language. This natural language description is detailed enough such that the model can correctly extrapolate
to creating multiple splits for long numbers, even though it has only seen examples of single splits in the prompt. This
illustrates the benefit of using natural language instructions along with showing the intermediate calculation steps, and
showcases the model’s ability to extrapolate beyond just length generalization.
**Parity:** Similar to (Anil et al., 2022) we investigate the parity problem as an example of length generalization. We
use algorithmic prompting for parity and compare its performance to a few-shot baseline, as well as to a scratchpadstyle prompt as discussed in Anil et al. (2022). Figure 14 captures the performance of these three approaches on lists
of varying sizes. We use 2 shots of up to 8-digits in answer length in the prompt. Each point in Figure 14 represents
average over 100 random samples and we use the same examples for all methods. We observe that the algorithmic
prompt significantly outperforms both baselines. While the baselines’ performance reaches random chance (50%)
around length 5, algorithmic prompt maintains an accuracy of around 80% for lists of up to 30 digits. Section B.5
and B.6 depict the prompts used in this experiment.
**A.5** **Additional results for Skill Accumulation**
This section includes additional details and figures on skill accumulation from Section 4.
We study whether the superior performance of the algorithmic prompt can be attributed to the fact that it is much
longer than the few-shot prompt. To control for this variation, we perform an ablation on the addition-subtraction of
the few-shot baseline. We generate n examples of addition and subtraction, such that the total number of tokens is
equal to the number of tokens used in the algorithmic prompt. The results are shown in Figure 15, and we find that
having more few-shot examples does not improve performance.
**A.6** **Additional results for Skill Composition**
This section includes additional details and figures on skill composition from Section 5. In Figure 16, we provide an
illustrative demonstration of the change in the prompt when going from two-number addition to multi-number addition
18
-----
30
25
20
15
10
0.2 0.4 0.6 0.8 1.0
Progress Ratio to Completion
Figure 12: Distribution of where errors first occur in two-number addition questions using algorithmic prompt. Progress ratio is
calculated as (first error step / total steps). We see that errors occur in earlier steps, where the model has more remaining digits to
process.
to multiplication-as-addition, showing the progression in complexity.
In Figure 18, we include the results for the entire evaluation dataset, including examples that ran out of context in
the first pass through the model. In Figure 19, we show the same results but using the count of numbers being added as
the x-axis. We employ a second-pass strategy, where we append the last completed step from the first-pass output to the
original test question, and perform another inference pass using the new prompt. We observe that this simple secondpass strategy allows us to correctly solve a portion of the questions that were previously incomplete. However, the
performance is still significantly below the hypothetical upper estimate performance achieved by first-pass completed
questions.
In Figure 20, we use a dialogue-like approach where we employ two models loaded with specialized prompts.
For multi-number addition, we prompt one model with an example that explains how to solve multi-number addition
problems as a sequence of two-number addition problems, and prompt a second model with the algorithmic prompt
for two-number addition. Within the prompt for multi-number addition, we employed specialized tokens to indicate
the start and end of a two-number addition problem that the model needs to query the addition-prompted model for.
We extract the two-number addition question and send it to the second model, then retrieve the answer and allow the
first model to continue with its output. We use the same strategy for multiplication-as-addition. The prompt of this first
model can be found in Section B.10 for multi-number addition, and Section B.11 for multiplication-as-addition. We
find in Figure 20 that we are able to generalize out-of-distribution from a single prompt example, and avoid context
length limitations when evaluated on the longest problems in the evaluation data.
**A.7** **Additional results for Tool Use**
This section includes additional details and figures for tool use in Section 6.
In order to teach the model to use the additional algorithm for addition questions, we want to augment the chainof-thought examples with algorithmic output for all addition equations. However, this would take up a lot of context
without much gain in how well the addition algorithm is learned. Thus, we employ a strategy of choosing only 2 of
the prompt examples to augment with algorithmic output, while another 6 examples are presented without algorithmic
output. To differentiate the two types of approaches, we add the flag <ALGO> at the start of the answer for the 2 algorithmic output examples, and add the flag <NONALGO> for the others. At evaluation time, we evaluate performance
with algorithmic output by appending the <ALGO> flag to the end of the prompt, and we append <NONALGO> to
get a non-algorithmic baseline using the same prompt. This flag-based strategy is simple yet effective, with 86% of
<ALGO> examples and 0% of <NONALGO> examples exhibiting algorithmic output. See Section B.13 for the actual
prompt.
19
-----
1.0
0.8
0.6
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||Zero-s|hot|
|||||||
|||||||
|||||||
|||||||
|||||||
10
Number of Digits in Answer
Figure 13: Zero-shot accuracy of one-digit multiplication, where the other number can have up to 11 digits. We see that the
accuracy starts to drop after 3 digits in the answer.
1.0
0.8
0.6
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||Algorit Scratc Chain-|hmic hpad (Ani of-though|l et al 202 t (Anil et|2) al 2022)|
||||Few-sh 50% (|ot (Code Random c|x) hance)||
||||||||
5 10 15 20 25 30
Number of Elements in the List (List length)
Figure 14: Investigating the performance of algorithmic prompting on parity problem and comparing it to scratchpad few-shot
prompt of Anil et al. (2022) as well as few-shot prompting OpenAI’s Codex. Each point on the Algorithmic plot corresponds to
100 random samples of a binary list of the same length. Sections B.5, B.6 depict the prompts used for algorithmic and scratchpad
methods. The number of examples used in the prompt is two. The scratchpad method uses the prompt from Figure 9 in (Anil et al.,
2022). Chain-of-thought values are directly copied from Figure 7 in Anil et al. (2022) (few shot finetuning - few shot eval), and
correspond to prompt from Figure 10 in Anil et al. (2022).
20
-----
1.0
0.8
0.6
0.4
0.2
0.0
|Col1|Col2|Col3|Col4|Few-sho Few-sho|t 6-shot t N-shot|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
6 8 10 12 14
Number of Digits in Answer
Figure 15: Accuracy on combined addition-subtraction questions using a few-shot prompt. Since the algorithmic prompt uses
more tokens in the prompt, we perform an ablation for the few-shot baseline and use n number of examples in the prompt, where
_n is chosen such that the prompt length matches the algorithmic prompt. We see that there is no improvements in performance_
beyond the 6 examples used in the baseline.
Figure 16: Illustration of the tasks and algorithmic prompting strategies considered for skill composition in Section 5. The actual
prompts use algorithmic prompting for each addition question. Starting from simple two-number addition, we explore multi-number
addition which decomposes the problem into a set of two-number addition questions, then extend it further to multiplication-asaddition which converts a multiplication question into an equivalent multi-number addition question.
21
-----
Figure 17: Illustration of various composition strategies. The actual prompts use algorithmic prompting for each addition question. Simple composition combines the prompt from a previously taught skill to new examples illustrating the composed skill.
Augmented composition changes the previous prompt examples so that they are treated as special cases of the new composed skill.
No composition includes only examples illustrating the new composed skill.
1.0
0.8
0.6
1.0
0.8
0.6
0.4
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Accuracy|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
||Algo (S Algo ( Few-sh|imple Augme ot|Comp) nted Co|mp)||||||
||Chain- Algo (|of-thou No Com|ght p)|||||||
4 5 6 7
Number of Digits in Answer
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
||Algo (S Algo ( Few-sh|imple Augme ot|Comp) nted Co|mp)||||||
||Chain- Algo (|of-thou No Com|ght p)|||||||
4 5 6 7
Number of Digits in Answer
0.4
0.2
0.0
0.2
0.0
10
10
(a) Multi-number addition
(b) Multiplication-as-addition
Figure 18: Performance on compositions of skills. Due to length of the algorithmic output for this task, a number of the longest
examples exceed the context length limit for Codex. We employ a second pass strategy to get a final answer for the incomplete
questions, where we keep in-context only the last completed state from the first pass. The dotted lines consider only questions for
which the model completes the output within one pass, and provide an upper estimate on performance. The dashed lines consider all
incomplete questions as false, and provide an lower estimate on performance. We observe that although the algorithmic prompting
methods are suffering from having to do a second pass for the longer samples in this task, they still demonstrate better generalization
than the baseline methods. Note that for multiplication, we evaluate the algorithmic methods on a harder task than the few-shot
baselines, since we force the model to convert the question into the addition of a number n times, while for other baselines we
simply perform 1 × n-digit multiplication directly.
22
-----
1.0
0.8
1.0
0.8
0.6
0.4
0.6
0.4
0.2
0.0
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||Algo ( Algo ( Few-sh|Simple Augme ot|Comp) nted Co|mp)|||||
||Chain- Algo (|of-thou No Com|ght p)||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
||Algo (Si Algo (Au Few-sho|mple Co gmente t|mp) d Comp)|||||
||Chain-of Algo (No|-though Comp)|t|||||
10
Algo (Simple Comp)
Algo (Augmented Comp)
Few-shot
Chain-of-thought
Algo (No Comp)
Algo (Simple Comp)
Algo (Augmented Comp)
Few-shot
Chain-of-thought
Algo (No Comp)
Count of Numbers Added
(a) Multi-number addition
Single-digit Multiplier
(b) Multiplication-as-addition
Figure 19: Performance on compositions of skills with the x-axis being the count of numbers being added instead of the number
of digits in answer used in Figure 18. Due to length of the algorithmic output for this task, a number of the longest examples exceed
the context length limit for Codex. We employ a second pass strategy to get a final answer for the incomplete questions, where
we keep in-context only the last completed state from the first pass. The dotted lines consider only questions for which the model
completes the output within one pass, and provide an upper estimate on performance. The dashed lines consider all incomplete
questions as false, and provide an lower estimate on performance. We observe that although the algorithmic prompting methods are
suffering from having to do a second pass for the longer samples in this task, they still demonstrate better generalization than the
baseline methods. Note that for multiplication, we evaluate the algorithmic methods on a harder task than the few-shot baselines,
since we force the model to convert the question into the addition of a number n times, while for other baselines we simply perform
1 × n-digit multiplication directly.
1.0
0.8
1.0
0.8
0.6
0.4
0.6
0.4
0.2
0.0
0.2
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
|||||||s|||
||Algo ( Algo (|Simple Simple|Comp) Comp)|w/ Inco|mplete|s|||
||Algo C Algo C|all (No all (No|Comp) Comp)|w/ Inco|mplete|s|||
||||||||||
10
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
|||||||es|||
||Algo ( Algo (|Simple Simple|Comp) Comp)|w/ Inco|mplet|es|||
||Algo C Algo C|all (No all (No|Comp) Comp)|w/ Inco|mplet|es|||
||||||||||
10
Number of Digits in Answer
(a) Multi-number addition
Number of Digits in Answer
(b) Multiplication-as-addition
Figure 20: Performance on compositions of skills with algorithmic calls. Due to length of the algorithmic output for this task, a
number of the longest examples exceed the context length limit for Codex. We employ a dialogue-like strategy to get a final answer
for the incomplete questions, where we allow models loaded with different prompts to interact with each other through the use of
specialized tokens learned in-context. The dashed lines consider all incomplete questions as false, and provide an lower estimate on
performance. “Algo Call” refers to this dialogue-like method, which is akin to the “No Composition” setup since we do not include
the two-number addition examples in the prompt. We find that we are able to generalize out-of-distribution from a single prompt
example of digits length 2, and avoid context length limitations when evaluated on the longest problems in the evaluation data.
23
-----
### B Prompt examples
**B.1** **Addition prompt strategies**
For addition prompts, we use 3-shot with the examples 128+367, 9980+29, and 802+7145 in order. For conciseness,
we may include only subsets of the prompt questions in the prompt examples.
**B.1.1** **Algorithmic prompt for addition**
**Problem: 128+367=**
Explanation:
The first number is 128, FN=[1,2,8]. The second number is 367, SN=[3,6,7]. Since FN [1,2,8] has
3 digits, SN [3,6,7] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 3. FN=[1,2,8]. Length of SN is 3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0.
Since 8+7+0=15, 15>10, 15%10=5. Length of A is 1. Thus A=[5]. Since (15-5)/10=1, C[2]=1.
Length of FN is 2. FN=[1,2]. Length of SN is 2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. Since
2+6+1=9, 9<10, 9%10=9. Length of A is 2. Thus A=[9,5]. Since (9-9)/10=0, C[1]=0.
Length of FN is 1. FN=[1]. Length of SN is 1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since
1+3+0=4, 4<10, 4%10=4. Length of A is 3. Thus A=[4,9,5]. Since (4-4)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are no more
operators, the problem is complete. The final Answer is [4,9,5].
**Problem: 9980+29=**
Explanation:
The first number is 9980, FN=[9,9,8,0]. The second number is 29, SN=[2,9]. Since FN [9,9,8,0]
has 4 digits, SN [2,9] has 2 digits, thus the maximum number of digits is 4. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 4. FN=[9,9,8,0]. Length of SN is 2. SN=[2,9]. FN[4]=0. SN[4]=9. C[4]=0.
Since 0+9+0=9, 9<10, 9%10=9. Length of A is 1. Thus A=[9]. Since (9-9)/10=0, C[3]=0.
Length of FN is 3. FN=[9,9,8]. Length of SN is 1. SN=[2]. FN[3]=8. SN[3]=2. C[3]=0. Since
8+2+0=10, 10=10, 10%10=0. Length of A is 2. Thus A=[0,9]. Since (10-0)/10=1, C[2]=1.
Length of FN is 2. FN=[9,9]. Length of SN is 0. SN=[]. FN[2]=9. SN is empty. C[2]=1. Since
9+0+1=10, 10=10, 10%10=0. Length of A is 3. Thus A=[0,0,9]. Since (10-0)/10=1, C[1]=1.
Length of FN is 1. FN=[9]. Length of SN is 0. SN=[]. FN[1]=9. SN is empty. C[1]=1. Since
9+0+1=10, 10=10, 10%10=0. Length of A is 4. Thus A=[0,0,0,9]. Since (10-0)/10=1, C[0]=1.
There are no more digits, but C[0]=1. Length of A is 5. Thus A=[1,0,0,0,9].
There are no more digits and the process is complete. Since there are no more operators, the
problem is complete. The final Answer is [1,0,0,0,9].
**Problem: 802+7145=**
Explanation:
The first number is 802, FN=[8,0,2]. The second number is 7145, SN=[7,1,4,5]. Since FN=[8,0,2]
has 3 digits, SN=[7,1,4,5] has 4 digits, thus the maximum number of digits is 4. In each
subsequent step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 3. FN=[8,0,2]. Length of SN is 4. SN=[7,1,4,5]. FN[4]=2. SN[4]=5. C[4]=0.
Since 2+5+0=7, 7<10, 7%10=7. Length of A is 1. Thus A=[7]. Since (7-7)/10=0, C[3]=0.
Length of FN is 2. FN=[8,0]. Length of SN is 3. SN=[7,1,4]. FN[3]=0. SN[3]=4. C[3]=0.
Since 0+4+0=4, 4<10, 4%10=4. Length of A is 2. Thus A=[4,7]. Since (4-4)/10=0, C[2]=0.
Length of FN is 1. FN=[8]. Length of SN is 2. SN=[7,1]. FN[2]=8. SN[2]=1. C[2]=0. Since
8+1+0=9, 9<10, 9%10=9. Length of A is 3. Thus A=[9,4,7]. Since (9-9)/10=0, C[1]=0.
Length of FN is 0. FN=[]. Length of SN is 1. SN=[7]. FN is empty. SN[1]=7. C[1]=0. Since
0+7+0=7, 7<10, 7%10=7. Length of A is 4. Thus A=[7,9,4,7]. Since (7-7)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are no more
operators, the problem is complete. The final Answer is [7,9,4,7].
**B.1.2** **Few-shot prompt for addition**
**Q: 128+367=**
A: 495.
**Q: 9980+29=**
A: 10009.
**Q: 802+7145=**
A: 7947.
24
-----
**B.1.3** **Chain-of-thought prompt for addition**
**Problem: 128+367=?**
Explanation: Let’s think step by step.
128+367=128+300+67=428+67=495. The final Answer is 495.
**Problem: 9980+29=?**
Explanation: Let’s think step by step.
9980+29=9980+20+9=10000+9=10009. The final Answer is 10009.
**Problem: 802+7145=?**
Explanation: Let’s think step by step.
802+7145=802+7000+100+45=7802+100+45=7902+45=7947. The final Answer is 7947.
**B.1.4** **Instruction addition prompt for addition**
The following are instructions for solving addition problems in the form of x + y = z, where x,
y, and z are positive integers.
We will use the standard algorithm for addition. We align the numbers x and y on the least
significant digit, which is the ones digit. Starting from right to left, we go from the least
significant digit to the most significant digit and add the corresponding digits from each
number. When the sum of the two digits is greater than 9, a carry of 1 is included in the sum
of the next digits. When there is only one digit available from the two numbers, only that
digit along with any carry is included in the sum. When all the digits are processed, only the
remaining carry if any shall be included in the sum.
For x + y = z where x = int(str(abc)), y = int(str(defg)), we can solve z with the following
steps:
1) c+g=w’, w=w’%10
2) b+f+((w’-w)/10)=v’, v=v’%10
3) a+e+((v’-v)/10)=u’, u=u’%10
4) d+((u’-u)/10)=t’, t=t’%10
5) s=(t’-t)/10
Thus, z = int(str(stuvw)).
The answer should be in the form below:
Q: What is abc+defg=?
A: abc
+defg
------stuvw
The answer is stuvw.
25
-----
**B.1.5** **Scratchpad prompt for addition**
Input:
**128+367**
Target:
<scratch>
1 2 8 + 3 6 7, C: 0
1 2 + 3 6, 5 C: 1
1 + 3, 9 5 C: 0
, 4 9 5 C: 0
4 9 5
</scratch>4 9 5.
Input:
**9980+29**
Target:
<scratch>
9 9 8 0 + 2 9, C: 0
9 9 8 + 2, 9 C: 0
9 9, 0 9 C: 1
9, 0 0 9 C: 1
, 0 0 0 9 C: 1
1 0 0 0 9
</scratch>1 0 0 0 9.
**B.1.6** **Detailed scratchpad prompt for addition**
Input:
**128+367**
Target:
<scratch>
1 2 8 has 3 digits.
3 6 7 has 3 digits.
1 2 8 + 3 6 7, C=0, 8 + 7 + 0 = 1 5, A->5, C->1
1 2 + 3 6, A=5, C=1, 2 + 6 + 1 = 9, A->9, C->0
1 + 3, A=9 5, C=0, 1 + 3 + 0 = 4, A->4, C->0
+, A=4 9 5, C=0, END
</scratch>
4 9 5
Input:
**9980+29**
Target:
<scratch>
9 9 8 0 has 4 digits.
2 9 has 2 digits.
9 9 8 0 + 2 9, C=0, 0 + 9 + 0 = 9, A->9, C->0
9 9 8 + 2, A=9, C=0, 8 + 2 + 0 = 1 0, A->0, C->1
9 9 +, A=0 9, C=1, 9 + 0 + 1 = 1 0, A->0, C->1
9 +, A=0 0 9, C=1, 9 + 0 + 1 = 1 0, A->0, C->1
+, A=0 0 0 9, C=1, 0 + 0 + 1 = 1, A->1, C->0
+, A=1 0 0 0 9, C=0, END
</scratch>
1 0 0 0 9
**B.2** **Algorithmic prompt ablations for addition**
**B.2.1** **Algorithmic prompt with uncommon operations for addition**
The uncommon indexing operation is highlighted in red.
26
-----
**Problem: 128+367=**
Explanation:
The first number is 128, FN=[1,2,8]. The second number is 367, SN=[3,6,7]. Since FN [1,2,8] has
3 digits, SN [3,6,7] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
FN[3]=8. SN[3]=7. C[3]=0. Since 8+7+0=15, 15>10, 15%10=5. Length of A is 1. Thus A=[5].
Since (15-5)/10=1, C[2]=1.
FN[2]=2. SN[2]=6. C[2]=1. Since 2+6+1=9, 9<10, 9%10=9. Length of A is 2. Thus A=[9,5].
Since (9-9)/10=0, C[1]=0.
FN[1]=1. SN[1]=3. C[1]=0. Since 1+3+0=4, 4<10, 4%10=4. Length of A is 3. Thus A=[4,9,5].
Since (4-4)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are no more
operators, the problem is complete. The final Answer is [4,9,5].
**B.2.2** **Algorithmic prompt with non-explicit carry for addition**
In this prompt, the explicit carry calculations in the prompt in Section B.1.1 is omitted.
**Problem: 128+367=**
Explanation:
The first number is 128, FN=[1,2,8]. The second number is 367, SN=[3,6,7]. Since FN [1,2,8] has
3 digits, SN [3,6,7] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 3. FN=[1,2,8]. Length of SN is 3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0.
Since 8+7+0=15. Length of A is 1. Thus A=[5]. C[2]=1.
Length of FN is 2. FN=[1,2]. Length of SN is 2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. Since
2+6+1=9. Length of A is 2. Thus A=[9,5]. C[1]=0.
Length of FN is 1. FN=[1]. Length of SN is 1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since
1+3+0=4. Length of A is 3. Thus A=[4,9,5]. C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are no more
operators, the problem is complete. The final Answer is [4,9,5].
**B.2.3** **Algorithmic prompt for addition with irregular errors**
The errors are highlighted in red.
**Problem: 128+367=**
Explanation:
The first number is 128, FN=[1,2,8]. The second number is 367, SN=[3,6,7]. Since FN [1,2,8] has
3 digits, SN [3,6,7] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 3. FN=[1,2,8]. Length of SN is 3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0.
Since 8+6+0=15, 15>10, 15%10=5. Length of A is 1. Thus A=[5]. Since (15-5)/10=1, C[2]=1.
Length of FN is 2. FN=[1,2]. Length of SN is 2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. Since
2+6+1=9, 9<10, 9%10=9. Length of A is 2. Thus A=[9,5]. Since (9-9)/10=0, C[1]=0.
Length of FN is 1. FN=[1]. Length of SN is 1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since
1+2+0=4, 4<10, 4%10=4. Length of A is 3. Thus A=[4,9,5]. Since (4-4)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are no more
operators, the problem is complete. The final Answer is [4,9,5].
**B.2.4** **Algorithmic prompt for addition with systematic errors**
The errors are highlighted in red.
27
-----
**Problem: 128+367=**
Explanation:
The first number is 128, FN=[1,2,8]. The second number is 367, SN=[3,6,7]. Since FN [1,2,8] has
3 digits, SN [3,6,7] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 3. FN=[1,2,8]. Length of SN is 3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0.
Since 8+6+0=15, 15>10, 15%10=5. Length of A is 1. Thus A=[5]. Since (15-5)/10=1, C[2]=1.
Length of FN is 2. FN=[1,2]. Length of SN is 2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. Since
2+5+1=9, 9<10, 9%10=9. Length of A is 2. Thus A=[9,5]. Since (9-9)/10=0, C[1]=0.
Length of FN is 1. FN=[1]. Length of SN is 1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since
1+2+0=4, 4<10, 4%10=4. Length of A is 3. Thus A=[4,9,5]. Since (4-4)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are no more
operators, the problem is complete. The final Answer is [4,9,5].
**B.2.5** **Symbols-only algorithmic prompt for addition**
**Problem: 128+367=**
Explanation:
FN=128, FN=[1,2,8]. SN=367, SN=[3,6,7]. Len(FN)=3, Len(SN)=3, MaxLen=3.
Len(FN)=3. FN=[1,2,8]. Len(SN)=3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0. 8+7+0=15, 15>10,
15%10=5. Len(A)=1. A=[5]. (15-5)/10=1, C[2]=1.
Len(FN)=2. FN=[1,2]. Len(SN)=2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. 2+6+1=9, 9<10, 9%10=9.
Len(A)=2. A=[9,5]. (9-9)/10=0, C[1]=0.
Len(FN)=1. FN=[1]. Len(SN)=1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since 1+3+0=4, 4<10,
4%10=4. Len(A)=3. A=[4,9,5]. (4-4)/10=0, C[0]=0.
Len(FN)=0 and Len(SN)=0 and C[0]=0. Done. The final Answer is [4,9,5].
**B.2.6** **Symbols-only algorithmic prompt for addition without keywords**
In this prompt, we do not use the keywords Len and Max.
**Problem: 128+367=**
Explanation:
FN=128, FN=[1,2,8]. SN=367, SN=[3,6,7]. VBZ(FN)=3, VBZ(SN)=3, UXOVBZ=3.
VBZ(FN)=3. FN=[1,2,8]. VBZ(SN)=3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0.
8+7+0=15, 15>10, 15%10=5. VBZ(A)=1. A=[5]. (15-5)/10=1, C[2]=1.
VBZ(FN)=2. FN=[1,2]. VBZ(SN)=2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. 2+6+1=9, 9<10, 9%10=9.
VBZ(A)=2. A=[9,5]. (9-9)/10=0, C[1]=0.
VBZ(FN)=1. FN=[1]. VBZ(SN)=1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since 1+3+0=4, 4<10,
4%10=4. VBZ(A)=3. A=[4,9,5]. (4-4)/10=0, C[0]=0.
VBZ(FN)=0 and VBZ(SN)=0 and C[0]=0. Done. The final Answer is [4,9,5].
**B.2.7** **Symbols-only algorithmic prompt for addition with misleading keywords**
In this prompt, we replace the keywords Len and Max with Str and Min.
**Problem: 128+367=**
Explanation:
FN=128, FN=[1,2,8]. SN=367, SN=[3,6,7]. Str(FN)=3, Str(SN)=3, MinStr=3. Str(FN)=3.
FN=[1,2,8]. Str(SN)=3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0. 8+7+0=15, 15>10, 15%10=5.
Str(A)=1. A=[5]. (15-5)/10=1, C[2]=1.
Str(FN)=2. FN=[1,2]. Str(SN)=2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. 2+6+1=9, 9<10, 9%10=9.
Str(A)=2. A=[9,5]. (9-9)/10=0, C[1]=0.
Str(FN)=1. FN=[1]. Str(SN)=1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since 1+3+0=4, 4<10,
4%10=4. Str(A)=3. A=[4,9,5]. (4-4)/10=0, C[0]=0.
Str(FN)=0 and Str(SN)=0 and C[0]=0. Done. The final Answer is [4,9,5].
**B.3** **Addition-subtraction prompt strategies**
**B.3.1** **Algorithmic prompt for addition-subtraction**
For the addition-subtraction prompt, we use prompt examples 128 + 367, 9980 + 29, 29 − 570, −99 − 21, 483 − 389,
and −30 + 8002 in order.
28
-----
**Problem: 483-389=**
Explanation:
The first number is 483, adding commas between each number, FN=[4,8,3]. The second number is
-389, adding commas between each number, SN=-[3,8,9]. FN [4,8,3] has 3 digits, SN -[3,8,9] has 3
digits, max is 3.
Len(FN)=3. FN=[4,8,3]. FN[3]=3. Len(SN)=3. SN=-[3,8,9]. SN[3]=-9. C[3]=0. Since 3-9+0=-6,
-6<-10, -6%-10=-6. Len(A)=1. A=[-6]. Since (-6--6)/10=0, C[2]=0.
Len(FN)=2. FN=[4,8]. FN[2]=8. Len(SN)=2. SN=-[3,8]. SN[2]=-8. C[2]=0. Since 8-8+0=0, 0<10,
0%10=0. Len(A)=2. A=[0,-6]. Since (0-0)/10=0, C[1]=0.
Len(FN)=1. FN=[4]. FN[1]=4. Len(SN)=1. SN=-[3]. SN[1]=-3. C[1]=0. Since 4-3+0=1, 1<10,
1%10=1. Len(A)=3. A=[1,0,-6]. Since (1-1)/10=0, C[0]=0.
Len(FN)=0. FN=[]. FN[0]=empty. Len(SN)=0. SN=-[]. SN[0]=empty. Since both FN and SN are
empty, next. Since C[0]=0, the steps are done. Since there are - in A, we check the sign of the
last step A[1]=1. Since 1 is non-neg, we process A from right to left. A=[1,0,-6]=[+1,+0,-6].
C[3]=0.
Len(A)=3. A=[+1,+0,-6]. A[3]=-6. Since -6<0, B=10, C[2]=-1. Since C[3]=0, thus -6+10+0=4.
Len(ANEW)=1. ANEW=[4]. C[2]=-1.
Len(A)=2. A=[+1,+0]. A[2]=+0. Since +0 is 0, B=0, C[1]=0. Since C[2]=-1, thus 0+0-1=-1, which
is neg, thus repeat with B=10, C[1]=-1. -1+10+0=9. Len(ANEW)=2. ANEW=[9,4]. C[1]=-1.
Len(A)=1. A=[+1]. A[1]=+1. Since +1>0, B=0, C[0]=0. Since C[1]=-1, thus 1+0-1=0.
Len(ANEW)=3. ANEW=[0,9,4]. C[0]=0.
Len(A)=0. A=[]. Since A is empty, the problem is complete. The final Answer is [0,9,4].
**Problem: 29-570=**
Explanation:
The first number is 29, adding commas between each number, FN=[2,9]. The second number is -570,
adding commas between each number, SN=-[5,7,0]. FN [2,9] has 2 digits, SN -[5,7,0] has 3 digits,
max is 3.
Len(FN)=2. FN=[2,9]. FN[2]=9. Len(SN)=3. SN=-[5,7,0]. SN[3]=-0. C[3]=0. Since 9-0+0=9,
9<10, 9%10=9. Len(A)=1. A=[9]. Since (9-9)/10=0, C[2]=0.
Len(FN)=1. FN=[2]. FN[1]=2. Len(SN)=2. SN=-[5,7]. SN[2]=-7. C[2]=0. Since 2-7+0=-5,
-5<-10, -5%-10=-5. Len(A)=2. A=[-5,9]. Since (-5--5)/10=0, C[1]=0.
Len(FN)=0. FN=[]. FN[0]=empty. Len(SN)=1. SN=-[5]. SN[1]=-5. C[1]=0. Since 0-5+0=-5,
-5<-10, -5%-10=-5. Len(A)=3. A=[-5,-5,9]. Since (-5--5)/10=0, C[0]=0.
Len(FN)=0. FN=[]. FN[0]=empty. Len(SN)=0. SN=-[]. SN[0]=empty. Since both FN and SN are
empty, next. Since C[0]=0, the steps are done. Since there are - in A, we check the sign of
the last step A[1]=-5. Since -5 is neg, we change the sign and process A from right to left.
A=[-5,-5,9]=-[+5,+5,-9]. C[3]=0.
Len(A)=3. A=-[+5,+5,-9]. A[3]=-9. Since -9<0, B=10, C[2]=-1. Since C[3]=0, thus -9+10+0=1.
Len(ANEW)=1. ANEW=-[1]. C[2]=-1.
Len(A)=2. A=-[+5,+5]. A[2]=+5. Since +5>0, B=0, C[1]=0. Since C[2]=-1, thus 5+0-1=4.
Len(ANEW)=2. ANEW=-[4,1]. C[1]=0.
Len(A)=1. A=-[+5]. A[1]=+5. Since +5>0, B=0, C[0]=0. Since C[1]=0, thus 5+0+0=5.
Len(ANEW)=3. ANEW=-[5,4,1]. C[0]=0.
Len(A)=0. A=-[]. Since A is empty, the problem is complete. The final Answer is -[5,4,1].
29
-----
**B.3.2** **Chain-of-thought prompt for addition-subtraction**
**Problem: 128+367=?**
Explanation: Let’s think step by step.
128+367=128+300+67=428+67=495. The final Answer is 495.
**Problem: 9980+29=?**
Explanation: Let’s think step by step.
9980+29=9980+20+9=10000+9=10009. The final Answer is 10009.
**Problem: 29-570=?**
Explanation: Let’s think step by step.
29-570=29-500-70=-471-70=-541. The final Answer is -541.
**Problem: -99-21=?**
Explanation: Let’s think step by step.
-99-21=-99-20-1=-119-1=-120. The final Answer is -120.
**Problem: 483-389=?**
Explanation: Let’s think step by step.
483-389=483-300-80-9=183-80-9=103-9=94. The final Answer is 94.
**Problem: -30+8002=?**
Explanation: Let’s think step by step.
-30+8002=-30+8000+2=-30+8002=7972. The final Answer is 7972.
**B.4** **Memorized multiplication prompt strategies**
**B.4.1** **Chain-of-thought prompt for memorized multiplication**
For multiplication, we use prompt examples 128 ∗ 367 and 2035 ∗ 87 in order.
**Q: 128*367=?**
A: Let’s think step by step.
128*367=128*(300+60+7)
128*367=128*300+128*60+128*7
128*367=38400+7680+896
128*367=46976
So, 128*367=46976. The answer is 46976.
**Q: 2035*87=?**
A: Let’s think step by step.
2035*87=2000*87+30*87+5*87
2035*87=174000+2610+435
2035*87=177045
So, 2035*87=177045. The answer is 177045.
30
-----
**B.4.2** **Algorithmic prompt for memorized multiplication**
**Q: 128*367=**
Explanation:
FN=128, FN=[1,2,8]. SN=367, SN=[3,6,7]. Len(FN)=3, Len(SN)=3. Max len is 3. Since 3=3, the
lengths of two numbers are equal, we pick FN and break [1,2,8] into 3//3=1 group of three and
one group of 3%3=0 leftover digits. Since there are 0 leftover digits, from [1,2,8] we break the
first 0 digits as [][1,2,8], thus the leftover group is []=empty and the main group is [1,2,8].
Since there is 3//3=1 group of three, we break the main group into 1 group of 3 each: [1,2,8].
Reformatting for each main group, we have 128. Thus, ignoring the empty group, the groups are
128. The other number is the MULVAL, thus MULVAL=367.
The submulproblems are 128*367=MUL1. There is 1 mul operator.
**[START]**
Submulproblem: 128*367=MUL1
FN=128, FN=[1,2,8]. Mulval=367. Len(FN)=3. P0=0.
Len(FN)=3. FN=[1,2,8]. FN[3]=8. 8*367=2936. P0=0, append 0 zero [] to [2,9,3,6][]:
[2,9,3,6]=ADV1.
Len(FN)=2. FN=[1,2]. FN[2]=2. 2*367=734. P0=1, append 1 zero [0] to [7,3,4|0]:
[7,3,4,0]=ADV2.
Len(FN)=1. FN=[1]. FN[1]=1. 1*367=367. P0=2, append 2 zero [0,0] to [3,6,7|0,0]:
[3,6,7,0,0]=ADV3.
Len(FN)=0. Done.
++START++
Addition Problem: ADV1+ADV2+ADV3=
Explanation:
The subproblems are ADV1+ADV2=ANS1, ANS1+ADV3=ANS2. There are 2 add operators.
Subproblem: ADV1+ADV2=ANS1
FN=ADV1, FN=[2,9,3,6]. SN=ADV2, SN=[7,3,4,0]. Len(FN)=4, Len(SN)=4, max len is 4.
Len(FN)=4. FN=[2,9,3,6]. Len(SN)=4. SN=[7,3,4,0]. FN[4]=6. SN[4]=0. C[4]=0. 6+0+0=6, 6<10,
6%10=6. Len(A)=1. A=[6]. (6-6)/10=0, C[3]=0.
Len(FN)=3. FN=[2,9,3]. Len(SN)=3. SN=[7,3,4]. FN[3]=3. SN[3]=4. C[3]=0. 3+4+0=7, 7<10,
7%10=7. Len(A)=2. A=[7,6]. (7-7)/10=0, C[2]=0.
Len(FN)=2. FN=[2,9]. Len(SN)=2. SN=[7,3]. FN[2]=9. SN[2]=3. C[2]=0. 9+3+0=12, 12>10,
12%10=2. Len(A)=3. A=[2,7,6]. (12-2)/10=1, C[1]=1.
Len(FN)=1. FN=[2]. Len(SN)=1. SN=[7]. FN[1]=2. SN[1]=7. C[1]=1. 2+7+1=10, 10=10, 10%10=0.
Len(A)=4. A=[0,2,7,6]. (10-0)/10=1, C[0]=1.
Len(FN)=0. FN=[]. Len(SN)=0. SN=[]. Both are empty. C[0]=1. Not done. Len(A)=5.
ANS1=[1,0,2,7,6]. Since there are 2 add operators and we processed up to ANS1, continue. The
new FN is [1,0,2,7,6].
Subproblem: ANS1+ADV3=ANS2
FN=ANS1, FN=[1,0,2,7,6]. SN=ADV3, SN=[3,6,7,0,0]. Len(FN)=5, Len(SN)=5, max len is 5.
Len(FN)=5. FN=[1,0,2,7,6]. Len(SN)=5. SN=[3,6,7,0,0]. FN[5]=6. SN[5]=0. C[5]=0. 6+0+0=6,
6<10, 6%10=6. Len(A)=1. A=[6]. (6-6)/10=0, C[4]=0.
Len(FN)=4. FN=[1,0,2,7]. Len(SN)=4. SN=[3,6,7,0]. FN[4]=7. SN[4]=0. C[4]=0. 7+0+0=7, 7<10,
7%10=7. Len(A)=2. A=[7,6]. (7-7)/10=0, C[3]=0.
Len(FN)=3. FN=[1,0,2]. Len(SN)=3. SN=[3,6,7]. FN[3]=2. SN[3]=7. C[3]=0. 2+7+0=9, 9<10,
9%10=9. Len(A)=3. A=[9,7,6]. (9-9)/10=0, C[2]=0.
Len(FN)=2. FN=[1,0]. Len(SN)=2. SN=[3,6]. FN[2]=0. SN[2]=6. C[2]=0. 0+6+0=6, 6<10, 6%10=6.
Len(A)=4. A=[6,9,7,6]. (6-6)/10=0, C[1]=0.
Len(FN)=1. FN=[1]. Len(SN)=1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. 1+3+0=4, 4<10, 4%10=4.
Len(A)=5. A=[4,6,9,7,6]. (4-4)/10=0, C[0]=0.
Len(FN)=0. FN=[]. Len(SN)=0. SN=[]. Both are empty. C[0]=0. Done. ANS2=[4,6,9,7,6].
Since there are add 2 operators and we processed up to ANS2, complete. The final ADDAnswer is
[4,6,9,7,6].
++END++
**[END]**
MUL1=[4,6,9,7,6]. Since there is 1 mul operator and we processed up to MUL1, complete.
We now combine the MUL results. Since 1 mul operator, we append 3*(1-1)=3*0=0 zeros to
MUL1, MUL1=[4,6,9,7,6][]=[4,6,9,7,6]. Addition Mul Problem: MUL1+EMPTY= Explanation: The
subproblems are MUL1+EMPTY=ANS1. There is 1 MA operator. Since EMPTY is in the equation,
ANS1=MUL1=[4,6,9,7,6]. Since there is 1 MA operator and we processed up to ANS1, complete. The
END Answer is [4,6,9,7,6].
31
-----
**B.5** **Algorithmic prompt for parity**
For parity, we use prompt examples [1, 1, 0, 1, 0] and [0, 1, 1, 0, 0, 0, 0, 0] in order.
Q: What is the parity on the list a=[1, 1, 0, 1, 0]?
A: We initialize s=
a=[1, 1, 0, 1, 0]. The first element of a is 1 so b=1. s = s + b = 0 + 1 = 1. s=1.
a=[1, 0, 1, 0]. The first element of a is 1 so b=1. s = s + b = 1 + 1 = 0. s=0.
a=[0, 1, 0]. The first element of a is 0 so b=0. s = s + b = 0 + 0 = 0. s=0.
a=[1, 0]. The first element of a is 1 so b=1. s = s + b = 0 + 1 = 1. s=1.
a=[0]. The first element of a is 0 so b=0. s = s + b = 1 + 0 = 1. s=1.
a=[] is empty. Since the list a is empty and we have s=1, the parity is 1.
**B.6** **Scratchpad parity for parity (Anil et al., 2022)**
Q: What is the parity on the list a=[1, 1, 0, 1, 0]?
A: [1, 0, 0, 1, 1], the parity is 1.
Q: What is the parity on the list a=[0, 1, 1, 0, 0, 0, 0, 0]?
A: [0, 1, 0, 0, 0, 0, 0, 0], the parity is 0.
**B.7** **Algorithmic prompt for multi-add and multiply-as-add**
**Problem: 128+367=**
Explanation:
The first number is 128, FN=[1,2,8]. The second number is 367, SN=[3,6,7]. Since FN [1,2,8] has
3 digits, SN [3,6,7] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 3. FN=[1,2,8]. Length of SN is 3. SN=[3,6,7]. FN[3]=8. SN[3]=7. C[3]=0.
Since 8+7+0=15, 15>10, 15%10=5. Length of A is 1. Thus A=[5]. Since (15-5)/10=1, C[2]=1.
Length of FN is 2. FN=[1,2]. Length of SN is 2. SN=[3,6]. FN[2]=2. SN[2]=6. C[2]=1. Since
2+6+1=9, 9<10, 9%10=9. Length of A is 2. Thus A=[9,5]. Since (9-9)/10=0, C[1]=0.
Length of FN is 1. FN=[1]. Length of SN is 1. SN=[3]. FN[1]=1. SN[1]=3. C[1]=0. Since
1+3+0=4, 4<10, 4%10=4. Length of A is 3. Thus A=[4,9,5]. Since (4-4)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. The final Answer is [4,9,5].
**Problem: Problem: 9980+29=**
Explanation:
The first number is 9980, FN=[9,9,8,0]. The second number is 29, SN=[2,9]. Since FN [9,9,8,0]
has 4 digits, SN [2,9] has 2 digits, thus the maximum number of digits is 4. In each subsequent
step, we remove one number from the end of FN and one from the end of SN.
Length of FN is 4. FN=[9,9,8,0]. Length of SN is 2. SN=[2,9]. FN[4]=0. SN[4]=9. C[4]=0.
Since 0+9+0=9, 9<10, 9%10=9. Length of A is 1. Thus A=[9]. Since (9-9)/10=0, C[3]=0.
Length of FN is 3. FN=[9,9,8]. Length of SN is 1. SN=[2]. FN[3]=8. SN[3]=2. C[3]=0. Since
8+2+0=10, 10=10, 10%10=0. Length of A is 2. Thus A=[0,9]. Since (10-0)/10=1, C[2]=1.
Length of FN is 2. FN=[9,9]. Length of SN is 0. SN=[]. FN[2]=9. SN is empty. C[2]=1. Since
9+0+1=10, 10=10, 10%10=0. Length of A is 3. Thus A=[0,0,9]. Since (10-0)/10=1, C[1]=1.
Length of FN is 1. FN=[9]. Length of SN is 0. SN=[]. FN[1]=9. SN is empty. C[1]=1. Since
9+0+1=10, 10=10, 10%10=0. Length of A is 4. Thus A=[0,0,0,9]. Since (10-0)/10=1, C[0]=1.
There are no more digits, but C[0]=1. Length of A is 5. Thus A=[1,0,0,0,9]. The final Answer
is [1,0,0,0,9].
**Problem: 802+7145+6=**
Explanation:
The subproblems are 802+7145=ANS1 and ANS1+6=ANS2. There are 2 operators.
Subproblem: 802+7145=ANS1
The first number is 802, FN=[8,0,2]. The second number is 7145, SN=[7,1,4,5]. Since FN=[8,0,2]
has 3 digits, SN=[7,1,4,5] has 4 digits, thus the maximum number of digits is 4. In each
subsequent step, we remove one number from the end of FN and one from the end of SN.
### continued on next page
32
-----
Length of FN is 3. FN=[8,0,2]. Length of SN is 4. SN=[7,1,4,5]. FN[4]=2. SN[4]=5. C[4]=0.
Since 2+5+0=7, 7<10, 7%10=7. Length of A is 1. Thus A=[7]. Since (7-7)/10=0, C[3]=0.
Length of FN is 2. FN=[8,0]. Length of SN is 3. SN=[7,1,4]. FN[3]=0. SN[3]=4. C[3]=0.
Since 0+4+0=4, 4<10, 4%10=4. Length of A is 2. Thus A=[4,7]. Since (4-4)/10=0, C[2]=0.
Length of FN is 1. FN=[8]. Length of SN is 2. SN=[7,1]. FN[2]=8. SN[2]=1. C[2]=0. Since
8+1+0=9, 9<10, 9%10=9. Length of A is 3. Thus A=[9,4,7]. Since (9-9)/10=0, C[1]=0.
Length of FN is 0. FN=[]. Length of SN is 1. SN=[7]. FN is empty. SN[1]=7. C[1]=0. Since
0+7+0=7, 7<10, 7%10=7. Length of A is 4. Thus A=[7,9,4,7]. Since (7-7)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are 2 operators
and we processed up to ANS1, there are more operators to process. Thus, ANS1 is [7,9,4,7].
Subproblem: ANS1+6=ANS2
The first number is ANS1, FN=[7,9,4,7]. The second number is 6, SN=[6]. Since FN=[7,9,4,7] has
4 digits, SN=[6] has 1 digit, thus the maximum number of digits is 4. In each subsequent step,
we remove one number from the end of FN and one from the end of SN.
Length of FN is 4. FN=[7,9,4,7]. Length of SN is 1. SN=[6]. FN[4]=7. SN[4]=6. C[4]=0.
Since 7+6+0=13, 13>10, 13%10=3. Length of A is 1. Thus A=[3]. Since (13-3)/10=1, C[3]=1.
Length of FN is 3. FN=[7,9,4]. Length of SN is 0. SN=[]. FN[3]=4. SN is empty. C[3]=1.
Since 4+0+1=5, 5<10, 5%10=5. Length of A is 2. Thus A=[5,3]. Since (5-5)/10=0, C[2]=0.
Length of FN is 2. FN=[7,9]. Length of SN is 0. SN=[]. FN[2]=9. SN is empty. C[2]=0. Since
9+0+0=9, 9<10, 9%10=9. Length of A is 3. Thus A=[9,5,3]. Since (9-9)/10=0, C[1]=0.
Length of FN is 1. FN=[7]. Length of SN is 0. SN=[]. FN[1]=7. SN is empty. C[1]=0. Since
7+0+0=7, 7<10, 7%10=7. Length of A is 4. Thus A=[7,9,5,3]. Since (7-7)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are 2 operators
and we processed up to ANS2, the problem is complete. The final Answer is [7,9,5,3].
**Problem: 3*7=**
Explanation:
The subproblems are 3*7=MS1. There is 1 * operator.
Subproblem: 3*7=MS1
Since the problem is multiplication, we find the smaller of the two numbers and add the larger
number as many times as the smaller number. The first number is 3, FN=[3]=3. The second number
is 7, SN=[7]=7. Since 3 is smaller than 7, we rewrite the problem as 7 summed together 3 times:
7+7+7. We end at ANS(3-1)=2=ANS2.
The subproblems are 7+7=ANS1 and ANS1+7=ANS2. There are 2 operators.
Subproblem: 7+7=ANS1
The first number is 7, FN=[7]. The second number is 7, SN=[7]. Since FN=[7] has 1 digit, SN=[7]
has 1 digit, thus the maximum number of digits is 1. In each subsequent step, we remove one
number from the end of FN and one from the end of SN.
Length of FN is 1. FN=[7]. Length of SN is 1. SN=[7]. FN[1]=7. SN[1]=7. C[1]=0. Since
7+7+0=14, 14>10, 14%10=4. Length of A is 1. Thus A=[4]. Since (14-4)/10=1, C[0]=1.
There are no more digits and C[0]=1. Length of A is 2. Thus A=[1,4].
There are no more digits and the process is complete. Since there are 2 operators and we
processed up to ANS1, there are more operators to process. Thus, ANS1 is [1,4].
Subproblem: ANS1+7=ANS2
The first number is ANS1, FN=[1,4]. The second number is 7, SN=[7]. Since FN=[1,4] has 2
digits, SN=[7] has 1 digit, thus the maximum number of digits is 2. In each subsequent step,
we remove one number from the end of FN and one from the end of SN.
Length of FN is 2. FN=[1,4]. Length of SN is 1. SN=[7]. FN[2]=4. SN[2]=7. C[2]=0. Since
4+7+0=11, 11>10, 11%10=1. Length of A is 1. Thus A=[1]. Since (11-1)/10=1, C[1]=1.
Length of FN is 1. FN=[1]. Length of SN is 0. SN=[]. FN[1]=1. SN is empty. C[1]=1. Since
1+0+1=2, 2<10, 2%10=2. Length of A is 2. Thus A=[2,1]. Since (2-2)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are 2 operators
and we processed up to ANS2, the problem is complete. Since there is 1 * operator and we
processed up to MS1, the overall problem is complete. The final Answer is [2,1].
**B.8** **Chain-of-thought prompt for multi-add**
We use the same prompt examples as the algorithmic prompt, which are 128 + 367, 9980 + 29, 802 + 7145 + 6,
7 + 7 + 7 in order.
33
-----
**Q: 802+7145+6=**
A: Let’s think step by step.
802+7145=7947
7947+6=7953
So, 802+7145+6=7953. The answer is 7953.
**Q: 7+7+7=**
A: Let’s think step by step.
7+7=14
14+7=21
So, 7+7+7=21. The answer is 21.
**B.9** **Chain-of-thought prompt for multiply-as-add**
We use prompt examples 3 × 107, 5 × 6, 9 × 9, 277 × 2 in order.
**Q: 3*107=**
A: Let’s think step by step.
3*100=300
3*7=21
300+21=321
So, 3*107=321. The answer is 321.
**Q: 5*6=**
A: Let’s think step by step.
5*6=30
So, 5*6=30. The answer is 30.
**B.10** **Algorithmic prompt for multi-add with algo calls**
This prompt uses a single example to illustrate multi-number addition. The special tokens that correspond to the start
and end of the question extraction are Subproblem: and <GET>.
**Problem: 802+7145+6=**
Explanation:
The subproblems are 802+7145=ANS1 and ANS1+6=ANS2. Since we ended on ANS 2, there are 2
operators.
Subproblem: 802+7145⟨GET⟩=7947. Since there are 2 operators and we processed up to ANS1, there
are more operators to process.
Subproblem: 7947+6⟨GET⟩=7953. Since there are 2 operators and we processed up to ANS2, the
problem is complete. The final Answer is 7953.
**B.11** **Algorithmic prompt for multiplication-as-addition with algo calls**
This prompt uses a single example to illustrate multiplication-as-addition, and combines it with the multi-number
addition example from Section B.10. The special tokens that correspond to the start and end of the question extraction
are Subproblem: and <GET>.
**Problem: 3*7=**
Explanation:
Since the problem is multiplication, we find the smaller of the two numbers and add the larger
number as many times as the smaller number. The first number is 3, FN=[3]=3. The second number
is 7, SN=[7]=7. Since 3 is smaller than 7, we rewrite the problem as 7 summed together 3 times:
7+7+7. We end at ANS(3-1)=2=ANS2.
The subproblems are 7+7=ANS1 and ANS1+7=ANS2. Since we ended on ANS 2, there are 2 operators.
Subproblem: 7+7⟨GET⟩=14. Since there are 2 operators and we processed up to ANS1, there are more
operators to process.
Subproblem: 14+7⟨GET⟩=21. Since there are 2 operators and we processed up to ANS2, the problem
is complete. The final Answer is 21.
34
-----
**B.12** **Chain-of-thought prompt for multi-add**
**Q: 9980+29=**
A: Let’s think step by step.
9980+29=10009
So, 9980+29=10009. The answer is 10009.
**Q: 802+7145+6=**
A: Let’s think step by step.
802+7145=7947
7947+6=7953
**B.13** **Algorithmic prompt for GSM8k**
The following is the full prompt corresponding to the ”W/ plan W/ algo” experiment in Figure 9(a).
Q: Tommy has 3 toy cars. His neighbor, Jessie, has 3 cars too. Jessie’s older brother has 5
more cars than Tommy and Jessie. How many cars do the three of them have altogether?
A: <NONALGO> Tommy and Jessie have 3+3=6 cars. Jessie’s brother has 5+6=11 cars. Altogether,
they have 6+11=17 cars. The answer is 17.
Q: An electronic shop offers smartphones for $467 each, PCs are $128 more expensive than
smartphones, and advanced tablets are the prices of a smartphone and a PC combined. How much
do you have to pay to buy one of each of the three mentioned products?
A: <ALGO> To solve this problem, we need to find the prices of a PC and an advanced tablet.
Then, we need to add the price of all three products together.
The price of a PC is $128 more than a smartphone, thus the price of PC is 467+128. We use the
addition algorithm:
Problem: 467+128=
Explanation:
The subproblems are 467+128=ANS1. There is 1 connecting operator.
Subproblem: 467+128=ANS1
The first number is 467, FN=[4,6,7]. The second number is 128, SN=[1,2,8]. Since FN [3,6,7] has
3 digits, SN [1,2,8] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN. Length of A is 0.
Length of FN is 3. FN=[4,6,7]. FN[3]=7. Length of SN is 3. SN=[1,2,8]. SN[3]=8. C[3]=0.
Since 7+8+0=15, 15>10, 15%10=5. Length of A is 1. Thus A=[5]. Since (15-5)/10=1, C[2]=1.
Length of FN is 2. FN=[4,6]. FN[2]=6. Length of SN is 2. SN=[1,2]. SN[2]=2. C[2]=1. Since
6+2+1=9, 9<10, 9%10=9. Length of A is 2. Thus A=[9,5]. Since (9-9)/10=0, C[1]=0.
Length of FN is 1. FN=[4]. FN[1]=4. Length of SN is 1. SN=[1]. SN[1]=1. C[1]=0. Since
4+1+0=5, 5<10, 5%10=5. Length of A is 3. Thus A=[5,9,5]. Since (5-5)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there is 1 operator
and we processed up to ANS1, the problem is complete. The final Answer is [5,9,5]. Removing all
2 commas, we have 595.
The addition algorithm tells us that the price of a PC is 595. Since the price of an advanced
tablet is the sum of a smartphone and a PC, its price is 467+595. We use the addition algorithm:
Problem: 467+595=
Explanation:
The subproblems are 467+595=ANS1. There is 1 connecting operator.
Subproblem: 467+595=ANS1
The first number is 467, FN=[4,6,7]. The second number is 595, SN=[5,9,5]. Since FN [4,6,7] has
3 digits, SN [5,9,5] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN. Length of A is 0.
Length of FN is 3. FN=[4,6,7]. FN[3]=7. Length of SN is 3. SN=[5,9,5]. SN[3]=5. C[3]=0.
Since 7+5+0=12, 12>10, 12%10=2. Length of A is 1. Thus A=[2]. Since (12-2)/10=1, C[2]=1.
Length of FN is 2. FN=[4,6]. FN[2]=6. Length of SN is 2. SN=[5,9]. SN[2]=9. C[2]=1. Since
6+9+1=16, 16>10, 16%10=6. Length of A is 2. Thus A=[6,2]. Since (16-6)/10=1, C[1]=1.
Length of FN is 1. FN=[4]. FN[1]=4. Length of SN is 1. SN=[5]. SN[1]=5. C[1]=1. Since
4+5+1=10, 10=10, 10%10=0. Length of A is 3. Thus A=[0,6,2]. Since (10-0)/10=1, C[0]=1.
There are no more digits, but C[0]=1. Length of A is 4. A=[1,0,6,2]. Thus the process is
complete. Since there is 1 operator and we processed up to ANS1, the problem is complete. The
final Answer is [1,0,6,2]. Removing all 3 commas, we have 1062.
###continued on next page
35
-----
The addition algorithm tells us that the price of an advanced tablet is 1062. To buy one of
each of these products, you would have to pay 467+595+1062. We use the addition algorithm:
Problem: 467+595+1062=
Explanation:
The subproblems are 467+595=ANS1, ANS1+1062=ANS2. There are 2 connecting operators.
Subproblem: 467+595=ANS1
The first number is 467, FN=[4,6,7]. The second number is 595, SN=[5,9,5]. Since FN [4,6,7] has
3 digits, SN [5,9,5] has 3 digits, thus the maximum number of digits is 3. In each subsequent
step, we remove one number from the end of FN and one from the end of SN. Length of A is 0.
Length of FN is 3. FN=[4,6,7]. FN[3]=7. Length of SN is 3. SN=[5,9,5]. SN[3]=5. C[3]=0.
Since 7+5+0=12, 12>10, 12%10=2. Length of A is 1. Thus A=[2]. Since (12-2)/10=1, C[2]=1.
Length of FN is 2. FN=[4,6]. FN[2]=6. Length of SN is 2. SN=[5,9]. SN[2]=9. C[2]=1. Since
6+9+1=16, 16>10, 16%10=6. Length of A is 2. Thus A=[6,2]. Since (16-6)/10=1, C[1]=1.
Length of FN is 1. FN=[4]. FN[1]=4. Length of SN is 1. SN=[5]. SN[1]=5. C[1]=1. Since
4+5+1=10, 10=10, 10%10=0. Length of A is 3. Thus A=[0,6,2]. Since (10-0)/10=1, C[0]=1.
There are no more digits, but C[0]=1. Length of A is 4. A=[1,0,6,2]. Thus the process is
complete. Since there are 2 operators and we processed up to ANS1, there are more operators
to process. The new FN is [1,0,6,2].
Subproblem: ANS1+1062=ANS2
The first number is ANS1, FN=[1,0,6,2]. The second number is 1062, SN=[1,0,6,2]. Since FN
[1,0,6,2] has 4 digits, SN [1,0,6,2] has 4 digits, thus the maximum number of digits is 4. In
each subsequent step, we remove one number from the end of FN and one from the end of SN. Length
of A is 0.
Length of FN is 4. FN=[1,0,6,2]. FN[4]=2. Length of SN is 4. SN=[1,0,6,2]. SN[4]=2. C[4]=0.
Since 2+2+0=4, 4<10, 4%10=4. Length of A is 1. Thus A=[4]. Since (4-4)/10=0, C[3]=0.
Length of FN is 3. FN=[1,0,6]. FN[3]=6. Length of SN is 3. SN=[1,0,6]. SN[3]=6. C[3]=0.
Since 6+6+0=12, 12>10, 12%10=2. Length of A is 2. Thus A=[2,4]. Since (12-2)/10=1, C[2]=1.
Length of FN is 2. FN=[1,0]. FN[2]=0. Length of SN is 2. SN=[1,0]. SN[2]=0. C[2]=1. Since
0+0+1=1, 1<10, 1%10=1. Length of A is 3. Thus A=[1,2,4]. Since (1-1)/10=0, C[1]=0.
Length of FN is 1. FN=[1]. FN[1]=1. Length of SN is 1. SN=[1]. SN[1]=1. C[1]=0. Since
1+1+0=2, 2<10, 2%10=2. Length of A is 4. Thus A=[2,1,2,4]. Since (2-2)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there are 2 operators
and we processed up to ANS2, the problem is complete. The final Answer is [2,1,2,4].
Removing all 3 commas, we have 2124.
The addition algorithm tells us that the sum of all the products is 2124. The answer is 2124.
Q: Cally and Danny washed their clothes. Cally has 10 white shirts, 5 colored shirts, 7 pairs of
shorts, and 6 pairs of pants, while Danny has 6 white shirts, 8 colored shirts, 10 shorts, and 6
pairs of pants. How many clothes did they wash?
A: <NONALGO> They washed 10+6=16 white shirts. They washed 5+8=13 colored shirts. They
washed 7+10=17 shorts. They washed 6+6=12 pants. Therefore, Cally and Danny washed a total
of 16+13+17+12=58 clothes. The answer is 58.
Q: If there are four times as many red crayons as blue crayons in a box, and there are 3 blue
crayons. How many crayons total are in the box?
A: <NONALGO> There are 4 times as many red crayons as blue crayons, which means there are
3+3+3+3=12 red crayons. Since there are 3 blue crayons and 12 red crayons, in total there are
12+3=15 crayons. The answer is 15.
###continued on next page
36
-----
Q: Larry spent $5 for lunch and gave his brother $12. How much did Larry have at the beginning
if he has $3 now?
A: <ALGO> To solve this problem, we need to find the total amount that Larry spent and add it to
the amount he has left.
Larry spent $5 for lunch and gave his brother $12, which means he spent 5+12. We use the
addition algorithm:
Problem: 5+12=
Explanation:
The subproblems are 5+12=ANS1. There is 1 connecting operator.
Subproblem: 5+12=ANS1
The first number is 5, FN=[5]. The second number is 12, SN=[1,2]. Since FN [5] has 1 digit, SN
[1,2] has 2 digits, thus the maximum number of digits is 2. In each subsequent step, we remove
one number from the end of FN and one from the end of SN. Length of A is 0.
Length of FN is 1. FN=[5]. FN[2]=5. Length of SN is 2. SN=[1,2]. SN[2]=2. C[2]=0. Since
5+2+0=7, 7<10, 7%10=7. Length of A is 1. Thus A=[7]. Since (7-7)/10=0, C[1]=0.
Length of FN is 0. FN=[]. FN[1]=0. Length of SN is 1. SN=[1]. SN[1]=1. C[1]=0. Since
0+1+0=1, 1<10, 1%10=1. Length of A is 2. Thus A=[1,7]. Since (1-1)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there is 1 operator
and we processed up to ANS1, the problem is complete. The final Answer is [1,7]. Removing all 1
comma, we have 17.
The addition algorithm tells us that the amount spent is 17. Larry has $3 now, so he must have
had 17+3 at the beginning. We use the addition algorithm:
Problem: 17+3=
Explanation:
The subproblems are 17+3=ANS1. There is 1 connecting operator.
Subproblem: 17+3=ANS1
The first number is 17, FN=[1,7]. The second number is 3, SN=[3]. Since FN [1,7] has 2 digits,
SN [3] has 1 digit, thus the maximum number of digits is 2. In each subsequent step, we remove
one number from the end of FN and one from the end of SN. Length of A is 0.
Length of FN is 2. FN=[1,7]. FN[2]=7. Length of SN is 1. SN=[3]. SN[2]=3. C[2]=0. Since
7+3+0=10, 10=10, 10%10=0. Length of A is 1. Thus A=[0]. Since (10-0)/10=1, C[1]=1.
Length of FN is 1. FN=[1]. FN[1]=1. Length of SN is 0. SN=[]. SN[1]=0. C[1]=1. Since
1+0+1=2, 2<10, 2%10=2. Length of A is 2. Thus A=[2,0]. Since (2-2)/10=0, C[0]=0.
There are no more digits and C[0]=0. Thus the process is complete. Since there is 1 operator
and we processed up to ANS1, the problem is complete. The final Answer is [2,0]. Removing all 1
comma, we have 20.
The addition algorithm tells us that the total amount is 20. The answer is 20.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now?
A: <NONALGO> He has 5 toys. He got 2 from mom, so after that he has 5+2=7 toys. Then he got 2
more from dad, so in total he has 7+2=9 toys. The answer is 9.
Q: Karen wanted to go out to get some fast food. She pulls up to the drive-through and orders a
5-dollar burger. Her son then yelled out that he wanted a 4-dollar sandwich, so it was added to
the order. Karen then decided to order some drinks and opted for two 4-dollar smoothies. What
is the total cost of Karen’s fast-food order?
A: <NONALGO> Karen and her son order 5+4=9 dollars worth of food. Karen decides to buy 4+4=8
dollars worth of smoothies. Thus, the total for this order is 9+8=17 dollars. The answer is 17.
Q: If there are 100 cars in the parking lot and 6 more cars arrive, how many cars are in the
parking lot?
A: <NONALGO> There are 100+6=106 cars in the parking lot. The answer is 106.
37
-----
| [
"Hattie, Zhou",
"Azade, Nova",
"Hugo, Larochelle",
"Aaron, Courville",
"Hanie, Sedghi",
"Behnam, Neyshabur"
] | 2022-01-01T00:00:00 | null | false | 91 | 6 | null | https://arxiv.org/abs/2211.09066 | https://arxiv.org/abs/2211.09066 | https://www.semanticscholar.org/paper/4d17732d90440682b0500f4e209c6cc4fac20e0e |
Unit Dependency Graph and Its Application to Arithmetic Word Problem Solving | Math word problems provide a natural abstraction to a range of natural language understanding problems that involve reasoning about quantities, such as interpreting election results, news about casualties, and the financial section of a newspaper. Units associated with the quantities often provide information that is essential to support this reasoning. This paper proposes a principled way to capture and reason about units and shows how it can benefit an arithmetic word problem solver. This paper presents the concept of Unit Dependency Graphs (UDGs), which provides a compact representation of the dependencies between units of numbers mentioned in a given problem. Inducing the UDG alleviates the brittleness of the unit extraction system and allows for a natural way to leverage domain knowledge about unit compatibility, for word problem solving. We introduce a decomposed model for inducing UDGs with minimal additional annotations, and use it to augment the expressions used in the arithmetic word problem solver of (Roy and Roth 2015) via a constrained inference framework. We show that introduction of UDGs reduces the error of the solver by over 10 %, surpassing all existing systems for solving arithmetic word problems. In addition, it also makes the system more robust to adaptation to new vocabulary and equation forms . | A decomposed model for inducing UDGs with minimal additional annotations is introduced, and it is shown that introduction of UDGs reduces the error of the solver by over 10 %, surpassing all existing systems for solving arithmetic word problems. | ## Unit Dependency Graph and its Application to Arithmetic Word Problem Solving
**Subhro Roy and Dan Roth**
University of Illinois, Urbana Champaign
_{sroy9, danr}@illinois.edu_
Example 1
Isabel picked 66 flowers for her friends wedding. She
was making bouquets with 8 flowers in each one. If 10
of the flowers wilted before the wedding, how many
bouquets could she still make?
likely be multiplied or divided to arrive at the solution. Finally, the question asks for the number of “bouquets”, indicating “8” will likely be divided, and not multiplied. Knowing such interactions could help understand the situation and
perform better quantitative reasoning. In addition, given that
unit extraction is a noisy process, this can make it more robust via global reasoning.
In this paper, we introduce the concept of unit dependency
_graph (UDG) for math word problems, to represent the re-_
lationships among the units of different numbers, and the
question being asked. We also introduce a strategy to extract
annotations for unit dependency graphs, with minimal additional annotations. In particular, we use the answers to math
problems, along with the rate annotations for a few selected
problems, to generate complete annotations for unit dependency graphs. Finally, we develop a decomposed model to
predict UDG given an input math word problem.
We augment the arithmetic word problem solver of (Roy
and Roth 2015) to predict a unit dependency graph, along
with the solution expression of the input arithmetic word
problem. Forcing the solver to respect the dependencies of
the unit dependency graph enables us to improve unit extractions, as well as leverage the domain knowledge about
unit dependencies in math reasoning. The introduction of
unit dependency graphs reduced the error of the solver by
over 10%, while also making it more robust to reduction in
lexical and template overlap of the dataset.
**2** **Unit Dependency Graph**
We first introduce the idea of a generalized rate, and its unit
representation. We define rate to be any quantity which is
some measure corresponding to one unit of some other quantity. This includes explicit rates like “40 miles per hour”, as
well as implicit rates like the one in “Each student has 3
books”. Consequently, units for rate quantities take the form
_“A per B”, where A and B refer to different entities. We refer_
to A as Num Unit (short for Numerator Unit), and B as Den
**Abstract**
Math word problems provide a natural abstraction to a range
of natural language understanding problems that involve reasoning about quantities, such as interpreting election results,
news about casualties, and the financial section of a newspaper. Units associated with the quantities often provide information that is essential to support this reasoning. This paper
proposes a principled way to capture and reason about units
and shows how it can benefit an arithmetic word problem
solver. This paper presents the concept of Unit Dependency
Graphs (UDGs), which provides a compact representation of
the dependencies between units of numbers mentioned in a
given problem. Inducing the UDG alleviates the brittleness
of the unit extraction system and allows for a natural way
to leverage domain knowledge about unit compatibility, for
word problem solving. We introduce a decomposed model
for inducing UDGs with minimal additional annotations, and
use it to augment the expressions used in the arithmetic word
problem solver of (Roy and Roth 2015) via a constrained inference framework. We show that introduction of UDGs reduces the error of the solver by over 10%, surpassing all existing systems for solving arithmetic word problems. In addition, it also makes the system more robust to adaptation to
new vocabulary and equation forms .
**1** **Introduction**
Understanding election results, sport commentaries and financial news, all require reasoning with respect to quantities. Math word problems provide a natural abstraction to
these quantitative reasoning problems. As a result, there has
a been a growing interest in developing methods which automatically solve math word problems (Koncel-Kedziorski
et al. 2015; Kushman et al. 2014; Roy and Roth 2015;
Mitra and Baral 2016).
Units associated with numbers or the question often provide essential information to support the reasoning required
in math word problems. Consider the arithmetic word problem in Example 1. The units of “66” and “10” are both
“flowers”, which indicate they can be added or subtracted.
Although unit of “8” is also “flower”, it is associated with a
rate, indicating the number of flowers in each bouquet. As
a result, “8” effectively has unit “flowers per bouquet”. Detecting such rate units help understand that “8” will more
Copyright c⃝ 2017, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
-----
|Mention|Num Unit|Den Unit|
|---|---|---|
|40 miles per hour|mile|hour|
|Each student has 3 books.|book|student|
Table 1: Units of rate quantities
Unit (short for denominator unit). Table 1 shows examples
of Num and Den Units for various rate mentions.
A unit dependency graph (UDG) of a math word problem
is a graph representing the relations among quantity units
and the question asked. Fig. 1 shows an example of a math
word problem and its unit dependency graph. For each quantity mentioned in the problem, there exists a vertex in the
unit dependency graph. In addition, there is also a vertex
representing the question asked. Therefore, if a math problem mentions n quantities, its unit dependency graph will
have n + 1 vertices. In the example in Fig 1, there is one
vertex corresponding to each of the quantities 66, 8 and 10,
and one vertex representing the question part “how many
bouquets could she still make ?”.
A vertex representing a number, is labeled RATE, if the
corresponding quantity describes a rate relationship (according to the aforementioned definition). In fig 1, “8” is labeled
as a RATE since it indicates the number of flowers in each
bouquet. Similarly, a vertex corresponding to the question is
marked RATE if the question asks for a rate.
Edges of a UDG can be directed as well as undirected.
Each undirected edge has the label SAME UNIT, indicating
that the connected vertices have the same unit. Each directed
edge going from vertex u to vertex v can have one of the
following labels:
1. NUM UNIT : Valid only for directed edges with source
vertex u labeled as RATE, indicates that Num Unit of u
matches the unit of the destination vertex v.
2. DEN UNIT : Valid only for directed edges with source
vertex labeled as RATE, indicates that Den Unit of source
vertex u matches the unit of the destination vertex v.
If no edge exists between a pair of vertices, they have unrelated units.
Several dependencies exist between the vertex and edge
labels of the unit dependency graph of a problem, and its
solution expression. Sec 4 discusses these dependencies and
how they can be leveraged to improve math problem solving.
**3** **Learning to Predict UDGs**
Predicting UDG for a math word problem is essentially a
structured prediction problem. However, since we have limited training data, we develop a decomposed model to predict parts of the structure independently, and then perform
joint inference to enforce coherent predictions. This has
been shown to be an effective method for structured prediction in the presence of limited data (Punyakanok et al. 2005;
Sutton and McCallum 2007). Empirically, we found our decomposed model to be superior to jointly trained alternatives
(see Section 5).
Our decomposed model for UDG prediction uses the following two classifiers.
1. Vertex Classifier : This is a binary classifier, which takes
a vertex of the UDG as input, and decides whether it denotes a rate.
2. Edge Classifier : This is a multiclass classifier, which
takes as input a pair of nodes of the UDG, and predicts
the properties of the edge connecting those nodes.
Finally, a constrained inference module combines the output
of the two classifiers to construct a UDG. We provide details
of the components in the following subsections.
**Vertex Classifier**
In order to detect rate quantities, we train a binary classifier.
Given problem text P and a vertex v of the UDG, the classifier predicts whether v represents a rate. It predicts one of
two labels - RATE or NOT RATE. The vertex v is either a
quantity mentioned in P, or the question of P . The features
used for the classification are as follows :
1. Context Features: We add unigrams, bigrams, part of
speech tags, and their conjunctions from the neighborhood of v.
2. Rule based Extraction Features: We add a feature indicating whether a rule based approach can detect v as a
rate.
**Edge Classifier**
We train a multiclass classifier to determine the properties of
the edges of the UDG. Given problem text P and a pair of
vertices vi and vj (i < j), the classifier predicts one of the
six labels :
1. SAME UNIT : Indicates that vi and vj should be connected by an undirected edge labeled SAME UNIT.
2. NO RELATION : Indicates no edge exists between vi and
_vj._
3. RATE[→]Num [: Indicates that][ v][i] [is a rate, and the Num Unit]
of vi matches the unit of vj.
4. RATE[←]Num [: Indicates that][ v][j] [is a rate, and the Num Unit]
of vj matches the unit of vi.
5. We similarly define RATE[→]Den [and][ R][ATE]Den[←] [.]
The features used for the classification are :
1. Context Features: For each vertex v in the query, we add
the context features described for Vertex classifier.
2. Rule based Extraction Features: We add a feature indicating whether each of the queried vertices is detected as
a rate by the rule based system. In addition, we also add
features denoting whether there are common tokens in the
units of vi and vj.
**Constrained Inference**
Our constrained inference module takes the scores of the
Vertex and Edge classifiers, and combines them to find
the most probable unit dependency graph for a problem.
We define VERTEX(v, l) to be the score predicted by the
Vertex classifier for labeling vertex v of a UDG with label l, where l ∈{RATE, NOT RATE}. Similarly, we define
-----
|Problem|Unit Dependency Graph|Expression Tree of Solution|
|---|---|---|
|Isabel picked 66 flowers for her friends wedding. She was making bouquets with 8 flowers in each one. If 10 of the flowers wilted before the wedding, how many bouquets could she still make?|Rate 8 Num Unit Num Unit Den Unit 66 how many bouquets 10 Same Unit|÷ − 8 66 10|
Figure 1: An arithmetic word problem, its UDG, and a tree representation of the solution (66 − 10)/8. Several dependencies exist between
the UDG and the final solution of a problem. Here, “66” and “10” are connected via SAME UNIT edge, hence they can be added or subtracted,
“8” is connected by DEN UNIT to the question, indicating that some expression will be divided by “8” to get the answer’s unit.
EDGE(vi, vj, l) to be the score predicted by the Edge classifier for the assignment of label l to the edge between vi and
_vj. Here the label l is one of the six labels defined for the_
edge classifier.
Let G be a UDG with vertex set V . We define the score
for G as follows:
for math expressions, which restricts the order of combination of addition and subtraction nodes, and multiplication
and division nodes. The expression tree in Fig 1 is monotonic.
**Arithmetic Word Problem Solver**
We now describe the solver pipeline of (Roy and Roth 2015).
Given a problem P with quantities q1, q2, . . ., qn, the solver
uses the following two classifiers.
1. Irrelevance Classifier : Given as input, problem P and
quantity qi mentioned in P, the classifier decides whether
_qi is irrelevant for the solution. The score of this classifier_
is denoted as IRR(q).
2. LCA Operation Classsifier : Given as input, problem
_P and a pair of quantities qi and qj (i < j), the classi-_
fier predicts the operation at the lowest common ancestor
(LCA) node of qi and qj, in the solution expression tree
of problem P . The set of possible operations are +, −,
_−r, ×, ÷ and ÷r (the subscript r indicates reverse order)._
Considering only monotonic expression trees for the solution makes this operation unique for any pair of quantities. The score of this classifier for operation o is denoted
as LCA(qi, qj, o).
The above classifiers are used to gather irrelevance scores
for each number, and LCA operation scores for each pair of
numbers. Finally, constrained inference procedure combines
these scores to generate the solution expression tree.
Let I(T ) be the set of all quantities in P which are not
used in expression tree T, and λIRR be a scaling parameter.
The score SCORE(T ) of an expression tree T is defined as:
SCORE(G) =
VERTEX(v, RATE)+
LABEL(vG,v∈V)=RATE
_λ_ EDGE(vi, vj, LABEL(G, vi, vj))
_×_
_vi,vjX∈V,i<j_
where λ is a scaling factor, and LABEL maps labels of
the UDG, to the labels of the corresponding classifiers.
LABEL(G, v) maps to RATE, if v is a rate, otherwise it
maps to NOT RATE. Similarly, if no edge exists between vi
and vj, LABEL(G, vi, vj) maps to NO RELATION, if Num
Unit of vi matches the unit of vj, LABEL(G, vi, vj) maps to
RATE[→]Num[, and so on. Finally, the inference problem has the]
following form:
arg max
_G_ GRAPHS [S][CORE][(][G][)]
_∈_
where GRAPHS is the set of all valid unit dependency graphs
for the input problem.
**4** **Joint Inference With An Arithmetic Solver**
In this section, we describe our joint inference procedure to
predict both a UDG and the solution of an input arithmetic
word problem. Our model is built on the arithmetic word
problem solver of (Roy and Roth 2015), and we briefly describe it in the following sections. We first describe the concept of expression trees, and next describe the solver, which
leverages expression tree representation of the solutions.
**Monotonic Expression Tree**
An expression tree is a binary tree representation of a mathematical expression, where leaves represent numbers, and
all non-leaf nodes represent operations. Fig 1 shows an example of an arithmetic word problem and the expression tree
of the solution mathematical expression. A monotonic ex**pression tree is a normalized expression tree representation**
SCORE(T ) = λIRR
IRR(q)+
_q∈IX(T )_
LCA(qi, qj, _LCA(qi, qj, T_ ))
_⊙_
_qi,qXj /∈I(T )_
where _LCA(qi, qj, T_ ) denotes the operation at the lowest
_⊙_
common ancestor node of qi and qj in monotonic expression
tree T . Let TREES be the set of valid expressions that can be
-----
formed using the quantities in a problem P, and also give
positive solutions. The inference algorithm now becomes:
arg max
_T_ TREES [S][CORE][(][T] [)]
_∈_
**Joint Inference**
We combine the scoring functions of UDG prediction and
the ones from the solver of (Roy and Roth 2015), so that we
can jointly predict the UDG and the solution of the problem.
For an input arithmetic word problem P, we score tuples
(G, T ) (where G is a candidate UDG for P, and T is a candidate solution expression tree of P ) as follows :
2. If vi is labeled RATE and the question is not, the path
from ni (corresponding leaf node for vi) to the root of T
cannot have only addition, subtraction nodes. Otherwise,
the question will have same rate units as vi.
3. We also check whether the edge labels are consistent with
the vertex labels using Algorithm 1, which computes edge
labels of UDGs, given the expression tree T, and vertex
labels. It uses heuristics like if a rate r is being multiplied
by a non-rate number n, the Den Unit of r should match
the unit of n, etc.
**Algorithm 1 EDGELABEL**
**Input: Monotonic expression tree T**, vertex pairs vi, vj, and their
corresponding vertex labels
**Output: Label of edge between vi and vj**
1: path ← PATH(T, vi, vj)
2: CountMulDiv ← Number of Multiplication and Division
nodes in path
3: if vi and vj have same vertex label, and CountMulDiv = 0
**then**
4: **return SAME UNIT**
5: end if
6: if vi and vj have different vertex labels, and CountMulDiv =
1 then
7: **if path contains × and vi is RATE then**
8: **return RATE[→]Den**
9: **end if**
10: **if path contains × and vj is RATE then**
11: **return RATE[←]Den**
12: **end if**
13: **if path contains ÷ and vi is RATE then**
14: **return RATE[→]Num**
15: **end if**
16: **if path contains ÷r and vj is RATE then**
17: **return RATE[←]Num**
18: **end if**
19: end if
20: return Cannot determine edge label
These consistency conditions prevent the inference procedure from considering any inconsistent tuples. They help
the solver to get rid of erroneous solutions which involve
operations inconsistent with all high scoring UDGs.
Finally, in order to find the highest scoring consistent tuple, we have to enumerate the members of TUPLES, and
score them. The size of TUPLES however is exponential in
the number of quantities in the problem. As a result, we perform beam search to get the highest scoring tuple. We first
enumerate the members of TREES, and next for each member of TREES, we enumerate consistent UDGs.
**5** **Experiments**
**Dataset**
Existing evaluation of arithmetic word problem solvers has
several drawbacks. The evaluation of (Roy and Roth 2015)
was done separately on different types of arithmetic problems. This does not capture how well the systems can distinguish between these different problem types. Datasets released by (Roy and Roth 2015) and (Koncel-Kedziorski et
SCORE(G, T ) = λIRR
IRR(q)+
_q∈IX(T )_
LCA(qi, qj, _LCA(qi, qj, T_ ))+
_⊙_
_qi,qXj /∈I(T )_
VERTEX(v, RATE)+
_λVERTEX_
LABEL(vG,v∈V)=RATE
_λEDGE_ EDGE(vi, vj, LABEL(G, vi, vj))
_vi,vjX∈V,i<j_
where λIRR, λVERTEX and λEDGE are scaling parameters. This
is simply a scaled addition of the scores for UDG prediction and solution expression generation. Finally, the inference problem is
arg max
(G,T ) TUPLES [S][CORE][(][G, T] [)]
_∈_
where TUPLES is the set of all tuples (G, T ), such that G ∈
GRAPHS, T ∈ TREES, and G is a consistent UDG for the
solution tree T .
**Consistent Rate Unit Graphs**
We have a set of conditions to check whether G is a consistent UDG for monotonic tree T . Most of these conditions
are expressed in terms of PATH(T, vi, vj), which takes as input a pair of vertices vi, vj of the UDG G, and a monotonic
expression tree T, and returns the following.
1. If both vi and vj are numbers, and their corresponding
leaf nodes in T are ni and nj respectively, then it returns
the nodes in the path connecting ni and nj in T .
2. If only vi denotes a number (implying vj represents the
question), the function returns the nodes in the path from
_ni to the root of T_, where ni is the corresponding leaf
node for vi.
For the unit dependency graph and solution tree T of Fig 1,
PATH(T, 66, 8) is {−, ÷}, whereas PATH(T, 8, question) is
_{÷}. Finally, the conditions for consistency between a UDG_
_G and an expression tree T are as follows:_
1. If vi is the only vertex labeled RATE and it is the question,
there should not exist a path from some leaf n to the root
of T which has only addition, subtraction nodes. If that
exists, it implies n can be added or subtracted to get the
answer, that is, the corresponding vertex for n in G has
same unit as the question, and should have been labeled
RATE.
-----
|Col1|AllArith|AllArithLex|AllArithTmpl|
|---|---|---|---|
|DECOMPOSE - constraints|73.6 70.9|67.7 62.9|68.7 65.5|
|JOINT|72.9|66.7|68.4|
Table 3: Performance in predicting UDGs
number represents a rate relationship, and whether the question in P asks for a rate. This process determines the labels
for the vertices of the UDG. Two annotators performed these
annotations, with an agreement of 0.94(kappa).
Once we have the labels for the vertices of the UDG, we
try to infer the labels for the edges using Algorithm 1. When
the algorithm is unable to infer the label for a particular
edge, we heuristically label that edge to be NO RELATION.
The above process allowed us to extract high quality annotations for UDGs with minimal manual annotations. In
particular, we only had to annotate vertex labels for 300
problems, out of the 831 problems in AllArith. Obviously
some of the extracted NO RELATION edge labels are noisy;
this can be remedied by collecting annotations for these
cases. However, in this work, we did not use any manual
annotations for edge labels.
**UDG Prediction**
Table 2 shows the performance of the classifiers and the
contribution of each feature type. The results indicate that
rule-based techniques are not sufficient for robust extraction,
there is a need to take context into account. Table 3 shows
the performance of our decomposed model (DECOMPOSE)
in correctly predicting UDGs, as well as the contribution of
constraints in the inference procedure. Having explicit constraints for the graph structure provides 3-5% improvement
in correct UDG prediction.
We also compare against a jointly trained model (JOINT),
which learns to predict all vertex and edge labels together.
Note that JOINT also uses the same set of constraints as
DECOMPOSE in the inference procedure, to ensure it only
predicts valid unit dependency graphs. We found that JOINT
does not outperform DECOMPOSE, while taking significantly more time to train. The worse performance of joint
learning is due to: (1) search space being too large for the
joint model to do well given our relatively small dataset size,
and (2) our independent classifiers being good enough, thus
supporting better joint inference. This tradeoff is strongly
supported in the literature (Punyakanok et al. 2005; Sutton
and McCallum 2007).
Note, that all these evaluations are based on noisy edges
annotations. This was done to reduce further annotation effort. Also, less than 15% of labels were noisy (indicated by
fraction of NO RELATION labels), which makes this evaluation reasonable.
**Solving Arithmetic Word Problems**
Here we evaluate the accuracy of our system in correctly
solving arithmetic word problems. We refer to our system as
UNITDEP. We compare against the following systems:
al. 2015) mention irrelevant quantities in words, and only
the relevant quantities are mentioned in digits. This removes
the challenge of detecting extraneous quantities.
In order to address the aforementioned issues, we pooled
arithmetic word problems from all available datasets (Hosseini et al. 2014; Roy and Roth 2015; Koncel-Kedziorski et
al. 2015), and normalized all mentions of quantities to digits.
We next prune problems such that there do not exist a problem pair with over 80% match of unigrams and bigrams. The
threshold of 80% was decided manually by determining that
problems with around 80% overlap are sufficiently different. We finally ended up with 831 problems. We refer to this
dataset as AllArith.
We also create subsets of AllArith using the MAWPS
system (Koncel-Kedziorski et al. 2016). MAWPS can generate subsets of word problems based on lexical and template
overlap. Lexical overlap is a measure of reuse of lexemes
among problems in a dataset. High lexeme reuse allows for
spurious associations between the problem text and a correct
solution (Koncel-Kedziorski et al. 2015). Evaluating on low
lexical overlap subset of the dataset can show the robustness
of solvers to lack of spurious associations. Template overlap
is a measure of reuse of similar equation templates across
the dataset. Several systems focus on solving problems under the assumption that similar equation templates have been
seen at training time. Evaluating on low template overlap
subset can show the reliance of systems on the reuse of equation templates. We create two subsets of 415 problems each
- one with low lexical overlap called AllArithLex, and one
with low template overlap called AllArithTmpl.
We report random 5-fold cross validation results on all
these datasets. For each fold, we choose 20% of the training
data as development set, and tune the scaling parameters on
this set. Once the parameters are set, we retrain all the models on the entire training data. We use a beam size of 200 in
all our experiments.
**Data Acquisition**
In order to learn the classifiers for predicting vertex and edge
labels for UDGs, we need annotated data. However, gathering vertex and edge labels for UDGs of problems, can be
expensive. In this section, we show that vertex labels for a
subset of problems, along with annotations for solution expressions, can be sufficient to gather high quality annotations for vertex and edge labels of UDGs.
Given an arithmetic word problem P, annotated with the
monotonic expression tree T of the solution expression, we
try to acquire annotations for the UDG of P . First, we try to
determine the labels for the vertices, and next the edges of
the graph.
We check if T has any multiplication or division node. If
no such node is present, we know that all the numbers in
the leaves of T have been combined via addition or subtraction, and hence, none of them describes a rate in terms of
the units of other numbers. This determines that none of T ’s
leaves is a rate, and also, the question does not ask for a rate.
If a multiplication or division node is present in T, we gather
annotations for the numbers in the leaves of T as well as the
question of P . Annotators were asked to mark whether each
-----
|Features|Vertex Classifier|Col3|Col4|Edge Classifier|Col6|Col7|
|---|---|---|---|---|---|---|
||AllArith|AllArithLex|AllArithTmpl|AllArith|AllArithLex|AllArithTmpl|
|All features|96.7|96.2|97.5|87.1|84.3|86.6|
|No rule based features|93.2|92.5|92.6|79.3|75.4|78.0|
|No context features|95.1|94.1|95.3|78.6|70.3|75.5|
Vertex Classifier Edge Classifier
Features
AllArith AllArithLex AllArithTmpl AllArith AllArithLex AllArithTmpl
All features 96.7 96.2 97.5 87.1 84.3 86.6
No rule based features 93.2 92.5 92.6 79.3 75.4 78.0
No context features 95.1 94.1 95.3 78.6 70.3 75.5
Table 2: Performance of system components for predicting vertex and edge labels for unit dependency graphs
|System|AllArith|AllArithLex|AllArithTmpl|
|---|---|---|---|
|TEMPLATE|73.7|65.5|71.3|
|SINGLEEQ|60.4|51.5|51.0|
|LCA++|79.4|63.6|74.7|
|UNITDEP|81.7|68.9|79.5|
|λ = 0 VERTEX|80.3|67.2|77.1|
|λ = 0 EDGE|79.9|64.1|75.7|
System AllArith AllArithLex AllArithTmpl
TEMPLATE 73.7 65.5 71.3
SINGLEEQ 60.4 51.5 51.0
LCA++ 79.4 63.6 74.7
UNITDEP **81.7** **68.9** **79.5**
_λVERTEX = 0_ 80.3 67.2 77.1
_λEDGE = 0_ 79.9 64.1 75.7
Table 4: Performance in solving arithmetic word problems
1. LCA++ : System of (Roy and Roth 2015) with feature
set augmented by neighborhood features, and with only
positive answer constraint. We found that augmenting the
released feature set with context features, and removing
the integral answer constraint, were helpful. Our system
UNITDEP also uses the augmented feature set for Relevance and LCA operation classifiers, and only positive
constraint for final solution value.
2. TEMPLATE : Template based algebra word problem
solver of (Kushman et al. 2014).
3. SINGLEEQ : Single equation word problem solver of
(Koncel-Kedziorski et al. 2015).
In order to quantify the gains due to vertex and edge information of UDGs, we also run two variants of UNITDEP
- one with λVERTEX = 0, and one with λEDGE = 0. Table 4
shows the performance of these systems on AllArith, AllArithLex and AllArithTmpl.
UNITDEP outperforms all other systems across all
datasets. Setting either λVERTEX = 0 or λEDGE = 0 leads
to a drop in performance, indicating that both vertex and
edge information of UDGs assist in math problem solving.
Note that setting both λVERTEX and λEDGE to 0, is equivalent
to LCA++. SINGLEEQ performs worse than other systems,
since it does not handle irrelevant quantities in a problem.
In general, reduction of lexical overlap adversely affects
the performance of most systems. The reduction of template
overlap does not affect performance as much. This is due to
the limited number of equation templates found in arithmetic
problems. Introduction of UDGs make the system more robust to reduction of both lexical and template overlap. In
particular, they provide an absolute improvement of 5% in
both AllArithLex and allArithTmpl datasets (indicated by
difference of LCA++ and UNITDEP results).
For the sake of completeness, we also ran our system on
the previously used datasets, achieving 1% and 4% absolute improvements over LCA++, in the Illinois dataset (Roy,
Vieira, and Roth 2015) and the Commoncore dataset (Roy
and Roth 2015) respectively.
**Discussion**
Most of gains of UNITDEP over LCA++ came from problems where LCA++ was predicting an operation or an expression that was inconsistent with the units. A small gain
(10%) also comes from problems where UDGs help detect
certain irrelevant quantities, which LCA++ cannot recognize. Table 5 lists some of the examples which UNITDEP
gets correct but LCA++ does not.
Most of the mistakes of UNITDEP were due to extraneous quantity detection (around 50%). This was followed by
errors due to the lack of math understanding (around 23%).
This includes comparison questions like “How many more
pennies does John have?”.
**6** **Related Work**
There has been a recent interest in automatically solving
math word problems. (Hosseini et al. 2014; Mitra and Baral
2016) focus on addition-subtraction problems, (Roy, Vieira,
and Roth 2015) look at single operation problems, (Roy and
Roth 2015) as well as our work look at arithmetic problems
with each number in question used at most once in the answer, (Koncel-Kedziorski et al. 2015) focus on single equation problems, and finally (Kushman et al. 2014) focus on
algebra word problems. None of them explicitly model the
relations between rates, units and the question asked. In contrast, we model these relations via unit dependency graphs.
Learning to predict these graphs enables us to gain robustness over rule-based extractions. Other than those related to
math word problems, there has been some work in extracting units and rates of quantities (Roy, Vieira, and Roth 2015;
Kuehne 2004a; Kuehne 2004b). All of them employ rule
based systems to extract units, rates and their relations.
**7** **Conclusion**
In this paper, we introduced the concept of unit dependency
graphs, to model the dependencies among units of numbers
mentioned in a math word problem, and the question asked.
The dependencies of UDGs help improve performance of
an existing arithmetic word problem solver, while also making it more robust to low lexical and template overlap of
the dataset. We believe a similar strategy can be used to incorporate various kinds of domain knowledge in math word
problem solving. Our future directions will revolve around
this, particularly to incorporate knowledge of entities, transfers and math concepts. Code and dataset are available at
_[http://cogcomp.cs.illinois.edu/page/publication view/804.](http://cogcomp.cs.illinois.edu/page/publication_view/804)_
-----
|Problem|LCA++|UNITDEP|
|---|---|---|
|At lunch a waiter had 10 customers and 5 of them didn’t leave a tip. If he got $3.0 each from the ones who did tip, how much money did he earn?|10.0-(5.0/3.0)|3.0*(10.0-5.0)|
|The schools debate team had 26 boys and 46 girls on it. If they were split into groups of 9, how many groups could they make?|9*(26+46)|(26+46)/9|
|Melanie picked 7 plums and 4 oranges from the orchard . She gave 3 plums to Sam . How many plums does she have now ?|(7+4)-3|(7-3)|
|Isabellas hair is 18.0 inches long. By the end of the year her hair is 24.0 inches long. How much hair did she grow?|(18.0*24.0)|(24.0-18.0)|
Table 5: Examples of problems which UNITDEP gets correct, but LCA++ does not.
**Acknowledgements**
This work is funded by DARPA under agreement number
FA8750-13-2-0008, and a grant from the Allen Institute for
Artificial Intelligence (allenai.org).
**References**
[Hosseini et al. 2014] Hosseini, M. J.; Hajishirzi, H.; Etzioni, O.; and Kushman, N. 2014. Learning to solve arithmetic word problems with verb categorization. In EMNLP.
[Koncel-Kedziorski et al. 2015] Koncel-Kedziorski, R.; Hajishirzi, H.; Sabharwal, A.; Etzioni, O.; and Ang, S. 2015.
Parsing Algebraic Word Problems into Equations. TACL.
[Koncel-Kedziorski et al. 2016] Koncel-Kedziorski, R.; Roy,
S.; Amini, A.; Kushman, N.; and Hajishirzi, H. 2016.
Mawps: A math word problem repository. In NAACL.
[Kuehne 2004a] Kuehne, S. 2004a. On the representation of
physical quantities in natural language text. In Proceedings
_of Twenty-sixth Annual Meeting of the Cognitive Science So-_
_ciety._
[Kuehne 2004b] Kuehne, S. 2004b. Understanding natural
_language descriptions of physical phenomena. Ph.D. Dis-_
sertation, Northwestern University, Evanston, Illinois.
[Kushman et al. 2014] Kushman, N.; Zettlemoyer, L.; Barzilay, R.; and Artzi, Y. 2014. Learning to automatically solve
algebra word problems. In ACL.
[Mitra and Baral 2016] Mitra, A., and Baral, C. 2016. Learning to use formulas to solve simple arithmetic problems. In
_ACL._
[Punyakanok et al. 2005] Punyakanok, V.; Roth, D.; Yih, W.;
and Zimak, D. 2005. Learning and inference over constrained output. In Proc. of the International Joint Confer_ence on Artificial Intelligence (IJCAI), 1124–1129._
[Roy and Roth 2015] Roy, S., and Roth, D. 2015. Solving
general arithmetic word problems. In Proc. of the Confer_ence on Empirical Methods in Natural Language Processing_
_(EMNLP)._
[Roy, Vieira, and Roth 2015] Roy, S.; Vieira, T.; and Roth,
D. 2015. Reasoning about quantities in natural language.
_Transactions of the Association for Computational Linguis-_
_tics 3._
[Sutton and McCallum 2007] Sutton, C., and McCallum, A.
2007. Piecewise pseudolikelihood for efficient training of
conditional random fields. In Ghahramani, Z., ed., Proceed
_ings of the International Conference on Machine Learning_
_(ICML), 863–870. Omnipress._
-----
| [
"Subhro, Roy",
"Dan, Roth"
] | 2016-12-03T00:00:00 | AAAI 2017 NLP and Knowledge Representation | false | 91 | 13 | null | http://arxiv.org/abs/1612.00969 | null | https://www.semanticscholar.org/paper/8b7336f5dd13a45d4aab38428b4a88ce507ea310 |
MathBERT: A Pre-Trained Model for Mathematical Formula Understanding | Large-scale pre-trained models like BERT, have obtained a great success in various Natural Language Processing (NLP) tasks, while it is still a challenge to adapt them to the math-related tasks. Current pre-trained models neglect the structural features and the semantic correspondence between formula and its context. To address these issues, we propose a novel pre-trained model, namely \textbf{MathBERT}, which is jointly trained with mathematical formulas and their corresponding contexts. In addition, in order to further capture the semantic-level structural features of formulas, a new pre-training task is designed to predict the masked formula substructures extracted from the Operator Tree (OPT), which is the semantic structural representation of formulas. We conduct various experiments on three downstream tasks to evaluate the performance of MathBERT, including mathematical information retrieval, formula topic classification and formula headline generation. Experimental results demonstrate that MathBERT significantly outperforms existing methods on all those three tasks. Moreover, we qualitatively show that this pre-trained model effectively captures the semantic-level structural information of formulas. To the best of our knowledge, MathBERT is the first pre-trained model for mathematical formula understanding. | This work proposes a novel pre-trained model, namely MathBERT, which is jointly trained with mathematical formulas and their corresponding contexts and qualitatively shows that this pre- trained model effectively captures the semantic-level structural information of formulas. | ### MathBERT: A Pre-Trained Model for Mathematical Formula Understanding
**Shuai Peng, Ke Yuan, Liangcai Gao, Zhi Tang**
Peking University
_{pengshuaipku, yuanke, gaoliangcai, tangzhi} @pku.edu.cn_
**Abstract**
Large-scale pre-trained models like BERT, have
obtained a great success in various Natural Language Processing (NLP) tasks, while it is still a
challenge to adapt them to the math-related tasks.
Current pre-trained models neglect the structural
features and the semantic correspondence between
formula and its context. To address these issues, we
propose a novel pre-trained model, namely Math**BERT, which is jointly trained with mathematical**
formulas and their corresponding contexts. In addition, in order to further capture the semantic-level
structural features of formulas, a new pre-training
task is designed to predict the masked formula substructures extracted from the Operator Tree (OPT),
which is the semantic structural representation of
formulas. We conduct various experiments on
three downstream tasks to evaluate the performance
of MathBERT, including mathematical information
retrieval, formula topic classification and formula
headline generation. Experimental results demonstrate that MathBERT significantly outperforms existing methods on all those three tasks. Moreover,
we qualitatively show that this pre-trained model
effectively captures the semantic-level structural
information of formulas. To the best of our knowledge, MathBERT is the first pre-trained model for
mathematical formula understanding.
**1** **Introduction**
Mathematical formulas are widely used in the fields of science, technology and engineering. Several research tasks on
mathematical formula, including Mathematical Information
Retrieval(MIR) [Yuan et al., 2016; Davila and Zanibbi, 2017;
Mansouri et al., 2019], Mathematical Formula Understanding
(MFU) [Jiang et al., 2018; Yuan et al., 2020] and so forth,
have continuously attracted researchers’ attention. Processing mathematical information is still a challenging task due
to the diversity of mathematical formula representations, the
complexity of formula structure and the ambiguity of implicit
semantics. Researchers utilize non-pretrained customized
models to solve specific math-related tasks. They are built
upon either the structural features of formula [Mansouri et al.,
In physics, mass–energy equivalence is the relationship between mass and energy in a system’s rest frame, where the two
values differ only by a constant and the units of measurement.
The principle is described by Albert Einstein's famous formula:
𝐸= 𝑚𝑐[2]
The formula defines the energy E of a particle in its rest frame
as the product of mass 𝑚 with the speed of light squared (𝑐[2]).
Equivalently, the mass of a particle at rest is equal to its energy
E divided by the speed of light squared (𝑐[2]).
(from Wikipedia)
Figure 1: An example of mathematical formula “E = mc[2]” with
its context, where the text contains rich semantic information of the
brief formula.
2019] or topical correspondence between formula and context [Yasunaga and Lafferty, 2019], but do not consider a joint
training of structural and semantic information. In the past
decades, large-scale pre-trained models such as ELMo [Peters et al., 2018], GPT [Radford et al., 2018], BERT [Devlin
_et al., 2018] and XLNet [Yang et al., 2019] have achieved_
great advancement on various Natural Language Processing
(NLP) tasks. The success in NLP also drives the development of pre-trained model in other specific fields such as
VideoBERT [Sun et al., 2019] for video, CodeBERT [Feng
_et al., 2020] for code, LayoutLM [Xu et al., 2020] for doc-_
ument. Inspired by the success of these pre-trained models,
we assume the pre-trained model will also benefit the mathrelated research.
Intuitively, formula is not only a simple sequence of mathematical symbols but also has a strong semantic relation with
its context, as is illustrated in Figure 1. The available information from the single formula is limited. For instance,
we merely acquire an equation that E is equal to m times
_c squared. Much more semantic information that is vital for_
formula understanding is often included in its context, such as
the meaning of each symbol (E for ‘energy’, m for ‘mass’,
_c for ‘light speed’), as well as some significant associated in-_
formation of the formula, including its domain (physics), its
name (mass-energy equivalence), its inherent meaning (the
relationship between mass and energy) and even its proposer
(Albert Einstein). Therefore, to fully exploit the complementary relationship between formula and context, MathBERT is
jointly trained with formula and its context. Two pre-training
tasks are employed to learn representations of formula which
-----
are Masked Language Modeling (MLM) and Context Corre_spondence Prediction (CCP). Furthermore, mathematical for-_
mula contains rich structural information, which is important
to semantic understanding and formula retrieval tasks. Thus,
we take the Operator Trees (OPTs) as the input and design a
novel pre-training task named Masked Substructure Predic_tion (MSP) to capture semantic-level structural information_
of formula.
Furthermore, We build a large dataset containing more than
8.7 million formula-context pairs which are extracted from
scientific articles published on arXiv.org[1] and train MathBERT on it. The model is evaluated on three downstream
tasks, including mathematical information retrieval, formula
topic classification and formula headline generation. Experimental results demonstrate that MathBERT significantly outperforms existing methods on all three tasks. Moreover, we
qualitatively show that the proposed model could effectively
capture the semantic-level structural information of formulas.
The main contributions of this work are summarized as follows:
- The first pre-trained model for mathematical formula understanding is proposed, which is jointly trained with
formulas, contexts and OPTs.
- A novel pre-training task is designed to capture the
semantic-level structural information of formulas.
- The proposed MathBERT model achieves a significant
improvement compared with the strong baselines on all
three downstream tasks.
- A new dataset for formula topic classification is constructed, which contains mathematical formulas and
their corresponding contexts, and would be open soon.
**2** **Related Work**
In this section, we describe the related works from the Pretrained Models to the Mathematical Formula Representation.
**2.1** **Pre-Trained Models**
Pre-tained model obtained an increasing attention since the
great successes were achieved in a variety of NLP tasks,
such as ELMo [Peters et al., 2018], GPT [Radford et al.,
2018], BERT [Devlin et al., 2018], XLNet [Yang et al.,
2019]. These pre-trained models performed well in general NLP tasks like text classification [Devlin et al., 2018],
machine translation [Zhu et al., 2020; Sundararaman et al.,
2019] and machine summarization [Miller, 2019; Xenouleas
_et al., 2019]. However, these models were not good at deal-_
ing with the specific objects. Thus some specific pre-trained
models were proposed. For instance, CodeBERT [Feng et
_al., 2020] is a pre-trained model for the code synthesis which_
was jointly trained on programming and natural languages.
LayoutLM [Xu et al., 2020] was proposed for document understanding, which was jointly trained on multi-modal information including text, image and layout.
𝑐[2] = 𝑎[2] + 𝑏[2]
(a)
EQ
SUP ADD
𝑐 2 SUP SUP
𝑎 2 𝑏 2
2 2 2
c = 𝑎 + b
(b)
(c)
Figure 2: Formula (a) c[2] = a[2] + b[2] with its Symbol Layout Tree
(SLT) (b), and Operator Tree (OPT) (c). SLTs represent formula appearance by the spatial arrangements of math symbols, while OPTs
define the mathematical operations represented in expressions.
**2.2** **Mathematical Formula Representation**
The representation of mathematical formulas is important
to the math-related tasks, such as mathematical information retrieval [Wang et al., 2015; Yuan et al., 2016; Davila
and Zanibbi, 2017; Jiang et al., 2018; Mansouri et al.,
2019] and math expression generation [Yuan et al., 2020;
Zhang et al., 2020]. Some works treat mathematical formulas as a sequence of symbols and use the one-hot representations [Yasunaga and Lafferty, 2019; Yuan et al., 2016].
However, distinct from plain text, mathematical formulas
contain strong structural features [Mansouri et al., 2019;
Yuan et al., 2020]. Thus some works [Wang et al., 2015;
Yuan et al., 2016; Jiang et al., 2018; Davila and Zanibbi,
2017; Mansouri et al., 2019] utilized the tree structure to represent mathematical formulas, including the Symbol Layout
Tree (SLT) and Operator Tree (OPT). For instance, two different tree representations of the formula “c[2] = a[2] + b[2]” are
shown in the Figure 2. In this work, OPT is selected as the input of MathBERT rather than SLT based on the following two
considerations. First, layout information of formula in SLT
has been included in LATE[X codes to some extent. Second,]
and most important, OPT plays a crucial role in incorporating semantic-level structural information for the reason that it
contains mathematical syntax and semantics which guides the
recovery of mathematical operations [Zanibbi and Blostein,
2012].
**3** **MathBERT**
In this section, we introduce our proposed MathBERT,
including the model architecture, pre-training tasks, pretraining data and pre-training details.
**3.1** **Model Architecture**
An enhanced multi-layer bidirectional Transformer [Vaswani
_et al., 2017] is built as the backbone of MathBERT, which is_
modified from vanilla BERT. Considering that there is much
implicit semantic information hidden in the context and structural information implied by formula, we concatenate the formula LATE[X tokens, context and operators together as the input]
of MathBERT. Moreover, the attention mechanism in Transformer is modified based on the structure of OPT to enhance
its ability of capturing structural information. The overall architecture of MathBERT is shown in Figure 3.
Given a sequence of LaTeX tokens T = _t1, t2, ...,tLT_,
its context C = _c1, c2,...,cLC_ and its operator tree { _OPT = }_
_{_ _}_
(N, E) where N = _n1, n2, ..., nLN_ is the set of operators,
_{_ _}_
[1https://arxiv.org](https://arxiv.org)
-----
**Source Text** **CCP** **MLM** **MSP**
**Pythagorean theorem is a fundamental**
relation in Euclidean geometry among
the three sides of a right triangle. …
𝑎[2] + 𝑏[2]= 𝑐[2]
where c represents the length of the hypotenuse and a and b the lengths of the
triangle's other two sides.
# MathBERT
**OPT**
EQ
ADD SUP
MASK
SUP SUP 𝑐 2
[CLS] a ^ 2 + b ^ 2 = c ^ 2 [SEP] Pythagorean theorem is a fundamental relation … [SEP] a SUP 2 ADD b SUP 2 EQ c SUP 2
𝑎 2 𝑏 2
Figure 3: An illustration of the architecture of MathBERT. The two figures on the left indicate the source text extracted from scientific articles
which consists of mathematical formula and its context, and the associated OPT translated from LATE[X code of the formula. Raw text is]
tokenized and concatenated with LATE[X tokens and operators as the input. In the pre-training stage, we randomly mask the input and employ]
three pre-training tasks (MLM,CCP,MSP) to train MathBERT. To learn structure-aware information of formula, we utilize the structure of
OPT to modify attention mask matrix in Transformers and train MathBERT with MSP pre-training task.
_M remain 1. Formally, it can be represented as follows:_
_M(i,j) =_ 0 if ⟨ni, nj⟩ _∈/_ _E and ⟨nj, ni⟩_ _∈/_ _E and i ̸= j._
1 otherwise.
(1)
SUP
2
ADD
SUP
2
EQ
**3.2** **Pre-Training Tasks**
We expect MathBERT to obtain three aspects of information:
text representations, latent relationship between formula and
context, and semantic-level structure of formula, which correspond to the following three pre-training tasks respectively.
**Masked Language Modeling**
_Masked Language Modeling is presented in BERT [Devlin et_
_al., 2018] to address the problem of ‘see itself’ in traditional_
bidirectional language modeling, which has been proved effective to learn text representations. Concretely, given the input [CLS] T [SEP ] C [SEP ] N, 15 % of tokens Tmask and
_Cmask are randomly sampled from T and C for masking op-_
eration, in which 80 % of them are replaced with [MASK],
10 % of them are randomly replaced by other arbitrary tokens,
and 10 % of them remain unchanged. The objective is to predict the original tokens which are masked out, formulated as
follows:
SUP
2
Figure 4: An illustration of modified attention mask map with the
input of a[2] + b[2] = c[2]. Gray squares denote that there is no edge
between these two operators so we mask them by 0, which results in
a consequence that their attention weights go to -∞. Orange squares
denote there exists an edge between them and black squares mean
they are the same node. Attention is applied as normal in these two
cases.
_E=_ _e1, e2, . . .,eLE_ is the set of edges, we set the input
_{_ _}_
as the concatenation of the above three, that is [CLS], t1,
_t2, . . ., tLT, [SEP_ ], c1, c2, ..., cLC, [SEP ], n1, n2, ...,
_nLN . Here [CLS] is a special classification token whose final_
hidden vector is often considered as the aggregate sequence
representation for classification tasks, and [SEP ] is a special
token used to separate the three segments.
In order to explicitly incorporate semantic-level structural
information from OPT, we do not simply follow BERT which
treats the operators as other normal tokens to attend them together densely in attention mechanism. Instead, the edges
between operators are leveraged to modify the attention mask
matrix, as is illustrated in Figure 4. For any two different
nodestween them, the corresponding values ni and nj, if there does not exist an edge M(i,j) and ek M ∈(Ej,i be-) in
attention mask matrix M are masked by 0 to avoid the two
nodes to attend each other directly while the other values in
log p(xi) (2)
_−_
_xi∈TmaskX∪Cmask_
_LossMLM =_
where p(xi) denotes the probability of predicting the original
token correctly in the position of xi. Particularly, owing to the
complementary relationship among formula, context and operators, it is encouraged to utilize the information from other
segments to predict the masked tokens, which contributes to
establishing connections among the three segments.
-----
im2markup[3] to tokenize separately formulas and OPT translator in TangentS[4] to convert LATE[X codes into OPTs. Finally,]
we obtain a large dataset that consists of 8.7 million formulas
with contexts and corresponding OPTs.
**3.4** **Pre-Training Details**
We train MathBERT on 4 NVIDIA TITAN X 12GB GPUs
with total batch size of 48. To well utilize the existing pretrained model in NLP and accelerate the training process,
we initialize the weights of MathBERT with the pre-trained
BERT base model released by Google[5] which has a 12-layer
Transformer with 768 hidden sizes. Due to the limitation of
GPU memory, the max length of input sequences is set as
256. The Adam optimizer is used with the learning rate of
2e-5. It costs two weeks to train MathBERT on 8.7M data
with around 10,000,000 iterations.
**4** **Experiment**
To verify the effectiveness of MathBERT, we conduct experiments and evaluate it on three downstream tasks: mathematical information retrieval, formula topic classification and
formula headline generation. Additionally, ablation study is
done followed by qualitative analysis, indicating that MathBERT well captures the semantic-level structural information
of formulas.
**4.1** **Mathematical Information Retrieval**
Approaches Partial Full H-Mean
MCAT 56.98 56.78 56.88
TangentS 58.72 63.61 61.07
Approach0 59.50 67.26 63.14
TangentCFT 71.34 59.63 64.96
BERT 70.53 58.33 63.85
**MathBERT** 73.61 61.35 66.92
**MathAPP** **76.07** **71.61** **73.77**
Table 1: NTCIR-12 Results (Avg. bpref @1000). H-Mean denotes
the harmonic mean of partial relevance and full relevance score.
Similar to other information retrieval (IR) tasks, given a
formula as the query, mathematical information retrieval aims
to return the relevance of formulas in a large set of documents.
Formulas can be indexed using vector similarity measures for
retrieval. Hence, it is a suitable downstream task to evaluate the output embeddings of MathBERT. Here MathBERT
is evaluated on the NTCIR-12 MathIR Wikipedia Formula
Browsing Task [Zanibbi et al., 2016], which is the most current benchmark for formula retrieval. The dataset contains
over 590,000 mathematical formulas from English Wikipedia
and 20 non-wildcards queries. There are two human assessors evaluating the pooled hits from participating system by
scoring the hit with the score 2, 1 or 0 from highly relevant
to irrelevant. The final hit relevance rating is the sum of the
[3https://github.com/harvardnlp/im2markup](https://github.com/harvardnlp/im2markup)
[4https://github.com/BehroozMansouri/TangentCFT/tree/master/](https://github.com/BehroozMansouri/TangentCFT/tree/master/TangentS)
[TangentS](https://github.com/BehroozMansouri/TangentCFT/tree/master/TangentS)
[5https://github.com/google-research/bert](https://github.com/google-research/bert)
**Context Correspondence Prediction**
As mentioned in Section 1, there is a latent semantic relation
between mathematical formula and its context, which is not
directly captured by language modeling. Therefore, similar to
the Next Sentence Prediction task in BERT, we pre-train for
a binarized Context Correspondence Prediction task. Specifically, 50 % of context C in pre-training examples are randomly replaced with another context in the dataset. The objective is to predict whether the current input of context C _[′]_ is
the corresponding context of T or not, which can be formulated as follows, where p denotes the probability of C = C _[′]:_
_LossCCP = −δ log p −_ (1 − _δ) log(1 −_ _p)_ (3)
1 if C = C _[′]._
_δ =_ (4)
0 otherwise.
**Masked Substructure Prediction**
In order to incorporate structural information from OPTs, we
present a pre-training task named Masked Substructure Pre_diction. Substructure here means the structure composed of_
an operator, its parent node and child nodes as a part of the
OPT. In practice, 15 % of nodes Nmask are randomly sampled from the input N . For every node ni in Nmask, we cut
off all the connections with its parent node and child nodes to
mask the substructure which ni belongs to. The objective is
to predict the parent node and child nodes of the masked ni,
formulated as follows, where p(ni, nj) denotes the probability that nj is the parent or child node of ni.
_LossMSP =_
_δ log p(ni, nj)_
_−_
_ni∈Nmask_ _nj_ _∈N_
(1 _δ) log_ 1 _p(ni, nj)_ (5)
_−_ _−_ _−_
[]
_δ =_ 1 if ei,j ∈ _E or ej,i ∈_ _E_ (6)
0 otherwise
The total loss is calculated by simply adding the above
three together:
_Losstotal = LossMLM + LossCCP + LossMSP_ (7)
**3.3** **Pre-Training Data**
Since it is the first pre-trained model for mathematical formulas, there is scarcely a large public dataset that consists of formula-context pairs. As such, we build the
pre-training dataset with the public scientific articles from
arXiv.org. Arxiv bulk data available from Amazon S3[2] is
the complete set of arxiv documents which contains source
TEX files and processed PDF files. “\begin{equation} ...
_\single-line display formulas from Lend{equation}” is used as the matching pattern to extractATE[X source in these TEX]_
files. We collect the surrounding text with at least 400 characters as the context of formula and replace the formula with
a special token [MATH] to indicate the position. As for
data-preprocessing, we utilize the toolkit LATE[X tokenizer in]
[2https://arxiv.org/help/bulk data s3](https://arxiv.org/help/bulk_data_s3)
-----
Formula Only Formula with Context
Models
Precision Recall F1 Precision Recall F1
TextRNN 56.86 56.87 56.63 64.75 64.22 64.33
TextRNN Att 57.72 57.30 57.30 65.38 65.21 65.15
TextRCNN 58.92 58.78 58.18 65.04 65.11 64.82
FastText 60.04 59.84 59.82 68.21 68.08 68.03
BERT 60.82 60.02 60.34 71.55 70.29 70.84
**MathBERT** **65.18** **64.05** **64.52** **75.68** **74.46** **75.03**
Table 2: TopicMath-100K Results, evaluated with macro-average precision, recall and F1 score on 10 classes. Formula only and formula
with context are respectively used as the input.
Class Data
Astrophysics 6,426
Machine Learning 7,597
Theoretical Economics 11,442
Relativity 19,386
High Energy Physics Theory 20,856
Number Theory 18,954
Nuclear Theory 11,262
Atomic Physics 7,279
Computational Finance 13,035
Quantum Physics 16,065
All 132,302
Table 3: Statistics of TopicMath-100K.
two assessor scores (from 0 to 4), with scores of 3 or higher
considered fully relevant and other scores of 1 or higher considered partially relevant. We regard the mean of the last
two layers’ feature vectors in MathBERT as formula embeddings and reorder the top-1000 results of TangentCFT [Mansouri et al., 2019] according to cosine similarity over formula vectors. Then we use bpref as the metric to compare
our results with previous approaches, including MCAT [Kristianto et al., 2016], TangentS [Davila and Zanibbi, 2017], Approach0 [Zhong and Zanibbi, 2019] and TangentCFT. The results are shown in Table 1.
MathBERT achieves the highest partial and harmonic
mean bpref score. Due to the lack of mathematical and
structure-aware information, BERT pre-trained on NLP data
obtains a poor result on this task. Compared with another embedding model TangentCFT, our reordered results outperform
its original top-1000 results on all the metrics. However, the
results on full bpref score are still lower than TangentS and
Approach0, which may be explained by the shortage of using cosine similarity over formula vectors rather than using
direct comparison of formula trees. Consequently, we follow the approach in TangentCFT and create another model
(MathAPP) by combining retrieval scores from MathBERT
and Approach0, achieving state-of-the-art performance.
**4.2** **Formula Topic Classification**
Formula topic classification is a typical multi-class classification task like text classification in NLP, where the goal is
to predict which topic a mathematical formula belongs to.
Following the approach described in Section 3.3, we collect
132,302 formula-context pairs from scientific articles published on arXiv.org within a year in 10 selected topics as our
R1 R2 RL BLEU-4 METEOR
Random 31.56 21.35 28.99 24.32 23.40
Tail 22.55 14.69 20.76 22.23 23.78
Lead 42.23 31.30 39.29 29.89 31.61
TextRank 42.19 30.85 38.99 28.29 31.78
Seq2Seq 52.14 38.33 49.00 42.20 30.65
PtGen 53.26 39.92 50.09 44.10 31.76
Transformer 54.49 40.57 50.90 45.79 32.92
BERT-fused 60.76 46.98 51.74 47.08 33.46
**MathBERT 61.25 48.06 57.72** **49.40** **34.67**
Table 4: EXEQ-300K Results, evaluated with F1 scores of R1
(ROUGE-1), R2 (ROUGE-2), RL (ROUGE-L), BLEU-4 and METEOR.
dataset named TopicMath-100K. Data statistics is shown in
Table 3. TopicMath-100K is randomly split into train (80 %,
105,841), validation (10 %, 13,230) and test (10 %, 13,231)
sets. We conduct experiments on this dataset and compare our
results with several non-pretrained models and BERT. The results are shown in Table 2.
MathBERT achieves state-of-the-art performance on all
metrics, especially outperforms vanilla BERT significantly.
Taking only formula as input, BERT pre-trained on natural
language data does not obtain a much better result than those
non-pretrained models, which implies that pre-training model
on mathematical formulas can improve formula topic classification indeed.
**4.3** **Formula Headline Generation**
Formula headline generation is a summarization task aiming to generate a concise math headline from a detailed
math question which contains math formulas and descriptions. Here we use EXEQ-300K proposed in [Yuan et al.,
2020] as the dataset and conduct experiments to investigate
the performance of MathBERT on generation tasks. Specifically, following BERT-fused [Zhu et al., 2020], we utilize
MathBERT to extract representations for an input sequence,
and fuse them with each layer of the encoder and decoder
of Transformer through attention mechanism to generate the
headline. The obtained results are compared with four extractive methods (Random, Tail, Lead and TextRank) and four
abstractive methods (Seq2Seq [Bahdanau et al., 2014], PtGen [See et al., 2017], Transformer [Vaswani et al., 2017]
and BERT-fused [Zhu et al., 2020]). The results are shown in
Table 4. MathBERT outperforms other models on all evaluation metrics, especially Transformer and BERT-fused, which
-----
NTCIR-12 TopicMath-100K
Settings Only Formula Formula with Context
Partial Full H-Mean
Precision Recall F1 Precision Recall F1
**MathBERT** **73.61** **61.35** **66.92** **65.18** **64.05** **64.52** **75.68** **74.46** **75.03**
-w/o OPT 72.84 61.05 66.43 64.80 63.57 64.10 75.24 73.72 74.38
-w/o context 73.24 60.92 66.51 64.65 63.51 64.01 73.42 73.01 73.17
-w/ formula only 72.36 60.35 65.81 64.67 63.44 63.97 73.36 72.91 73.11
Table 5: Results on NTCIR-12 and TopicMath-100K with different pre-training settings.
sensitive to formula structure, while context is more important in topic classification which concerns inherent meaning
of formula.
**4.5** **Qualitative Analysis**
To demonstrate the effectiveness of MathBERT in learning
semantic-level structural information of mathematical formulas, we further conduct qualitative analysis. Concretely, 15
formulas containing similar symbols are selected, some of
which have equative meanings in mathematics. Following the
approach in Section 4.1, we employ three embedding models
to extract feature vectors from these formulas and rank them
by cosine similarities. The results are shown in Table 6.
As the results indicate, BERT only considers the similarity of appearance, resulting in the poor ranking between
(a + b) × (c + d) and (a + b) ÷ (c + d). Without OPT input,
the embeddings of MathBERT still retain some semantic information, which can be proved by the increased ranking of
(a+b)/(c+d) and (a+b)÷(c+d) . As observed from the result of MathBERT, (a+b)/(c+d) and (a+b)÷(c+d) are two
of the most similar formulas to _[a]c+[+]d[b]_ [, which demonstrates that]
the complete MathBERT well incorporates semantic information. Besides, MathBERT retains layout structural information as well, such as the similarity scores of [1+2]3+4 [and][ 5+6]7+8 [,]
which are both higher than those in the former two models.
The increase of (a + b) × (c + d) in similarity score could
be explained by the same substructure of a + b and c + d.
In summary, the qualitative results support that MathBERT
is capable of incorporating semantic-level structural information of mathematic formulas.
**5** **Conclusion**
In this paper, we propose a novel and effective pre-trained
model named MathBERT, which is the first pre-trained model
for mathematical formula understanding. MathBERT is
jointly trained with mathematical formulas, contexts and
their corresponding OPTs. The experimental results demonstrate that MathBERT achieves state-of-the-art performances
on three downstream tasks including mathematical information retrieval, formula topic classification and formula headline generation. The ablation study shows that our pretraining settings could contribute to improving performance
on those downstream tasks. Qualitative analysis is further
conducted to show the effectiveness of MathBERT in capturing semantic-level structural information of math expressions.
MathBERT
Rank Formula Similarity
1 _ac++db_ 1.0
2 (a + b)/(c + d) 0.9636
3 (a + b) ÷ (c + d) 0.9447
4 (a + b) (c + d) 0.9251
1+2 _×_
5 3+4 0.9248
6 5+67+8 0.9005
. . . ... ...
MathBERT -w/o OPT
Rank Formula Similarity
1 _ac++db_ 1.0
2 3+41+2 0.9143
3 (a + b)/(c + d) 0.9130
4 5+67+8 0.8923
5 (a + b) ÷ (c + d) 0.8680
6 (a + b) × (c + d) 0.8594
. . . ... ...
BERT
Rank Formula Similarity
1 _ac++db_ 1.0
2 3+41+2 0.9036
3 5+67+8 0.8770
4 (a + b) × (c + d) 0.8526
5 (a + b) ÷ (c + d) 0.8165
6 (1 + 2) × (3 + 4) 0.7529
. . . ... ...
Table 6: The ranking results according to the cosine similarity with
_a+b_
_c+d_ [.]
implies that the formula and context representations from
MathBERT contribute to downstream generation model.
**4.4** **Ablation Study**
To explore the impact of different modalities and pre-training
tasks, an ablation study is conducted on mathematical information retrieval task and formula topic classification task, respectively. Four different pre-training settings are applied in
experiments: 1) using formula, context and OPT as inputs
and all three pre-training tasks, 2) without OPT and MSP pretraining task, 3) without context and CCP pre-training task, 4)
with only formula and MLM pre-training task. The results of
different settings are shown in Table 5.
Pre-training model using only formula as pre-training input
always leads to the lowest results. The effects of pre-training
with context or OPT vary with the different downstream
tasks. Specifically, OPT contributes more to IR task that is
-----
**References**
[Bahdanau et al., 2014] Dzmitry Bahdanau, Kyunghyun
Cho, and Yoshua Bengio. Neural machine translation
by jointly learning to align and translate. arXiv preprint
_arXiv:1409.0473, 2014._
[Davila and Zanibbi, 2017] Kenny Davila and Richard
Zanibbi. Layout and semantics: Combining representations for mathematical formula search. In SIGIR, pages
1165–1168, 2017.
[Devlin et al., 2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of
deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[Feng et al., 2020] Zhangyin Feng, Daya Guo, Duyu Tang,
Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou,
Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. Codebert: A pre-trained model for programming and natural
languages, 2020.
[Jiang et al., 2018] Zhuoren Jiang, Liangcai Gao, Ke Yuan,
Zheng Gao, Zhi Tang, and Xiaozhong Liu. Mathematics
content understanding for cyberlearning via formula evolution map. In CIKM, pages 37–46, 2018.
[Kristianto et al., 2016] Giovanni Yoko Kristianto, Goran
Topic, and Akiko Aizawa. Mcat math retrieval system for
ntcir-12 mathir task. In NTCIR, 2016.
[Mansouri et al., 2019] Behrooz Mansouri, Shaurya Rohatgi, Douglas W Oard, Jian Wu, C Lee Giles, and Richard
Zanibbi. Tangent-cft: An embedding model for mathematical formulas. In SIGIR, pages 11–18, 2019.
[Miller, 2019] Derek Miller. Leveraging bert for extractive text summarization on lectures. _arXiv preprint_
_arXiv:1906.04165, 2019._
[Peters et al., 2018] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee,
and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
[Radford et al., 2018] Alec Radford, Karthik Narasimhan,
Tim Salimans, and Ilya Sutskever. Improving language
understanding by generative pre-training, 2018.
[See et al., 2017] Abigail See, Peter J Liu, and Christopher D
Manning. Get to the point: Summarization with pointergenerator networks. _arXiv preprint arXiv:1704.04368,_
2017.
[Sun et al., 2019] Chen Sun, Austin Myers, Carl Vondrick,
Kevin Murphy, and Cordelia Schmid. Videobert: A joint
model for video and language representation learning.
_arXiv preprint arXiv:1904.01766, 2019._
[Sundararaman et al., 2019] Dhanasekar Sundararaman,
Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan
Shen, Dong Wang, and Lawrence Carin. Syntax-infused
transformer and bert models for machine translation
and natural language understanding. _arXiv preprint_
_arXiv:1911.06156, 2019._
[Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki
Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you
need. In Advances in neural information processing sys_tems, pages 5998–6008, 2017._
[Wang et al., 2015] Yuehan Wang, Liangcai Gao, Simeng
Wang, Zhi Tang, Xiaozhong Liu, and Ke Yuan. Wikimirs
3.0: a hybrid mir system based on the context, structure
and importance of formulae in a document. In JCDL,
pages 173–182, 2015.
[Xenouleas et al., 2019] Stratos Xenouleas, Prodromos
Malakasiotis, Marianna Apidianaki, and Ion Androutsopoulos. Sumqe: a bert-based summary quality
estimation model. _arXiv preprint arXiv:1909.00578,_
2019.
[Xu et al., 2020] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pretraining of text and layout for document image understanding. SIGKDD, Jul 2020.
[Yang et al., 2019] Zhilin Yang, Zihang Dai, Yiming Yang,
Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le.
Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237,
2019.
[Yasunaga and Lafferty, 2019] Michihiro Yasunaga and
John D Lafferty. Topiceq: A joint topic and mathematical
equation model for scientific texts. In AAAI, volume 33,
pages 7394–7401, 2019.
[Yuan et al., 2016] Ke Yuan, Liangcai Gao, Yuehan Wang,
Xiaohan Yi, and Zhi Tang. A mathematical information
retrieval system based on rankboost. In JCDL, pages 259–
260, 2016.
[Yuan et al., 2020] Ke Yuan, Dafang He, Zhuoren Jiang,
Liangcai Gao, Zhi Tang, and C Lee Giles. Automatic generation of headlines for online math questions. In AAAI,
pages 9490–9497, 2020.
[Zanibbi and Blostein, 2012] Richard Zanibbi and Dorothea
Blostein. Recognition and retrieval of mathematical expressions. IJDAR, 15(4):331–357, 2012.
[Zanibbi et al., 2016] Richard Zanibbi, Akiko Aizawa,
Michael Kohlhase, Iadh Ounis, Goran Topic, and Kenny
Davila. Ntcir-12 mathir task overview. In NTCIR, 2016.
[Zhang et al., 2020] Jianshu Zhang, Jun Du, Yongxin Yang,
Yi-Zhe Song, Si Wei, and Lirong Dai. A tree-structured
decoder for image-to-markup generation. In ICML, pages
11076–11085. PMLR, 2020.
[Zhong and Zanibbi, 2019] Wei Zhong and Richard Zanibbi.
Structural similarity search for formulas using leaf-root
paths in operator subtrees. In European Conference on
_Information Retrieval, pages 116–129. Springer, 2019._
[Zhu et al., 2020] Jinhua Zhu, Yingce Xia, Lijun Wu, Di He,
Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu.
Incorporating bert into neural machine translation. arXiv
_preprint arXiv:2002.06823, 2020._
-----
| [
"Liangcai, Gao",
"Shuai, Peng",
"Ke, Yuan",
"Zhi, Tang"
] | 2021-05-01T00:00:00 | null | false | 90 | 3 | null | http://arxiv.org/abs/2105.00377 | https://arxiv.org/abs/2105.00377 | https://www.semanticscholar.org/paper/6563251e69e4378c189d0a0c94d8d19508d552c8 |
Mapping to Declarative Knowledge for Word Problem Solving | Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news, sports results, and casualties of war. Solving such problems requires the understanding of several mathematical concepts such as dimensional analysis, subset relationships, etc. In this paper, we develop declarative rules which govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into word problem solving. Our method learns to map arithmetic word problem text to math expressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. This provides a way to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our method models the mapping to declarative knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is exposed to is biased in a different way than the test data. | Declarative rules which govern the translation of natural language description of these concepts to math expressions are developed, and a framework for incorporating such declarative knowledge into word problem solving is presented. | # Mapping to Declarative Knowledge for Word Problem Solving
**Subhro Roy[∗]**
Massachusetts Institute of Technology
[email protected]
**Abstract**
Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news,
sports results, and casualties of war. Solving
such problems requires the understanding of
several mathematical concepts such as dimensional analysis, subset relationships, etc. In
this paper, we develop declarative rules which
govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into
word problem solving. Our method learns to
map arithmetic word problem text to math expressions, by learning to select the relevant
declarative knowledge for each operation of
the solution expression. This provides a way
to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our
method models the mapping to declarative
knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain
knowledge based solver outperforms all other
systems, and that it generalizes better in the
realistic case where the training data it is exposed to is biased in a different way than the
test data.
**1** **Introduction**
Many natural language understanding situations require reasoning with respect to numbers or quanti
_∗Most of the work was done when the authors were at the_
University of Illinois, Urbana Champaign.
159
**Dan Roth[∗]**
University of Pennsylvania
[email protected]
ties – understanding financial news, sports results,
or the number of casualties in a bombing. Math
word problems form a natural abstraction to a lot
of these quantitative reasoning problems. Consequently, there has been a growing interest in developing automated methods to solve math word problems (Kushman et al., 2014; Hosseini et al., 2014;
Roy and Roth, 2015).
**Arithmetic Word Problem**
Mrs. Hilt baked pies last weekend for a holiday dinner. She baked 16 pecan pies and 14 apple pies. If she
wants to arrange all of the pies in rows of 5 pies each,
how many rows will she have?
**Solution** (16 + 14)/5 = 6
**Math Concept needed for Each Operation**
Figure 1: An example arithmetic word problem and its
solution, along with the concepts required to generate
each operation of the solution
Understanding and solving math word problems
involves interpreting the natural language description of mathematical concepts, as well as understanding their interaction with the physical world.
Consider the elementary school level arithmetic
word problem shown in Fig 1. To solve the problem, one needs to understand that “apple pies” and
“pecan pies” are kinds of “pies”, and hence, the
-----
number of apple pies and pecan pies needs to be
summed up to get the total number of pies. Similarly, detecting that “5” represents “the number of
pies per row” and applying dimensional analysis or
unit compatibility knowledge, helps us infer that the
total number of pies needs to be divided by 5 to
get the answer. Besides part-whole relationship and
dimensional analysis, there are several other concepts that are needed to support reasoning in math
word problems. Some of these involve understanding comparisons, transactions, and the application of
math or physics formulas. Most of this knowledge
can be encoded as declarative rules, as illustrated in
this paper.
This paper introduces a framework for incorporating this “declarative knowledge” into word problem solving. We focus on arithmetic word problems, whose solution can be obtained by combining the numbers in the problem with basic operations (addition, subtraction, multiplication or division). For combining a pair of numbers or math subexpressions, our method first predicts the math con_cept that is needed for it (e.g., subset relationship, di-_
mensional analysis, etc.), and then predicts a declar_ative rule under that concept to infer the mathemati-_
cal operation. We model the selection of declarative
rules as a latent variable, which removes the need
for expensive annotations for the intermediate steps.
The proposed approach has some clear advantages compared to existing work on word problem
solving. First, it provides interpretability of the solution, without expensive annotations. Our method
selects a declarative knowledge based inference rule
for each operation needed in the solution. These
rules provide an explanation for the operations performed. In particular, it learns to select relevant rules
without explicit annotations for them. Second, each
individual operation in the solution expression can
be generated independently by a separate mathematical concept. This allows our method to handle multiple concepts in the same problem.
We show that existing datasets of arithmetic word
problems suffer from significant vocabulary biases
and, consequently, existing solvers do not do well on
conceptually similar problems that are not biased in
the same way. Our method, on the other hand, learns
the right abstractions even in the presence of biases
in the data. We also introduce a novel approach to
160
gather word problems without these biases, creating
a new dataset of 1492 problems.
The next section discusses related work. We next
introduce the mathematical concepts required for
arithmetic word problems, as well as the declarative rules for each concept. Section 4 describes our
model – how we predict answers using declarative
knowledge – and provides the details of our training paradigm. Finally, we provide an experimental
evaluation of our proposed method in Section 6, and
then conclude with a discussion of future work.
**2** **Related Work**
Our work is primarily related to three major strands
of research - automatic word problem solving, semantic parsing, and approaches incorporating background knowledge in learning.
**2.1** **Automatic Word Problem Solving**
There has been a growing interest in automatically
solving math word problems, with various systems
focusing on particular types of problems. These can
be broadly categorized into two types: arithmetic
and algebra.
**Arithmetic Word Problems Arithmetic problems**
involve combining numbers with basic operations
(addition, subtraction, multiplication and division),
and are generally directed towards elementary
school students. Roy and Roth (2015), Roy and Roth
(2017) and this work focus on this class of word
problems. The works of Hosseini et al. (2014) and
Mitra and Baral (2016) focus on arithmetic problems involving only addition and subtraction. Some
of these approaches also try to incorporate some
form of declarative or domain knowledge. Hosseini
et al. (2014) incorporates the transfer phenomenon
by classifying verbs; Mitra and Baral (2016) maps
problems to a set of formulas. Both require extensive annotations for intermediate steps (verb classification for Hosseini et al. (2014), alignment of numbers to formulas for Mitra and Baral (2016), etc). In
contrast, our method can handle a more general class
of problems, while training only requires problemequation pairs coupled with rate component annotations. Roy and Roth (2017) focuses only on using dimensional analysis knowledge, and handles
the same class of problems as we do. In contrast,
-----
our method provides a framework for including any
form of declarative knowledge, exemplified here by
incorporating common concepts required for arithmetic problems.
**Algebra Word Problems Algebra word problems**
are characterized by the use of (one or more)
variables in contructing (one or more) equations.
These are typically middle or high school problems.
Koncel-Kedziorski et al. (2015) looks at single equation problems, and Shi et al. (2015) focuses on number word problems. Kushman et al. (2014) introduces a template based approach to handle general
algebra word problems and several works have later
proposed improvements over this approach (Zhou
et al., 2015; Upadhyay et al., 2016; Huang et al.,
2017). There has also been work on generating rationale for word problem solving (Ling et al., 2017).
More recently, some focus turned to pre-university
exam questions (Matsuzaki et al., 2017; Hopkins et
al., 2017), which requires handling a wider range of
problems and often more complex semantics.
**2.2** **Semantic Parsing**
Our work is also related to learning semantic parsers
from indirect supervision (Clarke et al., 2010; Liang
et al., 2011). The general approach here is to learn a
mapping of sentences to logical forms, with the only
supervision being the response of executing the logical form on a knowledge base. Similarly, we learn
to select declarative rules from supervision that only
includes the final operation (and not which rule generated it). However, in contrast to the semantic parsing work, in our case the selection of each declarative rule usually requires reasoning across multiple sentences. Further, we do not require an explicit
grounding of words or phrases to logical variables.
**2.3** **Background Knowledge in Learning**
Approaches to incorporate knowledge in learning
started with Explanation Based Learning (EBL)
(DeJong, 1993; DeJong, 2014). EBL uses domain
knowledge based on observable predicates, whereas
we learn to map text to predicates of our declarative knowledge. More recent approaches tried to incorporate knowledge in the form of constraints or
expectations from the output (Roth and Yih, 2004;
Chang et al., 2007; Chang et al., 2012; Ganchev
et al., 2010; Smith and Eisner, 2006; Naseem et
161
al., 2010; Bisk and Hockenmaier, 2012; Gimpel and
Bansal, 2014).
Finally, we note that there has been some work
in the context of Question Answering on perturbing
questions or answers as a way to test or assure the
robustness, or lack of, the approach (Khashabi et al.,
2016; Jia and Liang, 2017). We make use of similar
ideas in order to generate an unbiased test set for
Math word problems (Sec. 6).
**3** **Knowledge Representation**
Here, we introduce our representation of domain
knowledge. We organize the knowledge hierarchically in two levels – concepts and declarative rules.
A math concept is a phenomenon which needs to be
understood to apply reasoning over quantities. Examples of concepts include part-whole relations, dimensional analysis, etc. Under each concept, there
are a few declarative rules, which dictate which operation is needed in a particular context. An example of a declarative rule under the part-whole con_cept can be that “if two numbers quantify “parts” of_
a larger quantity, the operation between them must
be addition”. These rules use concept specific predicates, which we exemplify in the following subsections.
Since this work focuses on arithmetic word problems, we consider 4 math concepts which are most
common in these problems, as follows:
1. Transfer: This involves understanding the
transfer of objects from one person to another.
For example, the action described by the sentence “Tim gave 5 apples to Jim”, results in Tim
losing “5 apples” and Jim gaining “5 apples”.
2. Dimensional Analysis: This involves understanding compatibility of units or dimensions.
For example, “30 pies” can be divided by “5
pies per row” to get the number of rows.
3. Part-Whole Relation: This includes asserting
that if two numbers quantify parts of a larger
quantity, they are to be added. For example,
the problem in Section 1 involves understanding “pecan pies” and “apple pies” are parts of
“pies”, and hence must be added.
4. Explicit Math: Word problems often mention
explicit math relationships among quantities or
-----
entities in the problem. For example, “Jim is 5
inches taller than Tim”. This concept captures
the reasoning needed for such relationships.
Each of these concepts comprises a small number
of declarative rules which determine the math operations; we describe them below.
**3.1** **Transfer**
Consider the following excerpt of a word problem
exhibiting a transfer phenomenon: “Stephen owns 5
_books. Daniel gave him 4 books.” The goal of the_
declarative rules is to determine which operation is
required between 5 and 4, given that we know that a
transfer is taking place. We note that a transfer usually involves two entities, which occurs as subject
and indirect object in a sentence. The direction of
transfer is determined by the verbs associated with
the entities. We define a set of variables to denote
these properties; we define as Subj1, Verb1, IObj1
the subject, verb and indirect object associated with
the first number, and as Subj2, Verb2, IObj2 the subject, verb and indirect object related to the second
number. For the above example, the assignment of
the variables are shown below:
[Stephen]Subj1 [owns]V erb1 5 books.
[Daniel]Subj2 [gave]V erb2 [him]IObj2 4 books.
In order to determine the direction of the transfer,
we require some classification of verbs. In particular, we classify each verb into one of five classes:
HAVE, GET, GIVE, CONSTRUCT and DESTROY.
The HAVE class consists of all verbs which signify the state of an entity, such as “have”, “own”,
etc. The GET class contains verbs which indicate
the gaining of things for the subject. Examples of
such verbs are “acquire”, “borrow”, etc. The GIVE
class contains verbs which indicate the loss of things
for the subject. Verbs like “lend”, “give” belong
to this class. Finally CONSTRUCT class constitutes verbs indicating construction or creation, like
“build”, “fill”, etc., while DESTROY verbs indicate destruction related verbs like “destroy”, “eat”,
“use”, etc. This verb classification is largely based
on the work of Hosseini et al. (2014).
Finally, the declarative rules for this concept have
the following form:
162
[Verb1 _∈_ HAVE] ∧ [Verb2 _∈_ GIVE] ∧
[Coref(Subj1, IObj2)] ⇒ Addition
where Coref(A, B) is true when A and B represent the same entity or are coreferent, and is false
otherwise. In the examples above, Verb1 is “own”
and hence [Verb1 ∈ HAVE] is true. Verb2 is
“give” and hence [Verb2 ∈ GIVE] is true. Finally, Subj1 and IObj2 both refer to Stephen, so
[Coref(Subj1, IObj2)] returns true. As a result, the
above declarative rule dictates that addition should
be performed between 5 and 4.
We have 18 such inference rules for transfer, covering all combinations of verb classes and Coref()
values. All these rules generate addition or subtraction operations.
**3.2** **Dimensional Analysis**
We now look at the use of dimensional analysis
knowledge in word problem solving. To use dimensional analysis, one needs to extract the units of
numbers as well as the relations between the units.
Consider the following excerpt of a word problem:
_“Stephen has 5 bags. Each bag has 4 apples. Know-_
ing that the unit of 5 is “bag” and the effective unit
of 4 is “apples per bag”, allows us to infer that the
numbers can be multiplied to obtain the total number
of apples.
To capture these dependencies, we first introduce
a few terms. Whenever a number has a unit of the
form “A per B”, we refer to “A” as the unit of the
number, and refer to “B” as the rate component of
_the number. In our example, the unit of 4 is “apple”,_
and the rate component of 4 is “bag”. We define
variables Unit1 and Rate1 to denote the unit and the
rate component of the first number respectively. We
similarly define Unit2 and Rate2. For the above example, the assignment of variables is shown below:
Stephen has 5 [bags]Unit1. Each [bag]Rate2 has
4 [apples]Unit2.
Finally, the declarative rule applicable for our example has the following form:
-----
[Coref(Unit1, Rate2)] ⇒ Multiplication
[Sibling(Number1, Number2)] ⇒ Addition
The rules for the part-whole concept can generate
addition and subtraction operations. Table 1 gives
a list of all the declarative rules. Note that all the
declarative rules are designed to determine an operation between two numbers only. We introduce
a strategy in Section 4, which facilitates combining
sub-expressions with these rules.
**4** **Mapping of Word Problems to**
**Declarative Knowledge**
Given an input arithmetic word problem x, the goal
is to predict the math expression y, which generates
the correct answer. In order to derive the expression y from the word problem x, we leverage math
concepts and declarative rules that we introduced in
Section 3. In order to combine two numbers mentioned in x, we first predict a concept k, and then we
choose a declarative knowledge rule r from k. The
rule r generates the math operation needed to combine the two numbers. Consider the first example
in Table 2. To combine 6 and 9, we first decide on
the transfer concept, and then choose an appropriate
rule under the transfer to generate the operation.
Next we need to combine the sub-expression (6+
9) with the number 3. However, our inference rules
were designed for the combination of two numbers only. In order to combine a sub-expression,
we choose a representative number from the subexpression, and use that number to determine the
operation. In our example, we choose the number 6
as the representative number for (6 + 9), and decide
the operation between 6 and 3, following a similar
procedure as before. This operation is now used to
combine (6 + 9) and 3.
The representative number for a sub-expression is
chosen such that it preserves the reasoning needed
for the combination of this sub-expression with
other numbers. We follow a heuristic to choose a
representative number from a sub-expression:
1. For transfers and part-whole relationships, we
choose the representative number of the left
subtree.
2. In the case of rate relationship, we choose the
number which does not have a rate component.
We only have 3 rules for dimensional analysis. They
generate multiplication or division operations.
**3.3** **Explicit Math**
In this subsection, we want to capture the reasoning
behind explicit math relationships expressed in word
problems such as the one described in: “Stephen has
5 apples. Daniel has 4 more apples than Stephen”.
We define Math1 and Math2 by any explicit math
term associated with the first and second numbers
respectively. As was the case for transfers, we also
define Subj1, IObj1, Subj2, and IObj2 to denote the
entities participating in the math relationship. The
assignment of these variables in our example is:
[Stephen]Subj1 has 5 apples. [Daniel]Subj2 has
4 [more apples than]Math2 [Stephen]IObj2.
We classify explicit math terms into one of three
classes - ADD, SUB and MUL. ADD comprises
terms for addition, like “more than”, “taller than”
and “heavier than”. SUB consists of terms for subtraction like“less than”, “shorter than”, etc., and
MUL contains terms indicating multiplication, like
“times”, “twice” and “thrice”. Finally, the declarative rule that applies for our example is:
[Coref(Subj1, IObj2)] _∧_ [Math2 _∈_ ADD] _⇒_
Addition.
We have only 7 rules for explicit math.
**3.4** **Part-Whole Relation**
Understanding part-whole relationships entails understanding whether two quantities are hyponym,
hypernym or siblings (that is, co-hyponym, or parts
of the same quantity). For example, in the excerpt
_“Mrs. Hilt has 5 pecan pies and 4 apple pies”, de-_
termining that pecan pies and apple pies are parts of
all pies, helps infer that addition is needed. We have
3 simple rules which directly map from Hyponym,
Hypernym or Sibling detection to the corresponding
math operation. For the above example, the applicable declarative rule is:
163
-----
Transfer
[Verb1 ∈ HAVE] ∧ [Verb2 ∈ HAVE] ∧ [Coref(Subj1, Subj2)] ⇒−
[Verb1 ∈ HAVE] ∧ [Verb2 ∈ (GET ∪ CONSTRUCT)] ∧ [Coref(Subj1, Subj2)] ⇒ +
[Verb1 ∈ HAVE] ∧ [Verb2 ∈ (GIVE ∪ DESTROY)] ∧ [Coref(Subj1, Subj2)] ⇒−
[Verb1 ∈ (GET ∪ CONSTRUCT)] ∧ [Verb2 ∈ HAVE] ∧ [Coref(Subj1, Subj2)] ⇒−
[Verb1 ∈ (GET ∪ CONSTRUCT)] ∧ [Verb2 ∈ (GET ∪ CONSTRUCT)] ∧ [Coref(Subj1, Subj2)] ⇒ +
[Verb1 ∈ (GET ∪ CONSTRUCT)] ∧ [Verb2 ∈ (GIVE ∪ DESTROY)] ∧ [Coref(Subj1, Subj2)] ⇒−
[Verb1 ∈ (GIVE ∪ DESTROY)] ∧ [Verb2 ∈ HAVE] ∧ [Coref(Subj1, Subj2)] ⇒ +
[Verb1 ∈ (GIVE ∪ DESTROY)] ∧ [Verb2 ∈ (GET ∪ CONSTRUCT)] ∧ [Coref(Subj1, Subj2)] ⇒−
[Verb1 ∈ (GIVE ∪ DESTROY)] ∧ [Verb2 ∈ (GIVE ∪ DESTROY)] ∧ [Coref(Subj1, Subj2)] ⇒ +
We also have another rule for each rule above, which states that if Coref(Subj1, Obj2) or
Coref(Subj2, Obj1) is true, and none of the verbs is CONSTRUCT or DESTROY, the final operation
is changed from addition to subtraction, or vice versa.
Dimensionality Analysis
[Coref(Unit1, Rate2) ∨ Coref(Unit2, Rate1)] ⇒×
[Coref(Unit1, Unit2)] ∧ [Rate2 ̸= null] ⇒÷
[Coref(Unit1, Unit2)] ∧ [Rate1 ̸= null] ⇒÷ (Reverse order)
Explicit Math
[Coref(Subj1, IObj2) ∨ Coref(Subj2, IObj1)] ∧ [Math1 ∈ ADD ∨ Math2 ∈ ADD] ⇒ +
[Coref(Subj1, IObj2) ∨ Coref(Subj2, IObj1)] ∧ [Math1 ∈ SUB ∨ Math2 ∈ SUB] ⇒−
[Coref(Subj1, Subj2)] ∧ [Math1 ∈ ADD ∨ Math2 ∈ ADD] ⇒−
[Coref(Subj1, Subj2)] ∧ [Math1 ∈ SUB ∨ Math2 ∈ SUB] ⇒ +
[Coref(Subj1, Subj2)] ∧ [Math1 ∈ MUL] ⇒÷ (Reverse order)
[Coref(Subj1, Subj2)] ∧ [Math2 ∈ MUL] ⇒÷
[Coref(Subj1, IObj2) ∨ Coref(Subj2, IObj1)] ∧ [Math1 ∈ MUL ∨ Math2 ∈ MUL] ⇒×
Part-Whole Relationship
[Sibling(Number1, Number2)] ⇒ +
[Hyponym(Number1, Number2)] ⇒−
[Hypernym(Number1, Number2)] ⇒−
Table 1: List of declarative rules used in our system. ÷ (reverse order) indicates the second number being divided by
the first. To determine the order of subtraction, we always subtract the smaller number from the larger number.
3. In the case of explicit math, we choose the
number which is not directly associated with
the explicit math expression.
**4.1** **Scoring Answer Derivations**
Given the input word problem x, the solution math
expression y is constructed by combining numbers
in x with operations. We refer to the set of operations used in an expression y as ⊙(y). Each operation o in ⊙(y) is generated by first choosing a concept k[o], and then selecting a declarative rule r[o] from
that concept.
In order to discriminate between multiple candidate solution expressions of a word problem x, we
164
score them using a linear model over features extracted from the derivation of the solution. Our scoring function has the following form:
SCORE(x, y) =
_wkφk(x, k[o]) + wrφr(x, r[o])_
_o∈⊙X(y)_
where φk(x, k[o]) and φr(x, r[o]) are feature vectors related to concept k[o], and declarative rule r[o], respectively, and wk and wr are the corresponding weight
vectors. The term wkφk(x, k[o]) is the score for the
selection of k[o], and the term wrφr(x, r[o]) is the score
for the selection of r[o]. Finally, the total score is the
sum of the scores of all concepts and rule choices,
over all operations of y.
-----
|Word Problem|Tim ’s cat had 6 kittens . He gave 3 to Jessica. Then Sara gave him 9 kittens . How many kittens does he now have ?|
|---|---|
|Knowledge based Answer Derivation||
|Word Problem|Mrs. Hilt baked pies last weekend for a holiday dinner. She baked 16 pecan pies and 14 apple pies. If she wants to arrange all of the pies in rows of 5 pies each, how many rows will she have?|
|Knowledge based Answer Derivation||
Table 2: Two examples of arithmetic word problems, and derivation of the answer. For each combination, first a math
concept is chosen, and then a declarative rule from that concept is chosen to infer the operation.
**4.2** **Learning**
We wish to estimate the parameters of the weight
vectors wk and wr, such that our scoring function
assigns a higher score to the correct math expression, and a lower score to other competing math
expressions. For learning the parameters, we assume access to word problems paired with the correct math expression. In Section 5, we show that
certain simple heuristics and rate component annotations can be used to create somewhat noisy annotations for the concepts needed for individual operations. Hence, we will assume for our formulation access to concept supervision as well. We
thus assume access to m examples of the following
form: {(x1, y1, {k[o]}o∈⊙(y1)), (x2, y2, {k[o]}o∈⊙(y2)),
_. . ., (xm, ym, {k[o]}o∈⊙(ym))}._
We do not have any supervision for declarative
rule selection, which we model as a latent variable.
**Two Stage Learning: A straightforward solution**
for our learning problem could be to jointly learn
_wk and wr using latent structured SVM. However,_
we found that this model does not perform well. Instead, we chose a two stage learning protocol. At the
first stage, we only learn wr, the weight vector for
165
scoring the declarative rule choice. Once learned,
we fix the parameters for wr, and then learn the parameters for wk.
In order to learn the parameters for wr, we solve:
1
min
_wr_ 2 _[||][w][r][||][2][ +][ C]_
max _r)+_
_rˆ_ _k[o],rˆ_ _oˆ_ _[w][r][ ·][ φ][r][(][x,][ ˆ]_
_∈_ _⇒_
_i=1_
_o∈⊙(yi)_
∆(ˆo, o) max _r),_
_−_ _rˆ_ _k[o],rˆ_ _o_ _[w][r][ ·][ φ][r][(][x,][ ˆ]_
_∈_ _⇒_
i
where ˆr ∈ _k[o]_ implies that ˆr is a declarative rule
for concept k[o], ˆr ⇒ _o signifies that the declarative_
rule ˆr generates operation o, and ∆(ˆo, o) represents
a measure of dissimilarity between operations o and
_oˆ. The above objective is similar to that of latent_
structured SVM. For each operation o in the solution expression yi, the objective tries to minimize the
difference between the highest scoring rule from its
concept k[o], and highest scoring rule from k[o] which
explains or generates the operation o.
Next we fix the parameters of wr, and solve:
_m_
1
min
_wk_ 2 _[||][w][k][||][2][ +][ C]_
_i=1_
X
max
_y_ [[][S][CORE][(][x][i][, y][) + ∆(][y, y][i][)]][ −] [S][CORE][(][x][i][, y][i][)][.]
_∈Y_
-----
**Problem: Mrs. Hilt baked pies last weekend for a**
holiday dinner. She baked 16 pecan pies and 14 apple
pies. If she wants to arrange all of the pies in rows of
5 pies each, how many rows will she have?
**Number List: 16, 14, 5**
**Solution: (16[1] + 14[2])/5[3] = 6**
**Rates: 5**
Figure 2: Annotations in our dataset. Number List refers
to the numbers detected in the problem. The subscripts in
the solution indicate the position of the numbers in the
number list.
tations for the math concepts for operations. Consider the case for combining two numbers num1 and
_num2, by operation o. We apply the following rules:_
1. If we detect an explicit math pattern in the
neighborhood of num1 or num2, we assign
concept k[o] to be Explicit Math.
2. If o is multiplication or division, and one of
_num1 or num2 has a rate component, we as-_
sign k[o] to be Dimensional Analysis.
3. If o is addition or subtraction, we check if the
dependent verb of both numbers are identical.
If they are, we assign k[o] to be a Part-Whole relationship; otherwise, we assign it to be Transfer. We extract the dependent verb using the
Stanford dependency parser (Chen and Manning, 2014).
The annotations obtained via these rules are of
course not perfect. We could not detect certain
uncommon rate patterns like “dividing the cost 4
ways”, and “I read the same number of books 4 days
running”. There were part-whole relationships exhibited with complementary verbs, as in “I won 4
games, and lost 3.”. Both of these cases lead to noisy
math concept annotations.
However, we tested a small sample of these annotations, and found less than 5% of them to be wrong.
As a result, we assume these annotations to be correct in our problem formulation.
**5.2** **Features**
We use dependency parse labels and a small set
of rules to extract subject, indirect object, dependent verb, unit and rate component of each number
This is equivalent to a standard structured SVM objective. We use a 0 − 1 loss for ∆(ˆo, o). Note that
fixing the parameters of wr determines the scores
for rule selection, removing the need for any latent
variables at this stage.
**4.3** **Inference**
Given an input word problem _x,_ inferring
the best math expression involves computing
arg maxy SCORE(x, y), where is the set of all
_∈Y_ _Y_
math expressions that can be created by combining
the numbers in x with basic math operations.
The size of Y is exponential in the number of
quantities mentioned in x. As a result, we perform
approximate inference using beam search. We initialize the beam with the set E of all numbers mentioned in the problem x. At each step of the beam
search, we choose two numbers (or sub-expressions)
_e1 and e2 from E, and then select a math concept and_
a declarative rule to infer an operation o. We create a new sub-expression e3 by combining the subexpressions e1 and e2 with operation o. We finally
create a new set E[′] from E, by removing e1 and
_e2 from it, and adding e3 to it. We remove E from_
the beam, and add all such modified sets E[′] to the
beam. We continue this process until all sets in the
beam have only one element in them. We choose the
highest scoring expression among these elements as
the solution expression.
**5** **Model and Implementation Details**
**5.1** **Supervision**
Each word problem in our dataset is annotated with
the solution math expression, along with alignment
of numbers from the problem to the solution expression. In addition, we also have annotations for the
numbers which possess a rate component. An example is shown in Fig 2. This is the same level of
supervision used in Roy and Roth (2017). Many of
the annotations can be extracted semi-automatically.
The number list is extracted automatically by a number detector, the alignments require human supervision only when the same numeric value is mentioned
multiple times in the problem. Most of the rate component annotations can also be extracted automatically, see Roy and Roth (2017) for details.
We apply a few heuristics to obtain noisy anno
166
-----
mentioned in the problem. Details of these extractions can be found in the released codebase. Using these extractions, we define two feature functions φk(x, k[o]) and φr(x, r[o]), where x is the input word problem, and k[o] and r[o] are the concept
and the declarative rule for operation o respectively.
_φr(x, r[o]) constitutes the following features:_
1. If r[o] contains Coref(·) function, we add features related to similarity of the arguments of
Coref(·) (jaccard similarity score and presence
of pronoun in one of the arguments).
2. For part-whole relationships, we add indicators for a list of words like “remaining”, “rest”,
“either”, “overall”, “total”, conjoined with the
part-whole function in r[o] (Hyponymy, Hypernymy, Sibling).
3. Unigrams from the neighborhood of numbers
being combined.
Finally, φk(x, k[o]) generates the following features:
1. If k[o] is related to dimensional analysis, we add
features indicating the presence of a rate component in the combining numbers.
2. If k[o] is part-whole, we add features indicating
whether the verbs of combining numbers are
identical.
Note that these features capture several interpretable
functions like coreference, hyponymy, etc.
We do not learn three components of our system
– verb classification for transfer knowledge, categorization of explicit math terms, and irrelevant number detection. For verb classification, we use a seed
list of around 10 verbs for each category. Given a
new verb v, we choose the most similar verb v[′] from
the seed lists according to the GloVe vector (Pennington et al., 2014) based similarity . We assign
_v the category of v[′]._ This can be replaced by a
learned component (Hosseini et al., 2014). However
we found the seed list based categorization worked
well in most cases. For explicit math, we check for
a small list of patterns to detect and categorize math
terms. Note that for both the cases above, we still
have to learn Coref(·) function to determine the final operation. Finally, to detect irrelevant numbers
167
(numbers which are not used in the solution), we use
a set of rules based on the units of numbers. Again,
this can be replaced by a learned model (Roy and
Roth, 2015).
**6** **Experiments**
**6.1** **Results on Existing Dataset**
We first evaluate our approach on the existing
datasets of AllArith, AllArithLex, and AllArithTmpl (Roy and Roth, 2017). AllArithLex and AllArithTmpl are subsets of the AllArith dataset, created to test the robustness to new vocabulary, and
new equation forms respectively. We compare to the
top performing systems for arithmetic word problems. They are as follows:
1. TEMPLATE : Template based algebra word
problem solver of Kushman et al. (2014).
2. LCA++ : System of Roy and Roth (2015) based
on lowest common ancestors of math expression trees.
3. UNITDEP: Unit dependency graph based
solver of Roy and Roth (2017).
We refer to our approach as KNOWLEDGE. For all
solvers, we use the system released by the respective authors. The system of TEMPLATE expects an
equation as the answer, whereas our dataset contains
only math expressions. We converted expressions to
equations by introducing a single variable and assigning the math expression to it. For example, an
expression “(2 + 3)” gets converted to “X = (2 + 3)”.
The first few columns of Table 3 shows the performance of the systems on the aforementioned
datasets[1]. The performance of KNOWLEDGE is on
par or lower than some of the existing systems. We
analyzed the systems, and found most of them to
not be robust to perturbations of the problem text;
Table 4 shows a few examples. We further analyzed the datasets, and identified several biases in
the problems (in both train and test). Systems which
remember these biases get an undue advantage in
evaluation. For example, the verb “give” only appears with subtraction, and hence the models are
1Results on the AllArith datasets are slightly different from
(Roy and Roth, 2017), since we fixed several ungrammatical
sentences in the dataset
-----
|System|AllArith|AllArith Lex|AllArith Tmpl|Aggregate|Aggregate Lex|Aggregate Tmpl|Train on AllArith, Test on Perturb|
|---|---|---|---|---|---|---|---|
|TEMPLATE|71.96|64.09|70.64|54.62|45.05|54.69|24.2|
|LCA++|78.34|66.99|75.66|65.21|53.62|63.0|43.57|
|UNITDEP|79.67|71.33|77.11|69.9|57.51|68.64|46.29|
|KNOWLEDGE|77.86|72.53|74.7|73.32∗|66.63∗|68.62|65.66∗|
System AllArith AllArith AllArith Aggregate Aggregate Aggregate Train on
Lex Tmpl Lex Tmpl AllArith,
Test on
Perturb
TEMPLATE 71.96 64.09 70.64 54.62 45.05 54.69 24.2
LCA++ 78.34 66.99 75.66 65.21 53.62 63.0 43.57
UNITDEP **79.67** 71.33 **77.11** 69.9 57.51 **68.64** 46.29
KNOWLEDGE 77.86 **72.53** 74.7 **73.32[∗]** **66.63[∗]** 68.62 **65.66[∗]**
Table 3: Accuracy in solving arithmetic word problems. All columns except the last report 5-fold cross validation
results. _[∗]_ indicates statistically significant improvement (p = 0.05) over second highest score in the column.
|Problem|Systems which solved correctly|Col3|
|---|---|---|
||Trained on AllArith|Trained on Aggregate|
|Adam has 70 marbles. Adam gave 27 marbles to Sam. How many marbles does Adam have now?|TEMPLATE, UNITDEP, LCA, KNOWLEDGE|LCA, UNITDEP, KNOWLEDGE|
|Adam has 70 marbles. Sam gave 27 marbles to Adam. How many marbles does Adam have now?|KNOWLEDGE|TEMPLATE, KNOWLEDGE|
|Adam has 5 marbles. Sam has 6 more marbles than Adam. How many marbles does Sam have?|LCA, UNITDEP, KNOWLEDGE|LCA, UNITDEP, KNOWLEDGE|
|Adam has 11 marbles. Adam has 6 more marbles than Sam. How many marbles does Sam have?|TEMPLATE, KNOWLEDGE|TEMPLATE, KNOWLEDGE|
Table 4: Pairs of pertubed problems, along with the systems which get them correct
learning an erroneous correlation of “give” with subtraction. Since the test also exhibits the same bias,
these systems get all the “give”-related questions
correct. However, they fail to solve the problem
in Table 4, where “give” results in addition. We
also tested KNOWLEDGE on the addition subtraction
problems dataset released by Hosseini et al. (2014).
It achieved a cross validation accuracy of 77.19%,
which is competitive with the state of the art accuracy of 78% achieved with the same level of supervision. The system of Mitra and Baral (2016) achieved
86.07% accuracy on this dataset, but requires rich
annotations for formulas and alignment of numbers
to formulas.
**6.2** **New Dataset Creation**
In order to remove the aforementioned biases from
the dataset, we augment it with new word problems
collected via a crowdsourcing platform. These new
word problems are created by perturbing the original
problems minimally, such that the answer is different from the original problem. For each word problem p with an answer expression a in our original
dataset AllArith, we replace one operation in a to
168
create a new math expression a[′]. We ask annotators
to modify problem p minimally, such that a[′] is now
the solution to the modified word problem.
We create a[′] from a either by replacing an addition with subtraction or vice versa, or by replacing
multiplication with division or vice versa. We do not
replace addition and subtraction with multiplication
or division, since there might not be an easy perturbation that supports this conversion. We only allowed perturbed expressions which evaluate to values greater than 1. For example, we generate the
expression “(3+2)” from “(3-2)”; we generated expressions “(10+2)/4” and “(10-2)*4” for the expression “(10-2)/4”. We generate all possible perturbed
expressions for a given answer expression, and ask
for problem text modification for each one of them.
We show the annotators the original problem text
_p paired with a perturbed answer a[′]. The instructions_
advised them to copy over the given problem text,
and modify it as little as possible so that the given
math expression is now the solution to this modified
problem. They were also instructed not to add or
delete the numbers mentioned in the problem. If the
original problem mentions two “3”s and one “2”, the
-----
evaluate on these two subsets on a 5-fold cross validation. Columns 4-6 of Table 3 show the performance of systems on this setting. KNOWLEDGE significantly outperforms other systems on Aggregate
and AggregateLex, and is similar to UNITDEP on
AggregateTmpl. There is a 9% absolute improvement on AggregateLex, showing that KNOWLEDGE
is significantly m ore r obust t o l ow l exical overlap
between train and test. The last column of Table 4
also shows that the other systems do not learn the
right abstraction, even when trained on Aggregate.
**6.5** **Analysis**
**Coverage** **of** **the** **Declarative** **Rules** We chose
math concepts and declarative rules based on their
prevalance in arithmetic word problems. We found
that the four concepts introduced in this paper cover
almost all the problems in our dataset; only missing
4 problems involving application of area formulas.
We also checked earlier arithmetic problem datasets
from the works of Hosseini et al. (2014); Roy
and Roth (2015), and found that the math concepts
and declarative rules introduced in this paper
cover all their problems.
A major challenge in applying these concepts and
rules to algebra word problems is the use of variables
in constructing equations. Variables are often implicitly described, and it is difficult to extract units,
dependent verbs, associated subjects and objects for
the variables. However, we need these extractions in
order to apply our declarative rules to combine variables. There has been some work to extract meaning
of variables (Roy et al., 2016) in algebra word problems; an extension of this can possibly support the
application of rules in algebra word problems. We
leave this exploration to future work.
Higher standard word problems often require the
application of math formulas like ones related to
area, interest, probability, etc. Extending our approach to handle such problems will involve encoding math formulas in terms of concepts and
rules, as well as adding concept specific features to
the learned predictors. The declarative rules under
the Explicit Math category currently handles simple cases; this set needs to be augmented to handle
complex number word problems found in algebra
datasets.
**Gains** **achieved** **by** **Declarative** **Rules** Table 5
modified problem should also contain two “3”s and
one “2”.
We manually pruned problems which did not
yield the desired solution a[′], or were too different
from the input problem p. This procedure gave us
a set of 661 new word problems, which we refer to
as Perturb. Finally we augment AllArith with the
problems of Perturb, and call this new dataset Ag**gregate. Aggregate has a total of 1492 problems.**
The addition of the Perturb problems ensures
that the dataset now has problems with similar lexical items generating different answers. This minimizes the bias that we discussed in subsection 6.1.
To quantify this, consider the probability distribution over operations for a quantity q, given that word
_w is present in the neighborhood of q. For an un-_
biased dataset, you will expect the entropy of this
distribution to be high, since the presence of a single word in a number neighborhood will seldom be
completely informative for the operation. We compute the average of this entropy value over all numbers and neighborhood words in our dataset. AllArith and Perturb have an average entropy of 0.34 and
0.32 respectively, whereas Aggregate’s average entropy is 0.54, indicating that, indeed, the complete
data set is significantly less biased.
**6.3** **Generalization from Biased Dataset**
First, we evaluate the ability of systems to generalize from biased datasets. We train all systems on
AllArith, and test them on Perturb (which was created by perturbing AllArith problems). The last column of Table 3 shows the performance of systems
in this setting. KNOWLEDGE outperforms all other
systems in this setting with around 19% absolute improvement over UNITDEP. This shows that declarative knowledge allows the system to learn the correct
abstractions, even from biased datasets.
**6.4** **Results on the New Dataset**
Finally, we evaluate the systems on the Aggregate dataset. Following previous work (Roy and
Roth, 2017), we compute two subsets of Aggregate
comprising 756 problems each, using the MAWPS
(Koncel-Kedziorski et al., 2016) system. The first,
called AggregateLex, is one with low lexical repetitions, and the second called AggregateTmpl is one
with low repetitions of equation forms. We also
169
-----
Isabel had 2 pages of math homework and 4 pages
of reading homework. If each page had 5 problems on it, how many problems did she have to
complete total ?
Tim’s cat had kittens. He gave 3 to Jessica and 6
to Sara . He now has 9 kittens . How many kittens
did he have to start with ?
Mrs. Snyder made 86 heart cookies. She made
36 red cookies, and the rest are pink. How many
pink cookies did she make?
Table 5: Examples which KNOWLEDGE gets correct, but
UNITDEP does not.
shows examples of problems which KNOWLEDGE
gets right, but UNITDEP does not. The gains can
be attributed to the injection of declarative knowledge. Earlier systems like UNITDEP try to learn the
reasoning required for these problems from the data
alone. This is often difficult in the presence of limited data, and noisy output from NLP tools. In contrast, we learn probabilistic models for interpretable
functions like coreference, hyponymy, etc., and then
use declarative knowledge involving these functions
to perform reasoning. This reduces the complexity
of the target function to be learned considerably, and
hence we end up with a more robust model.
**Effect of Beam Size We used a beam size of 1000 in**
all our experiments. However, we found that varying the beam size does not effect the performance
significantly. Even lowering the beam size to 100
reduced performance by only 1%.
**Weakness of Approach A weakness of our method**
is the requirement to have all relevant declarative
knowledge during training. Many of the component
functions (like coreference) are learned through latent alignments with no explicit annotations. If too
many problems are not explained by the knowledge,
the model will learn noisy alignments for the component functions.
Table 6 shows the major categories of errors with
examples. 26% of the errors are due to extraneous
number detection. We use a set of rules based on
units of numbers, to detect such irrelevant numbers.
As a result, we fail to detect numbers which are irrelevant due to other factors, like associated entities,
or associated verb. We can potentially expand our
rule based system to detect those, or replace it by a
learned module like Roy and Roth (2015). Another
170
|Irrelevant Number Detection (26%)|Sally had 39 baseball cards, and 9 were torn. Sara bought 24 of Sally’s baseball cards . How many baseball cards does Sally have now?|
|---|---|
|Parsing Rate Component (26%)|Mary earns $46 cleaning a home. How many homes did she clean, if she made 276 dollars?|
|Coreference (22%)|There are 5 people on the Green Bay High track team. If a relay race is 150 meters long, how far will each team member have to run?|
Table 6: Examples of errors made by KNOWLEDGE
major source of errors is parsing of rate components;
that is, understanding “earns $46 cleaning a home”
should be normalized to “$46 per home”. Although
we learn a model for coreference function, we make
several mistakes related to coreference. For the example in Table 6, we fail to detect the coreference
between “team member” and “people”.
**7** **Conclusion**
In this paper, we introduce a framework for incorporating declarative knowledge in word problem solving. Our knowledge based approach outperforms
all other systems, and also learns better abstractions
from biased datasets. Given that the variability in
text is much larger than the number of declarative
rules that governs Math word problems, we believe
that this is a good way to introduce Math knowledge
to a natural language understanding system. Consequently, future work will involve extending our approach to handle a wider range of word problems,
possibly by supporting better grounding of implicit
variables and including a larger number of math concepts and declarative rules. An orthogonal exploration direction is to apply these techniques to generate summaries of financial or sports news, or generate statistics of war or gun violence deaths from
news corpora. A straightforward approach can be
to augment news documents with a question asking
for the required information, and treating this augmented news document as a math word problem.
Code and dataset are available at https://
github.com/CogComp/arithmetic.
-----
**Acknowledgments**
We are grateful to anonymous reviewers for their insightful comments. This work is funded by DARPA
under agreement number FA8750-13-2-0008, and a
grant from the Allen Institute for Artificial Intelligence (allenai.org).
**References**
Yonatan Bisk and Julia Hockenmaier. 2012. Simple robust grammar induction with combinatory categorial
grammars. In Proceedings of the Twenty-Sixth Confer_ence on Artificial Intelligence (AAAI-12), pages 1643–_
1649, Toronto, Canada, July.
Ming-Wei Chang, Lev Ratinov, and Dan Roth.
2007. Guiding semi-supervision with constraintdriven learning. In Proceedings of the Annual Meet_ing of the Association for Computational Linguistics_
_(ACL), pages 280–287, Prague, Czech Republic, 6._
Association for Computational Linguistics.
Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012.
Structured learning with constrained conditional models. _Machine_ _Learning,_ 88(3):399–431, 6.
Danqi Chen and Christopher D. Manning. 2014. A fast
and accurate dependency parser using neural
networks. In _Proceedings_ _of_ _the_ _2014_ _Conference_ _on_
_Empirical_ _Methods_ _in_ _Natural_ _Language_ _Processing_
_(EMNLP),_ pages 740–750, Doha, Qatar, October.
Association for Computational Linguistics.
James Clarke, Dan Goldwasser, Ming-Wei Chang, and
Dan Roth. 2010. Driving semantic parsing from the
world’s response. In Proc. of the Conference on Com_putational Natural Language Learning (CoNLL), 7._
Gerald DeJong. 1993. Investigating explanation-based
_learning. Kluwer International Series in Engineering_
and Computer Science. Kluwer Academic Publishers.
Gerald DeJong. 2014. Explanation-based learning. In
T. Gonzalez, J. Diaz-Herrera, and A. Tucker, editors,
_CRC Computing Handbook: Computer Science and_
_Software Engineering, pages 66.1–66.26. CRC Press,_
Boca Raton.
Kuzman Ganchev, Joao Grac¸a, Jennifer Gillenwater, and
Ben Taskar. 2010. Posterior regularization for structured latent variable models. _Journal of Machine_
_Learning Research._
Kevin Gimpel and Mohit Bansal. 2014. Weaklysupervised learning with cost-augmented contrastive
estimation. In Proceedings of the 2014 Conference
_on Empirical Methods in Natural Language Process-_
_ing (EMNLP), pages 1329–1341, Doha, Qatar, Octo-_
ber. Association for Computational Linguistics.
171
Mark Hopkins, Cristian Petrescu-Prahova, Roie Levin,
Ronan Le Bras, Alvaro Herrasti, and Vidur Joshi.
2017. Beyond sentential semantic parsing: Tackling the math SAT with a cascade of tree transducers.
In Proceedings of the 2017 Conference on Empirical
_Methods in Natural Language Processing, pages 806–_
815, Copenhagen, Denmark, September. Association
for Computational Linguistics.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
Etzioni, and Nate Kushman. 2014. Learning to solve
arithmetic word problems with verb categorization. In
_Proceedings of the Conference on Empirical Methods_
_for Natural Language Processing (EMNLP)._
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian
Yin. 2017. Learning fine-grained expressions to solve
math word problems. In Proceedings of the 2017 Con_ference on Empirical Methods in Natural Language_
_Processing, pages 816–825, Copenhagen, Denmark,_
September. Association for Computational Linguistics.
Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems.
In Proceedings of the 2017 Conference on Empiri_cal Methods in Natural Language Processing, pages_
2021–2031. Association for Computational Linguistics, September.
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter
Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semistructured knowledge. In Proceedings of the Interna_tional Joint Conference on Artificial Intelligence (IJ-_
_CAI)._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Ang. 2015. Parsing Algebraic Word Problems into Equations. Trans_actions of the Association of Computational Linguis-_
_tics._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. MaWPS:
A math word problem repository. In Proceedings of
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics._
Nate Kushman, Luke Zettlemoyer, Regina Barzilay, and
Yoav Artzi. 2014. Learning to automatically solve
algebra word problems. In Proceedings of the Annual
_Meeting of the Association for Computational Linguis-_
_tics (ACL), pages 271–281._
Percy Liang, Michael Jordan, and Dan Klein. 2011.
Learning dependency-based compositional semantics.
In Proceedings of the Annual Meeting of the Associa_tion for Computational Linguistics (ACL)._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
-----
problems. In Proceedings of the 55th Annual Meeting
_of the Association for Computational Linguistics._
Takuya Matsuzaki, Takumi Ito, Hidenao Iwane, Hirokazu
Anai, and Noriko H. Arai. 2017. Semantic parsing of
pre-university math problems. In Proceedings of the
_55th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
2131–2141, Vancouver, Canada, July. Association for
Computational Linguistics.
Arindam Mitra and Chitta Baral. 2016. Learning to use
formulas to solve simple arithmetic problems. In Pro_ceedings of the 54th Annual Meeting of the Association_
_for Computational Linguistics._
Tahira Naseem, Harr Chen, Regina Barzilay, and Mark
Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of
_the 2010 Conference on Empirical Methods in Natural_
_Language Processing, EMNLP ’10, pages 1234–1244,_
Stroudsburg, PA, USA. Association for Computational
Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference
_on Empirical Methods in Natural Language Process-_
_ing (EMNLP)._
Dan Roth and Wen-Tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, Proceedings of the Conference on Computational
_Natural Language Learning (CoNLL), pages 1–8. As-_
sociation for Computational Linguistics.
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proc. of the Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP)._
Subhro Roy and Dan Roth. 2017. Unit dependency
graph and its application to arithmetic word problem
solving. In Proceedings of the Conference on Artifi_cial Intelligence (AAAI)._
Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016.
Equation parsing: Mapping sentences to grounded
equations. In Proceedings of the Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP)._
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning.
In Empirical Methods in Natural Language Process_ing._
Noah Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction.
In Proceedings of the Annual Meeting of the Associ_ation for Computational Linguistics (ACL), ACL-44,_
172
pages 569–576, Stroudsburg, PA, USA. Association
for Computational Linguistics.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang,
and Wen-tau Yih. 2016. Learning from explicit and
implicit supervision jointly for algebra word problems.
In Proceedings of the 2016 Conference on Empirical
_Methods in Natural Language Processing._
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using quadratic
programming. In Proceedings of the 2015 Conference
_on Empirical Methods in Natural Language Process-_
_ing._
-----
| [
"Subhro, Roy",
"Kristina, Toutanova",
"Mark, Johnson",
"Dan, Roth",
"Brian, Roark",
"Lillian, Lee"
] | 2018-01-01T00:00:00 | null | false | 89 | 18 | null | https://aclanthology.org/Q18-1012 | null | https://www.semanticscholar.org/paper/83cc0d20275fdc3a97cceacdc41fbb19953fa901 |
Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions | Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs’ specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantity-pair features and question-related features in MWPs respectively. The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS. | The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS. | # Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions
**Jierui Li[1], Lei Wang[1][∗], Jipeng Zhang[1], Yan Wang[2], Bing Tian Dai[3], Dongxiang Zhang[45]**
1Center for Future Media and School of Computer Science & Engineering, UESTC, 2Tencent AI Lab
3School of Information Systems, Singapore Management University, 4Afanti Research, 5Zhejiang University
_{lijierui, zhangjipeng20}@std.uestc.edu.cn, [email protected]_
[email protected], [email protected], [email protected]
**Problem: For a birthday party Tom bought 4**
regular sodas and 52 diet sodas. If his fridge
would only hold 7 on each shelf, how many
shelves would he fill up?
**Equation: x = (4.0 + 52.0)/7.0**
**Solution: 8**
Table 1: A math word problem.
The core idea is to leverage the immense capacity of neural networks to strengthen the process
of equation generating. Compared to statistical
machine learning-based methods (Kushman et al.,
2014; Mitra and Baral, 2016; Roy and Roth, 2018;
Zhou et al., 2015; Huang et al., 2016) and semantic parsing-based methods (Shi et al., 2015;
Koncel-Kedziorski et al., 2015; Roy and Roth,
2015; Huang et al., 2017), these methods do not
need hand-crafted features and achieve high performance on large datasets. However, they lack
in capturing the specific MWPs features, which
are an evidently vital component in solving MWP.
More related work and feature-related information
can be found in Zhang et al. (2018).
Inspired by recent work on modeling locality using multi-head attention (Li et al., 2018;
Yang et al., 2018, 2019), we introduce a group
attention that contains different attention mechanisms to extract various types of MWPs features. More explicitly, there are four kinds of attention mechanisms: 1) Global attention to grab
global information; 2) Quantity-related attention
to model the relations between the current quantity and its neighbor-words; 3) Quantity-pair attention to acquire the relations between quantities; 4) Question-related attention to capture the
connections between the question and quantities.
The experimental results show that the proposed
model establishes the state-of-the-art performance
**Abstract**
Several deep learning models have been
proposed for solving math word problems
(MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed
for MWPs. To utilize the merits of deep
learning models with simultaneous consideration of MWPs’ specific features, we propose
a group attention mechanism to extract global
features, quantity-related features, quantitypair features and question-related features in
MWPs respectively. The experimental results
show that the proposed approach performs significantly better than previous state-of-the-art
methods, and boost performance from 66.9%
to 69.5% on Math23K with training-test split,
from 65.8% to 66.9% on Math23K with 5-fold
cross-validation and from 69.2% to 76.1% on
MAWPS.
**1** **Introduction**
Computer systems, dating back to 1960s, have
been developing to automatically solve math word
problems (MWPs) (Feigenbaum and Feldman,
1963; Bobrow, 1964). As illustrated in Table 1,
when solving this problem, machines are asked to
infer “how many shelves would Tom fill up ” based
on the textual problem description. It requires systems having the ability to map the natural language
text into the machine-understandable form, reason
in terms of sets of numbers or unknown variables,
and then derive the numeric answer.
In recent years, a growing number of deep
learning models for MWPs (Wang et al., 2017;
Ling et al., 2017; Wang et al., 2018b,a; Huang
et al., 2018a,b; Wang et al., 2019) have drawn
inspiration from advances in machine translation.
_∗_ corresponding author
6162
-----
on both Math23K and MAWPS datasets. In addtion, we release the source code of our model in
Github[1].
**2** **Background: Self-Attention Network**
Self-attention networks have shown impressive results in various natural language processing tasks,
such as machine translation (Vaswani et al., 2017;
Shaw et al., 2018) and natural language inference (Shen et al., 2018) due to their flexibility in
parallel computation and power of modeling long
dependencies. It can model pairwise relevance by
calculating attention weights between pairs of elements of an input sequence. In Vaswani et al.
(2017), they propose a self-attention computation
module, known as “Scaled Dot-Product Attention”(SDPA). It is used as the basic unit of multihead attention. This module’s input contains query
matrix Q ∈ R[m][×][d][k], key matrix K ∈ R[m][×][d][k] and
value matrix V ∈ R[m][×][d][v], where m is the number of input tokens, dk is the dimension of query
or key vector, dv is the dimension of value vector.
Output can be computed by:
head = softmax( _[QK][T]_ )V, (1)
_√dk_
As Vaswani et al. (2017) found, performing attention by projecting the queries, keys, and values into subspace with different learnable projection functions instead of a single attention can enhance the capacity to capture various context information. More specifically, this attention model
first transforms Q, K, and V into _Qh, Kh, Vh_
_{_ _}_
with weights _WQ[h][, W]K[ h]_ _[, W]V[ h]_
the output features { head1, head[}][, and then obtains]2, _, headk_
_{_ _· · ·_ _}_
by SDPA, where k is the number of SDPA modules. Finally, these output features are concatenated and projected to produce the final output
_′_
state O .
**3** **Approach**
In this section, we introduce how the proposed
framework works and the four different types of
attention we designed.
**3.1** **Overview**
We propose a sequence-to-sequence (SEQ2SEQ)
model with group attention to capture different
types of features in MWPs. The SEQ2SEQ model
[1 https://github.com/lijierui/](https://github.com/lijierui/group-attention)
[group-attention](https://github.com/lijierui/group-attention)
( … !# !" + <E>
…
Attention
Group Attention Block
$[&] …
Bi-LSTM
…
LSTM
' …
Janet has 57 apples […] <S>
Add&Norm
Feed Forward
Add&Norm
Group Attention
- K )
Figure 1: Framework of our approach.
takes the text of the whole problem as the input
and corresponding equation as the output. Specifically, the group attention consists of four different types of multi-head attention modules. As
illustrated in Figure 1, the pre-processed input
_X =_ _x1,_ _, xm_ is transformed into H _[e]_ =
_{_ _· · ·_ _}_
_{h[e]1[,][ · · ·][, h]m[e]_ _[}][ through Bi-LSTM. We set][ Q][ =]_
_K = V = H_ _[e]. The output of the group atten-_
_′_
tion O is produced by:
_′_
_O_ = GroupAtt(Q, K, V), (2)
Following the same paradigm in (Vaswani et al.,
2017), we add a fully-connected feed forward
layer to the multi-head attention mechanism layer
(i.e., group attention), and each layer is followed
by a residual connection and layer normalization.
Consequently, the output of group attention block
_O is obtained._
During decoding, we employ the pipeline in
(Wang et al., 2018a). The output Y is obtained
through
_yt = Softmax(Attention(h[d]t_ _[, o][j][))][,]_ (3)
where h[d]t [is the hidden state at the][ t][-th step,][ o][j] [is]
the j-th state vector from the output O of the group
attention block.
**3.2** **Pre-Processing of MWPs**
Given a MWP P and its corresponding groudtruth equation, we project words of the MWP
_{through a word embedding matrixwi[P]_ _[}]i[m]=1_ [into word embedding vectors] E, i.e.,[ {][e] ei[P][P]i[}]i[m]=1=
_Ewi[P]_ [. Considering the diversity of quantities in]
natural language, we follow the work of Wang
et al. (2017) which proposed to map quantities
6163
-----
Figure 2: Example for how to separate MWPs.
into special tokens in the problem text by the following two rules: 1) All the quantities that appear in the MWP are determined if they are significant quantities that will be used in the equation using Significant Number Identify (SNI); 2)
All recognized significant quantities in the MWP
_P are mapped to a list of mapped quantity tokens_
_n1, ..., nl_ in terms of their appearance order in
_{_ _}_
the problem text, where l is the number of quantities. Through the above rules, the mapped MWP
text X = _x1,_ _, xm_ that will be used as the
_{_ _· · ·_ _}_
input of the SEQ2SEQ model can be acquired.
In addition, the quantity tokens in the equation
are also substituted according to the corresponding mapping in problem text. For example, the
mapped quantity tokens and the mapped equation
of the problem in Table 1 are {n1 = 4, n2 =
52, n3 = 7} and (n1 + n2) ÷ n3 respectively. To
address the issue that a MWP may have more than
one correct solution equations (e.g., 3×2 and 2×3
are both correct equations to solve the problem
”How many apples will Tom eat after 3 days if he
eats 2 apples per day?”), we normalize the equations to postfix expressions following the rules in
Wang et al. (2018a), ensuring that every problem
is corresponding to a unique equation. Thus, we
can obtain the mapped equation Eq that will be regarded as the target sequence.
**3.3** **Group Attention**
With the aim of implementing group attention,
as illustrated in Figure 2, we separate the problem text X = _x1,_ _, xm_ into quantity spans
_{_ _· · ·_ _}_
_Xquant = {Xquant,1, · · ·, Xquant,l} and the ques-_
tion span Xquest. The quantity span includes one
or more quantity and their neighborhood words,
and the question span consists of words of the
question. For simplicity, the spans are separated
by commas and periods, which naturally separate
the sentence semantically and each span often contains one quantity, and spans with quantity (but not
last) are considered as quantity spans while the last
span is considered as question span since it always
contains the question. By doing this, spans do not
Figure 3: Group attention: (a) Global attention; (b)
Quantity-related attention; (c) Quantity-pair attention;
(d) Question-related attention.
overlap with each other.
As illustrated in Figure 3, following how
the problem text is divided, _{Q, K, V } are_
masked into the input of group attention,
_Qg, Kg, Vg_, _Qc, Kc, Vc_, _Qp, Kp, Vp_ and
_{_ _}_ _{_ _}_ _{_ _}_
_Qq, Kq, Vq_, where g, c, p, and q are the
_{_ _}_
notations of global, quantity-related, quantitypair and question-related attention. After that,
_Og, Oc, Op, Oq_ are computed by different
_{_ _}_
groups of SDPA modules. The output of group
attention O is produced by concatenating and projecting again:
_′_
_O_ = Concat(Og, Oc, Op, Oq), (4)
We will describe four types of group attention
in detail in the following passage.
**Global Attention:** Document-level features
play an important role in distinguishing the category of MWPs and quantities order in equations. To capture these features from a global perspective, we introduce a type of attention named
as global attention, which computes the attention
vector based on the whole input sequence.
For Qg, Kg, and Vg, we set them to H _[e]. The_
output Og can be obtained by SDPA modules belonging to global attention. For example, the word
“apple” illustrated in Figure 2 will attend to the
words in the whole problem text from “Janet” to
“?”.
**Quantity-Related** **Attention:** The words
around quantity usually provide beneficial clues
for MWPs solving. Hence, we introduce quantityrelated attention, which focuses on the question
span or quantities span where the current quantity
resides.
6164
-----
For i-th span, its Qc, Kc, and Vc are all derived
from Xquant,i within its own part. For example, as
illustrated in Figure 2, the word “she” only attends
to the words in the 2-nd quantity span “She finds
another 95,”.
**Quantity-Pair Attention: The relationship be-**
tween two quantities is of great importance in determining their associated operator. We design
an attention module called quantity-pair attention,
which is used to model this relationship between
quantities.
The question span can be viewed as the quantity span containing an unknown quantity. Thus,
the computation process consists of two parts: 1)
Attention between quantities: the query Qp is derived from Xquant,i, and corresponding Kp and Vp
are stemmed from Xquant,j(j = i). For example,
_̸_
as illustrated in Figure 2, the word “has” in the 1-st
quantity span can only attend to words from the 2nd quantity span; 2) Attention between quantities
and question: the query Qp is originated Xquest
within the question span, and corresponding Kp
and Vp are derived from Xquant. For example, as
illustrated in Figure 2, the word “How” attends to
the words in the quantity spans from “Janet” to
“95,”.
**Question-Related Attention: The question can**
also derive distinguishing information such as
whether the answer value is positive. Thus, we
propose question-related attention, which is utilized to model the connections between question
and problem description stem.
There are also two parts when modeling this
type of relation: 1) Attention for quantity span: the
query Qq is derived from Xquant,i, the corresponding Kq and Vq are stemmed from Xquest. For example, as illustrated in Figure 2, the word “apples”
in quantity span only attends to the words from
the question span; 2) Attention for question span:
for the query Qq corresponding to Xquest, the corresponding Kq and Vq are extracted according to
_Xquant. For example, as illustrated in Figure 2, the_
word “does” in question span attends to the words
in all the quantity spans.
**4** **Experiment**
**4.1** **Experimental Setup**
We evaluate the proposed model on these
datasets, Math23K (Wang et al., 2017) and
MAWPS (Koncel-Kedziorski et al., 2016).
**Datasets: Math23K is collected from multiple**
online educational websites. This dataset contains
23,162 Chinese elementary school level MWPs.
MAWPS is another large scale dataset which owns
2,373 arithmetic word problems after harvesting
ones with a single unknown variable.
**Evaluation Metrics: We use answer accuracy**
to evaluate our model. The accuracy calculation
follows a simple formula. If a generated equation produces an answer equal to the corresponding ground truth answer, we consider it to be right.
**Implementation details: For Math23K, we fol-**
low the training and test set released by (Wang
et al., 2017), and we also evaluate our proposed
method with 5-fold cross-validation in main results table. We adopt the pre-trained word embeddings with dimension set to 128 and use a twolayer Bi-LSTM with 256 hidden units and a group
attention with four different functional 2-head attention as the encoder, and a two-layer LSTM
with 512 hidden units as the decoder. Dropout
probabilities for word embeddings, LSTM and
group attention are all set to 0.3. The number
of epochs and mini-batch size are set to 300 and
128 respectively. As to the optimizer, we use the
Adam optimizer with β1 = 0.9, β2 = 0.98 and
_e = 10[−][9]. Refer to (Vaswani et al., 2017), we_
use the same policy to vary the learning rate with
_warmup steps=2000. For MAWPS, we use 5-_
fold cross-validation, and the parameter setting is
similar to those on Math23K.
**Baselines: We compare our approach with re-**
trieval models, deep learning based solvers. The
retrieval models Jaccard and Cosine in (Robaidek
et al., 2018) find the most similar math word
problem in training set under a distance metric and use its equation template to compute
the result. DNS (Wang et al., 2017) first applies a vanilla SEQ2SEQ model with GRU as encoder and LSTM as the decoder to solve MWPs.
In (Wang et al., 2018a), the authors apply BiLSTM with equation normalization to reinforce
the vanilla SEQ2SEQ model. T-RNN (Wang
et al., 2019) launches a two-stage system named as
T-RNN that first predicts a tree-structure template
to be filled, and then accomplishes the template
with operators predicted by the recursive neural
network. In S-Aligned (Chiang and Chen, 2019),
the encoder is designed to understand the semantics of problems, and the decoder focuses on deciding which symbol to generate next over semantic meanings of the generated symbols.
6165
-----
In a parking lot, there are !" cars and motorcycles in total, each
car has !# wheels, and each motorcycle has n& wheels. These
cars have !' wheels in total, so how many motorcycles are there
in the parking lot?
equa,-.!: 0 = (!"!# −!')/(!# −!&)
Attention for which word Quantity-pair attention
Quantity-related attention Question-related attention
Figure 4: An example of attention visualization
**4.3** **Visualization Analysis of Attention**
To better understand how the group attention
mechanism works, we implement an attention visualization on a typical example from Math23K.
As shown in Figure 4, n3 describes how many
wheels a motorcycle has. Through quantity-pair
and quantity-related attention heads, n3 pays attention to all quantities that describe the number
of wheels. Question-related attention helps n3
attend to “motorcycle” in question. In addition,
surprisingly, in the quantity-pair heads, the attention of n3 becomes more focused on the words
“These”, “in total” from “These vehicles have n4
wheels in total”. This indicates part-whole relation(i.e., one quantity is part of a larger quantity), mentioned in (Mitra and Baral, 2016; Roy
and Roth, 2018), which is of great importance in
MWPs solving. Our analysis illustrates that the
hand-crafted grouping can force the model to utilize distinct information and relations conducive
to solving MWPs.
**5** **Conclusion**
In this paper, we introduce a group attention
method which can reinforce the capacity of model
to grab various types of MWPs specific features.
We conduct experiments on two benchmarks and
show significant improvements over a collection
of competitive baselines, verifying the value of our
model. Plus, our ablation study demonstrates the
effectiveness of each group attention mechanism.
**References**
D. Bobrow. 1964. Natural language input for a computer problem solving system. In Semantic infor_mation processing, pages 146–226. MIT Press._
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-aligned equation generation for
**4.2** **Main Results**
|45.6 38.2|- -|
|---|---|
**MAWPS** **Math23K** **Math23K***
Jaccard 45.6 - 47.2
Cosine 38.2 - 23.8
DNS 59.5 - 58.1
Bi-LSTM 69.2 66.7 T-RNN 66.8 66.9 S-Aligned - - 65.8
GROUP-ATT **76.1** **69.5** **66.9**
Table 2: Model comparison. Notice that Math23K
means the open training-test split and Math23K*
means 5-fold cross-validation.
As illustrated in Table 2, we can see that
retrieval approaches work poorly on both two
datasets. Our method named as GROUP-ATT performs substantially better than existing deep learning based methods, increasing the accuracy from
66.9% to 69.5% on Math23K based on trainingtest split, from 65.8% to 66.9% on Math23K with
5-fold cross-validation and from 69.2% to 76.1%
on MAWPS. In addition, DNS and T-RNN also
boost the performance by integrating with retrieval
methods, while (Wang et al., 2018a) improves the
performance by combining different SEQ2SEQ
models. However, we only focus on improving
the performance of single model. It is worth noting that GROUP-ATT also achieves higher accuracy than the state-of-the-art ensemble models
(Wang et al., 2019) (68.7% on Math23K based on
training-test split, 67.0% on MAWPS).
**Math23K**
Bi-LSTM 66.7
w/ Global Attention 68.2
w/ Quantity-Related Attention 68.2
w/ Quantity-Pair Attention 67.7
w/ Question-Related Attention 68.1
Table 3: The ablation study to quantify the role of each
type of attention in group attention.
In addition, we perform an ablation study to empirically examine the ability of designed group attentions. We adopt the same parameter settings as
GROUP-ATT while applying a single kind of attention with 8 heads. Table 3 shows the results of
ablation study on Math23K. Although each specified attention tries to catch related information
alone, it still outperforms Bi-LSTM by a margin
from 1.0% to 1.5%, showing its effectiveness.
6166
-----
solving and reasoning math word problems. In
_NAACL-HLT._
Edward A. Feigenbaum and Julian Feldman. 1963.
_Computers and Thought. McGraw-Hill, Inc., New_
York, NY, USA.
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
2018a. Neural math word problem solver with reinforcement learning. In COLING, pages 213–223.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian
Yin. 2017. Learning fine-grained expressions to
solve math word problems. In EMNLP, pages 805–
814.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers
solve math word problems? large-scale dataset construction and evaluation.
Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu
Zhou, and Jian Yin. 2018b. Using intermediate representations to solve math word problems. In ACL,
pages 419–428.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. TACL, 3:585–597.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
MAWPS: A math word problem repository. In
_NAACL, pages 1152–1157._
Nate Kushman, Luke Zettlemoyer, Regina Barzilay,
and Yoav Artzi. 2014. Learning to automatically
solve algebra word problems. In ACL, pages 271–
281.
Jian Li, Zhaopeng Tu, Baosong Yang, Michael R.
Lyu, and Tong Zhang. 2018. Multi-head attention
with disagreement regularization. In EMNLP, pages
2897–2903.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In ACL, pages 158–167.
Arindam Mitra and Chitta Baral. 2016. Learning to
use formulas to solve simple arithmetic problems.
In ACL.
Benjamin Robaidek, Rik Koncel-Kedziorski, and Hannaneh Hajishirzi. 2018. Data-driven methods for
solving algebra word problems. CoRR.
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In EMNLP, pages 1743–
1752.
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. TACL,
6:159–172.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani.
2018. Self-attention with relative position representations. In NAACL-HLT.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang,
Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In AAAI.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In EMNLP, pages 1132–1142.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. In NIPS, pages 5998–6008.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math word
problem to an expression tree. In EMNLP.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In AAAI.
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers
with recursive neural networks. In AAAI.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In
_EMNLP, pages 845–854._
Baosong Yang, Jian Li, Derek F. Wong, Lidia S. Chao,
Xing Wang, and Zhaopeng Tu. 2019. Context-aware
self-attention networks. CoRR, abs/1902.05766.
Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018.
Modeling localness for self-attention networks. In
_EMNLP, pages 4449–4458._
Dongxiang Zhang, Lei Wang, Nuo Xu, Bing Tian Dai,
and Heng Tao Shen. 2018. The gap of semantic
parsing: A survey on automatic math word problem
solvers. arXiv preprint arXiv:1808.07290.
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using
quadratic programming. In EMNLP, pages 817–
822.
6167
-----
| [
"Jipeng, Zhang",
"Jierui, Li",
"Dongxiang, Zhang",
"Lei, Wang",
"Yan, Wang",
"Bing Tian, Dai"
] | 2019-01-01T00:00:00 | ACL 2019 Main | true | 89 | 13 | null | https://www.aclweb.org/anthology/P19-1619 | null | https://www.semanticscholar.org/paper/865f5167c4353d2b120f0469ed1c298bc92794fa |
Graph Representations for Higher-Order Logic and Theorem Proving | This paper presents the first use of graph neural networks (GNNs) for higher-order proof search and demonstrates that GNNs can improve upon state-of-the-art results in this domain. Interactive, higher-order theorem provers allow for the formalization of most mathematical theories and have been shown to pose a significant challenge for deep learning. Higher-order logic is highly expressive and, even though it is well-structured with a clearly defined grammar and semantics, there still remains no well-established method to convert formulas into graph-based representations. In this paper, we consider several graphical representations of higher-order logic and evaluate them against the HOList benchmark for higher-order theorem proving. | This paper presents the first use of graph neural networks (GNNs) for higher-order proof search and demonstrates that GNNs can improve upon state-of-the-art results in this domain. | null | [
"Aditya, Paliwal",
"Sarah, Loos",
"Markus, Rabe",
"Kshitij, Bansal",
"Christian, Szegedy"
] | 2020-02-01T00:00:00 | AAAI 2020 Knowledge Representation and Reasoning | false | 87 | 11 | null | https://www.semanticscholar.org/paper/4b127897595af5a97c83860eec0540de5510f646 | null | https://www.semanticscholar.org/paper/4b127897595af5a97c83860eec0540de5510f646 |
Learning Reasoning Strategies in End-to-End Differentiable Proving | Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable. All source code and datasets are available online, at https://github.com/uclnlp/ctp. | Conditional Theorem Provers is presented, an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation and is shown to show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable. | # Learning Reasoning Strategies in End-to-End Differentiable Proving
**Pasquale Minervini** [1] **Sebastian Riedel** [1 2] **Pontus Stenetorp** [1] **Edward Grefenstette** [1 2] **Tim Rocktäschel** [1 2]
**Abstract**
Attempts to render deep learning models interpretable, data-efficient, and robust have seen some
success through hybridisation with rule-based systems, for example, in Neural Theorem Provers
(NTPs). These neuro-symbolic models can induce interpretable rules and learn representations
from data via back-propagation, while providing
logical explanations for their predictions. However, they are restricted by their computational
complexity, as they need to consider all possible
proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We
present Conditional Theorem Provers (CTPs), an
extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation.
We show that CTPs are scalable and yield stateof-the-art results on the CLUTRR dataset, which
tests systematic generalisation of neural models
by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better
link prediction results on standard benchmarks
in comparison with other neural-symbolic models, while being explainable. All source code and
datasets are available online. [1]
**1. Introduction**
Neural Natural Language Understanding (NLU) systems—
wherein a deep neural network is used as a function approximator (LeCun et al., 2015; Goodfellow et al., 2016)—have
been extremely successful at various natural language tasks,
such as Question Answering (QA) and Natural Language
Inference (NLI) (Goldberg, 2017), achieving strong generalisation results on datasets available for these tasks (Seo
et al., 2017; Hu et al., 2018; Shen et al., 2016; Huang et al.,
1UCL Centre for Artificial Intelligence, University College London [2]Facebook AI Research. Correspondence to: Pasquale Minervini <[email protected]>.
_Proceedings of the 37_ _[th]_ _International Conference on Machine_
_Learning, Online, PMLR 119, 2020. Copyright 2020 by the au-_
thor(s).
[1At https://github.com/uclnlp/ctp](https://github.com/uclnlp/ctp)
2018). Even strong performance on NLU problems have
been recently achieved with advent of large models pretrained via self-supervision, such as BERT (Devlin et al.,
2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al.,
2019).
**Generalisation in Neural Models** However, there are
growing concerns about the ability of NLU systems, and
neural networks more generally, to generalise in a systematic and robust way (Bahdanau et al., 2019; Lake & Baroni,
2018; Johnson et al., 2017; Sinha et al., 2019). For instance,
Jia & Liang (2017) highlight the brittleness of NLU systems
to adversarial examples, while Gururangan et al. (2018);
Kaushik & Lipton (2018) show that neural NLU models
tend to exploit annotation artefacts and spurious correlations in the data. Furthermore, analysing and supervising
the inner workings of such models is not trivial, due to
their inherent black-box nature (Doshi-Velez & Kim, 2017;
Lipton, 2018).
More generally, Garnelo & Shanahan (2019) emphasise
several limitations of neural models, in terms of i) data
inefficiency and high sample complexity—the need of high
volumes of training data in order to be effective, ii) poor
generalisation—modern neural models may not produce
the correct predictions when exposed to data outside the
training distribution, and iii) lack of interpretability—such
models are black boxes where internal representations and
computations are hardly interpretable by humans.
In this vein, Sinha et al. (2019) measured and compared the
systematic generalisation abilities of several neural models
(including very strong baselines such as BERT (Devlin et al.,
2019) and Graph Attention Networks (GATs) (Velickovic
et al., 2018)) on the task of answering questions about family
relationship graphs, by evaluating on held-out combinations
of reasoning patterns and by adding curated distracting noisy
facts. Interestingly, they found that performance degrades
monotonically for every model in their pool as they increase
the complexity of the relational graph, highlighting the challenge of systematic generalisation (Lake & Baroni, 2018;
Sodhani et al., 2018).
**Neuro-Symbolic Reasoning** A promising direction for
overcoming these issues consists in combining neural mod_els and symbolic reasoning given their complementary_
strengths and weaknesses (d’Avila Garcez et al., 2015;
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
Evans & Grefenstette, 2018; Garnelo & Shanahan, 2019).
We focus on NTPs (Rocktäschel & Riedel, 2017), a family
of neuro-symbolic reasoning models: NTPs are continuous
relaxations of the backward-chaining reasoning algorithm
that replace discrete symbols with their continuous embedding representations.
NTPs have interesting properties: they can jointly learn
representations and interpretable rules from data via backpropagation, and can potentially combine such rules in ways
that may have not been observed during training. However,
a major limitation in NTPs is that, during training, they need
to consider all rules for explaining a given goal or sub-goal.
This quickly renders them ineffective in settings requiring a
large number of rules or reasoning steps.
**Conditional Theorem Provers For addressing limitations**
of NTPs, we propose CTPs, an extension that is able to learn
an adaptive strategy for selecting subsets of rules to consider at each step of the reasoning process. This is achieved
by a select module that, given a goal, produce the rules
needed for proving it. Predicates and constants in the produced rules lie in a continuous embedding space. Hence,
the select module is end-to-end differentiable, and can
be trained jointly with the other modules via gradient-based
optimisation.
**2. End-to-End Differentiable Proving**
NTPs (Rocktäschel & Riedel, 2017) are a continuous relaxation of the backward chaining algorithm (Russell &
Norvig, 2010): this algorithm works backward from the
goal, chaining through rules to find known facts supporting
the proof.
Given a query (or goal) G, backward chaining first attempts
to unify it with the facts available in a given Knowledge Base
(KB). If no matching fact is available, it considers all rules
H :– B, where H denotes the head (or consequence) and B
the body (or premise), and H can be unified with the query
_G resulting in a substitution for the variables contained_
in H. Then, the backward chaining algorithm applies the
substitution to the body B, and recursively attempts to prove
the atoms contained therein.
Backward chaining can be seen as a type of and/or search:
or because the goal can be proven by any rule in the KB,
and and because all the conjuncts in the premise of a rule
must be proven.
**Example 2.1 (Backward Chaining). Consider a KB com-**
posed by the facts p(RICK, BETH) and p(BETH, MORTY),
and by the rule g(X, Y) :– p(X, Z), p(Z, Y), where p and
g denote the relationships parent and grandparent, respectively. The goal G = g(RICK, MORTY) can be proven
by unifying G with the head of the rule g(X, Y), with
the substitution {X/RICK, Y/MORTY}, and then by re
cursively proving the subgoals p(RICK, Z), p(Z, MORTY),
which hold true for the substitution {Z/BETH}. △
NTPs make this reasoning process more flexible and endto-end differentiable by replacing the comparison between
symbols with a soft matching of their respective trainable
dense vector representations. NTPs recursively build a neural network enumerating all possible proof paths for proving
a query given KB, and aggregate all proof scores via max
pooling. They do so by relying on three modules: a unifica_tion module, which compares sub-symbolic representations_
of logic atoms, and mutually recursive or and and modules,
which jointly enumerate all possible proof paths, before the
final aggregation selects the highest scoring one. The whole
process is outlined in Algorithm 1.
**Example** **2.2** (Neural Theorem Provers). Consider
a variant of Example 2.1, where each predicate and
constant lives in a continuous embedding space, i.e.
**_θp:, θq:, θRICK:, θBETH:, θMORTY:_** _∈_ R[k]. Rocktäschel
& Riedel (2017) propose replacing comparisons between symbols with a differentiable similarity measure
_K_ : R[k] _× R[k]_ _→_ [0, 1], such as a Gaussian kernel,
between their embeddings. Their model enumerates
all possible proofs for a goal G, and generates a
_proof score for each of them, given by the minimum_
of all embedding similarities. For instance, if G =
grandPa(RICK, MORTY) = [grandPa, RICK, MORTY],
one candidate proof consists in using the facts
_F_ = p(RICK, BETH) and F _[′]_ = p(BETH, MORTY)
and the rule g(X, Y) :– p(X, Z), p(Z, Y) from the KB,
yielding the score K(θgrandPa, θg). It is important to
mention that NTPs allow unifying symbols like grandPa
and g (which, in this case, share the same semantics), even
though they are lexically different. The score for G is given
by the maximum of all proof scores. △
**3. Conditional Proving Strategies**
The NTPs proposed by Rocktäschel & Riedel (2017) use
a fixed set of rules, either specified by the user or learned
from data via provided rule templates. In this model, given
a goal, there is no hard decision mechanism for deciding
which rules can be used for reformulating a given goal into
subgoals: all rules in the KB need to be considered when
proving each goal. For this reason, NTPs were shown not to
scale to large datasets (Rocktäschel & Riedel, 2017).
**Differentiable Goal Reformulation In order to explicitly**
_learn which rule to consider at each step, we propose the fol-_
lowing solution. Rather than relying on a fixed, potentially
very large set of rules, we propose to dynamically gener_ate a minimal set of rules via a neural network architecture_
conditioned on the goal to prove.
**Example 3.1 (Conditional Theorem Proving). Assume that**
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
**Algorithm 1 Overview of the neural backward chaining algorithm proposed by Rocktäschel & Riedel (2017) – intuitively, it**
recursively proves each goal with all rules in the KB (OR module) and, for each rule, it proves its premise (AND module),
up to d recursion steps.
1: function or(G, d, S)
2: **for H :– B ∈K do**
3: **for S ∈** and (B, d, unify(H, G, S)) do
4: **yield S**
1: function and(B, d, S)
2: **if B = [] or d = 0 then yield S else**
3: **for S[′]** _∈_ or (sub(B0, Sψ), d − 1, S) do
4: **for S[′′]** _∈_ and(B1:, d, S[′]) do
5: **yield S[′′]**
1: function unify(H, G, S = (Sψ, Sρ))
2: _Sψ[′]_ [=][ S][ψ] _i_ _[T][i]_
S
_Hi/Gi_ if Hi
_{_ _}_ _∈V_
with Ti = _Gi/Hi_ if Gi _, Hi_
_{_ _}_ _∈V_ _̸∈V_
∅ otherwise
3: _Sρ[′]_ [= min][ {][S][ρ][}][ S]Hi,Gi _i_ _[,][ θ][G]i_ [)][}]
4: **return (Sψ[′][, S]ρ[′]** [)] _̸∈V_ _[{][K][(][θ][H]_
where V is a set of variables, and A ≜ R[k] _× (R[k]_ _∪V) ×_
(R[k] _∪V) denotes the embedding representation of a goal,_
such as g(RICK, MORTY).
For instance, the select module in Eq. 1 can be implemented by a neural network that, given a goal such as
_G = [θg:, θRICK:, θMORTY:], generates H :– B with H =_
**[θg:, X, Y] and B = [[θp:, X, Z], [θp:, Z, Y]], correspond-**
ing to the symbolic rule g(X, Y) :– p(X, Z), p(Z, Y). If
the positions of the variables in the rule are fixed, the whole
module is end-to-end differentiable with respect to its parameters θ.
**Neural Goal Reformulation Here, we define select as**
a linear function of the goal predicate:
selectθ(G) ≜ [FH(G) :– FB1 (G), FB2 (G)], (2)
where the head and body of the resulting rule are given by
_FH(G) = [fH(θG1_ ), X, Y], FB1 (G) = [fB1 (θG1 ), X, Z],
and FB2 (G) = [fB2 (θG1 ), Z, Y]. Every fi : R[k] _→_ R[k]
is a differentiable function, such as the linear projection
_fi(x) = Wix + b, with Wi ∈_ R[k][×][k] and b ∈ R[k]. Thus,
instead of iterating through a possibly very large set of rules
in the KB K, we can generate a significantly smaller set of
rules, whose generation is conditioned on the goal G and
can be trained end-to-end on downstream reasoning tasks.
**Attentive Goal Reformulation We can incorporate a use-**
ful prior in the select module architecture—namely that
predicate symbols in the rule already exist in the KB, among
the available relations R. A method for incorporating this
prior consists in using the given goal G for generating a
distribution over the set of relations R:
_fi(x) = α ER_
(3)
**_α = softmax (Wix)_** ∆[|R|−][1],
_∈_
where E R[|R|×][k] denotes the predicate embedding
_R ∈_
matrix, Wi ∈ R[k][×|R|], and α is an attention distribution
**_α ∈_** ∆[|R|−][1] over the predicates in R, where ∆[n] denotes
**Algorithm 2 In Conditional Theorem Provers, the set of**
rules is conditioned on the goal G.
1: function or(G, d, S)
2: **for H :– B ∈** selectθ(G) do
3: **for S ∈** and (B, d, unify(H, G, S)) do
4: **yield S**
the goal to prove is G = g(RICK, MORTY), we want the
model to be able to only consider the best rules for proving G, such as g(X, Y) :– p(X, Z), p(Z, Y), rather than all
rules in the KB. Remember that, in NTPs, relations and constants in the KB are represented by their embedding vectors,
and the aforementioned rule selection can be implemented
via a mapping from θg: to [θp:, θp:]. △
Consider the or module in Algorithm 1. Selecting which
rule to use during the proving process for a given goal
_G can be implemented by rewriting the or module as in_
Algorithm 2, where the set of clauses is produced by a
module that, given the goal, generates a set of rules for
proving G.
The core difference between NTPs and CTPs is that, when
proving a goal G, rather than iterating through a possibly
very large set of clauses in the KB K (see Line 2 in the
or module definition in Algorithm 1), the conditional or
module in Algorithm 2 only iterates through a small set of
generated clauses, whose generation is conditioned on G
(see Line 2 in Algorithm 2). Given a goal G, the select
module with parameters θ in Algorithm 2 produces a set of
clauses, each specifying which sub-goals to prove in order
to produce a proof for G.
Note that the select module can be implemented by an
end-to-end differentiable parametrised function f (·) that,
given a goal G, produces a finite sequence of corresponding
subgoals:
selectθ(G) : [ :– ], (1)
_A →_ _A_ _A[∗]_
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
Sandra **_Training_** Teresa Michelle **_Test_** Cyrus **CLUTRR – Sample of Learned Rules**
```
sibling
```
`child` child(X, Y) ⇐ child(X, Z), sibling(Z, Y)
child(X, Y) SO(X, Z), child(Z, Y)
Ellis _⇐_
Molly Victor Kerrie `sibling` grand(X, Y) child(X, Z), child(Z, Y)
`sibling` `sibling` _⇐_
grand(X, Y) ⇐ grand(X, Z), sibling(Z, Y)
`grandparent` `daughter` `grandparent` grand(X, Y) ⇐ SO(X, Z), grand(Z, Y)
in-law(X, Y) child(X, Z), SO(Z, Y)
Debra `parent` Joy Kevin in-law(X, Y) ⇐ ⇐ sibling-in-law(X, Z), child(Z, Y)
Kenneth Brian Valerie siblingsibling((XX,, Y Y)) ⇐ ⇐ childsibling(X,( ZX), Z, uncle), sibling(Y, Z)(Z, Y)
`parent` `sibling` `sibling` `child` sibling(X, Y) ⇐ child(X, Z), child(Y, Z)
**Figure 2: Rules learned on Compositional Language Understand-**
**Figure 1: Example of a train and test instance in CLUTRR – the**
ing and Text-based Relational Reasoning (CLUTRR) by CTPs –
training instance (upper left) is composed by a graph with three
symbols were obtained by decoding the goal reformulations with
edges, while the test instance is composed by ten edges; the task
the nearest predicate in embedding space.
consists in identifying the relationships between the green nodes.
2015), induce algorithmic behaviours (Graves et al., 2014;
Joulin & Mikolov, 2015; Grefenstette et al., 2015; Kaiser &
Sutskever, 2016), and rapidly assimilate new data (Santoro
et al., 2016).
**Neuro-Symbolic Models Differentiable interpreters en-**
able translating declarative or procedural knowledge into
neural networks exhibiting strong inductive biases of that
knowledge (Bošnjak et al., 2017; Rocktäschel & Riedel,
2017; Evans & Grefenstette, 2018). Bošnjak et al. (2017)
propose ∂4, a differentiable abstract machine for the Forth
programming language.
Rocktäschel & Riedel (2017) propose a differentiable implementation for the backward chaining algorithm, effectively
implementing a differentiable Datalog interpreter. Evans
& Grefenstette (2018) propose a differentiable forwardchaining reasoning process, while Donadello et al. (2017)
propose a continuous generalisation of the semantics of
first-order logic. Yang et al. (2017) and Sadeghian et al.
(2019) propose an approach for learning function-free Datalog clauses from KBs by means of a differentiable graph
traversal operator, while Das et al. (2018) propose learning
policies for navigating a KB via reinforcement learning.
A major problem with these approaches is their computational complexity, which renders them unusable for largerscale learning problems. In order to address this issue,
Minervini et al. (2020) propose Greedy NTPs (GNTPs), an
extension to NTPs where, for each goal, only the top-k facts
and rules are considered during the differentiable reasoning
process.
**Neural Module Networks Andreas et al. (2016) introduce**
Neural Module Networks (NMNs), an end-to-end differentiable composition of jointly trained neural modules. Analogously, NTPs can be seen as a recursive differentiable
composition of or and and modules, jointly trained on
downstream reasoning tasks. NMNs allow defining and
**Figure 1: Example of a train and test instance in CLUTRR – the**
training instance (upper left) is composed by a graph with three
edges, while the test instance is composed by ten edges; the task
consists in identifying the relationships between the green nodes.
the standard n-simplex.[2] This is especially helpful if the
embedding size is larger than the number of relationships,
_i.e. k ≫|R|._
**Memory-Based Goal Reformulation A problem with us-**
ing black-box neural networks for reformulating goals into
sub-goals is that it can be difficult to inspect the rules by
analysing the model parameters, as in Rocktäschel & Riedel
(2017). For this reason, we propose an alternative goal reformulation module, where rules are stored in a differentiable
memory.
More precisely, n rules are stored as memory matrices
[M1, . . ., Mm], where each Mi ∈ R[n][×][k] denotes the kdimensional embeddings of the i-th predicates in the n rules.
Then, the goal G is used to compute an attention distribution
over rules α ∈ ∆[n][−][1], where each αi denotes the attention
weight on the i-th rule. The attention distribution can be
formalised as follows:
_fi(x) = α Mi_ (4)
**_α = softmax (Wx) ∈_** ∆[n][−][1],
where each fi : R[k] _→_ R[k] is a differentiable function that,
given the goal, produces an attention distribution α ∈ ∆[n][−][1]
over the rules and for indexing a memory Mi, analogous to
a key-value memory network (Miller et al., 2016).
**4. Related Work**
**Memory-Augmented Networks Memory-augmented neu-**
ral architectures aim at improving the generalisation and
reasoning abilities in neural networks by disentangling representations from computations. By enriching neural networks with a differentiable external memory, these models
were able to multi-hop reason over texts (Sukhbaatar et al.,
2The standard n-simplex ∆n is defined as ∆[n] =
_{(α0, . . ., αn) ∈_ R[n][+1] _|_ _i=0_ **_[α][i][ = 1][ ∧∀][i][ :][ α][i][ ≥]_** [0][}]
[P][n]
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
training end-to-end differentiable composable models, and
interpret and execute their compositions as simple programs.
This is especially useful when dealing with reasoning tasks
from visual and natural language inputs, such as reasoning
over text with arithmetic modules (Gupta et al., 2019) or visual question answering (Andreas et al., 2016). Interestingly,
in this work the structure of the composition is statically
drawn from the data, while Jiang & Bansal (2019) propose
a way of learning the model composition via a coordination
model (Jiang & Bansal, 2019).
**Incorporating Knowledge via Regularisation Another**
branch of works uses symbolic background knowledge to
learn better representations for entities and relationships in a
KB. An early work in this space is Rocktäschel et al. (2015),
which regularise a relation extraction model by penalising
inconsistency with respect to a set of logical constraints and
sampled entities. Minervini et al. (2017a) regularise relation
representations to incorporate equivalence and inversion
axioms for a set of neural link prediction models, while
Demeester et al. (2016) focus on simple implication axioms.
Minervini et al. (2017b) propose adversarially regularising
neural models by identifying, during training, inputs that
violate a given set of constraints, and regularising the model
to decrease the degree of such violations. Xu et al. (2018)
propose a similar idea, by using a semantic loss measuring
to which extent a model matches a set of given constraints.
**5. Experiments**
We evaluate CTPs on two datasets: systematic generalisation on the CLUTRRs dataset, and link prediction in Knowledge Graphs (KGs). Datasets are introduced in Section 5.1,
while baselines are described in Section 5.2.
**5.1. Datasets and Tasks**
**Systematic Generalisation** CLUTRR—Compositional
Language Understanding and Text-based Relational Reasoning (Sinha et al., 2019)—contains a large set of graphs
modelling hypothetical family relationships. Given a set of
family relations, encoded as a graph with a variable number of nodes and edges, the goal is to infer the relationship
between two family members, whose relationship is not
explicitly mentioned. To solve this task, a learning agent
should be able to induce the logical rules governing the
kinship relationships, such as the parent of a parent is a
_grandparent, and use these rules to infer the relationship_
between a given pair of entities.
CLUTRR allows testing a learning agent’s ability for sys_tematic generalisation, by testing on graphs containing com-_
binations of logical rules that were not seen during training.
Each edge in the graph is labelled with one out of nine family relation type from R = { child, grand, in-law, inv-child,
inv-grand, inv-in-law, inv-un, sibling, un }, and the task consists in inferring the relationship between two of the nodes
in the graph. During training, a model is trained to infer
such relationship by traversing a limited number of edges
(such as two, three, and four edges), and during evaluation
the model has to traverse up to ten edges.
Fig. 1 shows an example of a training instance and a test instance in CLUTRR: the training instances consists in a graph
modelling a set of family relations of only three edges, while
the test instance is composed by a graph with ten edges. In
both cases, the task consists in inferring the relationships
between two of the nodes in the graph.
**Link Prediction Furthermore, we evaluate CTPs on neu-**
ral link prediction tasks, following the same evaluation
protocols as Rocktäschel & Riedel (2017) on the Countries (Bouchard et al., 2015), Nations, UMLS, and Kinship (Kemp et al., 2006) datasets.
The Countries dataset contains countries, regions, and subregions as entities, and it is carefully designed to test the
logical reasoning and learning capabilities of neural link
prediction models: queries have the form locatedIn(C, ·
), where the answers are regions. This dataset comes with
three tasks (S1, S2, and S3) each requiring reasoning skills
of increasing complexity; we refer to Rocktäschel & Riedel
(2017) for more details about this dataset.
The Unified Medical Language System (UMLS) dataset is
from bio-medicine: entities are biomedical concepts, and
relations include treatments and diagnoses. The Kinship
dataset contains kinship relationships among members of
the Alyawarra tribe from Central Australia.
**5.2. Models and Baselines**
**Models We consider the following three CTP model vari-**
ants: i) CTPL, where the mappings fi from goal predicates
to rule predicates are implemented via a linear projection,
_ii) CTPA, where it is implemented via attentive goal refor-_
_mulation, as described in Section 3, and iii) CTPM_, where it
is implemented via memory-based goal reformulation, also
described in Section 3.
**Baselines We consider two classes of baselines: graph-**
_based and sequence-based. Graph-based baselines con-_
sist in neural architectures specifically designed for graph
data and. We consider GNTPs, a neuro-symbolic reasoning
model, and two Graph Neural Network (GNN) architectures, namely GATs (Velickovic et al., 2018) and Graph
Convolutional Networks (GCNs) (Kipf & Welling, 2017).
Sequence-based baselines are neural architectures originally
proposed for encoding sequences: by linearising the relational graphs into sequences of subject-predicate-object
triples, we can use such models for encoding graphs. We
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
|CLUT|Col2|TRRG(k=|Col4|=2,3), bas|selines tu|uned on k|k = 3|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
||M C|odel TP L||||||
||C C|TP A TP M||||||
||G G|AT CN||||||
||R L|NN STM||||||
||G C|RU NNH||||||
|C M|C M|NN HA||||||
5 6 7 8 9
Test Story Length
|CLUT|Col2|TRRG(k=|Col4|=2,3), bas|selines tu|uned on k|k = 9|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
||M C|odel TP L||||||
||C C|TP A TP M||||||
||G G|AT CN||||||
||R L|NN STM||||||
||G C|RU NNH||||||
|C M|C M|NN HA||||||
5 6 7 8 9
Test Story Length
1.0
0.9
0.8
0.7
0.6
0.5
0.4
0.3
1.0
0.9
0.8
0.7
0.6
0.5
0.4
0.3
10
10
**4 Hops** **5 Hops** **6 Hops** **7 Hops** **8 Hops** **9 Hops** **10 Hops**
CTPL .98 ± .02 _.98 ± .03_ _.97 ± .05_ _.96 ± .04_ _.94 ± .05_ _.89 ± .07_ _.89 ± .07_
CTPA .99 ± .02 _.99 ± .01_ _.99 ± .02_ _.96 ± .04_ _.94 ± .05_ _.89 ± .08_ _.90 ± .07_
CTPM .97 ± .03 _.97 ± .03_ _.96 ± .06_ _.95 ± .06_ _.93 ± .06_ _.90 ± .06_ _.89 ± .06_
GNTP .49 ± .18 ▽ _.45 ± .21 ▽_ _.38 ± .23 ▽_ _.37 ± .21 ▽_ _.32 ± .20 ▽_ _.31 ± .19 ▽_ _.31 ± .22 ▽_
GAT .91 ± .02 ▼ _.76 ± .06 ▼_ _.54 ± .03 ▼_ _.56 ± .04 ▼_ _.54 ± .03 ▼_ _.55 ± .05 ▼_ _.45 ± .06 ▼_
GCN .84 ± .03 ▼ _.68 ± .02 ▼_ _.53 ± .03 ▼_ _.47 ± .04 ▼_ _.42 ± .03 ▼_ _.45 ± .03 ▼_ _.39 ± .02 ▼_
RNN .86 ± .06 ▽ _.76 ± .08 ▼_ _.67 ± .08 ▼_ _.66 ± .08 ▼_ _.56 ± .10 ▼_ _.55 ± .10 ▼_ _.48 ± .07 ▼_
LSTM .98 ± .04 _.95 ± .03_ _.88 ± .05 ▽_ _.87 ± .04 ▽_ _.81 ± .07 ▽_ _.75 ± .10 ▽_ _.75 ± .09 ▽_
GRU .89 ± .05 ▽ _.83 ± .06 ▼_ _.74 ± .12 ▽_ _.72 ± .09 ▼_ _.67 ± .12 ▽_ _.62 ± .10 ▼_ _.60 ± .12 ▼_
CNNH .90 ± .04 ▽ _.81 ± .05 ▼_ _.69 ± .10 ▼_ _.64 ± .08 ▼_ _.56 ± .13 ▼_ _.52 ± .12 ▼_ _.50 ± .12 ▼_
CNN .95 ± .02 _.90 ± .03 ▼_ _.89 ± .04 ▽_ _.80 ± .05 ▼_ _.76 ± .08 ▽_ _.69 ± .07 ▼_ _.70 ± .08 ▽_
MHA .81 ± .04 ▼ _.76 ± .04 ▼_ _.74 ± .05 ▼_ _.70 ± .04 ▼_ _.69 ± .03 ▼_ _.64 ± .05 ▼_ _.67 ± .02 ▼_
**4 Hops** **5 Hops** **6 Hops** **7 Hops** **8 Hops** **9 Hops** **10 Hops**
CTPL .98 ± .02 _.98 ± .03_ _.97 ± .05_ _.96 ± .04_ _.94 ± .05_ _.89 ± .07_ _.89 ± .07_
CTPA .99 ± .02 _.99 ± .01_ _.99 ± .02_ _.96 ± .04_ _.94 ± .05_ _.89 ± .08_ _.90 ± .07_
CTPM .97 ± .03 _.97 ± .03_ _.96 ± .06_ _.95 ± .06_ _.93 ± .06_ _.90 ± .06_ _.89 ± .06_
GNTP .49 ± .18 ▽ _.45 ± .21 ▽_ _.38 ± .23 ▽_ _.37 ± .21 ▽_ _.32 ± .20 ▽_ _.31 ± .19 ▽_ _.31 ± .22 ▽_
GAT .92 ± .01 ▼ _.73 ± .04 ▼_ _.56 ± .04 ▼_ _.55 ± .04 ▼_ _.54 ± .03 ▼_ _.55 ± .04 ▼_ _.50 ± .04 ▼_
GCN .84 ± .04 ▼ _.61 ± .03 ▼_ _.51 ± .02 ▼_ _.48 ± .02 ▼_ _.45 ± .03 ▼_ _.47 ± .05 ▼_ _.41 ± .04 ▼_
RNN .93 ± .03 ▽ _.91 ± .03 ▽_ _.79 ± .08 ▼_ _.82 ± .06 ▼_ _.75 ± .11 ▽_ _.68 ± .07 ▼_ _.64 ± .07 ▼_
LSTM 1.0 ± .00 **1.0 ± .00** _.95 ± .03_ _.94 ± .04_ _.87 ± .08_ _.86 ± .08_ _.84 ± .09_
GRU .92 ± .05 ▽ _.88 ± .06 ▽_ _.78 ± .09 ▽_ _.77 ± .09 ▽_ _.74 ± .08 ▼_ _.66 ± .10 ▼_ _.65 ± .08 ▼_
CNNH .94 ± .03 ▽ _.86 ± .06 ▽_ _.77 ± .08 ▼_ _.72 ± .08 ▼_ _.64 ± .09 ▼_ _.59 ± .10 ▼_ _.59 ± .09 ▼_
CNN .93 ± .04 ▽ _.86 ± .07 ▽_ _.84 ± .09 ▽_ _.79 ± .08 ▽_ _.77 ± .10 ▽_ _.69 ± .09 ▽_ _.70 ± .11 ▽_
MHA .81 ± .04 ▼ _.76 ± .04 ▼_ _.74 ± .05 ▼_ _.70 ± .04 ▼_ _.69 ± .03 ▼_ _.64 ± .05 ▼_ _.67 ± .02 ▼_
**Figure 3: Results on the CLUTRR dataset after training on stories of lengths {2, 3} and evaluating on stories of length {4, 5, . . ., 10} –**
hyperparameters were fine-tuned on either short stories (left) and long stories (right). Significance testing was assessed via a unequal
variances t-test in comparison with CTPL: ▼ (resp. ▽) represents a p-value lower than 10[−][4] (resp. 10[−][2]).
consider several sequence encoding models, namely Recurrent Neural Networks (RNNs), Long Short-Term Memory Networks (LSTMs) (Hochreiter & Schmidhuber, 1997),
Gated Recurrent Units (GRUs) (Cho et al., 2014), Convolutional Neural Networks (CNNs) (Kim, 2014), CNN with
Highway Encoders (CNNHs) (Kim et al., 2016), MultiHeaded Attention Networks (MHAs) (Vaswani et al., 2017).
**Encoding For both graph-based and sequence-based base-**
lines, we considered two approaches: i) encoding the the
KB and goal independently, as in Sinha et al. (2019), and
_ii) conditioning the KB encoding on the goal._ Let encθe
denote an encoder that, given a set of ground facts (such as
a KB or a goal), produces a continuous k-dimensional representation, and ˆy denote a conditional distribution over the
candidate relationship types. The encoder in item (i), where
the goal G and the KB K are encoded independently, can
be summarised as ˆy = softmax(W [encθe( ); encθe(G)]).
_K_
The encoder in item (ii), where G and K are encoded jointly,
can be summarised as ˆy = softmax(W encθe([G; ])).
_K_
For model selection, we generate a CLUTRR-like dataset
using the code published by Sinha et al. (2019) composed
of training set graphs with {2, 3} edges, and two validation
sets, one with graphs with three edges, and another with
graphs with nine edges. We then select two sets of hyperparameters for each of the baselines: one that maximises the
validation accuracy on graphs with three edges, and another
that maximises the test accuracy on graphs with nine edges.
All details on the hyperparameter selection process can be
found in Appendix A.
To assess the statistical significance of our results, we ran
each of the experiments 10 times, each time with a different
seed, and compared the resulting accuracy values using an
_unequal variances t-test, or Welch’s t-test.[3]_
3We assume accuracy values to be Gaussian-distributed, as they
approach a normal distribution for large numbers of re-runs, due
to the Central Limit Theorem.
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
|CLUTR|Col2|RRG(k=2,3,4|Col4|4), baseline|es tuned on|n k = 3|
|---|---|---|---|---|---|---|
|Mo||del|||||
||Mo|del|||||
||CT CT|P L P A|||||
||CT GA|P M T|||||
||GC RN LS|N N TM|||||
||GR CN|U NH|||||
|CN MH|CN MH|N A|||||
6 7 8 9
Test Story Length
|CLUTR|Col2|RRG(k=2,3,4|Col4|4), baseline|es tuned on|n k = 9|
|---|---|---|---|---|---|---|
|Mo||del|||||
||Mo|del|||||
||CT CT|P L P A|||||
||CT GA|P M T|||||
||GC RN LS|N N TM|||||
||GR CN|U NH|||||
|CN MH|CN MH|N A|||||
6 7 8 9
Test Story Length
1.0
0.9
1.0
0.9
0.8
0.7
0.8
0.7
0.6
0.5
0.6
0.5
0.4
0.4
10
10
**5 Hops** **6 Hops** **7 Hops** **8 Hops** **9 Hops** **10 Hops**
CTPL .99 ± .02 _.98 ± .04_ _.97 ± .04_ _.98 ± .03_ _.97 ± .04_ _.95 ± .04_
CTPA .99 ± .04 _.99 ± .03_ _.97 ± .03_ _.95 ± .06_ _.93 ± .07_ _.91 ± .05_
CTPM .98 ± .04 _.97 ± .06_ _.95 ± .06_ _.94 ± .08_ _.93 ± .08_ _.90 ± .09_
GNTP .68 ± .28 _.63 ± .34_ _.62 ± .31_ _.59 ± .32_ _.57 ± .34_ _.52 ± .32_
GAT .99 ± .00 _.85 ± .04 ▼_ _.80 ± .03 ▼_ _.71 ± .03 ▼_ _.70 ± .03 ▼_ _.68 ± .02 ▼_
GCN .94 ± .03 ▽ _.79 ± .02 ▼_ _.61 ± .03 ▼_ _.53 ± .04 ▼_ _.53 ± .04 ▼_ _.41 ± .04 ▼_
RNN .93 ± .06 _.87 ± .07 ▽_ _.79 ± .11 ▽_ _.73 ± .12 ▽_ _.65 ± .16 ▽_ _.64 ± .16 ▽_
LSTM .98 ± .03 _.95 ± .04_ _.89 ± .10_ _.84 ± .07 ▽_ _.77 ± .11 ▽_ _.78 ± .11 ▽_
GRU .95 ± .04 _.94 ± .03_ _.87 ± .08_ _.81 ± .13 ▽_ _.74 ± .15 ▽_ _.75 ± .15 ▽_
CNNH .99 ± .01 _.97 ± .02_ _.94 ± .03_ _.88 ± .04 ▼_ _.86 ± .05 ▽_ _.84 ± .06 ▽_
CNN 1.0 ± .00 **1.0 ± .01** _.98 ± .01_ _.95 ± .03_ _.93 ± .03_ _.92 ± .04_
MHA .88 ± .03 ▼ _.83 ± .05 ▼_ _.76 ± .04 ▼_ _.72 ± .04 ▼_ _.74 ± .05 ▼_ _.70 ± .03 ▼_
**5 Hops** **6 Hops** **7 Hops** **8 Hops** **9 Hops** **10 Hops**
CTPL .99 ± .02 _.98 ± .04_ _.97 ± .04_ _.98 ± .03_ _.97 ± .04_ _.95 ± .04_
CTPA .99 ± .04 _.99 ± .03_ _.97 ± .03_ _.95 ± .06_ _.93 ± .07_ _.91 ± .05_
CTPM .98 ± .04 _.97 ± .06_ _.95 ± .06_ _.94 ± .08_ _.93 ± .08_ _.90 ± .09_
GNTP .68 ± .28 _.63 ± .34_ _.62 ± .31_ _.59 ± .32_ _.57 ± .34_ _.52 ± .32_
GAT .98 ± .01 _.86 ± .04 ▼_ _.79 ± .02 ▼_ _.75 ± .03 ▼_ _.73 ± .02 ▼_ _.72 ± .03 ▼_
GCN .88 ± .01 ▼ _.78 ± .02 ▼_ _.60 ± .02 ▼_ _.57 ± .02 ▼_ _.59 ± .04 ▼_ _.51 ± .02 ▼_
RNN .96 ± .03 _.87 ± .09 ▽_ _.82 ± .09 ▽_ _.73 ± .09 ▼_ _.65 ± .15 ▽_ _.67 ± .16 ▽_
LSTM 1.0 ± .01 _.99 ± .02_ _.96 ± .04_ _.96 ± .04_ _.94 ± .06_ _.92 ± .07_
GRU .96 ± .02 ▽ _.88 ± .03 ▼_ _.84 ± .04 ▼_ _.79 ± .06 ▼_ _.75 ± .08 ▼_ _.78 ± .04 ▼_
CNNH 1.0 ± .00 _.99 ± .01_ _.96 ± .02_ _.91 ± .04 ▽_ _.89 ± .04 ▽_ _.87 ± .04 ▽_
CNN 1.0 ± .00 _.98 ± .01_ _.97 ± .02_ _.92 ± .03 ▽_ _.89 ± .03 ▼_ _.87 ± .04 ▽_
MHA .88 ± .03 ▼ _.83 ± .05 ▼_ _.76 ± .03 ▼_ _.74 ± .04 ▼_ _.75 ± .04 ▼_ _.70 ± .03 ▼_
**Figure 4: Results on the CLUTRR dataset after training on stories of lengths {2, 3, 4} and evaluating on stories of length {5, . . ., 10} –**
hyperparameters were fine-tuned on either short stories (left) and long stories (right). Significance testing was assessed via a unequal
variances t-test in comparison with CTPL: ▼ (resp. ▽) represents a p-value lower than 10[−][4] (resp. 10[−][2]).
**5.3. Results**
**CLUTRR We evaluated three CTP variants and all con-**
sidered baselines on two datasets published by Sinha et al.
(2019) under the identifiers 089907f8 and db9b8f04—
we refer to these datasets as CLUTRRG(k = 2, 3)
and CLUTRRG(k = 2, 3, 4), where k denotes the
number of edges in the training graphs. Results for
CLUTRRG(k = 2, 3) are summarised in Fig. 3, while results for CLUTRRG(k = 2, 3, 4) are summarised in Fig. 4.
In Fig. 3, we observe that, after training on graphs with two
and three edges, baseline models tend to be able to generalise correctly to slightly longer stories (such as graphs with
four and five edges), but that predictive accuracy quickly
decreases with increasing graph sizes and this phenomenon
happens when tuning hyper-parameters either on graphs
with three edges or on graph with nine edges.
In our experiments, LSTMs had a strikingly different behaviour in comparison with other baselines: for graphs
with nine edges, the accuracy decrease caused by using the
LSTMs baseline is only significant with p ≤ 10[−][2] (for all
other baselines this change is significant with p ≤ 10[−][4]),
with a drop in significance for smaller graphs.
The phenomenon that LSTMs yield surprisingly accurate
results on the CLUTRR dataset can be seen across every experiment in our empirical evaluation, while other recurrent
models such as RNNs and GRUs do not show this.
**Model Analysis A great feature of CTPs is that we can**
analyse the goal reformulation process to understand the
reasoning process underlying a given prediction, and extract
explainable rules. In Fig. 2, we show a sample of the rules
and common-sense reasoning patterns learned by CTPs on
the CLUTRR dataset.
We can see that, for example, CTPs successfully identify
that e.g. the child of a child is a grandchild, the child of
_one’s significant other is also one’s child, and the parent of_
_a significant other is an in-law._
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
**Table 1: Comparison of CTPs, with GNTPs (Minervini et al., 2020), NeuralLP (Yang et al., 2017) and MINERVA (Das et al., 2018)**
(from Minervini et al. (2020)) on benchmark datasets: hyperparameters were selected based on the validation MRR, and we report the
mean and standard deviation over 10 random seeds.
**Models**
**GNTP**
**CTP** **NeuralLP** **MINERVA** **Learned Rules**
**Standard** **Attention**
**Datasets** **Metrics**
_S1_ **100.0 ± 0.00 99.98 ± 0.05 100.0 ± 0.00 100.0 ± 0.0 100.0 ± 0.0 locIn(X,Y) :– locIn(X,Z), locIn(Z,Y)**
**Countries** _S2_ AUC-PR 91.81 ± 1.07 90.82 ± 0.88 93.48 ± 3.29 75.1 ± 0.3 92.36 ± 2.41 neighOf(X,Y) :– neighOf(X,Z), locIn(Z,Y)
_S3_ 94.78 ± 0.00 87.70 ± 4.79 91.27 ± 4.02 92.20 ± 0.2 95.10 ± 1.20 neighOf(X,Y) :– neighOf(Y,X)
MRR **0.764 ± 0.00** 0.719 0.759 0.619 0.720 term0(X, Y) :– term22(Y, X)
Hits@1 **0.646** **0.01** 0.586 0.642 0.475 0.605 term4(X, Y) :– term4(Y, X)
**Kinship** _±_
Hits@3 **0.859 ± 0.01** 0.815 0.850 0.707 0.812 term20(X,Y) :– term24(X, Z), term6(Z, Y)
Hits@10 0.958 ± 0.00 0.958 **0.959** 0.912 0.924 term2(X,Y) :– term4(X, Z), term7(Z, Y)
MRR **0.709 ± 0.03** 0.658 0.645 — — tourism3(X, Y) :– eemigrants(Y, X)
Hits@1 **0.562** **0.05** 0.493 0.490 — — independence(X,Y) :– commonbloc0(Y,X)
**Nations** _±_
Hits@3 **0.813 ± 0.03** 0.781 0.736 — — relngo(X,Y) :– timesinceally(Y,X)
Hits@10 0.995 ± 0.00 0.985 0.975 — — relstudents(X, Y) :– relexportbooks(X, Y)
MRR 0.852 ± 0.01 0.841 **0.857** 0.778 0.825 isa(X,Y) :– isa(X,Z), isa(Z,Y)
Hits@1 0.752 0.01 0.732 **0.761** 0.643 0.728 resultOf(X,Y) :– resultOf(X,Z), resultOf(Z,Y)
**UMLS** _±_
Hits@3 **0.947 ± 0.01** 0.941 **0.947** 0.869 0.900 treats(X, Y) :– prevents(X, Z), resultOf(Z, Y)
Hits@10 0.984 ± 0.00 **0.986** 0.983 0.962 0.968 uses(X,Y) :– produces(X,Y)
100 200 300 400
|CLUTRRG|G(k=2,3,4) --|- Training It|Col4|Col5|teration × A|Accuracy|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
||||||Model|||
||||||CTP L CTP|||
||||||A CTP M|||
|||||||||
|||||||k = 5)||
|||||||||
||||||GNTP ( GNTP (|k = 7) k = 9)||
||||||GNTP (|k = 11)||
|||||||||
|||||||||
Training Iteration
A possible explanation is that, in order to scale to large rule
sets, GNTPs only considers the top-k rules, based on the
similarity between the goal and the head of the rules. This
is equivalent to a hard attention mask, which is known to be
problematic to train via gradient-based optimisation (Luong
et al., 2015).
**Link Prediction In Table 1, we show link prediction re-**
sults in comparison with three other neuro-symbolic reasoning methods, namely GNTPs (Minervini et al., 2020),
NeuralLP (Yang et al., 2017) and MINERVA (Das et al.,
2018). GNTPs are an extension of NTPs where rules are
heuristically selected by search for the rules where the head
predicate is closest to the sub-goal predicate in embedding
space.
Our experiments show that CTPs produce significantly more
accurate or very competitive link prediction results, while
controlling the complexity of the reasoning process via the
goal-conditioned rule selection. For instance, in the Nations
dataset, only two rules were generated by CTPs for each
goal, while in Rocktäschel & Riedel (2017) NTPs were
required to iterate over sixty rules.
Furthermore, in this case, CTPs were able to produce
explanations for each of their predictions. For instance, in
the Nations dataset, CTPs successfully extracted logical
patterns such as commonbloc1(X, Y) :– relngo(Y, X),
timesincewar(X, Y) :– independence(X, Y),
unweightedunvote(X, Y) :– relngo(X, Y), and
ngo(X, Y) :– independence(Y, X).
100
90
80
70
60
50
40
30
20
**Figure 5: Training dynamics of CTPs and GNTPs.**
**Training Dynamics We analyse the training dynamics of**
CTPL, CTPA, CTPM, and GNTPs (Minervini et al., 2020)
on CLUTRRG(k=2,3,4).
The CTP variants consisted of 5 goal reformulators, each
implemented by an independent select module, while
GNTP has a KB of 32 rules and k ∈{5, 7, 9, 11}. For all
models, the embedding size for entities and relations was
set to 50.
Figure 5 demonstrates how the training accuracy of such
models evolves during training. We can see that, while
the three CTP variants get to a nearly-perfect training set
accuracy in less than 300 iterations, GNTPs is unable to
match this result, even after careful hyperparameter tuning.
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
**6. Conclusions**
We introduced CTPs, an extension to NTPs for learning the
optimal rule selection strategy via gradient-based optimisation. For each sub-goal, a select module produces a
smaller set of rules, which is then used during the proving
mechanism. Furthermore, we proposed three variants of
the rule selection mechanism, where the sub-goal reformulations are obtained by linear projections of the sub-goal
predicate, attention distributions over predicate embeddings,
and a key-value memory lookup over a set of rules.
We showed that CTPs are scalable and yield state-of-the-art
results on the CLUTRR dataset, which explicitly tests the
systematic generalisation of neural models, in comparison
with a wide set of neural baselines. Finally, we demonstrated
that CTPs yield competitive results in standard link prediction benchmark in comparison with other neuro-symbolic
approaches.
**Future Work An open problem is how CTPs can be able**
to process CLUTRR instances where family relationships
are not directly provided as a labelled graph, but rather as
free-form text. A possible solution, proposed by Minervini
et al. (2020), consists in having an end-to-end differentiable
encoder for producing the fact embeddings conditioned on
the text, and we are currently analysing several options in
this space.
**Acknowledgements This work was supported by the EU**
Horizon 2020 Research and Innovation Programme under
the grant 875160. We thank Yihong Chen, Joe Stacey, and
everyone in the UCL NLP group for the enlightening discussions, and NVIDIA for GPU donations.
**References**
Andreas, J., Rohrbach, M., Darrell, T., and Klein, D. Neural
module networks. In CVPR, pp. 39–48. IEEE Computer
Society, 2016.
Bahdanau, D., Murty, S., Noukhovitch, M., Nguyen, T. H.,
de Vries, H., and Courville, A. C. Systematic generalization: What is required and can it be learned? In ICLR
_(Poster). OpenReview.net, 2019._
Bouchard, G., Singh, S., and Trouillon, T. On approximate
reasoning capabilities of low-rank vector spaces. In AAAI
_Spring Symposia. AAAI Press, 2015._
Bošnjak, M., Rocktäschel, T., Naradowsky, J., and Riedel,
S. Programming with a Differentiable Forth Interpreter.
In ICML, volume 70, pp. 547–556. PMLR, 2017.
Cho, K., van Merrienboer, B., Gülçehre, Ç., Bahdanau, D.,
Bougares, F., Schwenk, H., and Bengio, Y. Learning
phrase representations using RNN encoder-decoder for
statistical machine translation. In EMNLP, pp. 1724–
1734. ACL, 2014.
Das, R., Dhuliawala, S., Zaheer, M., Vilnis, L., Durugkar, I.,
Krishnamurthy, A., Smola, A., and McCallum, A. Go for
a walk and arrive at the answer: Reasoning over paths in
knowledge bases using reinforcement learning. In ICLR
_(Poster). OpenReview.net, 2018._
d’Avila Garcez, A. S., Besold, T. R., Raedt, L. D., Földiák,
P., Hitzler, P., Icard, T., Kühnberger, K., Lamb, L. C.,
Miikkulainen, R., and Silver, D. L. Neural-symbolic
learning and reasoning: Contributions and challenges. In
_AAAI Spring Symposia. AAAI Press, 2015._
Demeester, T., Rocktäschel, T., and Riedel, S. Lifted rule
injection for relation embeddings. In EMNLP, pp. 1389–
1399. The Association for Computational Linguistics,
2016.
Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT:
pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), pp. 4171–4186.
Association for Computational Linguistics, 2019.
Donadello, I., Serafini, L., and d’Avila Garcez, A. S. Logic
tensor networks for semantic image interpretation. In
_IJCAI, pp. 1596–1602. ijcai.org, 2017._
Doshi-Velez, F. and Kim, B. Towards a rigorous science of
interpretable machine learning. CoRR, abs/1702.08608,
2017.
Evans, R. and Grefenstette, E. Learning Explanatory Rules
from Noisy Data. JAIR, 61:1–64, 2018.
Garnelo, M. and Shanahan, M. Reconciling deep learning
with symbolic artificial intelligence: representing objects
and relations. Current Opinion in Behavioral Sciences,
29:17 – 23, 2019. ISSN 2352-1546. SI: 29: Artificial
Intelligence (2019).
Goldberg, Y. Neural Network Methods for Natural Lan_guage Processing. Synthesis Lectures on Human Lan-_
guage Technologies. Morgan & Claypool Publishers,
2017.
Goodfellow, I. J., Bengio, Y., and Courville, A. C. Deep
_Learning. Adaptive computation and machine learning._
MIT Press, 2016.
Graves, A., Wayne, G., and Danihelka, I. Neural Turing
Machines. CoRR, abs/1410.5401, 2014.
Grefenstette, E., Hermann, K. M., Suleyman, M., and Blunsom, P. Learning to Transduce with Unbounded Memory.
In NIPS, pp. 1828–1836, 2015.
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
Gupta, N., Lin, K., Roth, D., Singh, S., and Gardner, M.
Neural module networks for reasoning over text. CoRR,
abs/1912.04971, 2019.
Gururangan, S., Swayamdipta, S., Levy, O., Schwartz, R.,
Bowman, S. R., and Smith, N. A. Annotation artifacts
in natural language inference data. In NAACL-HLT (2),
pp. 107–112. Association for Computational Linguistics,
2018.
Hochreiter, S. and Schmidhuber, J. Long short-term memory.
_Neural Computation, 9(8):1735–1780, 1997._
Hu, M., Peng, Y., Huang, Z., Qiu, X., Wei, F., and Zhou,
M. Reinforced mnemonic reader for machine reading
comprehension. In IJCAI, pp. 4099–4106. ijcai.org, 2018.
Huang, H., Zhu, C., Shen, Y., and Chen, W. Fusionnet:
Fusing via fully-aware attention with application to machine comprehension. In ICLR (Poster). OpenReview.net,
2018.
Jia, R. and Liang, P. Adversarial examples for evaluating
reading comprehension systems. In EMNLP, pp. 2021–
2031. Association for Computational Linguistics, 2017.
Jiang, Y. and Bansal, M. Self-assembling modular
networks for interpretable multi-hop reasoning. In
_EMNLP/IJCNLP (1), pp. 4473–4483. Association for_
Computational Linguistics, 2019.
Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L.,
Zitnick, C. L., and Girshick, R. B. CLEVR: A diagnostic
dataset for compositional language and elementary visual
reasoning. In CVPR, pp. 1988–1997. IEEE Computer
Society, 2017.
Joulin, A. and Mikolov, T. Inferring Algorithmic Patterns
with Stack-Augmented Recurrent Nets. In NIPS, pp. 190–
198, 2015.
Kaiser, L. and Sutskever, I. Neural GPUs Learn Algorithms.
In ICLR, 2016.
Kaushik, D. and Lipton, Z. C. How much reading does
reading comprehension require? A critical investigation
of popular benchmarks. In EMNLP, pp. 5010–5015. Association for Computational Linguistics, 2018.
Kemp, C., Tenenbaum, J. B., Griffiths, T. L., Yamada, T.,
and Ueda, N. Learning Systems of Concepts with an
Infinite Relational Model. In AAAI, pp. 381–388, 2006.
Kim, Y. Convolutional neural networks for sentence classification. In EMNLP, pp. 1746–1751. ACL, 2014.
Kim, Y., Jernite, Y., Sontag, D. A., and Rush, A. M.
Character-aware neural language models. In AAAI, pp.
2741–2749. AAAI Press, 2016.
Kipf, T. N. and Welling, M. Semi-supervised classification
with graph convolutional networks. In ICLR (Poster).
OpenReview.net, 2017.
Lake, B. M. and Baroni, M. Generalization without systematicity: On the compositional skills of sequence-tosequence recurrent networks. In ICML, volume 80 of
_Proceedings of Machine Learning Research, pp. 2879–_
2888. PMLR, 2018.
LeCun, Y., Bengio, Y., and Hinton, G. E. Deep learning.
_Nature, 521(7553):436–444, 2015._
Lipton, Z. C. The mythos of model interpretability. Com_mun. ACM, 61(10):36–43, 2018._
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy,
O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta:
A robustly optimized BERT pretraining approach. CoRR,
abs/1907.11692, 2019.
Luong, T., Pham, H., and Manning, C. D. Effective approaches to attention-based neural machine translation.
In EMNLP, pp. 1412–1421. The Association for Computational Linguistics, 2015.
Miller, A. H., Fisch, A., Dodge, J., Karimi, A., Bordes, A.,
and Weston, J. Key-value memory networks for directly
reading documents. In EMNLP, pp. 1400–1409. The
Association for Computational Linguistics, 2016.
Minervini, P., Costabello, L., Muñoz, E., Novácek, V.,
and Vandenbussche, P. Regularizing knowledge graph
embeddings via equivalence and inversion axioms. In
_ECML/PKDD (1), volume 10534 of Lecture Notes in_
_Computer Science, pp. 668–683. Springer, 2017a._
Minervini, P., Demeester, T., Rocktäschel, T., and Riedel,
S. Adversarial sets for regularising neural link predictors.
In UAI. AUAI Press, 2017b.
Minervini, P., Bosnjak, M., Rocktäschel, T., Riedel, S.,
and Grefenstette, E. Differentiable reasoning on large
knowledge bases and natural language. In AAAI, pp.
5182–5190. AAAI Press, 2020.
Rocktäschel, T. and Riedel, S. End-to-end differentiable
proving. In NIPS, pp. 3788–3800, 2017.
Rocktäschel, T., Singh, S., and Riedel, S. Injecting logical background knowledge into embeddings for relation
extraction. In HLT-NAACL, pp. 1119–1129. The Association for Computational Linguistics, 2015.
Russell, S. J. and Norvig, P. _Artificial Intelligence - A_
_Modern Approach, Third International Edition. Pearson_
Education, 2010.
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
Sadeghian, A., Armandpour, M., Ding, P., and Wang, D. Z.
DRUM: end-to-end differentiable rule mining on knowledge graphs. In NeurIPS, pp. 15321–15331, 2019.
Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., and
Lillicrap, T. Meta-learning with memory-augmented neural networks. In International conference on machine
_learning, pp. 1842–1850, 2016._
Seo, M. J., Kembhavi, A., Farhadi, A., and Hajishirzi, H.
Bidirectional attention flow for machine comprehension.
In ICLR (Poster). OpenReview.net, 2017.
Shen, Y., Huang, P., Gao, J., and Chen, W. Reasonet: Learning to stop reading in machine comprehension. CoRR,
abs/1609.05284, 2016.
Sinha, K., Sodhani, S., Dong, J., Pineau, J., and Hamilton,
W. L. CLUTRR: A diagnostic benchmark for inductive
reasoning from text. In EMNLP/IJCNLP (1), pp. 4505–
4514. Association for Computational Linguistics, 2019.
Sodhani, S., Chandar, S., and Bengio, Y. On training recurrent neural networks for lifelong learning. _CoRR,_
abs/1811.07017, 2018.
Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. EndTo-End Memory Networks. In Cortes, C. et al. (eds.),
_NIPS, pp. 2440–2448, 2015._
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention
is all you need. In NIPS, pp. 5998–6008, 2017.
Velickovic, P., Cucurull, G., Casanova, A., Romero, A., Liò,
P., and Bengio, Y. Graph attention networks. In ICLR
_(Poster). OpenReview.net, 2018._
Xu, J., Zhang, Z., Friedman, T., Liang, Y., and den Broeck,
G. V. A semantic loss function for deep learning with
symbolic knowledge. In ICML, volume 80 of Proceedings
_of Machine Learning Research, pp. 5498–5507. PMLR,_
2018.
Yang, F., Yang, Z., and Cohen, W. W. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. In
Guyon, I. et al. (eds.), NIPS, pp. 2316–2325, 2017.
Yang, Z., Dai, Z., Yang, Y., Carbonell, J. G., Salakhutdinov, R., and Le, Q. V. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR,
abs/1906.08237, 2019.
-----
**Learning Reasoning Strategies in End-to-End Differentiable Proving**
**A. Hyperparameter Selection**
For model selection in baseline models, we generated a
CLUTRR-like dataset using the code published by Sinha
et al. (2019) composed by a training set of graphs with
_{2, 3} edges, and two test sets, one with graphs with three_
edges, and another with graphs with nine edges. We then selected two sets of hyperparameters for each of the baselines:
one that maximises the validation accuracy on graphs with
three edges, and another that maximises the test accuracy
on graphs with nine edges. For each of the baselines, we
considered a wide range of hyperparameters: the dimensionalities of node and edge embeddings were varied in
_{10, 50, 100, 200, 500}, the number of attention heads in_
attention-based architectures in {1, 2, . . ., 10}, the number
of filters in convolutional architectures in {1, 2, . . ., 10},
and the number of hidden units in recurrent architectures in
_{32, 64, 128, 256, 512}._
To assess the statistical significance of our results, we ran
each of the experiments 10 times, each time with a different
seed, and compared the resulting accuracy values using an
unequal variances t-test, or Welch’s t-test. This is motivated
by the observation that accuracy values to be Gaussiandistributed, as they approach a normal distribution for large
numbers of re-runs, due to the Central Limit Theorem.
**A.1. Optimal Hyperparameters**
Note that all recurrent architectures are bidirectional.
**Graph Attention Networks: the hyperparameters that**
maximise the accuracy on validation graphs with 3
edges are h = 10 for the number of attention heads,
_k = 50 for the dimensionality of node embeddings,_
_ke = 200 for the dimensionality of edge embeddings._
For validation graphs with 9 edges, k = 50, ke = 500,
and h = 10.
**Graph Convolutional Networks: the** hyperparameters
that maximise the accuracy on validation graphs with
3 edges are k = 50 for the dimensionality of node
embeddings, ke = 500 for the dimensionality of edge
embeddings. For validation graphs with 9 edges,
_k = 50, and ke = 50._
**Convolutional Neural Networks: the** hyperparameters
that maximise the accuracy on validation graphs with
3 edges are k = 50 for the dimensionality of node and
edge embeddings, f = 8 convolutional filters, and
conditional encoding. For validation graphs with 9
edges, k = 200, f = 4, and conditional encoding.
**Recurrent Neural Networks: the hyperparameters that**
maximise the accuracy on validation graphs with 3
edges are k = 50 for the dimensionality of node and
edge embeddings, h = 64 for the size of the hidden
state representations, and conditional encoding. For
validation graphs with 9 edges, k = 500, h = 512, and
conditional encoding.
**Long Short-Memory Networks: the** hyperparameters
that maximise the accuracy on validation graphs with
3 edges are k = 50 for the dimensionality of node and
edge embeddings, h = 64 for the size of the hidden
state representations, and conditional encoding. For
validations graphs with 9 edges, k = 100, h = 512,
and independent encoding.
**Gated Recurrent Units: the hyperparameters that max-**
imise the accuracy on validation graphs with 3 edges
are k = 50 for the dimensionality of node and edge
embeddings, h = 64 for the size of the hidden state
representations, and conditional encoding. For validation graphs with 9 edges, k = 200, h = 512, and
conditional encoding.
**CNN with Highway Encoder: the hyperparameters that**
maximise the accuracy on validation graphs with 3
edges are k = 200 for the dimensionality of node and
edge embeddings, h = 2 highway layers, and conditional encoding. For validation graphs with 9 edges,
_k = 200, h = 1, and conditional encoding._
**Multi-Head Attention: the hyperparameters that max-**
imise the accuracy on validation graphs with 3 edges
are k = 500 for the dimensionality of node and edge
embeddings, h = 10 for the number of attention heads,
_hk for the size of the hidden state representation of the_
top LSTM layer, and conditional encoding. For validation graphs with 9 edges, k = 500, h = 10, hk = 128,
and conditional encoding.
-----
| [
"Pasquale, Minervini",
"Tim, Rocktäschel",
"Pontus, Stenetorp",
"Sebastian, Riedel",
"Edward, Grefenstette"
] | 2020-01-01T00:00:00 | null | false | 87 | 0 | null | https://arxiv.org/abs/2007.06477 | https://arxiv.org/abs/2007.06477 | https://www.semanticscholar.org/paper/2e8c84fd61c91e067dddef52ced76b824beb7013 |
GeoQA: A Geometric Question Answering Benchmark Towards Multimodal Numerical Reasoning | No summary was provided. | This paper proposes a Geometric Question Answering dataset GeoQA, containing 4,998 geometric problems with corresponding annotated programs, which illustrate the solving process of the given problems, and introduces a Neural Geometric Solver (NGS) to address geometric problems by comprehensively parsing multimodal information and generating interpretable programs. | ## GeoQA: A Geometric Question Answering Benchmark Towards Multimodal Numerical Reasoning
**Jiaqi Chen[1][∗], Jianheng Tang[2][∗], Jinghui Qin[1], Xiaodan Liang[2][†],**
**Lingbo Liu[1][,][3], Eric P. Xing[4], Liang Lin[1][,][3]**
1Sun Yat-sen University, 2Shenzhen Campus of Sun Yat-sen University, 3Dark Matter AI Inc.,
4Mohamed bin Zayed University of Artificial Intelligence
_{jadgechen,sqrt3tjh,xdliang328,liulingbo918}@gmail.com,_
[email protected], [email protected], [email protected]
**Abstract**
Automatic math problem solving has recently attracted increasing attention as a longstanding AI benchmark. In this paper, we focus on solving geometric problems, which requires a comprehensive understanding of textual descriptions, visual diagrams, and theorem knowledge. However, the existing methods were highly dependent on handcraft rules
and were merely evaluated on small-scale
datasets. Therefore, we propose a Geometric
**Question Answering dataset GeoQA, con-**
taining 4,998 geometric problems with corresponding annotated programs, which illustrate the solving process of the given problems. Compared with another publicly available dataset GeoS, GeoQA is 25 times larger,
in which the program annotations can provide a practical testbed for future research on
explicit and explainable numerical reasoning.
Moreover, we introduce a Neural Geometric
Solver (NGS) to address geometric problems
by comprehensively parsing multimodal information and generating interpretable programs.
We further add multiple self-supervised auxiliary tasks on NGS to enhance cross-modal semantic representation. Extensive experiments
on GeoQA validate the effectiveness of our
proposed NGS and auxiliary tasks. However, the results are still significantly lower
than human performance, which leaves large
room for future research. Our benchmark and
[code are released at https://github.com/chen-](https://github.com/chen-judge/GeoQA)
[judge/GeoQA.](https://github.com/chen-judge/GeoQA)
As shown in the figure, in ʎO, AB is
the chord, OCʏAB, if the radius of
ʎO is 5 (N0) and CE=2 (N1), then the
length of AB is ()
A. 2 B. 4 C. 6 D. 8
**Answer: D. 8**
**Problem Type: Length Calculation**
**Knowledge Points: Vertical Diameter, Pythagorean Theorem**
**Problem Solving Explanations:**
OE=OC-CE=5-2=3. According to the Pythagorean Theorem,
AE = ܱܣ[ʹ] െܱܧ[ʹ]= ͷ[ʹ] െ͵[ʹ]= 4. Thus, AB=2AE=8.
**Annotated Programs:**
Minus | N0 | N1 | PythagoreanMinus | N0 | V0 | Double | V1
Step1: Minus(N0, N1) = 5 – 2 = 3 (V0)
Step2: PythagoreanMinus(N0, V0) = ͷ[ʹ] െ͵[ʹ] = 4 (V1)
Step3: Double(V1) = 2ൈ4 = 8 (V2)
Figure 1: Illustration of a typical geometry problem
with the annotated programs in our GeoQA dataset.
2018; Lin et al., 2018). Most of the existing
methods focus on solving arithmetic and algebraic
problems, including traditional machine learning
approaches (Kushman et al., 2014; Zhou et al.,
2015; Huang et al., 2016) and network-based models (Wang et al., 2017, 2018; Xie and Sun, 2019),
while solving geometric problems has been rarely
investigated (Seo et al., 2014, 2015a; Sachan et al.,
2017). As a classic math problem, geometry dominates a large portion of secondary education. Due
to its challenges and data characteristics, geometry
problem can also serve as a multimodal numerical reasoning benchmark requiring joint reasoning
over diagram and text.
As shown in Figure 1, a typical geometric question mainly consists of textual descriptions and geometric diagrams. Compared with math word problems, which only involve problem texts, geometric
questions have posed the following new challenges.
**First, the additional problem diagrams provide es-**
sential information absent from the problem text,
**Introduction**
In recent years, developing machine learning systems to solve math word problems (MWPs) automatically has attracted increasing attention due to
its high academic value and the great application
potential in smart education (Bajaj and Sharma,
_∗Equal contribution._
_†_ Corresponding author.
-----
such as the relative location of lines and points;
thus, the solver should have the capability to parse
the diagram. Second, to solve a geometry problem, we need to understand and align the semantics
of text and diagram simultaneously. However, the
problem text often includes some ambiguous references and implicit relations to diagram elements,
which increases the difficulty of joint reasoning
over text and diagram. Third, many geometric
problems require extra theorem knowledge in the
problem solving process. For example, in Figure 1,
the Pythagorean Theorem is used to calculate the
length of line AE.
Though some previous methods (Seo et al., 2014,
2015a; Sachan et al., 2017, 2020; Sachan, 2020)
attempt to resolve the mentioned issues, the performance of their geometric problem solving systems is far away from satisfactory. They highly
depended on limited handcraft rules and were only
validated on small-scale datasets, making it hard to
generalize to more complex and real-world cases.
Besides, the solving process is sophisticated, which
means it is difficult for a human to understand and
examine its reliability.
To refresh the research on geometric problem
solving and promote further study on multimodal
numerical reasoning, we propose a large-scale realworld geometric question answering dataset called
GeoQA, which contains 4,998 multiple-choice geometric problems collected from real math exams
in Chinese middle school. Inspired by Amini et al.
(2019), we additionally introduce a new domainspecific language to model precise operation programs corresponding to the geometry problem.
These executable programs represent the numerical
reasoning steps of geometry problems. Compared
with the existing dataset GeoS and GeoS++ (Seo
et al., 2015a; Sachan et al., 2017), our GeoQA is
larger, more diverse, provides additional program
annotation, thus serves as a promising benchmark
to improve both generalization and interpretability
of the multimodal numerical reasoning approaches.
Moreover, we propose the first deep learningbased approach for geometry problem solving,
named as Neural Geometric Solver (NGS). It applies a co-attention mechanism to fuse the representation of text and diagram, and predicts the
explainable programs based on the cross-modal
representation. These sequential programs can be
executed to obtain a final answer. Benefiting from
the structured and explainable program prediction,
our NGS has both merits in superior performance
by learning-based models compared to previous
rule-based methods, as well as generating explainable numerical reasoning steps via the program
sequence in favor of the model diagnosis. We further design three highly-relevant pretext tasks to
enhance text-diagram semantic representation, including diagram jigsaw location prediction, diagram geometric element prediction, and knowledge
point prediction. Extensive experiments are conducted on GeoQA benchmark, and the quantitative
comparisons show the superiority of the proposed
NGS and auxiliary tasks over other multimodal
baselines.
In summary, our contributions are three-fold:
- We construct a large-scale dataset for geometric
problem solving, which contains 4,998 Chinese
geometric multiple-choice questions with rich
domain-specific program annotations.
- A novel Neural Geometric Solver is proposed
to solve geometric problems by generating symbolic programs based on the joint understanding
of textual descriptions and diagrams.
- Multiple specialized auxiliary tasks are employed
to effectively improve the semantic representation of text and diagrams. Experiments show
the superiority of our NGS equipped with these
auxiliary tasks.
**2** **Related Work**
**Geometry Problems Solving** Developing automated systems to solve geometry problems has a
long history in AI (Gelernter et al., 1960; WenTsun, 1986; Chou et al., 1996; Ye et al., 2008). For
example, Seo et al. (2014, 2015a) built the first
automated system, GeoS, to solve SAT style geometry problems. GeoS used NLP and computer
vision techniques (e.g., OCR) to parse a geometry
problem’s text and diagram jointly as logic forms.
However, this system highly depended on the manually designed logic forms and was only examined
in a small dataset with 185 problems. Besides, the
limited logic forms are hard to cover various geometry problems, leading to low generalization. To
improve GeoS, Sachan et al. (2017); Sachan and
Xing (2017) replaced these handcraft constraints
with geometry axiomatic knowledge in the form
of horn-clause rules, but their dataset and code are
not released. To boost the generalization and interpretability of existing works, we propose a largescale GeoQA benchmark, which is 25 times larger
-----
|Col1|Total|Train|Val|Test|
|---|---|---|---|---|
|Number|4998|3499|745|754|
|Angle Length Other|2737 1869 392|1932 1300 267|388 286 71|417 283 54|
|#Avg DS #Avg QL #Avg KP #Avg ET|108×140 52.5 2.10 1.11|108×140 52.4 2.10 1.13|107×141 52.4 2.07 1.08|107×140 52.7 2.14 1.09|
|#Avg OP #Avg PL|1.98 5.35|1.99 5.39|1.92 5.17|1.98 5.36|
Table 1: Statistics of our GeoQA dataset. It contains
three types of problems, including angle, length, and
other problems. Besides, DS, QL, KP, and ET represent diagram size, question length, knowledge points,
and element types, respectively. OP and PL represent
operation number and program length.
**3.1** **Data Description**
Generally, a geometry multiple-choice problem can
be represented as a tuple (t, d, c, i) where t is the
problem text in natural language, d is the problem
diagram, c = (c1, c2, c3, c4) represents the 4 numerical options for the problem, i is the answer
index. Given the text t and diagram d, an algorithm
is required to predict the correct answer ci **c. To**
_∈_
collect as much useful information as possible, we
also provide the natural language-based problem
solving explanations e, problem type t, the related
knowledge points k, and our annotated programs p
for each problem. Therefore, a geometry problem
can be represented as (t, d, c, i, e, t, k, p). Fig. 1
shows an example of geometric problems.
Moreover, there are three problem types in our
GeoQA, i.e., angle calculation, length calculation,
and others which contain various types of problems
such as area calculation. We adopt the corpus diversity metric proposed by Miao et al. (2020) to
evaluate the diversity of GeoQA. The result is 0.47,
which is relatively high compared with other math
problem datasets, indicating that our dataset is diverse. The source data already contains manually
tagged knowledge points in each problem, we design rule-based regular expressions to normalize
the original knowledge points to 50 categories. We
split our GeoQA into three subsets – train set, valid
set, and test set, in a ratio of 7.0: 1.5: 1.5. The data
statistics of our GeoQA are shown in Table 1.
**3.2** **Program Representation**
The neural network has proved to be a powerful tool to address complex multimodal reasoning tasks. However, it still faces challenges when
than the only public dataset Seo et al. (2015a), and
provides program annotation.
**Multimodal Reasoning** Visual question answering is a representative multimodal task that requires
the model to have reasoning ability (Goyal et al.,
2017; Yu et al., 2019). Johnson et al. (2017) built
a new diagnostic VQA dataset (called CLEVR)
with annotated functional programs. Based on
this benchmark, some methods proposed an implicit reasoning framework to jointly encode multimodal information (Perez et al., 2017; Santoro
et al., 2017). Moreover, several works (Yi et al.,
2018; Mao et al., 2019) utilize domain-specific
languages to perform explicit symbolic reasoning.
However, these program languages only consider
elementary operations, such as counting objects.
They are not directly applicable to geometric problems, which require multiple steps of numerical
calculation and involve theorem knowledge.
**Self-supervised Auxiliary Task** Self-supervised
pretraining has gradually emerged (Doersch and
Zisserman, 2017; Newell and Deng, 2020) as a effective technique to deal with label scarcity and
improve model performance. To enhance visual
features, most of these methods construct pseudo
labels automatically and train on auxiliary tasks,
including image jigsaw (Noroozi and Favaro, 2016;
Ahsan et al., 2019), inpainting (Pathak et al., 2016),
super resolution (Ledig et al., 2017), etc. Inspired
by these works, we design two self-supervised auxiliary tasks and a supervised auxiliary task to enhance the reasoning ability of our NGS.
**3** **GeoQA Dataset**
Due to the limited data scale and problem types, the
existing geometric problem reasoning dataset (Seo
et al., 2015a) can neither comprehensively reflect
model’s reasoning ability, nor support the training
of neural models. To propose a better benchmark
for the evaluation of multimodal numerical reasoning and inspire applications in smart education, we
collect a new dataset GeoQA. It contains 4,998
diverse real-world geometric problems in Chinese
middle school exams, and each problem is additionally annotated by specific programs that describe
the problem solving process. Besides, we also provide human performance on GeoQA, as shown in
Table 3.
-----
|Types|Programs|
|---|---|
|Basic|Equal, Double, Half|
|Arithmetic|Add, Minus, Multiply, Divide|
|Trigonometric|Sin, Cos, Tan, Arc-Sin, Arc-Cos|
|Theorem & Formula|Pythagorean Add/Minus, Proportion, Circle Area, Circle Perimeter, Cone Area|
|Constant|30, 60, 90, 180, 360, π, 0.618|
Table 2: An overview of 18 operations of four different
types and 7 constants in the defined program set.
conducting numerical calculation and providing explicit problem solving process, which are actually
two crucial points in the task of geometric problem
solving. To make better use of neural networks in
the geometric problems solving process, inspired
by Amini et al. (2019), we introduce a new domainspecific language to model the geometric problem
solving process based on the GeoQA dataset. This
program language can be directly executed to calculate the numerical answer based on the predefined
operations and their arguments.
The program types contain operations OP, constants Const, problem variables N, and process
variables V . As shown in Table 2, operations are
divided into multiple categories, including Basic,
Arithmetic, Trigonometric, Theorem, and Formula
operations. Each operator involves n(= 1, 2, 3) elements selected from constants and variables. Constants are predefined numbers that are frequently
used in geometric problems, such as π and the degree of a Right Angle (90). The problem variables
refer to all the number that appears in the current
problem, and process variables are obtained during
the execution process.
In addition to the common math operations, our
programs also contain some operations representing the knowledge of theorems and formulas that
is helpful to address geometric problems, such as
the Pythagorean theorem and the area calculation
formula of a circle. It is worth noting that many
simple geometric formulas do not require additional definitions. For example, given a square with
side length a, its area can be directly computed by
Multiply(a, a).
The interpretability of our program is reflected
on the sequential process of the operations, the selected constants and variables, and the application
of theorems and formulas. As shown in Figure 1,
we can have a general understanding of the entire
problem solving process after reading the program.
**3.3** **Collection and Annotation**
We collect GeoQA from two online education websites[1]. These problems are oriented grades 6-12,
containing various types of problems with corresponding knowledge points and solving explanations. we organize more than ten well-trained college students with a relevant major to specifically
annotate our programs by referring to the solving
explanations. To ensure label quality and consistency, they are required to read the guideline of annotation standards and examples in advance. Each
annotated program is double-checked by one of the
authors, and the annotator with low accuracy would
be disqualified. The annotated operations required
to solve the problem are limited to a maximum of
4 steps, thus a small number of complex and hard
problems are filtered.
**4** **Neural Geometric Solver**
We propose Neural Geometric Solver (NGS) to address geometric problems by jointly understanding
text, diagram, and then generating explainable programs. Moreover, we utilize some novel auxiliary
tasks to enhance the understanding ability of our
NGS. The overall architecture of our NGS is shown
in Fig. 2.
**4.1** **The Architecture of NGS**
**4.1.1** **Problem Encoder**
**Text Encoder** Given a problem text P = {xi}i[n]=1[,]
each token xi is first embedded into a word vector
**xi. A single-layer unidirectional LSTM (Hochre-**
iter and Schmidhuber, 1997) is then applied to
encode each word embedding xi into a hidden
state hi. The sequence of the hidden state in
LSTM are used to represent problem text P as
_HP = [h0; ...; hn]._
**Diagram Encoder** For representing a problem
diagram, we apply the first three stages of a ResNet101 (He et al., 2016) to extract it as the diagram
feature, which can be formalized as a feature matrix
_HD ∈_ R[m][×][d]. Moreover, we also apply multiple
auxiliary tasks for pretraining the diagram encoder.
Note that the parameters of the diagram encoder
are fixed when training the overall NGS.
**4.1.2** **Joint Reasoning Module**
Given the text feature HP and the diagram feature HD, it is crucial for solving geometric prob
[1http://www.zxxk.com/ and http://www.jyeoo.com/](http://www.zxxk.com/)
-----
|Minus|Const_90|N1|<EOS>|
|---|---|---|---|
**Geometry Problem** **Auxiliary Tasks**
We know that ʎO is ͫABD's peripheral
circle, AB is the diameter of ʎO, CD is the Jigsaw
chord of ʎO, ɴABD=58r(N1), then ɴ BCD
Location
is equal to ().
Prediction
A. 116r B. 32r C. 58r D. 64r
ܮ
Text Encoder Diagram Encoder Circle ʎ Geometry
Pretrain Triangle ͫ Elements
… Prediction
Joint Reasoning Module ܮீா
Angle of Circumference Knowledge
Circumcircle Points
Program Decoder … Classification
Generated
Program: Minus Const_90 N1 <EOS> ܮ
Execute: Ans = 90r - 58r = 32r
Figure 2: The overall architecture of our Neural Geometric Solver (left) in conjunction with auxiliary tasks (right).
The problem text and diagram is encoded separately, then fed into a joint reasoning module together to obtain
cross-modal fusion of text and diagram features. A decoder utilizes fused multimodal features to generate the interpretable programs. In addition, we propose three auxiliary tasks to enhance feature representation and facilitate
multimodal reasoning.
encoder state of the text encoder hn, obtaining _F[˜]R_
as the final gathered multimodal feature vector.
**4.1.3** **Program Decoder**
The program decoder module generates the programs sequentially under the guidance of multimodal information. Concretely, we use a LSTM
decoder (Hochreiter and Schmidhuber, 1997) with
attention (Bahdanau et al., 2014) over the reasoning module output FR . Let {yt}(1 ≤ _t ≤_ _T_ ) be
the target program to be generated and st be the
hidden state of LSTM at time step t. _F[˜]R is fed into_
a linear layer to obtain the initial state s0. st is
concatenated with the attention result and fed to
a linear layer with the softmax function to predict
the distribution of the next program token Pt.
During training, the generation loss Lg is the
negative log-likelihood (NLL) of the target program:
_T_
_Lg(θ) = T[1]_ log Pt(yt|x, y1, ..., yt−1; θ),
_t=1_
X
where θ are the parameters of the entire NGS architecture except for the diagram encoder, x is
the input of both problem text and the extracted
diagram feature. When testing, the decoder only
observes the input text and diagram feature along
with the program parts that have been generated.
lems to jointly fuse and align the cross-modal
information. To this end, inspired by Yu et al.
(2019), we adopt a co-attention module to conduct
cross-modal joint reasoning with attention mechanism. The co-attention module consists of 12 selfattention (SA) units and 6 guided-attention (GA)
units, which fully fuse and align the text representation HP and the diagram representation HD.
_HP is first encoded by 6 self-attention units (i.e.,_
original Transformer), and the final hidden state
processed by the 6-th self-attention unit is used
as guiding information. Then, the guiding information is fed into another stacked 6 self-attention
units and 6 guided-attention units to achieve crossmodal semantic fusion and alignment. Finally, the
co-attention module outputs a cross-modal representation FD = [f1[D][;][ ...][;][ f]n[D][]][, which contains rich]
information over the problem text and diagram.
In this work, we find that text information
is more fundamental than diagram information.
Therefore, we further enhance the cross-modal representation with the help of textual information.
Specifically, we concatenate HP and FD to acquire an enhanced reasoning module output FR for
decoding programs.
Besides, an attentional reduction network with a
two-layer MLP is applied to aggregate feature FD
into _F[˜]D. Similarly, we concatenate_ _F[˜]D and the last_
-----
**4.1.4** **Program Executor**
After a beam of top N program sequences
_g1, ..., gn_ are generated from the program de_{_ _}_
coder, the executor computes them step by step.
When executing the program, the token sequence is
first divided into several parts based on the position
of operators in the program. Once a complete operation program has been decoded, each operator
in the program is executed sequentially to obtain
a numerical result. The execution process fails if
_gi has a grammar error (e.g., the number of aug-_
ments does not match the current operator), or the
executed value does not match any options in the
current problem. NGS adopts the first successfully
executed program as the predicted solution and
chooses the corresponding operation. If all N program sequences fail, the executor will report “no
result” directly instead of guessing an option. Fig. 1
shows the detailed step-by-step execution of a final
predicted program sequence.
**4.2** **Auxiliary Tasks**
**4.2.1** **Self-supervised Diagram Auxiliary Task**
Although our NGS can jointly fuse text feature and
diagram feature with a co-attention mechanism, a
powerful diagram encoder is needed to improve the
problem understanding and answer accuracy. To
obtain a high-quality diagram feature, we investigate two self-supervised auxiliary tasks, named as
**Jigsaw Location Prediction and Geometry Ele-**
**ments Prediction, to pretrain diagram encoder.**
**Jigsaw Location Prediction** In the Jigsaw location prediction task that enforces pixel-level perception, we first split a diagram as m × m blocks
and select the center block as the target. Then, we
shuffle other blocks randomly and train the diagram
encoder to predict the correct relative location between these shuffled blocks and the target using a
cross-entropy loss.
**Geometry Elements Prediction** For objectlevel understanding, we design a geometry elements prediction task that aims at training the diagram encoder to predict the geometry elements
appearing in the diagram. A diagram usually contains multiple geometry elements which are also
mentioned in the problem text and the solving explanation. We extract these geometry elements
from text as the label and deploy an N-way classifier with binary cross-entropy (BCE) as the loss
function to train the diagram encoder, where N is
the number of the possible geometry elements on
the training set.
**4.2.2** **Knowledge Points Prediction**
In addition to self-supervised diagram training, we
also propose another auxiliary learning task called
knowledge points prediction to enhance a problem’s overall representation by providing an extra
training signal. We summarize about 50 knowledge points for our GeoQA, and label each problem
with one or more knowledge points. We predict
the knowledge points for each problem based on
the gathered feature vector _F[˜]R outputted from the_
joint reasoning module. Different from the diagram
training, the knowledge points prediction task is
trained with NGS simultaneously. We also deploy
a K-way classifier with binary cross-entropy (BCE)
as the loss function to train the knowledge points
prediction multi-label task, where K is the total
number of the possible knowledge points on the
training set.
**5** **Experiments**
**5.1** **Experimental Setup and Training Details**
We conduct experiments on GeoQA dataset, and
adopt answer accuracy as the evaluation metric.
Although there is another available geometric problem dataset (Seo et al., 2015a), the limited data
scale (with only 67 training samples) makes it impossible to support neural network training. On
the other hand, previous geometry problem solving systems require additional inputs (e.g., OCR
and dependency parsing results) of the problem
(Ye et al., 2008; Seo et al., 2015b) or not release
their codes (Sachan and Xing, 2017; Sachan, 2020).
Therefore, they are not comparable on our GeoQA
dataset.
**Implementation Details: In this work, we im-**
plement the proposed method with Pytorch (Paszke
et al., 2017). The learning rate is 1e[−][3] and the
batch size is set to 32. All models are trained
around 100 epochs and optimized by Adam optimizer (Kingma and Ba, 2014). The beam size
is typically set to 10. When pretraining the diagram encoder, we first fill the diagram with a white
background to make it equal in length and width,
and resize it to 224 × 224. Then, we utilize the
diagram feature extracted by the encoder to predict
jigsaw location and geometry elements simultaneously and optimize the diagram encoder to obtain
an informative diagram feature with a learning rate
-----
|Method|Col2|Total (%)|Angle (%)|Length (%)|Other (%)|
|---|---|---|---|---|---|
|Human|Text-Only Text-Diagram|63.0 92.3|58.0 94.2|71.7 90.5|55.6 87.0|
|W/O Program|FiLM (Perez et al., 2017) RN (Santoro et al., 2017) MCAN (Yu et al., 2019)|32.8 38.2 39.5|33.6 42.2 43.2|32.9 34.3 36.0|25.9 27.8 29.6|
|Text-Only|Seq2Prog (Amini et al., 2019) BERT2Prog (Devlin et al., 2018)|54.2 54.4|66.4 65.7|38.5 41.0|42.6 37.0|
|Text-Diagram|BERT2Prog + Diagram Seq2Prog + Diagram NGS (Ours) NGS-Auxiliary (Ours)|52.5 53.4 57.4 60.0|65.7 62.4 68.6 71.5|37.1 44.5 43.5 48.8|31.5 31.5 44.4 29.6|
Table 3: The answer accuracy comparison on different test subsets of GeoQA dataset. “Human”, “W/O Program”,
“Text-Only”, and “Text-Diagram” refer to the performance of human, not using the program, using text modal only,
and conducting multimodal numerical reasoning on both text-diagram modals, respectively.
|Method|BS|Acc(%)|NR(%)|
|---|---|---|---|
|Seq2Prog + Diagram|1 10 100|33.8 53.4 59.9|54.0 19.5 3.91|
|NGS-Auxiliary|1 10 100|46.3 60.0 63.9|41.2 13.5 2.86|
Table 4: Performance comparison under different beam
size settings. BS, Acc, and NR represent beam size,
accuracy, and no result, respectively.
of 1e[−][5]. Finally, the loss weight of the knowledge
points classification task is set to 1, to promote the
overall understanding of problems.
**5.2** **Experimental Results**
We introduce three types of models here and test
them on our GeoQA. The performance comparison
with various methods on the different subset types
of GeoQA is reported in Table 3.
**Human Performance. We invite 10 students**
with a high score (top 1%) in the national university entry exam to answer these geometric problems. For each question, they first try to solve the
problem with the text only and draw a diagram by
themselves. Then, the actual diagram is given to
answer the complete question. The total time for
each question is limited to two minutes. When using text and diagram simultaneously, the human
performance is improved from 63.0% to 92.3%,
which indicates that humans are not good at solving problems using only text, but handling multimodal information successfully. The result shows
that there is still a huge gap between the existing
models and human experts, leaving large room for
future research.
**The effectiveness of program. “W/O Program”**
refers to not using programs and regarding GeoQA
as a classification problem similar to VQA. Therefore, we conduct experiments on three models with
multimodal reasoning capabilities, including FiLM,
RN, and MCAN. However, their performance results show that these methods fail to reason about
such complex geometric problems, achieving poor
performance on GeoQA. These results prove the
effectiveness and importance of our designed interpretable programs.
**The necessity of multi-modality. “Text-Only”**
means that models only use text to generate program sequences since humans can also understand
the intent of the question, draw diagram based
on the text, and solve the problem. Motivated
by Amini et al. (2019), we design a Sequence-toProgram (Seq2Prog) model using GRU encoder
with an attention mechanism. Moreover, we replace the encoder with BERT and get a stronger
BERT2Prog. By predicting our tailor-designed programs, these two methods are effective, while the
performance is not satisfactory enough. These results show that multimodal reasoning ability is indispensable when solving geometric problems.
**Multimodal numerical reasoning baselines.**
“Text-Diagram” refers to using text and diagram
simultaneously. We concatenate the text embedding with the diagram feature extracted by ResNet
(He et al., 2016) as the baseline methods, such
as Seq2Prog + Diagram and BERT2Prog + Diagram. These feature fusion methods do not have
the strong reasoning ability and fail to improve the
program decoding. For instance, by adding diagram, the performance of BERT2Prog + Diagram
declines from 54.4% to 52.5%, which may result
-----
A student saw a tree and their distance was 20m.
The reflection of the top of the tree in the water
was 5m away from him. The student's height is
1.7m, and the tree height is ()m.
A. 3.4 B. 5.1 C. 6.8 D. 8.5
**Answer: B. 5.1**
**Knowledge Points: Similar Triangles, Distance**
**Problem Solving Explanations:**
20-5=15m, By the nature of similar triangles, 5/15=1.7/H.
H=15÷5×1.7=5.1m. Thus, height is 5.1m.
**Annotated Program: Minus | N0 | N1| Proportion | V0 | N1 | N2**
**Baseline (No Result): Add | N2 | N1 | Proportion | N0 | V0 | N2**
**NGS (No Result): Proportion | N2 | N0 | N1**
**NGS-Auxiliary (Wrong): Proportion | N2 | N1 | N0**
⊙O is a circle with a radius of 1, the distance
from point O to the line L is 3. P is the moving
point on L, and PQ is the tangent to the circle.
PQRS is a square, and its minimum area is ().
A. 7 B. 8 C. 9 D. 10
**Answer: B. 8**
**Knowledge Points: Pythagorean Theorem, Tangent, Square**
**Problem Solving Explanations:**
When the square area is smallest, PQ = 𝑃𝑃𝑃𝑃[2] −𝑄𝑄𝑄𝑄[2] = 2 2 .
Thus, Area = PQ × PQ = 8
**Annotated Program: PythagoreanMinus | N1 | N0 | Multiply | V0 | V0**
**Baseline (No Result): Tan | N1 | CircleArea | N0**
**NGS (No Result): Add | N0 | N1 | Multiply | N0 | V0**
**NGS-Auxiliary (Right): PythagoreanMinus | N1 | N0 | Multiply | V0 | V0**
Figure 3: Typical cases. No Result represents the answer executed by the programs is not in the options, and
Wrong represents getting a wrong option. Baseline is a “Seq2Prog + Diagram” model. In the case on the left,
NGS-Auxiliary successfully predicts the knowledge of the Pythagorean theorem through auxiliary tasks and get
the right answer. For the case on the right, the problem is quite hard that current model cannot solve it.
on multimodal information.
**The effect of different beam size. In general,**
we set the beam size to 10 for testing. In this section, we explore the influence of different beam
size. After the searched sequence program is executed, there will be three situations: right answer,
wrong answer, and no result. As shown in Table. 4,
as the beam size is larger, we get higher accuracy
and a lower proportion of no result. When beam
size equals 1, the NGS-Auxiliary outperforms baselines significantly. Our model can achieve the highest accuracy of 63.9% when beam size is 100.
**5.3** **Ablation Study**
As shown in Fig. 4, we conduct experiments to evaluate the contribution of different auxiliary tasks.
We consider six different combinations: 1) only the
NGS; 2) NGS + Geometry Elements (NGS+GE);
3) NGS + Jigsaw Location (NGS+JL); 4) NGS +
Knowledge Points (NGS+KP); 5) NGS + diagrambased pretraining (NGS+JL+GE); 6) NGS with all
three auxiliary tasks (NGS-Auxiliary). We can see
that all three auxiliary tasks can promote the performance of NGS. The accuracy gains of GE, JL, KP,
JL + GE, and combining all three tasks are 0.5%,
0.6%, 0.9%, 1.5%, 2.6%, respectively. These results show that all our self-supervised and auxiliary
tasks can enhance the comprehensive understanding and multimodal reasoning ability of NGS.
**5.4** **Case Analysis**
As shown in Fig. 3, we select two typical cases to
demonstrate the programs generated by different
models and some representative errors.
In the left case, the knowledge of the
Figure 4: Ablation study on different auxiliary components. ‘+’ represents we add the auxiliary component.
NGS-Auxiliary means that adding all three auxiliary
tasks together.
from the extra diagram that disturbs the text pretraining model. Note that multimodal pretraining
models (Lu et al., 2019; Li et al., 2020) cannot be
applied to geometric problems, since these models
are based on Faster-RCNN to extract object-level
features from natural images.
**The effectiveness of our methods. Our pro-**
posed NGS shows a relatively-good performance
compared to the various models mentioned above.
When adding all three auxiliary tasks together
to enhance NGS solver, our NGS-Auxiliary with
multimodal reasoning ability becomes the existing best-performing method (60.0%) on GeoQA
dataset. It also achieves the highest accuracy on all
types of problems. For example, compared with
Seq2Prog+Diagram, NGS-Auxiliary obtains an
9.1% performance improvement on the angle type
problems. Compared with other “Text-Diagram”
baselines, our model is effective when reasoning
-----
**References**
Unaiza Ahsan, Rishi Madhok, and Irfan Essa. 2019.
Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition. In 2019
_IEEE Winter Conference on Applications of Com-_
_puter Vision (WACV), pages 179–189. IEEE._
Aida Amini, Saadia Gabriel, Peter Lin, Rik KoncelKedziorski, Yejin Choi, and Hannaneh Hajishirzi.
2019. Mathqa: Towards interpretable math word
problem solving with operation-based formalisms.
In Proceedings of the 2019 Conference of the North
_American Chapter of the Association for Computa-_
_tional Linguistics, pages 2357–2367._
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. _arXiv preprint_
_arXiv:1409.0473._
Richa Bajaj and Vidushi Sharma. 2018. Smart education with artificial intelligence based determination of learning styles. Procedia computer science,
132:834–842.
Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong
Zhang. 1996. Automated generation of readable
proofs with geometric invariants. Journal of Auto_mated Reasoning, 17(3):325–347._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Carl Doersch and Andrew Zisserman. 2017. Multi-task
self-supervised visual learning. In Proceedings of
_the IEEE International Conference on Computer Vi-_
_sion, pages 2051–2060._
Herbert Gelernter, James R Hansen, and Donald W
Loveland. 1960. Empirical explorations of the geometry theorem machine. In Papers presented at the
_May 3-5, 1960, western joint IRE-AIEE-ACM com-_
_puter conference, pages 143–149._
Yash Goyal, Tejas Khot, Douglas Summers-Stay,
Dhruv Batra, and Devi Parikh. 2017. Making the
v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceed_ings of the IEEE Conference on Computer Vision_
_and Pattern Recognition, pages 6904–6913._
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on
_computer vision and pattern recognition, pages 770–_
778.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997.
Long short-term memory. _Neural computation,_
9(8):1735–1780.
Pythagorean theorem, tangent, and square are required for solving the problem. Baseline method
and our NGS fail to generate the correct operations
and get no result. However, our NGS-Auxiliary
successfully predicts the use of knowledge in the
proposed auxiliary task. And more importantly, the
correct ”PythagoreanMinus” program is generated,
and the right answer is obtained.
The right one is a typical error case, in which
model needs to understand a complex scene. Although NGS-Auxiliary has predicted the knowledge points of Proportion program correctly, all
three models fail to predict the correct answer. A
better multimodal method is required to handle this
hard high-level reasoning task in the future.
**6** **Conclusion**
In this work, we focus on the geometric problem
and propose the first large-scale geometric question answering dataset “GeoQA”, containing 4,998
problems with program annotation. Besides, we
propose a deep neural baseline, named as Neural
Geometric Solver (NGS), to solve a geometric problem by jointly reasoning over multimodal data and
generating interpretable programs. We further propose multiple novel auxiliary tasks to enhance the
semantic representation of text and diagram. Extensive experimental results and analyses show that
our GeoQA is challenging, and our NGS-Auxiliary
outperforms other methods on GeoQA.
**7** **Ethical Impact**
We collected GeoQA from two online education
websites, which is only used for academic research,
and the copyright belongs to the original websites.
This work may inspire research in the field of multimodal numerical reasoning.
**Acknowledgements** This work was supported
in part by National Key R&D Program of
China under Grant No.2020AAA0109700,
National Natural Science Foundation of
China (NSFC) under Grant No.U19A2073
and No.61976233, Guangdong Province Basic and Applied Basic Research (Regional
Joint Fund-Key) Grant No.2019B1515120039,
Shenzhen Fundamental Research Program
(Project No.RCYX20200714114642083 and
No.JCYJ20190807154211365), Zhijiang Lab’s
Open Fund (No.2020AA3AB14), and CSIG Young
Fellow Support Fund.
-----
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
887–896. Association for Computational Linguistics.
Justin Johnson, Bharath Hariharan, Laurens Van
Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and
Ross Girshick. 2017. Clevr: A diagnostic dataset
for compositional language and elementary visual
reasoning. In Proceedings of the IEEE Conference
_on Computer Vision and Pattern Recognition, pages_
2901–2910.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
_arXiv:1412.6980._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52th Annual Meeting of the Association for Compu-_
_tational Linguistics, volume 1, pages 271–281._
Christian Ledig, Lucas Theis, Ferenc Husz´ar, Jose
Caballero, Andrew Cunningham, Alejandro Acosta,
Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. 2017. Photo-realistic single image super-resolution using a generative adversarial
network. In Proceedings of the IEEE conference
_on computer vision and pattern recognition, pages_
4681–4690.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and
Daxin Jiang. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. In Proceedings of the AAAI Conference
_on Artificial Intelligence, volume 34, pages 11336–_
11344.
Jinjiao Lin, Haitao Pu, Yibin Li, and Jian Lian. 2018.
Intelligent recommendation system for course selection in smart education. _Procedia Computer Sci-_
_ence, 129:449–453._
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan
Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language
tasks. arXiv preprint arXiv:1908.02265.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B
Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes,
words, and sentences from natural supervision. In_ternational Conference on Learning Representa-_
_tions._
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceed_ings of the 58th Annual Meeting of the Association_
_for Computational Linguistics, pages 975–984._
Alejandro Newell and Jia Deng. 2020. How useful is
self-supervised pretraining for visual tasks? In Pro_ceedings of the IEEE/CVF Conference on Computer_
_Vision and Pattern Recognition, pages 7345–7354._
Mehdi Noroozi and Paolo Favaro. 2016. Unsupervised
learning of visual representations by solving jigsaw
puzzles. In European Conference on Computer Vi_sion, pages 69–84. Springer._
Adam Paszke, Sam Gross, Soumith Chintala, Gregory
Chanan, Edward Yang, Zachary DeVito, Zeming
Lin, Alban Desmaison, Luca Antiga, and Adam
Lerer. 2017. Automatic differentiation in pytorch.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue,
Trevor Darrell, and Alexei A Efros. 2016. Context
encoders: Feature learning by inpainting. In Pro_ceedings of the IEEE conference on computer vision_
_and pattern recognition, pages 2536–2544._
Ethan Perez, Florian Strub, Harm De Vries, Vincent
Dumoulin, and Aaron Courville. 2017. Film: Visual
reasoning with a general conditioning layer. arXiv
_preprint arXiv:1709.07871._
Mrinmaya Sachan. 2020. Knowledge graph embedding compression. In Proceedings of the 58th An_nual Meeting of the Association for Computational_
_Linguistics, pages 2681–2691._
Mrinmaya Sachan, Avinava Dubey, Eduard H Hovy,
Tom M Mitchell, Dan Roth, and Eric P Xing. 2020.
Discourse in multimedia: A case study in extracting geometry knowledge from textbooks. Computa_tional Linguistics, 45(4):627–665._
Mrinmaya Sachan, Kumar Dubey, and Eric Xing. 2017.
From textbooks to knowledge: A case study in
harvesting axiomatic knowledge from textbooks to
solve geometry problems. In Proceedings of the
_2017 Conference on Empirical Methods in Natural_
_Language Processing, pages 773–784._
Mrinmaya Sachan and Eric Xing. 2017. Learning
to solve geometry problems from natural language
demonstrations in textbooks. In Proceedings of the
_6th Joint Conference on Lexical and Computational_
_Semantics (* SEM 2017), pages 251–261._
Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia,
and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances
_in neural information processing systems, pages_
4967–4976.
Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, and
Oren Etzioni. 2014. Diagram understanding in geometry questions. In Twenty-Eighth AAAI Confer_ence on Artificial Intelligence._
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Etzioni, and Clint Malcolm. 2015a. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 Conference on
-----
_Empirical Methods in Natural Language Processing,_
pages 1466–1476.
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Etzioni, and Clint Malcolm. 2015b. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 Conference
_on Empirical Methods in Natural Language Pro-_
_cessing, pages 1466–1476. Association for Compu-_
tational Linguistics.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018. Translating a math word
problem to a expression tree. In Proceedings of the
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 1064–1069. Associa-_
tion for Computational Linguistics.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854. Association for Computational Linguistics.
Wu Wen-Tsun. 1986. Basic principles of mechanical
theorem proving in elementary geometries. Journal
_of automated Reasoning, 2(3):221–252._
Zhipeng Xie and Shichao Sun. 2019. A goal-driven
tree-structured neural model for math word problems. In Proceedings of the Twenty-Eighth In_ternational Joint Conference on Artificial Intelli-_
_gence, IJCAI-19, pages 5299–5305. International_
Joint Conferences on Artificial Intelligence Organization.
Zheng Ye, Shang-Ching Chou, and Xiao-Shan Gao.
2008. An introduction to java geometry expert. In
_International Workshop on Automated Deduction in_
_Geometry, pages 189–195. Springer._
Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba,
Pushmeet Kohli, and Joshua B Tenenbaum. 2018.
Neural-symbolic vqa: Disentangling reasoning from
vision and language understanding. arXiv preprint
_arXiv:1810.02338._
Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and
Qi Tian. 2019. Deep modular co-attention networks
for visual question answering. In Proceedings of
_the IEEE conference on computer vision and pattern_
_recognition, pages 6281–6290._
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using
quadratic programming. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 817–822. Association for_
Computational Linguistics.
-----
| [
"Jiaqi, Chen",
"Jinghui, Qin",
"Jianheng, Tang",
"Liang, Lin",
"Lingbo, Liu",
"Eric P., Xing",
"Xiaodan, Liang"
] | 2021-01-01T00:00:00 | ACL 2021 Findings | false | 86 | 7 | null | https://arxiv.org/abs/2105.14517 | https://arxiv.org/abs/2105.14517 | https://www.semanticscholar.org/paper/291133a657498920451481d3bf784ebbafda8d6e |
Progressive-Hint Prompting Improves Reasoning in Large Language Models | The performance of Large Language Models (LLMs) in reasoning tasks depends heavily on prompt design, with Chain-of-Thought (CoT) and self-consistency being critical methods that enhance this ability. However, these methods do not fully exploit the answers generated by the LLM to guide subsequent responses. This paper proposes a new prompting method, named Progressive-Hint Prompting (PHP), that enables automatic multiple interactions between users and LLMs by using previously generated answers as hints to progressively guide toward the correct answers. PHP is orthogonal to CoT and self-consistency, making it easy to combine with state-of-the-art techniques to further improve performance. We conducted extensive and comprehensive experiments on seven benchmarks. The results show that PHP significantly improves accuracy while remaining highly efficient. For instance, with text-davinci-003, we observed a 4.2% improvement on GSM8K with greedy decoding compared to Complex CoT, and a 46.17% reduction in sample paths with self-consistency. With GPT-4 and PHP, we achieve state-of-the-art performances on SVAMP (89.1% -> 91.9%), GSM8K (92% -> 95.5%), AQuA (76.4% -> 79.9%) and MATH (50.3% -> 53.9%). | This paper proposes a new prompting method, named Progressive-Hint Prompting (PHP), that enables automatic multiple interactions between users and LLMs by using previously generated answers as hints to progressively guide toward the correct answers. | ## Progressive-Hint Prompting Improves Reasoning in Large Language Models
**Chuanyang Zheng[1], Zhengying Liu[2], Enze Xie[2], Zhenguo Li[2], Yu Li[1]**
Chinese University of Hong Kong, Huawei Noah’s Ark Lab
```
{cyzheng21, liyu}@cse.cuhk.edu.hk, {liuzhengying2, xie.enze, li.zhenguo}@huawei.com
https://github.com/chuanyang-Zheng/Progressive-Hint
```
**Abstract**
The performance of Large Language Models (LLMs) in reasoning tasks depends
heavily on prompt design, with Chain-of-Thought (CoT) and self-consistency
being critical methods that enhance this ability. However, these methods do not
fully exploit the answers generated by the LLM to guide subsequent responses.
This paper proposes a new prompting method, named Progressive-Hint Prompting
(PHP), that enables automatic multiple interactions between users and LLMs by
using previously generated answers as hints to progressively guide toward the
correct answers. PHP is orthogonal to CoT and self-consistency, making it easy
to combine with state-of-the-art techniques to further improve performance. We
conducted extensive and comprehensive experiments on seven benchmarks. The
results show that PHP significantly improves accuracy while remaining highly
efficient. For instance, with text-davinci-003, we observed a 4.2% improvement on
GSM8K with greedy decoding compared to Complex CoT, and a 46.17% reduction
in sample paths with self-consistency. With GPT-4 and PHP, we achieve state-ofthe-art performances on SVAMP (89.1% → 91.9%), GSM8K (92% → 95.5%),
AQuA (76.4% → 79.9%) and MATH (50.3% → 53.9%).
**1** **Introduction**
While Large Language Models (LLMs) have demonstrated remarkable performance across various
NLP tasks [1–3], their ability to reason is often perceived as a limitation that cannot be overcome
merely by increasing the scale of the model [4, 5]. Prompt engineering in large-scale models has
shown comparable or superior performance to full training set fine-tuning in enhancing reasoning
ability, while also being significantly more sample-efficient [6, 7]. One area of research that aims
to address this limitation is the use of Chain-of-Thought (CoT) approaches to promote intermediate
reasoning steps [8–10]. Other works in this area, such as Least-to-Most [9] and Complex CoT [10],
have also explored this direction. Another area of research is self-consistency-related approaches. In
comparison to CoT-related work that focuses on designing better prompts, self-consistency proposes
to sample multiple answers from the LLMs and arrive at the correct answer through a majority
vote [10]. This approach is further improved upon by complex-based selection [10]. CoT-related and
self-consistency-related works can be seamlessly combined without any conflict.
Prior research has not explored the potential of leveraging the outputs of LLM to refine reasoning
paths iteratively. It stands to reason that similar to human cognition, LLM could benefit from
reevaluating and adjusting its generated reasoning paths in order to correct errors and enhance overall
performance. In this paper, we propose a new method named Progressive-Hint Prompting (PHP) that
involves sequentially interacting with LLM to approach the correct answer gradually. The method
operates as follows: (1) given a question, we ask the LLM to provide a Base Answer; (2) we combine
the question and answer to re-ask the LLM and obtain the Subsequent Answer; (3) we repeat the
Preprint. Under review.
-----
Base Prompting
**Q: There are 96 fourth-graders at Small Tree School. 43 of** **A: There were 96 fourth-graders at Small Tree School. 43 of**
them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth grade boys were LLM Base them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls at school on Friday, and 96 at Small Tree School on Friday? Answer 38 = 58 boys at school on Friday. The answer is 58.
**A: We know the Answer Hints: 58. With the Answer**
Progressive-Hint Prompting **Hints: 58, we will answer the question. There were 96**
fourth-graders at Small Tree School, and 43 of them were
**Q: There are 96 fourth-graders at Small Tree School. 43 of**
girls.On Friday, 5 girls and 4 boys were absent.So there
them are girls. On Friday, 5 fourth-grade girls and 4 fourthgrade boys were absent. How many fourth grade boys were LLM Second were 43 - 5 = 38 girls and 96 - 43 = 53 boys at Small Tree
School on Friday.After subtracting the absent students,
at Small Tree School on Friday? (Hint: The answer is near Answer
there were 38 - 4 = 34 girls and 53 - 5 = 48 boys. The
**to 58).**
answer is 48.
Progressive-Hint Prompting **A: We know the Answer Hints: 58, 48. With the Answer**
**Q: There are 96 fourth-graders at Small Tree School. 43 of** **Hints: 58, 48, we will answer the question. There were 96**
them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth grade boys were LLM Third fourth-graders, and 43 of them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls and
at Small Tree School on Friday? (Hint: The answer is near Answer 96 - 9 = 87 students in total. Of those 87 students, 38 were
**to 58, 48).** girls, so there must have been 87 - 38 = 49 boys. The
answer is 49.
Progressive-Hint Prompting **A: We know the Answer Hints: 58, 48, 49. With the**
**Q: There are 96 fourth-graders at Small Tree School. 43 of** **Answer Hints: 58, 48, 49, we will answer the question.**
them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth grade boys were LLM Fourth There were 96 fourth-graders at Small Tree School, and 43 of them were girls. On Friday, 5 girls and 4 boys were
at Small Tree School on Friday? (Hint: The answer is near Answer absent. So there were 43 - 5 = 38 girls and 96 - 43 = 53
**to 58, 48, 49).** boys originally. After the absences, there were 38 - 5 = 33
girls and 53 - 4 = 49 boys. The answer is 49.
Figure 1: Our proposed Progressive-Hint Prompting method combines the generated answers and
questions for double-checking purposes, which is divided into two stages. In the first stage, we
generate a base answer by passing to the LLM a concatenation of the current question and a base
**prompt, such as CoT or Complex CoT. In the second stage, we generate the subsequent answers via**
the corresponding progressive-hint prompt, such as Progressive-Hint Prompting CoT (PHP-CoT)
or Progressive-Hint Prompting Complex CoT (PHP-Complex CoT), for the subsequent interaction.
The interaction stops when two consecutive answers are the same. Purple Box: The input of LLM.
Orange Box: The output of LLM.
operation in (2) until the answer is stable and does not change over the last two answers. PHP follows
a human-like thought process where previous answers are leveraged as hints to arrive at the correct
answer after re-evaluating the question.
Figure 1 illustrates the proposed PHP framework. We use the base prompt to obtain the initial base
answer, and then employ the PHP prompt for subsequent questions. If the current answer matches the
previous answer, it is more likely to be correct, and we terminate the LLM inquiry. With Complex
CoT and GPT-4, after adding PHP, the performance achieves SOTA with 91.9% on SVAMP [11],
95.5% on GSM8K [12], and 79.9% on AQuA [13] and 53.9% on MATH [14].
In summary, our contributions are as follows:
- We propose a new method, Progressive-Hint Prompting (PHP), alongside CoT and selfconsistency, for improving LLM reasoning abilities.
- We demonstrate the effectiveness of PHP through extensive experimentation, including
baseline comparisons and ablation studies, using four LLMs, text-davinci-002 and textdavinci-003, GPT-3.5-Turbo and GPT-4 [15–17].
- The experiment results show that our method can also improve performance with selfconsistency.
- We believe that progressive-hint prompting represents an important step towards automatic
sequential interaction with LLMs and hope that it inspires future research in this field.
**2** **Related Work**
**Emergent Abilities and Multi-Step Reasoning. LLMs are particularly skilled at in-context learning,**
which involves adhering to the structure of prompts (typically few-shot) and completing corresponding
tasks [15, 18–20]. Among the diverse range of language comprehension tasks, we are particularly
-----
interested in multi-step reasoning because it exhibits two unique features. Firstly, LLMs significantly
outperform smaller models on multi-step reasoning tasks [8], whereas their performance gains on
tasks like sentiment classification can be limited [19]. Secondly, few-shot prompting outperforms full
training set fine-tuning in multi-step reasoning tasks, even when conducted on LLMs [7].
**Chain-of-Thought Reasoning. Chain-of-thought (CoT) prompting [8] is a prominent work that**
demonstrates the multi-step reasoning capacities of LLMs. This approach suggests that the reasoning
ability can be elicited through a chain of thoughts, where an answer directly follows a question
without intermediate reasoning steps. Least-to-Most prompting [9], which follows the same research
direction, divides reasoning into problem breakdown parts and problem answer parts and describes
the reasoning steps in more detail. Similarly, the complex CoT [10] highlights the importance of
prompt complexity and selects the most complex questions and their answers as prompts. To reduce
the human workload, the Auto-CoT is proposed [21]. Other works have found that using specific
phrases like "Let’s think step by step" [6] can improve performance.
**Reasoning Path Extraction. Previous research has investigated various task-specific methods for**
identifying reasoning paths, including constructing semantic graphs [22], developing Recurrent
Neural Network (RNN) models to retrieve reasoning paths from a Wikipedia graph [23], using
human-annotated reasoning paths on math problems for fine-tuning [12], or training an extractor with
heuristic-based pseudo reasoning paths [24]. A novel research work, named Self-Consistency [25],
couples the generation of reasoning paths and a final answer by sampling from the decoder and
using aggregation to retrieve the most consistent answer without extra modules. This approach has
shown great promise, and it has the potential to outperform existing methods in terms of accuracy.
Furthermore, the vote complex [10], rank and select samples via complexity, is proposed to improve
the self-consistency performance. This approach is particularly useful when dealing with complex
reasoning problems.
**3** **Progressive-Hint Prompting**
Table 1: Illustration of Progressive-Hint Prompting. Blue Color: The difference between Base CoT
and PHP-CoT. Red Color: The handcrafted Hint in the designed prompt.
**Base Prompting (e.g. CoT)**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today?
A: There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more.
So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.
**Progressive-Hint Prompting 1: Hint is the correct answer**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is
near to 6).
A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. There are
15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there
must have been 21 - 15 = 6 trees that were planted. The answer is 6.
**Progressive-Hint Prompting 2: Hint is the incorrect answer**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is
near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question.
There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more.
So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.
One salient aspect of humanity is our ability to not only think once, but to also double-check
our answers. In this paper, we propose that this process can be simulated in language models by
sequentially employing previous answers. In other words, a model can generate an answer and then
combine this with the question for the next round of thinking. If the current answer is the same as the
previous one, we can have confidence that the current answer is correct.
-----
We have shown the Proposed Interaction in Figure 1 and Prompt Design in Table 1. We demonstrate
the process of generating PHP-CoT prompts for a given CoT prompt in Table 1 and provide the
complete prompt in the Appendix. Our pipeline is divided into two stages: (i) base answer & base
**prompt: the generation of the base answer via base prompts such as CoT or Complex CoT and**
(ii) subsequent answer & PHP: the subsequent interaction with the LLMs through corresponding
progressive-hint prompts like Progressive-Hint Prompting CoT (PHP-CoT) or Progressive-Hint
Prompting Complex CoT (PHP-Complex CoT). We propose a two-sentence structure for the PHP,
consisting of a phrase indicating the proximity of the answer at the question part followed by a
sentence rehearsing hints at answer part. For instance, to create a PHP prompt from a CoT prompt,
we first add the phrase "The answer is near to A1, ..., Ap" after the initial question, where A1, ..., Ap
represent possible answers. Next, we introduce the hints in the beginning sentence of the potential
answers: "We know the Answer Hints: A1, ..., Ap. With the Answer Hints: A1, ..., Ap, we will
answer the question.".
**PHP Design Principle: we should consider various situations of hints. When we ask LLM questions,**
we do not know what the answer will be so the hints are unknown. In this prompt design, we consider
the following two potential situations: 1) The hints are the same as the correct answer: to be sure that
the model can still get the correct answer when the hint is correct; 2) hints are not the same as the
correct answer: to be sure that the model can jump out of the incorrect answer.
Adhering to the above guidelines, we utilize the Standard prompt, CoT prompt, and Complex CoT
prompt to generate initial base answers, from which we can then develop the subsequent answer
generation prompts, namely, PHP-Standard prompt, PHP-CoT prompt, and PHP-Complex CoT
prompt, respectively. The stopping criterion in PHP is reached when two consecutive responses are
identical, signaling the end of the interactive exchange.
Overall, this method represents a pipeline for improving the quality of responses and enhancing
communication during question-answer scenarios.
**4** **Experiments**
**Datasets and Models. We evaluate PHP on seven datasets (AddSub [26], MultiArith [27], Sin-**
gleEQ [28], SVAMP [11], GSM8K [12], AQuA [13]) and MATH [14]. We choose the datasets
because we focus on the reasoning ability of the model. The utilized for both the Standard and CoT
prompts are sourced from the original CoT paper [8], whereas the prompt utilized for the Complex
CoT [10] prompt is derived from the corresponding Complex CoT publication. Also, to validate
our proposed method performance, we employ four models: text-davinci-002 and text-davinci-003,
GPT-3.5-Turbo and GPT-4 [15–17]. All models are employed via OpenAI API key.
**Prompts. We have shown the proposed process pipeline in the Method part. We show all the prompts**
in the Appendix and supplementary materials.
**4.1** **Main Results**
The main results of our study are presented in Table 2, with all methods using greedy decoding (i.e.
temperature = 0). Our findings indicate that the proposed PHP improves performance, particularly
when working with powerful prompts and models.
**PHP works better when the LLM is more powerful. In terms of model power, our analysis indicates**
that PHP is most effective when applied with powerful models. Specifically, when examining CoT
and Complex CoT prompts, we found that while text-davinci-002 generally yielded a performance
improvement after adding hints, there were occasions when performance would decline. However,
when we replaced text-davinci-002 with text-davinci-003, performance improvement became more
consistent and significant. For example, PHP-Complex CoT using text-davinci-002 improved performance by 3.6%, but then increased further to 4.6% with text-davinci-003. Similarly, on AQuA dataset,
using PHP-Complex CoT resulted in a performance drop of 0.4% with text-davinci-002 but a 1.2%
improvement with text-davinci-003. The text-davinci-002 is finetuned with supervised instruction tuning, while the text-davinci-003 is finetuned with reinforcement learning. The improved performance
with text-davinci-003 can be attributed to its enhanced power, making it better at understanding and
employing the given hint.
-----
Table 2: PHP, when applied to different LLMs and prompting methods, can help to improve the
performance. Meanwhile, PHP works better when the model and prompt are more powerful. The
results are with greedy decoding.
Dataset
Prompt PHP Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
✗ 79.4 34.0 80.7 64.8 15.1 25.5 49.91
Standard [8]
✓ 80.5 31.8 79.9 64.2 14.7 25.5 49.43
(+1.1) (-2.2) (-0.8) (-0.6) (-0.4) (0.0) (-0.48)
GPT-3.5
text-davinci-002
GPT-3.5
text-davinci-003
✗ 85.8 89.1 89.7 72.9 49.5 44.4 71.89
CoT [8]
✓ 86.8 89.0 90.1 72.3 51.1 45.6 72.48
(+1.0) (-0.1) (+0.4) (-0.6) (+1.6) (+1.2) (+0.59)
✗ 82.5 89.8 87.7 70.4 57.6 37.4 70.89
Complex CoT [10]
✓ 83.7 90.1 89.9 74.6 61.2 37.0 72.75
(+1.2) (+0.3) (+2.2) (+4.2) (+3.6) (-0.4) (+1.86)
✗ 89.1 36.3 83.8 68.7 15.9 28.3 53.68
Standard [8]
✓ 89.1 36.0 83.6 68.7 16.0 28.3 53.61
(0.0) (-0.3) (-0.2) (0.0) (+0.1) (0.0) (-0.07)
✗ 90.6 93.6 92.7 81.0 56.1 44.0 76.33
CoT [8]
✓ 91.1 94.0 93.5 81.3 57.5 44.4 76.96
(+0.5) (+0.4) (+0.8) (+0.3) (+1.4) (+0.4) (+0.63)
✗ 86.3 94.8 91.5 77.4 67.0 48.8 77.63
Complex CoT [10]
✓ 88.1 95.0 94.0 80.0 71.6 50.0 79.78
(+1.8) (+0.2) (+2.5) (+2.6) (+4.6) (+1.2) (+2.15)
Standard CoT Complex CoT
2.12 3.2 3.2
2.10 TText-Davinci-002ext-Davinci-003 3.0 TText-Davinci-002ext-Davinci-003 3.0 TText-Davinci-002ext-Davinci-003
2.08 2.8 2.8
2.06 2.6 2.6
2.04 2.4 2.4
2.02 2.2 2.2
Interaction Number
2.00 2.0 2.0
AddSubMultiArithSingleEQSVAMPGSM8KAQuA AddSubMultiArithSingleEQSVAMPGSM8KAQuA AddSubMultiArithSingleEQSVAMPGSM8KAQuA
Figure 2: The Interaction Number refers to the frequency at which we need to consult the LLM
until we receive conclusive responses. With an analysis of various models and prompts, it has been
observed that: 1) A stronger model leads to a decreased interaction number; 2) An improved prompt
results in an increased interaction number.
**PHP works better when the prompt is more powerful. After analyzing our data, it was determined**
that the prompt’s power has a significant impact on the performance of the system. Our experimental
results revealed that while the inclusion of PHP produced modest improvements with standard
prompts, CoT and Complex CoT prompts demonstrated substantial gains in performance. Particularly
noteworthy is the fact that the most potent prompt, Complex CoT, exhibited the most substantial
performance improvement in comparison to the Standard prompt and CoT prompt. This finding
provides compelling evidence that a superior prompt leads to greater effectiveness of the system.
**The Interaction Number decreases when the model is more powerful and the prompt is less**
**powerful. The number of interactions refers to how many times the agent engages with the LLMs.**
The interaction number is one when the agent receives the first answer and increases to two for the
second answer. In Figure 2, we illustrate the interaction number of various models and prompts. Our
findings indicate that: 1) The interaction number for text-davinci-003 is typically lower than that
of text-davinci-002 when given the same prompt. This is primarily due to the higher accuracy of
text-davinci-003, resulting in a higher probability of the base answer and subsequent answers being
correct, thus requiring fewer interactions to obtain the final correct answer; 2) When using the same
models, the interaction number generally increases as the prompt becomes more powerful. This is
because the LLMs achieves better reasoning ability when the prompt becomes more potent, allowing
them to leverage the hints to jump out of the incorrect answers, and ultimately leading to a higher
number of interactions required to reach the final answer.
-----
**4.2** **Impact of the Hint Quality**
Table 3: Performance with different Base Answers. Initially, the base prompt provides base answers
to the model and PHP generates the subsequent answers. The results are from text-davinci-003 with
greedy decoding.
Dataset
PHP Base Prompt Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
Standard [8] 89.1 36.0 83.6 68.7 16.0 28.3 53.61
PHP-Standard CoT [8] **92.4** 80.5 92.1 **78.5** 50.2 42.5 72.70
Complex CoT [10] 90.6 **80.6** **92.9** 77.2 **60.3** **45.6** **74.53**
Standard [8] 90.8 92.5 90.7 80.2 52.3 40.9 74.56
PHP-CoT CoT [8] **91.1** 94.0 93.5 **81.3** 57.5 44.4 76.96
Complex CoT [10] 90.6 **96.8** **93.7** 81.2 **62.6** **50.0** **79.14**
Standard [8] 88.3 80.1 93.3 80.4 65.5 35.4 73.83
PHP-Complex CoT CoT [8] **88.8** **95.6** **94.8** **81.4** 70.6 45.6 79.46
Complex CoT [10] 88.1 95.0 94.0 80.0 **71.6** **50.0** **79.78**
**The quality of the hint significantly affects the performance. Shown in Table 3, to enhance the**
PHP-Standard, replacing the base prompt Standard with Complex CoT or CoT leads to a significant
improvement in the final performance. For PHP-Standard, we observe that GSM8K performance
amplifies from 16.0% with base prompt Standard to 50.2% with base prompt CoT and 60.3% with
base prompt Complex CoT. Conversely, replacing the base prompt Complex CoT with Standard will
reduce the final performance. For example, after replacing base prompt Complex CoT with Standard,
the performance of PHP-Complex CoT drops from 71.6% to 65.5% on GSM8K dataset.
**Performance may further improve if PHP is not designed from the corresponding base prompt.**
The results indicate that the CoT with PHP-Complex CoT achieved a high accuracy rate of 96.8%
on the MultiArith dataset, surpassing the performance of the CoT with PHP-CoT. Similarly, the
Complex CoT with PHP-CoT demonstrated a notable accuracy rate of 95.6% on the same dataset,
outperforming the Complex CoT with PHP-Complex CoT. The rationale behind these findings is
twofold: 1) the performance of CoT and Complex CoT are similar on all six datasets, and 2) since
the Base answer is provided by CoT (or Complex CoT) and the subsequent answer is based on
PHP-Complex CoT (or PHP-CoT), it is comparable to having two individuals collaborating to solve
a problem. Therefore, in such circumstances, the system’s performance may be further enhanced.
**4.3** **Ablation Study**
Furthermore, we conducted an ablation study to verify the criticality of the two sentences in answers:
1) P1: We know the Answer Hints A1, ..., Ap; 2) P2: With the Answer Hints A1, ..., Ap, we will
answer the question. Moreover, we introduced a new type of prompt called CoT-Merge and Complex
CoT-Merge. Firstly, we combined the original prompt with the PHP prompt into a single file.
Subsequently, we utilized the same Merge Prompt for both the base answer and subsequent answers.
Also, we prove that both correct and incorrect hints are necessary in prompt design.
**The Proposed P1 and P2 are necessary. Incorporating the sentences P1 and P2 resulted in better**
performance for CoT with PHP across three of the six datasets. However, the significance of these
two sentences became particularly apparent when we employed Complex CoT. With this method,
better performance was achieved on five of the six datasets after adding P1 and P2. For instance,
Complex CoT improved its performance from 78.0% to 80.0% on the SVAMP dataset, and from
68.3% to 71.6% on the GSM8K dataset. This highlights that sentences P1 and P2 can exhibit more
potent abilities, particularly when the model’s logical capacity is superior. Consequently, we can
conclude that P1 and P2 will likely enhance model performance to a greater extent, particularly with
more powerful prompts and models.
**Non-Merge based PHP is better than merge based PHP when prompts are more powerful. Re-**
garding the CoT with PHP-CoT, the initial answer is derived from the CoT prompt, and subsequently,
the answer is obtained from the PHP-CoT. Notably, compared to other CoT-base methods, CoTMerge achieves the best performance. However, compared to other Complex CoT-based methods,
we observe that non-merge PHP-Complex CoT with both P1 and P2 achieves the best performance.
-----
Table 4: Ablation Study. CoT-Merge: for the CoT base prompt and the PHP-CoT prompt, we
employ the prompt that contains both base prompt and the PHP. P1: We know the Answer Hints
_A1, ..., Ap. P2: With the Answer Hints A1, ..., Ap, we will answer the question. According to the_
experiment results, we see that both the proposed P1 and P2 are necessary. Meanwhile, non-merge
based method is better than merge based method when prompts are more powerful. The results are
from text-davinci-003 with greedy decoding.
Dataset
Method P1 P2 Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
CoT-Merge ✓ ✓ **91.3** **94.6** 93.1 79.5 58.6 **50.0** **77.85**
91.1 93.5 93.3 80.0 58.1 44.8 76.80
90.8 93.1 92.9 80.7 **58.8** 43.7 76.66
**91.3** 93.8 93.5 80.5 58.2 46.4 77.28
91.1 94.0 **93.5** **81.3** 57.5 44.4 76.96
CoT [8]
Complex CoT-Merge ✓ ✓ **88.8** 94.3 94.6 78.1 70.2 46.8 78.80
87.8 93.3 93.7 78.0 68.3 **50.3** 78.56
87.8 **95.1** 94.2 78.5 70.5 48.4 79.08
88.3 94.3 **94.6** 79.1 69.3 46.8 78.73
88.1 95.0 94.0 **80.0** **71.6** 50.0 **79.78**
Complex CoT [10]
Table 5: Analysis of Hint Design (Shown in Figure 1). Correct: The hints of designed prompt are the
same as the correct answers. Incorrect: The hints of the designed prompt are the incorrect answers.
Green: The performance is better than without progressive-hint. Red: The performance is worse than
without progressive-hint. The results are from text-davinci-003 with greedy decoding.
Hint Dataset
Method Average
Correct Incorrect AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
✗ ✗ 90.6 93.6 92.7 81.0 56.1 44.0 76.33
CoT [8]
Complex CoT [10]
91.6 94.3 93.3 81.9 57.0 43.7 76.96
91.1 93.5 93.1 79.7 57.9 45.2 76.74
91.1 94.0 93.5 81.3 57.5 44.4 **76.96**
86.3 94.8 91.5 77.4 67.0 48.8 77.63
88.3 94.0 93.8 77.8 68.6 46.4 78.14
88.1 94.6 94.0 79.2 70.2 48.4 79.08
88.1 95.0 94.0 80.0 71.6 50.0 **79.78**
Hence, when prompts are better, the performance of non-merge based method will be better than
merge-based method.
**Both correct and incorrect hints are needed in the prompt design. Table 5 demonstrates that the**
use of PHP was superior to its absence when the designed prompt included both correct and incorrect
hints. Specifically, the provision of a correct hint in the prompt promoted the generation of answers
that matched the given hint. Conversely, the provision of incorrect answers in the prompt encouraged
the generation of alternative answers, with the aid of the given hint.
**4.4** **Performance with Self-Consistency**
As we discussed before, our proposed method can combine with CoT and self-consistency to further
improve the model performance. The results are shown in Table 6. Following the self-consistency
paper, we sample paths with numbers 5, 10, 20 and 40, and the model temperature 0.7.
**PHP further improves performance. By utilizing similar prompts and sample path numbers, we**
discovered that our proposed PHP-CoT and PHP-Complex CoT always achieve superior performance
when compared to CoT and Complex CoT, shown in Table 6 and Figure 3. For instance, CoT with
self-consistency was able to attain a 96.5% accuracy on the MultiArith dataset with a sample path of
10, 20 and 40. Therefore, we can conclude that the best performance by CoT with self-consistency is
96.5% with text-davinci-003. However, after implementing PHP, the performance skyrocketed to
97.1%. Similarly, we observed CoT with self-consistency on SVAMP, achieving the best accuracy
of 83.3% with 20 sampled paths, and further improved to 83.7% upon implementing PHP. This
illustrates that PHP could break the performance bottleneck and further improve performance.
-----
Table 6: The results after adding Self-Consistency (SC). Number: The interaction number between
agent and LLM. The best results of adding PHP are highlighted with red color, and the best results
without PHP are highlighted with green color. We find that PHP further improves performance, even
adding self-consistency. Meanwhile, PHP may reduce the cost of self-consistency.
Dataset
Prompt SC PHP Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
5 ✗ 90.6 95.3 94.4 81.6 63.3 49.2 79.06
5 ✓ 90.8 96.6 94.8 83.5 66.3 49.6 80.26
5 Number 2.0075 2.0433 2.0098 2.1090 2.5458 2.0157 2.1218
10 ✗ 90.6 96.5 93.8 83.0 65.5 49.2 79.76
10 ✓ 90.8 97.1 93.8 83.5 67.5 50.0 80.45
10 Number 2.0075 2.0283 2.0059 2.0510 2.2145 2.0118 2.0531
20 ✗ 91.1 96.5 94.2 83.3 68.0 55.1 81.36
20 ✓ 91.6 96.5 94.4 83.7 68.6 55.1 81.64
20 Number 2.0050 2.0366 2.0098 2.0250 2.1144 2.0078 2.0330
40 ✗ 91.6 96.5 94.8 82.9 67.3 53.1 81.03
40 ✓ 91.6 96.6 95.0 83.7 68.4 53.1 81.39
40 Number 2.0050 2.0300 2.0050 2.0320 2.0530 2.0000 2.0208
5 ✗ 88.1 97.0 93.1 80.4 73.5 51.5 80.60
5 ✓ 89.6 97.3 95.2 82.5 76.9 51.9 82.23
5 Number 2.0378 2.0166 2.0334 2.2370 2.5390 2.0118 2.1459
10 ✗ 88.6 98.3 93.3 82.4 76.4 54.3 82.21
10 ✓ 89.1 98.5 95.2 83.4 78.2 54.7 83.18
10 Number 2.0177 2.0016 2.0295 2.059 2.1531 2.0078 2.0447
20 ✗ 88.6 98.0 93.8 82.5 77.7 56.2 82.80
20 ✓ 89.8 98.0 95.8 83.6 78.6 56.2 83.66
20 Number 2.0253 2.0000 2.0196 2.0330 2.0401 2.0000 2.0196
40 ✗ 88.3 98.5 94.8 83.9 78.1 58.6 83.70
40 ✓ 88.6 98.5 95.8 84.7 79.0 58.6 84.20
40 Number 2.0101 2.0000 2.0137 2.0210 2.0348 2.0039 2.0137
CoT [8]
Complex CoT [10]
CoT of MultiArith CoT of SVAMP Complex CoT of GSM8K
79.0
97.0 83.5
96.8 78.0
96.5 83.0 77.0
96.2
82.5 76.0
96.0
Accuracy95.8 w/ PHP 82.0 w/ PHP 75.0 w/ PHP
95.5 w/o PHP w/o PHP 74.0 w/o PHP
95.2 81.5
5 10 20 40 5 10 20 40 5 10 20 40
#Sampled Reasoning Paths #Sampled Reasoning Paths #Sampled Reasoning Paths
Figure 3: We show the results: 1) CoT of MultiArith; 2) CoT of SVAMP; 3) Complex CoT of
GSM8K. According to 1) and 2), we can see that PHP could further improve performance. With
result 3), we found that the PHP could even reduce the cost of self-consistency.
**PHP could reduce the cost of self-consistency. Incorporating the PHP can also lead to cost reduction.**
It is widely acknowledged that self-consistency involves an increased number of reasoning paths,
resulting in a higher cost. The Table 6 illustrates that PHP can be an effective approach for reducing
this cost, while still preserving performance gains. As shown in Figure 3, using Complex CoT
with self-consistency, a 78.1% accuracy can be reached with 40 sample paths, while incorporating
PHP reduces the required sample amount to 10×2.1531=21.531 paths, and results in an even better
accuracy of 78.2%.
**4.5** **Performance with Chat Model**
In the previous section, we follow the previous work settings and employ text generation models for
our experiments. With the release API of GPT-3.5-Turbo and GPT-4, we validate the performance of
Complex CoT with PHP on the same six datasets. We use greedy decoding (i.e. temperature = 0) and
Complex CoT as prompt for both models.
-----
Table 7: Performance of Complex CoT with GPT-3.5-Turbo and GPT-4, employing greedy decoding.
Number: The average interaction number with LLM.
Dataset
PHP Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
Previous SOTA ✗ 94.9 [27] 100 [25] 95.5 [29] 89.1 [30] 92.0 [17] 76.4 [31] 91.31
✗ 85.5 97.5 92.5 81.0 82.8 57.4 82.78
✓ 85.3 98.0 92.9 83.1 85.1 60.6 84.16
(-0.2) (+0.5) (+0.4) (+2.1) (+2.3) (+3.2) (+1.38)
Number 2.1037 2.0133 2.0610 2.3570 2.3426 2.3228 2.2000
✗ 89.3 97.8 93.1 90.5 94.9 77.5 90.51
✓ 89.6 98.1 93.1 91.9 95.5 79.9 91.34
(+0.3) (+0.3) (0.0) (+1.4) (+0.6) (+2.4) (+0.83)
Number 2.0126 2.0033 2.0019 2.0700 2.0507 2.2913 2.0716
GPT-3.5
Turbo
GPT-4
Table 8: Performance of Complex CoT with GPT-3.5-Turbo and GPT-4 on MATH dataset, employing
greedy decoding. Number: The average interaction number with LLM. Overall: The results overall
MATH subtopics [14].
MATH Dataset
PHP
InterAlgebra Precalculus Geometry NumTheory Probability PreAlgebra Algebra Overall
Previous SOTA[7] ✗ - - - - - - - 50.30
GPT-4 CoT[17] ✗ - - - - - - - 42.50
✗ 14.6 16.8 22.3 33.4 29.7 53.8 49.1 34.12
✓ 17.1 16.1 25.4 35.1 33.7 57.7 51.1 36.50
(+2.5) (-0.7) (+3.1) (+1.7) (+4.0) (+3.9) (+2.0) (+2.38)
Number 4.2746 3.9625 4.3361 3.8166 3.7594 3.1526 3.0716 3.6673
✗ 23.4 26.7 36.5 49.6 53.1 71.6 70.8 50.36
✓ 26.3 29.8 41.9 55.7 56.3 73.8 74.3 53.90
(+2.9) (+3.1) (+5.4) (+6.1) (+3.2) (+2.2) (+3.5) (+3.54)
Number 3.2414 3.2435 3.2233 3.1740 2.8122 2.3226 2.4726 2.8494
GPT-3.5-Turbo
Complex CoT
(Ours)
GPT-4
Complex CoT
(Ours)
**Analyze GPT-3.5-Turbo. As depicted in Table 7, the proposed PHP enhances performance, improv-**
ing 2.3% on GSM8K and 3.2% on AQuA. However, GPT-3.5-Turbo exhibits a reduced ability to
adhere to prompts compared to text-davinci-003. We provide two examples to illustrate this point: a)
In cases where the given hints are absent, GPT-3.5-Turbo fails to answer the question and responds
with a statement such as, "We cannot answer this question as the answer hint is missing. Please
provide the answer hint to proceed.". In contrast, text-davinci-003 autonomously generates and fills in
the missing answer hint before addressing the question (as demonstrated in the Appendix); b) When
more than ten hints are provided, GPT-3.5-Turbo may respond with "We cannot determine the correct
answer as multiple answer hints are given. Please provide only one answer hint for the question.".
**Analyze GPT-4. After deploying the GPT-4 model, we were able to achieve the new SOTA per-**
formance on the SVAMP, GSM8K, AQuA and MATH benchmarks. Our proposed PHP method
consistently improves the performance of GPT-4. Furthermore, compared to the GPT-3.5-Turbo
model, we observed a reduction in the number of interactions required by GPT-4, which aligns with
the finding that "The Interaction Number decreases when the model is more powerful."
**5** **Conclusion**
This paper introduces a novel approach named Progressive-Hint Prompting (PHP) for interacting with
LLMs, which offers multiple advantages: 1) PHP achieves substantial performance improvements on
math reasoning tasks, leading to state-of-the-art results on several reasoning benchmarks; 2) with
more powerful models and prompts, PHP can better and consistently benefit the LLMs; 3) PHP can
be easily combined with CoT and self-consistency to further improve performance.
To better enhance the progressive-hint prompting approach, future research endeavors can focus on
improving the design of handcrafted hints in the question phase and prompt sentences in the answer
part. Additionally, novel hints that aid the LLMs to reconsider the questions can be identified and
extracted beside the answer.
-----
**References**
[1] Daniel W Otter, Julian R Medina, and Jugal K Kalita. A survey of the usages of deep learning
for natural language processing. IEEE transactions on neural networks and learning systems,
32(2):604–624, 2020. 1
[2] Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pre-trained
models for natural language processing: A survey. Science China Technological Sciences,
63(10):1872–1897, 2020.
[3] KR1442 Chowdhary and KR Chowdhary. Natural language processing. Fundamentals of
_artificial intelligence, pages 603–649, 2020. 1_
[4] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song,
John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language
models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446,
2021. 1
[5] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid,
Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al.
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.
_arXiv preprint arXiv:2206.04615, 2022. 1_
[6] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.
Large language models are zero-shot reasoners. In Advances in Neural Information Processing
_Systems, 2022. 1, 3_
[7] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving
quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022.
1, 3, 9
[8] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny
Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint
_arXiv:2201.11903, 2022. 1, 3, 4, 5, 6, 7, 8_
[9] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale
Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables
complex reasoning in large language models. In The Eleventh International Conference on
_Learning Representations, 2023. 1, 3_
[10] Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based
prompting for multi-step reasoning. In The Eleventh International Conference on Learning
_Representations, 2023. 1, 3, 4, 5, 6, 7, 8_
[11] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve
simple math word problems? In Proceedings of the 2021 Conference of the North American
_Chapter of the Association for Computational Linguistics: Human Language Technologies,_
pages 2080–2094, 2021. 2, 4
[12] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021. 2, 3, 4
[13] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale
generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th
_Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
pages 158–167, 2017. 2, 4
[14] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_arXiv preprint arXiv:2103.03874, 2021. 2, 4, 9_
-----
[15] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. 2,
4
[16] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to
follow instructions with human feedback. Advances in Neural Information Processing Systems,
35:27730–27744, 2022.
[17] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2, 4, 9
[18] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. 2
[19] Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts.
_arXiv preprint arXiv:2010.15980, 2020. 3_
[20] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig.
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language
processing. ACM Computing Surveys, 55(9):1–35, 2023. 2
[21] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting
in large language models. arXiv preprint arXiv:2210.03493, 2022. 3
[22] Weiwen Xu, Yang Deng, Huihui Zhang, Deng Cai, and Wai Lam. Exploiting reasoning chains
for multi-hop science question answering. In Findings of the Association for Computational
_Linguistics: EMNLP 2021, pages 1143–1156, 2021. 3_
[23] Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong.
Learning to retrieve reasoning paths over wikipedia graph for question answering. In Interna_tional Conference on Learning Representations, 2020. 3_
[24] Jifan Chen, Shih-ting Lin, and Greg Durrett. Multi-hop question answering via reasoning chains.
_arXiv preprint arXiv:1910.02610, 2019. 3_
[25] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Selfconsistency improves chain of thought reasoning in language models. In The Eleventh In_ternational Conference on Learning Representations, 2023. 3, 9_
[26] Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning
to solve arithmetic word problems with verb categorization. In EMNLP, pages 523–533, 2014.
4
[27] Subhro Roy and Dan Roth. Solving general arithmetic word problems. In Proceedings of the
_2015 Conference on Empirical Methods in Natural Language Processing, pages 1743–1752,_
Lisbon, Portugal, September 2015. Association for Computational Linguistics. 4, 9
[28] Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. Transactions of the Association for
_Computational Linguistics, 3:585–597, 2015. 4_
[29] Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong Zhang. Active prompting with chain-ofthought for large language models. arXiv preprint arXiv:2302.12246, 2023. 9
[30] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv
_preprint arXiv:2211.12588, 2022. 9_
[31] Silviu Pitis, Michael R Zhang, Andrew Wang, and Jimmy Ba. Boosted prompt ensembles for
large language models. arXiv preprint arXiv:2304.05970, 2023. 9
-----
[32] Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz,
Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of
chatgpt. arXiv preprint arXiv:2301.13867, 2023.
[33] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Norman Rabe, Charles E Staats, Mateja
Jamnik, and Christian Szegedy. Autoformalization with large language models. In Advances in
_Neural Information Processing Systems, 2022._
[34] Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz
Odrzygó´zd´z, Piotr Miło´s, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to
integrate language models and automated theorem provers. In Advances in Neural Information
_Processing Systems, 2022._
[35] Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée
Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem
provers with informal proofs. arXiv preprint arXiv:2210.12283, 2022.
[36] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344,
2022.
[37] Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. Towards a human-like
open-domain chatbot. arXiv preprint arXiv:2001.09977, 2020.
[38] Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat,
Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural
theorem proving. arXiv preprint arXiv:2205.11491, 2022.
[39] Kyle Richardson and Ashish Sabharwal. Pushing the limits of rule reasoning in transformers
through natural language satisfiability. In Proceedings of the AAAI Conference on Artificial
_Intelligence, volume 36, pages 11209–11219, 2022._
[40] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[41] Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. Learning an executable
neural semantic parser. Computational Linguistics, 45(1):59–94, 2019.
[42] Ohad Rubin, Jonathan Herzig, and Jonathan Berant. Learning to retrieve prompts for incontext learning. In Proceedings of the 2022 Conference of the North American Chapter of the
_Association for Computational Linguistics: Human Language Technologies, pages 2655–2671,_
2022.
[43] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi,
Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are
multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057, 2022.
[44] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for
boltzmann machines. Cognitive science, 9(1):147–169, 1985.
[45] Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong.
Learning to retrieve reasoning paths over wikipedia graph for question answering. arXiv preprint
_arXiv:1911.10470, 2019._
[46] Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich
Schütze, and Yoav Goldberg. Measuring and improving consistency in pretrained language
models. Transactions of the Association for Computational Linguistics, 9:1012–1031, 2021.
[47] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for
dialog applications. arXiv preprint arXiv:2201.08239, 2022.
-----
[48] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open
and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[49] SU Hongjin, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari
Ostendorf, Luke Zettlemoyer, Noah A Smith, et al. Selective annotation makes language models
better few-shot learners. In The Eleventh International Conference on Learning Representations,
2023.
[50] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use:
Improving few-shot performance of language models. In International Conference on Machine
_Learning, pages 12697–12706. PMLR, 2021._
[51] Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of
in-context learning as implicit bayesian inference. arXiv preprint arXiv:2111.02080, 2021.
[52] Colin Wei, Sang Michael Xie, and Tengyu Ma. Why do pretrained language models help in
downstream tasks? an analysis of head and prompt tuning. Advances in Neural Information
_Processing Systems, 34:16158–16170, 2021._
[53] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Rationaleaugmented ensembles in language models. arXiv preprint arXiv:2207.00747, 2022.
[54] Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. What makes reading
comprehension questions easier? In Proceedings of the 2018 Conference on Empirical Methods
_in Natural Language Processing, pages 4208–4219, 2018._
[55] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bertnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan_guage Processing and the 9th International Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 3982–3992, 2019._
[56] Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically
ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In
_Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 8086–8098, 2022._
[57] Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B Dolan, Lawrence Carin, and Weizhu
Chen. What makes good in-context examples for gpt-3? In Proceedings of Deep Learning
_Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for_
_Deep Learning Architectures, pages 100–114, 2022._
[58] Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen.
On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336,
2022.
[59] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics
_and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long_
_Papers), pages 4582–4597, 2021._
[60] Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internetaugmented language models through few-shot prompting for open-domain question answering.
_arXiv preprint arXiv:2203.05115, 2022._
[61] Yuxuan Lai, Chen Zhang, Yansong Feng, Quzhe Huang, and Dongyan Zhao. Why machine
reading comprehension models learn shortcuts? arXiv preprint arXiv:2106.01024, 2021.
[62] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child,
Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language
models. arXiv preprint arXiv:2001.08361, 2020.
-----
[63] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn
Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless
assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862,
2022.
[64] Wenhao Yu, Chenguang Zhu, Lianhui Qin, Zhihan Zhang, Tong Zhao, and Meng Jiang. Diversifying content generation for commonsense reasoning with mixture of knowledge graph experts.
In Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language
_Processing (DLG4NLP 2022), pages 1–11, 2022._
**6** **Implementation Details**
We have provided the code in our supplementary materials.
**7** **Limitation and Further Work**
In this paper, we will explore the limitations of our proposed progressive-hint prompting technique
and discuss possible avenues for further improvement.
**The Progressive-Hint Prompt is handcrafted. Our proposed progressive-hint prompts are hand-**
crafted by humans, similar to other related techniques such as Chain-Of-Thought and Complex Chain
of Thought. As such, we aim to design Auto Progressive Hint in the future to improve its efficiency.
For instance, we could continuously build and update the progressive hint during testing.
**The hint is defined as the answer. In this paper, we defined the hint as the answer. However, we**
acknowledge that the concept of hint encompasses other possibilities generated by models. These
hints may include model confidence, reasoning path, and even interaction number.
**The interaction number is at least two. Our design requires at least two interactions to determine**
the final answers. Therefore, using greedy decoding (where the sample path is always 1) as opposed
to self-consistency can increase costs. Going forward, we plan to employ a powerful model initially
to obtain the base answer then rely on cheaper models for subsequent answers. This approach could
help us reduce the overall cost.
**8** **Broader Impacts**
Progressive-Hint Prompting aims to enhance the reasoning ability of Large Language Models (LLMs)
by utilizing previous answers. We believe that the integration of PHP with LLM can be applied
in a variety of areas, including: 1) Assisting students, particularly those from low-income areas,
in learning more effectively and obtaining accurate answers with the help of LLM and PHP; 2)
Aiding mathematicians in solving complex mathematical problems; 3) and other reasoning-related
applications. By leveraging PHP with LLM, we hope to improve the performance of these models
and enable their use in various practical scenarios.
**9** **Model Response with Different Hints**
**Model Response with different hints. When presented with the hint 100, the model accurately**
solves the question. Conversely, the model can be misled if the provided hint strays too far from the
correct answer. For instance, when the hint is 0, the model calculates the amount of not used carrots
is 80 pounds, outputs an answer of 0.
**10** **Interaction Cases**
In this section, we provide more interaction cases.
**10.1** **Success Case**
-----
Table 9: A GSM8K example of text-davinci-003 response with greedy decoding, and the prompt
is PHP-Complex CoT. Based on our analysis, it appears that a hint that is too far from the correct
answer has the potential to mislead LLMs and lead to an incorrect final answer. Conversely, a more
effective hint can result in a more precise outcome.
**Question: 200 pounds of carrots will be distributed to 40 restaurants in a certain city. Each restaurant**
is to receive 2 pounds of carrots. How many pounds of carrots will not be used?
**Hint: 0.We know the Answer Hints: 0. With the Answer Hints: 0, we will answer the question. Let’s**
think step by step We know that 200 pounds of carrots are to be distributed to 40 restaurants. Each
restaurant is to receive 2 pounds of carrots. Therefore, 40 restaurants x 2 pounds of carrots = 80
pounds of carrots. The total amount of carrots used is 200 pounds - 80 pounds = 120 pounds. The
amount of carrots not used is 200 pounds - 120 pounds = 80 pounds. The answer is 0
**Hint: 100. We know the Answer Hints: 100. With the Answer Hints: 100, we will answer the**
question. Let’s think step by step We know that 200 pounds of carrots are to be distributed to 40
restaurants. Each restaurant will receive 2 pounds of carrots, so 40 restaurants x 2 pounds = 80
pounds of carrots. This means that 200 pounds of carrots - 80 pounds of carrots = 120 pounds of
carrots will not be used. The answer is 120
Question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many
total meters does he run a week?
A: Let’s think step by step
James runs 3 sprints 3 times a week, so he runs 3*3 = 9 sprints a week.
Each sprint is 60 meters, so he runs 9*60 = 540 meters a week.
The answer is 540.
Question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many
total meters does he run a week? (Hint: The answer is near to 540).
A: We know the Answer Hints: 540. With the Answer Hints: 540, we will answer the question. Let’s
think step by step James runs 3 sprints 3 times a week. Each sprint is 60 meters, so he runs 3 x 60
= 180 meters each time. He runs 3 times a week, so he runs 180 x 3 = 540 meters each week. The
answer is 540.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where
each competing team has four members that each run 400 meters, or one lap, around a standard track.
One of the two teams is very well-rounded and each of their members will run their 400 meter leg in
precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter
leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner.
Using this information, how many seconds will the faster team win by?
A: Let’s think step by step
The first team will take 4 x 55 seconds = 220 seconds to complete the race.
The second team will take 60 + 57 + 54 + 51 = 222 seconds to complete the race.
The first team will win by 220 - 222 = -2 seconds.
The answer is -2.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where
each competing team has four members that each run 400 meters, or one lap, around a standard track.
One of the two teams is very well-rounded and each of their members will run their 400 meter leg in
precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter
leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner.
Using this information, how many seconds will the faster team win by? (Hint: The answer is near to
-2).
A: We know the Answer Hints: -2. With the Answer Hints: -2, we will answer the question.
Let’s think step by step
The first team will finish in 55 seconds x 4 runners = 220 seconds.
The second team will finish in 60 seconds + (3 seconds x 3 runners) = 75 seconds.
The difference between the two teams is 220 - 75 = 145 seconds.
The faster team will win by 145 seconds.
-----
The answer is 145 seconds.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where
each competing team has four members that each run 400 meters, or one lap, around a standard track.
One of the two teams is very well-rounded and each of their members will run their 400 meter leg in
precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter
leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner.
Using this information, how many seconds will the faster team win by? (Hint: The answer is near to
-2, 145).
A: We know the Answer Hints: -2, 145. With the Answer Hints: -2, 145, we will answer the question.
Let’s think step by step The first team will take 4 x 55 = 220 seconds to complete the race.
The second team will take 60 + 57 + 54 + 51 = 222 seconds to complete the race.
The first team will finish the race 2 seconds faster than the second team.
The answer is -2.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where
each competing team has four members that each run 400 meters, or one lap, around a standard track.
One of the two teams is very well-rounded and each of their members will run their 400 meter leg in
precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter
leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner.
Using this information, how many seconds will the faster team win by? (Hint: The answer is near to
-2, 145, -2).
A: We know the Answer Hints: -2, 145, -2. With the Answer Hints: -2, 145, -2, we will answer the
question.
Let’s think step by step
The first team will run their 4 legs in 55 seconds each, so their total time will be 4 x 55 = 220 seconds.
The second team’s first runner will run their 400 meter leg in 60 seconds, then each subsequent
runner will be 3 seconds faster than the previous runner.
So the second runner will run in 57 seconds, the third runner in 54 seconds, and the fourth runner in
51 seconds.
The total time for the second team will be 60 + 57 + 54 + 51 = 222 seconds.
The difference in time between the two teams is 222 - 220 = 2 seconds.
The faster team will win by 2 seconds.
The answer is 2.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where
each competing team has four members that each run 400 meters, or one lap, around a standard track.
One of the two teams is very well-rounded and each of their members will run their 400 meter leg in
precisely 55 seconds. The other team is less well-rounded; their first runner will run their 400 meter
leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous runner.
Using this information, how many seconds will the faster team win by? (Hint: The answer is near to
-2, 145, -2, 2).
A: We know the Answer Hints: -2, 145, -2, 2. With the Answer Hints: -2, 145, -2, 2, we will answer
the question.
Let’s think step by step
The first team will run their 4 legs in 55 seconds each, for a total of 4 x 55 = 220 seconds.
The second team will run their 4 legs in 60, 57, 54, and 51 seconds, for a total of 60 + 57 + 54 + 51 =
222 seconds.
The difference between the two teams is 222 - 220 = 2 seconds.
The faster team will win by 2 seconds.
The answer is 2.
Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for
one day. Now he must sell the items at original price the other day. By what percentage must he
increase the price to original? Answer Choices: (A) 21% (B) 20% (C) 25% (D) 33% (E) 18%
A: Let’s think step by step
-----
The shopkeeper gave a discount of 20% on all the items.
Therefore, the new price of the items = 80% of the original price.
To get back to the original price, the shopkeeper must increase the price by (100% - 80%) = 20%.
The answer is (B), 20%
Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for
one day. Now he must sell the items at original price the other day. By what percentage must he
increase the price to original? Answer Choices: (A) 21% (B) 20% (C) 25% (D) 33% (E) 18% (Hint:
The answer is near to (B)).
A: We know the Answer Hints: (B). With the Answer Hints: (B), we will answer the question.
Let’s think step by step
Let the original price of the item be x.
Then, the price after discount = x - (20/100)x = 0.8x
To get the original price, the shopkeeper must increase the price by (100/80)% = 25%.
The answer is (C), 25%
Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for
one day. Now he must sell the items at original price the other day. By what percentage must he
increase the price to original? Answer Choices: (A) 21% (B) 20% (C) 25% (D) 33% (E) 18% (Hint:
The answer is near to (B), (C)).
A: We know the Answer Hints: (B), (C). With the Answer Hints: (B), (C), we will answer the
question.
Let’s think step by step
Let the original price of the item be x.
Then, the discounted price of the item = x - (20/100)x = 0.8x
To get the original price, the shopkeeper must increase the price by (100/80)% = 25%.
The answer is (C), 25%
**10.2** **Failure Case**
Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing
seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in
three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon,
she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her
chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens?
A: Let’s think step by step
In the morning, Wendi gives her flock of chickens 15 cups of feed. Each chicken gets 3 cups of feed,
so 15 cups of feed is for 5 chickens.
In the afternoon, Wendi gives her chickens another 25 cups of feed. Each chicken gets 3 cups of feed,
so 25 cups of feed is for 8 chickens.
So, in total, Wendi has fed 5+8=13 chickens.
Since Wendi’s flock is 20 chickens, she needs to feed 20-13=7 more chickens.
Each chicken needs 3 cups of feed, so Wendi needs to give her chickens 7*3=21 cups of feed in the
final meal of the day.
The answer is 21.
Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing
seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in
three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon,
she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her
chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens? (Hint: The answer is
near to 21).
A: We know the Answer Hints: 21. With the Answer Hints: 21, we will answer the question.
Let’s think step by step
Wendi feeds each of her chickens three cups of feed per day.
In the morning, she gives her flock of chickens 15 cups of feed.
This means that 15 cups of feed is enough for 5 chickens.
-----
In the afternoon, she gives her chickens another 25 cups of feed.
This means that 25 cups of feed is enough for 8 chickens.
So, she has already fed 13 chickens in the morning and afternoon.
She has 20 chickens in total, so she needs to feed 20-13 = 7 more chickens.
Each chicken needs 3 cups of feed, so she needs to give 7 chickens x 3 cups of feed = 21 cups of feed
in the final meal of the day.
The answer is 21.
Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the
rest are good. How many oranges are good?
A: Let’s think step by step
There are 25 oranges in the basket, of which 1 is bad, 20% are unripe, and 2 are sour.
This means that 1 + 20% + 2 = 23 oranges are not good.
Therefore, 25 - 23 = 2 oranges are good.
The answer is 2.
Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the
rest are good. How many oranges are good? (Hint: The answer is near to 2).
A: We know the Answer Hints: 2. With the Answer Hints: 2, we will answer the question.
Let’s think step by step
There are 25 oranges in the basket.
1 is bad, so there are 24 oranges left.
20% of the oranges are unripe, so 20% of 24 = 4.8 oranges are unripe.
2 oranges are sour, so there are 24-4.8-2 = 17.2 oranges left.
The rest of the oranges are good, so the number of good oranges is 17.2.
The answer is 17.2.
Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the
rest are good. How many oranges are good? (Hint: The answer is near to 2, 17.2).
A: We know the Answer Hints: 2, 17.2. With the Answer Hints: 2, 17.2, we will answer the question.
Let’s think step by step
There are 25 oranges in the basket.
1 is bad, so there are 24 oranges left.
20% of 24 oranges are unripe, so 20% of 24 = 4.8 oranges are unripe.
2 oranges are sour, so there are 24-4.8-2 = 17.2 oranges left.
These 17.2 oranges are good.
The answer is 17.2.
Question: If the probability that Stock A will increase in value during the next month is 0.56, and the
probability that Stock B will increase in value during the next month is 0.74. What is the greatest
value for the probability that neither of these two events will occur? Answer Choices: (A) 0.22 (B)
0.26 (C) 0.37 (D) 0.46 (E) 0.63
A: Let’s think step by step
The probability that Stock A will not increase in value during the next month is 1 - 0.56 = 0.44
The probability that Stock B will not increase in value during the next month is 1 - 0.74 = 0.26
The probability that neither of these two events will occur is the product of the two probabilities,
which is 0.44 x 0.26 = 0.1144.
The greatest value for the probability that neither of these two events will occur is 0.1144.
The answer is (A), 0.22.
Question: If the probability that Stock A will increase in value during the next month is 0.56, and the
probability that Stock B will increase in value during the next month is 0.74. What is the greatest
value for the probability that neither of these two events will occur? Answer Choices: (A) 0.22 (B)
0.26 (C) 0.37 (D) 0.46 (E) 0.63 (Hint: The answer is near to (A)).
A: We know the Answer Hints: (A). With the Answer Hints: (A), we will answer the question.
-----
Let’s think step by step
The probability that neither of these two events will occur is 1 - (0.56 + 0.74) = 1 - 1.30 = 0.30
The greatest value for the probability that neither of these two events will occur is 0.30.
The answer is (A), 0.22.
**10.3** **Response when Miss Hint for GPT-3.5 Text-Divinci-003**
Question: Jerry has two pools, both with leaks emptying them out at 4 gallons/minute. 4 minutes ago,
the big pool had twice as much water as the small pool. Now the big pool has four times as much
water as the small pool. How much water does the small pool have now? (Hint: The answer is near
to ).
A: We know the Answer Hints: 16. With the Answer Hints: 16, we will answer the question.
Let’s think step by step
4 minutes ago, the big pool had twice as much water as the small pool.
This means that the small pool had 4 gallons of water.
In 4 minutes, the big pool has lost 4 gallons x 4 minutes = 16 gallons of water.
The small pool has also lost 4 gallons x 4 minutes = 16 gallons of water.
Therefore, the small pool now has 4 gallons - 16 gallons = -12 gallons of water.
Since it is impossible to have negative gallons of water, the small pool must have 0 gallons of water.
The answer is 0.
**11** **Standard Prompt**
**11.1** **For AddSub, MultiArith, SingleEQ, SVAMP and GSM8K**
**Base Standard**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today?
A: The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total?
A: The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny?
A: The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now?
A: The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room?
A: The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?
A: The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: The answer is 8.
**Progressive-Hint Prompting Standard (PHP-Standard)**
-----
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is
near to 6).
A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. The
answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
(Hint: The answer is near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question.
The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total? (Hint: The answer is near to 30, 35).
A: We know the Answer Hints: 30, 35. With the Answer Hints: 30, 35, we will answer the question.
The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny? (Hint: The answer near to 8, 12).
A: We know the Answer Hints: 8, 12. With the Answer Hints: 8, 12, we will answer the question.
The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now? (Hint: The answer is near to 9, 5).
A: We know the Answer Hints: 9, 5. With the Answer Hints: 9, 5, we will answer the question. The
answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room? (Hint: The answer is near to
20).
A: We know the Answer Hints: 20. With the Answer Hints: 20, we will answer the question. The
answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday? (Hint: The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. The
answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? (Hint:
The answer is near to 8).
A: We know the Answer Hints: 8. With the Answer Hints: 8, we will answer the question. The
answer is 8.
**11.2** **For AQuA**
**Base Standard**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of
the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d)
4/2 (e) 7/2
A: The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer
Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156
(b) 1392 (c) 1480 (d) 1562 (e) 1788
A: The answer is (b).
**Progressive-Hint Prompting Standard (PHP-Standard)**
-----
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of
the numbers is? (Hint: The answer is near to (a)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The
answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (b)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The
answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint:
The answer is near to (d), (c)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (d), (c). With the Answer Hints: (d), (c), we will answer the question.
The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to
(a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The
answer is (b).
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of
the numbers is? (Hint: The answer is near to (a), (d)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a), (d). With the Answer Hints: (a), (d), we will answer the question.
The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (a)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The
answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint:
The answer is near to (b)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The
answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to
(b), (a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (b), (a). With the Answer Hints: (b), (a), we will answer the question.
The answer is (b).
-----
**12** **Chain-Of-Thought Prompt**
**12.1** **For AddSub, MultiArith, SingleEQ, SVAMP and GSM8K**
**Base Chain-Of-Thought**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today?
A: There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more.
So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are originally 3 cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot.
The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total?
A: Originally, Leah had 32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After
eating 35, they had 74 - 35 = 39 pieces left in total. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny?
A: Jason had 20 lollipops originally. Then he had 12 after giving some to Denny. So he gave Denny
20 - 12 = 8 lollipops. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now?
A: Shawn started with 5 toys. He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4
more toys. Now he has 5 + 4 = 9 toys. The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room?
A: There were originally 9 computers. For each day from monday to thursday, 5 more computers
were installed. So 4 * 5 = 20 computers were added. Now 9 + 20 = 29 computers are now in the
server room. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?
A: Michael started with 58 golf balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he
had 58 - 23 = 35 at the end of Tuesday, and 35 - 2 = 33 at the end of wednesday. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. She bought 5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars.
Now she has 23 - 15 = 8 dollars left. The answer is 8.
**Progressive-Hint Prompting Chain-Of-Thought (PHP-CoT)**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today? (Hint: The answer is
near to 6).
A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. There are
15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there
must have been 21 - 15 = 6 trees that were planted. The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
(Hint: The answer is near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question.
There are originally 3 cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot. The
answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total? (Hint: The answer is near to 30, 35).
A: We know the Answer Hints: 30, 35. With the Answer Hints: 30, 35, we will answer the question.
-----
Originally, Leah had 32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After
eating 35, they had 74 - 35 = 39 pieces left in total. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny? (Hint: The answer near to 8, 12).
A: We know the Answer Hints: 8, 12. With the Answer Hints: 8, 12, we will answer the question.
Jason had 20 lollipops originally. Then he had 12 after giving some to Denny. So he gave Denny 20 12 = 8 lollipops. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now? (Hint: The answer is near to 9, 5).
A: We know the Answer Hints: 9, 5. With the Answer Hints: 9, 5, we will answer the question.
Shawn started with 5 toys. He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4 more
toys. Now he has 5 + 4 = 9 toys. The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room? (Hint: The answer is near to
20).
A: We know the Answer Hints: 20. With the Answer Hints: 20, we will answer the question. There
were originally 9 computers. For each day from monday to thursday, 5 more computers were installed.
So 4 * 5 = 20 computers were added. Now 9 + 20 = 29 computers are now in the server room. The
answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday? (Hint: The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. Michael
started with 58 golf balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he had 58 - 23 =
35 at the end of Tuesday, and 35 - 2 = 33 at the end of wednesday. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? (Hint:
The answer is near to 8).
A: We know the Answer Hints: 8. With the Answer Hints: 8, we will answer the question. Olivia had
23 dollars. She bought 5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars. Now she has 23 15 = 8 dollars left. The answer is 8.
**12.2** **For AQuA**
**Base Chain-Of-Thought**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of
the numbers is?
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new
mean would be 50. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which
means 44a / 3 = 22. So a is equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance?
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is
(e).
Q: How many keystrokes are needed to type the numbers from 1 to 500?
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There
are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).
**Progressive-Hint Prompting Chain-Of-Thought (PHP-CoT)**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of
the numbers is? (Hint: The answer is near to (a)).
-----
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. If 10 is
added to each number, then the mean of the numbers also increases by 10. So the new mean would
be 50. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (b)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. a / b =
3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3
= 22. So a is equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint:
The answer is near to (d), (c)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (d), (c). With the Answer Hints: (d), (c), we will answer the question.
The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to
(a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. There
are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401
three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of
the numbers is? (Hint: The answer is near to (a), (d)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a), (d). With the Answer Hints: (a), (d), we will answer the question.
If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean
would be 50. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (a)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. a / b =
3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3
= 22. So a is equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint:
The answer is near to (b)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The
distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to
(b), (a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (b), (a). With the Answer Hints: (b), (a), we will answer the question.
There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).
-----
**13** **Complex Chain-Of-Thought Prompt**
**13.1** **For AddSub, MultiArith, SingleEQ, SVAMP and GSM8K**
**Base Complex Chain-Of-Thought**
Question: Angelo and Melanie want to plan how many hours over the next week they should study
together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to
memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and
1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days
should they plan to study total over the next week if they take a 10-minute break every hour, include
3 10-minute snack breaks each day, and 30 minutes for lunch each day?
A: Let’s think step by step. Angelo and Melanie think they should dedicate 3 hours to each of the 2
chapters, 3 hours x 2 chapters = 6 hours total.
For the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6
hours total.
Angelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days.
However, they need to include time for breaks and lunch. Every hour they want to include a 10-minute
break, so 12 total hours x 10 minutes = 120 extra minutes for breaks.
They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes.
And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for
snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours.
So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total. They
want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75 They will need to
plan to study 4 days to allow for all the time they need.
The answer is 4
Question: Mark’s basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their
opponents score double the 2 pointers but half the 3 pointers and free throws. What’s the total number
of points scored by both teams added together?
A: Let’s think step by step
Mark’s team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers.
His team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers
They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free
throws.
All together his team scored 50+24+10= 84 points
Mark’s opponents scored double his team’s number of 2 pointers, meaning they scored 50*2=100
points in 2 pointers.
His opponents scored half his team’s number of 3 pointers, meaning they scored 24/2= 12 points in 3
pointers.
They also scored half Mark’s team’s points in free throws, meaning they scored 10/2=5 points in free
throws.
All together Mark’s opponents scored 100+12+5=117 points
The total score for the game is both team’s scores added together, so it is 84+117=201 points
The answer is 201
Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck
cards. If she buys 2/5 times more of each item, what would be the total number of the items she will
have if she currently has 60 marbles?
A: Let’s think step by step
When Bella buys 2/5 times more marbles, she’ll have increased the number of marbles by 2/5*60 =
24
The total number of marbles she’ll have is 60+24 = 84
If Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 =
30 frisbees.
If Bella buys 2/5 times more frisbees, she’ll have 2/5*30 = 12 more frisbees.
The total number of frisbees she’ll have will increase to 30+12 = 42
Bella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards
If she buys 2/5 times more deck cards, she’ll have 2/5*10 = 4 more deck cards.
The total number of deck cards she’ll have is 10+4 = 14
-----
Together, Bella will have a total of 14+42+84 = 140 items
The answer is 140
Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three
baskets and 2 less of each fruit in the fourth basket. How many fruits are there?
A: Let’s think step by step
For the first three baskets, the number of apples and oranges in one basket is 9+15=24
In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three
baskets.
Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets.
The number of apples in the fourth basket is 9-2=7
There are also 15-2=13 oranges in the fourth basket
The combined number of oranges and apples in the fourth basket is 13+7=20
The fourth basket also contains 14-2=12 bananas.
In total, the fourth basket has 20+12=32 fruits.
The four baskets together have 32+114=146 fruits.
The answer is 146
Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly
split between oranges, apples and watermelons, and the price of 1 orange is $0.50. How much does 1
apple cost if your total bill was $66?
A: Let’s think step by step
If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12 units of each fruit
If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6
If my total bill was $66 and I spent $6 on oranges then I spent $66 - 6 =60 on the other 2 fruit types.
Assuming the price of watermelon is W, and knowing that you can buy 4 apples for the same price
and that the price of one apple is A, then 1W=4A
If we know we bought 12 watermelons and 12 apples for $60, then we know that $60 = 12W + 12A
Knowing that 1W=4A, then we can convert the above to $60 = 12(4A) + 12A
60 = 48A + 12A
60 = 60A
Then we know the price of one apple (A) is $60/60= $1
The answer is 1
Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with
only 300 students. At the start of the school year, Susy had 100 social media followers. She gained
40 new followers in the first week of the school year, half that in the second week, and half of that in
the third week. Sarah only had 50 social media followers at the start of the year, but she gained 90
new followers the first week, a third of that in the second week, and a third of that in the third week.
After three weeks, how many social media followers did the girl with the most total followers have?
A: Let’s think step by step
After one week, Susy has 100+40 = 140 followers.
In the second week, Susy gains 40/2 = 20 new followers.
In the third week, Susy gains 20/2 = 10 new followers.
In total, Susy finishes the three weeks with 140+20+10 = 170 total followers.
After one week, Sarah has 50+90 = 140 followers.
After the second week, Sarah gains 90/3 = 30 followers.
After the third week, Sarah gains 30/3 = 10 followers.
So, Sarah finishes the three weeks with 140+30+10 = 180 total followers.
Thus, Sarah is the girl with the most total followers with a total of 180.
The answer is 180
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He
rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package.
He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he
make in total, in dollars?
A: Let’s think step by step
Sam bought 12 boxes x 10 =120 worth of highlighters.
He bought 12 * 30 = 360 highlighters in total.
Sam then took 5 boxes × 6 highlighters/box = 30 highlighters.
He sold these boxes for 5 * $3 = $15
-----
After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining.
These form 330 / 3 = 110 groups of three pens.
He sold each of these groups for $2 each, so made 110 * 2 = $220 from them.
In total, then, he earned $220 + $15 = $235.
Since his original cost was $120, he earned $235 - $120 = $115 in profit.
The answer is 115
Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the
female students like to play basketball. What percent of the population of the school do not like to
play basketball if the ratio of the male to female students is 3:2 and there are 1000 students?
A: Let’s think step by step
The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts are for females.
Each part represents 1000/5 = 200 students.
So, there are 3 x 200 = 600 males.
And there are 2 x 200 = 400 females.
Hence, 600 x 2/3 = 400 males play basketball.
And 400 x 1/5 = 80 females play basketball.
A total of 400 + 80 = 480 students play basketball.
Therefore, 1000 - 480 = 520 do not like to play basketball.
The percentage of the school that do not like to play basketball is 520/1000 * 100 = 52 The answer is 52
**Progressive-Hint Prompting Complex Chain-Of-Thought (PHP-Complex CoT)**
Question: Angelo and Melanie want to plan how many hours over the next week they should study
together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to
memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and
1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days
should they plan to study total over the next week if they take a 10-minute break every hour, include 3
10-minute snack breaks each day, and 30 minutes for lunch each day? (Hint: The answer is near to 4).
A: We know the Answer Hints: 4. With the Answer Hints: 4, we will answer the question.
Let’s think step by step.
Angelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2 chapters
= 6 hours total.
For the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6
hours total.
Angelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days.
However, they need to include time for breaks and lunch. Every hour they want to include a 10-minute
break, so 12 total hours x 10 minutes = 120 extra minutes for breaks.
They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes.
And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for
snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours.
So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total.
They want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75
They will need to plan to study 4 days to allow for all the time they need.
The answer is 4
Question: Mark’s basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their
opponents score double the 2 pointers but half the 3 pointers and free throws. What’s the total number
of points scored by both teams added together?(Hint: The answer is near to 201).
A: We know the Answer Hints: 201. With the Answer Hints: 201, we will answer the question.
Let’s think step by step
Mark’s team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers.
His team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers
They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free
throws.
All together his team scored 50+24+10= 84 points
Mark’s opponents scored double his team’s number of 2 pointers, meaning they scored 50*2=100
points in 2 pointers.
His opponents scored half his team’s number of 3 pointers, meaning they scored 24/2= 12 points in 3
-----
pointers.
They also scored half Mark’s team’s points in free throws, meaning they scored 10/2=5 points in free
throws
All together Mark’s opponents scored 100+12+5=117 points
The total score for the game is both team’s scores added together, so it is 84+117=201 points
The answer is 201
Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck
cards. If she buys 2/5 times more of each item, what would be the total number of the items she will
have if she currently has 60 marbles? (Hint: The answer is near to 160,145).
A: We know the Answer Hints: 160, 145. With the Answer Hints: 160, 145, we will answer the
question.
Let’s think step by step
When Bella buys 2/5 times more marbles, she’ll have increased the number of marbles by 2/5*60 =
24
The total number of marbles she’ll have is 60+24 = 84
If Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 =
30 frisbees.
If Bella buys 2/5 times more frisbees, she’ll have 2/5*30 = 12 more frisbees.
The total number of frisbees she’ll have will increase to 30+12 = 42
Bella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards
If she buys 2/5 times more deck cards, she’ll have 2/5*10 = 4 more deck cards.
The total number of deck cards she’ll have is 10+4 = 14
Together, Bella will have a total of 14+42+84 = 140 items
The answer is 140
Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three
baskets and 2 less of each fruit in the fourth basket. How many fruits are there? (Hint: The answer is
near to 93, 141).
A: We know the Answer Hints: 93, 141. With the Answer Hints: 93, 141, we will answer the
question.
Let’s think step by step
For the first three baskets, the number of apples and oranges in one basket is 9+15=24
In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three
baskets.
Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets.
The number of apples in the fourth basket is 9-2=7
There are also 15-2=13 oranges in the fourth basket
The combined number of oranges and apples in the fourth basket is 13+7=20
The fourth basket also contains 14-2=12 bananas.
In total, the fourth basket has 20+12=32 fruits.
The four baskets together have 32+114=146 fruits.
The answer is 146
Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly
split between oranges, apples and watermelons, and the price of 1 orange is $0.50. How much does 1
apple cost if your total bill was $66? (Hint: The answer is near to 1, 5).
A: We know the Answer Hints: 1, 5. With the Answer Hints: 1, 5, we will answer the question.
Let’s think step by step
If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12 units of each fruit
If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6
If my total bill was $66 and I spent $6 on oranges then I spent $66 - $6 = $60 on the other 2 fruit
types.
Assuming the price of watermelon is W, and knowing that you can buy 4 apples for the same price
and that the price of one apple is A, then 1W=4A
If we know we bought 12 watermelons and 12 apples for $60, then we know that $60 = 12W + 12A
Knowing that 1W=4A, then we can convert the above to $60 = 12(4A) + 12A
$60 = 48A + 12A
$60 = 60A
-----
Then we know the price of one apple (A) is $60/60= $1
The answer is 1
Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with
only 300 students. At the start of the school year, Susy had 100 social media followers. She gained
40 new followers in the first week of the school year, half that in the second week, and half of that in
the third week. Sarah only had 50 social media followers at the start of the year, but she gained 90
new followers the first week, a third of that in the second week, and a third of that in the third week.
After three weeks, how many social media followers did the girl with the most total followers have?
(Hint: The answer is near to 180, 160).
A: We know the Answer Hints: 180, 160. With the Answer Hints: 180, 160, we will answer the
question.
Let’s think step by step
After one week, Susy has 100+40 = 140 followers.
In the second week, Susy gains 40/2 = 20 new followers.
In the third week, Susy gains 20/2 = 10 new followers.
In total, Susy finishes the three weeks with 140+20+10 = 170 total followers.
After one week, Sarah has 50+90 = 140 followers.
After the second week, Sarah gains 90/3 = 30 followers.
After the third week, Sarah gains 30/3 = 10 followers.
So, Sarah finishes the three weeks with 140+30+10 = 180 total followers.
Thus, Sarah is the girl with the most total followers with a total of 180.
The answer is 180
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He
rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package.
He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he
make in total, in dollars? (Hint: The answer is near to 120).
A: We know the Answer Hints: 120. With the Answer Hints: 120, we will answer the question.
Let’s think step by step
Sam bought 12 boxes x $10 = $120 worth of highlighters.
He bought 12 * 30 = 360 highlighters in total.
Sam then took 5 boxes × 6 highlighters/box = 30 highlighters.
He sold these boxes for 5 * $3 = $15
After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining.
These form 330 / 3 = 110 groups of three pens.
He sold each of these groups for $2 each, so made 110 * 2 = $220 from them.
In total, then, he earned $220 + $15 = $235.
Since his original cost was $120, he earned $235 - $120 = $115 in profit.
The answer is 115
Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the
female students like to play basketball. What percent of the population of the school do not like to
play basketball if the ratio of the male to female students is 3:2 and there are 1000 students? (Hint:
The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question.
Let’s think step by step
The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts are for females.
Each part represents 1000/5 = 200 students.
So, there are 3 x 200 = 600 males.
And there are 2 x 200 = 400 females.
Hence, 600 x 2/3 = 400 males play basketball.
And 400 x 1/5 = 80 females play basketball.
A total of 400 + 80 = 480 students play basketball.
Therefore, 1000 - 480 = 520 do not like to play basketball.
The percentage of the school that do not like to play basketball is 520/1000 * 100 = 52
The answer is 52
-----
**13.2** **For AQuA**
**Base Complex Chain-Of-Thought**
Question: there were 35 students in a hostel . due to the admission of 7 new students the expenses of
the mess were increased by rs . 84 per day while the average expenditure per head diminished by re 1
. what was the original expenditure of the mess ? Options: (a) rs 450 (b) rs 920 (c) rs 550 (d) rs . 630
(e) none of these
A: Let’s think step by step let the original average expenditure be rs . x then, 42 ( x - 1 ) - 35 x = 84 7
x = 126 x = 18 therefore original expenditure = rs . ( 35 * 18 ) = rs . 630. The answer is (d), rs . 630
Question: a train 200 m long passes a man, running at 5 km / hr in the same direction in which the
train is going, in 10 seconds . the speed of the train is ? Options: (a) 28 (b) 50 (c) 77 (d) 22 (e) 12
A: Let’s think step by step speed of the train relative to man = ( 200 / 10 ) m / sec = ( 20 ) m / sec. [ (
20 ) * ( 18 / 5 ) ] km / hr = 72 km / hr. let the speed of the train be x km / hr. then, relative speed = ( x
- 5 ) km / hr. x - 5 = 72, x = 77 km / hr . The answer is (c), 77
Question: solution x contains 20 % of material a and 80 % of material b . solution y contains 30 % of
material a and 70 % of material b . a mixture of both these solutions contains 22 % of material a in
the final product . how much solution x is present in the mixture ?
Options: (a) 40 % (b) 60 % (c) 80 % (d) 100 % (e) 110 %
A: Let’s think step by step
we can assume the total weight of the mixture = 100
conc of a in the final mixture = 22
let weight of a in the mixture be x.
conc given = 20% = 0.2
therefore, weight of b = 100 - x.
conc given = 30% = 0.3
now, accordding to the problem, 0.2 x + 0.3 ( 100 - x ) = 22
solving, we get x = 80
since we assumed the weight of the mixture = 100, therefore presence of a in the mixture = 80%.
The answer is (c), 80%
Question: a trader sells 40 metres of cloth for rs . 8200 at a profit of rs . 35 per metre of cloth . how
much profit will the trder earn on 40 metres of cloth ?
Options: (a) rs . 950 (b) rs . 1500 (c) rs . 1000 (d) rs . 1400 (e) none of these
A: Let’s think step by step
price of 1 metre cloth = 8200 / 40 = rs 205
cost of 1 metre cloth = rs 205 – 35 = rs 170
cost on 40 metres = 170 x 40 = rs . 6800
profit earned on 40 metres cloth = rs . 8200 – rs . 6800 = rs . 1400
The answer is (d), rs . 1400
Question: if x < y < z and y - x > 5, where x is an even integer and y and z are odd integers, what is
the least possible value s of z - x ?
Options: (a) 6 (b) 7 (c) 8 (d) 9 (e) 10
A: Let’s think step by step
We know x < y < z
to find the least possible value for z - x, we need to find the values for z and x that can be closest to
each other.
if x is some even number, then what could be minimum possible odd z.
if x is some even number, y - x > 5 ; y > x + 5
minimum value for y = x + 5 + 2 = x + 7
(note : x + 5 is as even + odd = odd and nearest odd greater than x + 5 is x + 5 + 2)
minimum value for z = y + 2 = x + 7 + 2 = x + 9
(note : z = y + 2 because both z and y are odd, difference between two odd numbers is 2)
s = z - x = x + 9 - x = 9
The answer is (d), 9
Question: what is the difference between the c . i . on rs . 6000 for 1 1 / 2 years at 4 % per annum
compounded yearly and half - yearly ?
Options: (a) s . 2.04 (b) s . 2.08 (c) s . 2.02 (d) s . 2.83 (e) s . 2.45
-----
A: Let’s think step by step
c . i . when interest is compounded yearly = [ 6000 * ( 1 + 4 / 100 ) * ( 1 + ( 1 / 2 * 4 ) / 100 ] = 6000
- 26 / 25 * 51 / 50 = rs . 6364.8
c . i . when interest is compounded half - yearly = [ 6000 * ( 1 + 2 / 100 ) 2 ] = ( 6000 * 51 / 50 * 51 /
50 * 51 / 50 ) = rs . 6367.25
difference = ( 6367.25 - 6364.8 ) = rs . 2.45.
The answer is (e), s . 2.45
Question: the average weight of a, b and c is 45 kg . if the average weight of a and b be 40 kg and
that of b and c be 45 kg, then the weight of b is :
Options: (a) 31 kg (b) 32 kg (c) 33 kg (d) 35 kg (e) none of these
A: Let’s think step by step
let a, b, c represent their respective weights.
then, we have : a + b + c = ( 45 x 3 ) = 135 ... ( i )
a + b = ( 40 x 2 ) = 80 ... ( ii )
b + c = ( 45 x 2 ) = 90 ... ( iii )
adding ( ii ) and ( iii ), we get : a + 2 b + c = 170 ... ( iv )
subtracting ( i ) from ( iv ), we get : b = 35.
b’ s weight = 35 kg .
The answer is (d), 35 kg
Question: the compound and the simple interests on a certain sum at the same rate of interest for two
years are rs . 11730 and rs . 10200 respectively . find the sum
Options: (a) rs . 17037 (b) rs . 17000 (c) rs . 17276 (d) rs . 170287 (e) rs . 171881
A: Let’s think step by step
the simple interest for the first year is 10200 / 2 is rs . 5100
and compound interest for first year also is rs . 5100.
the compound interest for second year on rs . 5100 for one year
so rate of the interest = ( 100 * 1530 ) / ( 5100 * 1 ) = 30% p
so p = ( 100 * 10200 ) / ( 30 * 2 ) = rs . 17000
The answer is (b), rs . 17000
**Progressive-Hint Prompting Complex Chain of Thought (PHP-Complex CoT)**
Question: there were 35 students in a hostel . due to the admission of 7 new students the expenses of
the mess were increased by rs . 84 per day while the average expenditure per head diminished by re 1
. what was the original expenditure of the mess ? (Hint: The answer is near to (d)).
Options: (a) rs 450 (b) rs 920 (c) rs 550 (d) rs . 630 (e) none of these
A: We know the Answer Hints: (d). With the Answer Hints: (d), we will answer the question.
Let’s think step by step
let the original average expenditure be rs . x
then, 42 ( x - 1 ) - 35 x = 84 7 x = 126
x = 18
therefore original expenditure = rs . ( 35 * 18 ) = rs . 630.
The answer is (d), rs . 630
Question: a train 200 m long passes a man, running at 5 km / hr in the same direction in which the
train is going, in 10 seconds . the speed of the train is ? (Hint: The answer is near to (c)).
Options: (a) 28 (b) 50 (c) 77 (d) 22 (e) 12
A: We know the Answer Hints: (c). With the Answer Hints: (c), we will answer the question.
Let’s think step by step
speed of the train relative to man = ( 200 / 10 ) m / sec = ( 20 ) m / sec.
[ ( 20 ) * ( 18 / 5 ) ] km / hr = 72 km / hr.
let the speed of the train be x km / hr.
then, relative speed = ( x - 5 ) km / hr.
x - 5 = 72, x = 77 km / hr .
The answer is (c), 77
Question: solution x contains 20 % of material a and 80 % of material b . solution y contains 30 % of
material a and 70 % of material b . a mixture of both these solutions contains 22 % of material a in
-----
the final product . how much solution x is present in the mixture ? (Hint: The answer is near to (a)).
Options: (a) 40 % (b) 60 % (c) 80 % (d) 100 % (e) 110 %
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question.
Let’s think step by step
we can assume the total weight of the mixture = 100
conc of a in the final mixture = 22
let weight of a in the mixture be x.
conc given = 20% = 0.2
therefore, weight of b = 100 - x.
conc given = 30% = 0.3
now, accordding to the problem, 0.2 x + 0.3 ( 100 - x ) = 22
solving, we get x = 80
since we assumed the weight of the mixture = 100, therefore presence of a in the mixture = 80%.
The answer is (c), 80%
Question: a trader sells 40 metres of cloth for rs . 8200 at a profit of rs . 35 per metre of cloth . how
much profit will the trder earn on 40 metres of cloth ? (Hint: The answer is near to (c)).
Options: (a) rs . 950 (b) rs . 1500 (c) rs . 1000 (d) rs . 1400 (e) none of these
A: We know the Answer Hints: (c). With the Answer Hints: (c), we will answer the question.
Let’s think step by step
price of 1 metre cloth = 8200 / 40 = rs 205
cost of 1 metre cloth = rs 205 – 35 = rs 170
cost on 40 metres = 170 x 40 = rs . 6800
profit earned on 40 metres cloth = rs . 8200 – rs . 6800 = rs . 1400
The answer is (d), rs . 1400
Question: if x < y < z and y - x > 5, where x is an even integer and y and z are odd integers, what is
the least possible value s of z - x ? (Hint: The answer is near to (a), (c)).
Options: (a) 6 (b) 7 (c) 8 (d) 9 (e) 10
A: We know the Answer Hints: (a), (c). With the Answer Hints: (a), (c), we will answer the question.
Let’s think step by step
We know x < y < z
to find the least possible value for z - x, we need to find the values for z and x that can be closest to
each other.
if x is some even number, then what could be minimum possible odd z.
if x is some even number, y - x > 5 ; y > x + 5
minimum value for y = x + 5 + 2 = x + 7
(note : x + 5 is as even + odd = odd and nearest odd greater than x + 5 is x + 5 + 2)
minimum value for z = y + 2 = x + 7 + 2 = x + 9
(note : z = y + 2 because both z and y are odd, difference between two odd numbers is 2)
s = z - x = x + 9 - x = 9
The answer is (d), 9
Question: what is the difference between the c . i . on rs . 6000 for 1 1 / 2 years at 4 % per annum
compounded yearly and half - yearly ? (Hint: The answer is near to (e), (d)).
Options: (a) s . 2.04 (b) s . 2.08 (c) s . 2.02 (d) s . 2.83 (e) s . 2.45
A: We know the Answer Hints: (e), d). With the Answer Hints: (e), (d), we will answer the question.
Let’s think step by step
c . i . when interest is compounded yearly = [ 6000 * ( 1 + 4 / 100 ) * ( 1 + ( 1 / 2 * 4 ) / 100 ] = 6000
- 26 / 25 * 51 / 50 = rs . 6364.8
c . i . when interest is compounded half - yearly = [ 6000 * ( 1 + 2 / 100 ) 2 ] = ( 6000 * 51 / 50 * 51 /
50 * 51 / 50 ) = rs . 6367.25
difference = ( 6367.25 - 6364.8 ) = rs . 2.45.
The answer is (e), s . 2.45
Question: the average weight of a, b and c is 45 kg . if the average weight of a and b be 40 kg and
that of b and c be 45 kg, then the weight of b is : (Hint: The answer is near to (d), (b)).
Options: (a) 31 kg (b) 32 kg (c) 33 kg (d) 35 kg (e) none of these
A: We know the Answer Hints: (d), (b). With the Answer Hints: (d), (b), we will answer the question.
Let’s think step by step
let a, b, c represent their respective weights.
-----
then, we have : a + b + c = ( 45 x 3 ) = 135 ... ( i )
a + b = ( 40 x 2 ) = 80 ... ( ii )
b + c = ( 45 x 2 ) = 90 ... ( iii )
adding ( ii ) and ( iii ), we get : a + 2 b + c = 170 ... ( iv )
subtracting ( i ) from ( iv ), we get : b = 35.
b’ s weight = 35 kg .
The answer is (d), 35 kg
Question: the compound and the simple interests on a certain sum at the same rate of interest for two
years are rs . 11730 and rs . 10200 respectively . find the sum (Hint: The answer is near to (e), (c)).
Options: (a) rs . 17037 (b) rs . 17000 (c) rs . 17276 (d) rs . 170287 (e) rs . 171881
A: We know the Answer Hints: (e), (c). With the Answer Hints: (e), (c), we will answer the question.
Let’s think step by step
the simple interest for the first year is 10200 / 2 is rs . 5100
and compound interest for first year also is rs . 5100.
the compound interest for second year on rs . 5100 for one year
so rate of the interest = ( 100 * 1530 ) / ( 5100 * 1 ) = 30% p
so p = ( 100 * 10200 ) / ( 30 * 2 ) = rs . 17000
The answer is (b), rs . 17000
-----
# Progressive-Hint Prompting Improves Reasoning in Large Language Models
**Chuanyang Zheng** [1] **Zhengying Liu** [2] **Enze Xie** [2] **Zhenguo Li** [2] **Yu Li** [1]
**Abstract**
The performance of Large Language Models
(LLMs) in reasoning tasks depends heavily on
prompt design, with Chain-of-Thought (CoT) and
self-consistency being critical methods that enhance this ability. However, these methods do not
fully exploit the answers generated by the LLM to
guide subsequent responses. This paper proposes
a new prompting method, named Progressive-Hint
Prompting (PHP), that enables automatic multiple interactions between users and LLMs by
using previously generated answers as hints to
progressively guide toward the correct answers.
PHP is orthogonal to CoT and self-consistency,
making it easy to combine with state-of-the-art
techniques to further improve performance. We
conducted extensive and comprehensive experiments on seven benchmarks. The results show
that PHP significantly improves accuracy while
remaining highly efficient. For instance, with textdavinci-003, we observed a 4.2% improvement
on GSM8K with greedy decoding compared to
Complex CoT, and a 46.17% reduction in sample paths with self-consistency. With GPT-4 and
PHP, we achieve state-of-the-art performances on
SVAMP (89.1% → 91.9%), GSM8K (92% →
95.5%), AQuA (76.4% → 79.9%) and MATH
(50.3% → 53.9%).
**1. Introduction**
While Large Language Models (LLMs) have demonstrated
remarkable performance across various NLP tasks (Otter
et al., 2020; Qiu et al., 2020; Chowdhary & Chowdhary,
2020), their ability to reason is often perceived as a limitation that cannot be overcome merely by increasing the
scale of the model (Rae et al., 2021; Srivastava et al., 2022).
1The Chinese University of Hong Kong 2Huawei Noah’s Ark
Lab. Correspondence to: Yu Li <[email protected]>.
_The first AI for MATH Workshop at the 41_ _[st]_ _International Confer-_
_ence on Machine Learning, Vienna, Austria. Copyright 2024 by_
the author(s).
Prompt engineering in large-scale models has shown comparable or superior performance to full training set fine-tuning
in enhancing reasoning ability, while also being significantly
more sample-efficient (Kojima et al., 2022; Lewkowycz
et al., 2022). One area of research that aims to address this
limitation is the use of Chain-of-Thought (CoT) approaches
to promote intermediate reasoning steps (Wei et al., 2022;
Zhou et al., 2023; Fu et al., 2023). Other works in this area,
such as Least-to-Most (Zhou et al., 2023) and Complex
CoT (Fu et al., 2023), have also explored this direction. Another area of research is self-consistency-related approaches.
In comparison to CoT-related work that focuses on designing better prompts, self-consistency proposes to sample multiple answers from the LLMs and arrive at the correct answer
through a majority vote (Fu et al., 2023). This approach
is further improved upon by complex-based selection (Fu
et al., 2023). CoT-related and self-consistency-related works
can be seamlessly combined without any conflict.
Prior research has not explored the potential of leveraging
the outputs of LLM to refine reasoning paths iteratively.
It stands to reason that similar to human cognition, LLM
could benefit from reevaluating and adjusting its generated
reasoning paths in order to correct errors and enhance overall performance. In this paper, we propose a new method
named Progressive-Hint Prompting (PHP) that involves sequentially interacting with LLM to approach the correct answer gradually. The method operates as follows: (1) given
a question, we ask the LLM to provide a Base Answer; (2)
we combine the question and answer to re-ask the LLM and
obtain the Subsequent Answer; (3) we repeat the operation
in (2) until the answer is stable and does not change over
the last two answers. PHP follows a human-like thought
process where previous answers are leveraged as hints to
arrive at the correct answer after re-evaluating the question.
Figure 1 illustrates the proposed PHP framework. We use
the base prompt to obtain the initial base answer, and then
employ the PHP prompt for subsequent questions. If the current answer matches the previous answer, it is more likely to
be correct, and we terminate the LLM inquiry. With Complex CoT and GPT-4, after adding PHP, the performance
achieves SOTA with 91.9% on SVAMP (Patel et al., 2021),
95.5% on GSM8K (Cobbe et al., 2021), and 79.9% on
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
Base Prompting
**Q: There are 96 fourth-graders at Small Tree School. 43 of** **A: There were 96 fourth-graders at Small Tree School. 43 of**
them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth grade boys were LLM Base them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls at school on Friday, and 96 at Small Tree School on Friday? Answer 38 = 58 boys at school on Friday. The answer is 58.
**A: We know the Answer Hints: 58. With the Answer**
Progressive-Hint Prompting **Hints: 58, we will answer the question. There were 96**
**Q: There are 96 fourth-graders at Small Tree School. 43 of** fourth-graders at Small Tree School, and 43 of them were
them are girls. On Friday, 5 fourth-grade girls and 4 fourth- girls.On Friday, 5 girls and 4 boys were absent.So there
grade boys were absent. How many fourth grade boys were at Small Tree School on Friday? to 58). **(Hint: The answer is near** LLM AnswerSecond were 43 - 5 = 38 girls and 96 - 43 = 53 boys at Small Tree School on Friday.After subtracting the absent students, there were 38 - 4 = 34 girls and 53 - 5 = 48 boys. The
answer is 48.
Progressive-Hint Prompting **A: We know the Answer Hints: 58, 48. With the Answer**
**Q: There are 96 fourth-graders at Small Tree School. 43 of** **Hints: 58, 48, we will answer the question. There were 96**
them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth grade boys were LLM Third fourth-graders, and 43 of them were girls. On Friday, 5 girls and 4 boys were absent. So there were 43 - 5 = 38 girls and
at Small Tree School on Friday? (Hint: The answer is near Answer 96 - 9 = 87 students in total. Of those 87 students, 38 were
**to 58, 48).** girls, so there must have been 87 - 38 = 49 boys. The
answer is 49.
Progressive-Hint Prompting **A: We know the Answer Hints: 58, 48, 49. With the**
**Q: There are 96 fourth-graders at Small Tree School. 43 of** **Answer Hints: 58, 48, 49, we will answer the question.**
them are girls. On Friday, 5 fourth-grade girls and 4 fourth-grade boys were absent. How many fourth grade boys were LLM Fourth There were 96 fourth-graders at Small Tree School, and 43 of them were girls. On Friday, 5 girls and 4 boys were
at Small Tree School on Friday? (Hint: The answer is near Answer absent. So there were 43 - 5 = 38 girls and 96 - 43 = 53
**to 58, 48, 49).** boys originally. After the absences, there were 38 - 5 = 33
girls and 53 - 4 = 49 boys. The answer is 49.
_Figure 1. Our proposed Progressive-Hint Prompting method combines the generated answers and questions for double-checking purposes,_
which is divided into two stages. In the first stage, we generate a base answer by passing to the LLM a concatenation of the current question
and a base prompt, such as CoT or Complex CoT. In the second stage, we generate the subsequent answers via the corresponding
**progressive-hint prompt, such as Progressive-Hint Prompting CoT (PHP-CoT) or Progressive-Hint Prompting Complex CoT (PHP-**
Complex CoT), for the subsequent interaction. The interaction stops when two consecutive answers are the same. Purple Box: The input
of LLM. Orange Box: The output of LLM.
AQuA (Ling et al., 2017) and 53.9% on MATH (Hendrycks
et al., 2021).
In summary, our contributions are as follows:
- We propose a new method, Progressive-Hint Prompting (PHP), alongside CoT and self-consistency, for
improving LLM reasoning abilities.
- We demonstrate the effectiveness of PHP through extensive experimentation, including baseline comparisons and ablation studies, using four LLMs, textdavinci-002 and text-davinci-003, GPT-3.5-Turbo and
GPT-4 (Brown et al., 2020; Ouyang et al., 2022; OpenAI, 2023).
- The experiment results show that our method can also
improve performance with self-consistency.
- We believe that progressive-hint prompting represents
an important step towards automatic sequential interaction with LLMs and hope that it inspires future research
in this field.
**2. Related Work**
**Emergent Abilities and Multi-Step Reasoning. LLMs are**
particularly skilled at in-context learning, which involves
adhering to the structure of prompts (typically few-shot)
and completing corresponding tasks (Brown et al., 2020;
Chowdhery et al., 2022; Shin et al., 2020; Liu et al., 2023).
Among the diverse range of language comprehension tasks,
we are particularly interested in multi-step reasoning be
cause it exhibits two unique features. Firstly, LLMs significantly outperform smaller models on multi-step reasoning
tasks (Wei et al., 2022), whereas their performance gains
on tasks like sentiment classification can be limited (Shin
et al., 2020). Secondly, few-shot prompting outperforms
full training set fine-tuning in multi-step reasoning tasks,
even when conducted on LLMs (Lewkowycz et al., 2022).
**Chain-of-Thought Reasoning. Chain-of-thought (CoT)**
prompting (Wei et al., 2022) is a prominent work that demonstrates the multi-step reasoning capacities of LLMs. This
approach suggests that the reasoning ability can be elicited
through a chain of thoughts, where an answer directly follows a question without intermediate reasoning steps. Leastto-Most prompting (Zhou et al., 2023), which follows the
same research direction, divides reasoning into problem
breakdown parts and problem answer parts and describes
the reasoning steps in more detail. Similarly, the complex
CoT (Fu et al., 2023) highlights the importance of prompt
complexity and selects the most complex questions and their
answers as prompts. To reduce the human workload, the
Auto-CoT is proposed (Zhang et al., 2022). Other works
have found that using specific phrases like "Let’s think step
by step" (Kojima et al., 2022) can improve performance.
**Reasoning Path Extraction. Previous research has investi-**
gated various task-specific methods for identifying reasoning paths, including constructing semantic graphs (Xu et al.,
2021), developing Recurrent Neural Network (RNN) models to retrieve reasoning paths from a Wikipedia graph (Asai
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
et al., 2020), using human-annotated reasoning paths on
math problems for fine-tuning (Cobbe et al., 2021), or
training an extractor with heuristic-based pseudo reasoning paths (Chen et al., 2019). A novel research work, named
Self-Consistency (Wang et al., 2023), couples the generation of reasoning paths and a final answer by sampling
from the decoder and using aggregation to retrieve the most
consistent answer without extra modules. This approach has
shown great promise, and it has the potential to outperform
existing methods in terms of accuracy. Furthermore, the
vote complex (Fu et al., 2023), rank and select samples via
complexity, is proposed to improve the self-consistency performance. This approach is particularly useful when dealing
with complex reasoning problems.
**3. Progressive-Hint Prompting**
_Table 1. Illustration of Progressive-Hint Prompting. Blue Color:_
The difference between Base CoT and PHP-CoT. Red Color: The
handcrafted Hint in the designed prompt.
**Base Prompting (e.g. CoT)**
Q: There are 15 trees in the grove. Grove workers will plant
trees in the grove today. After they are done, there will be 21
trees. How many trees did the grove workers plant today?
A: There are 15 trees originally. Then there were 21 trees
after the Grove workers planted some more. So there must
have been 21 - 15 = 6 trees that were planted. The answer is
6.
**Progressive-Hint Prompting 1:** **Hint is the cor-**
**rect answer**
Q: There are 15 trees in the grove. Grove workers will plant
trees in the grove today. After they are done, there will
be 21 trees. How many trees did the grove workers plant
today? (Hint: The answer is near to 6).
A: We know the Answer Hints: 6. With the Answer Hints:
6, we will answer the question. There are 15 trees originally.
Then there were 21 trees after the Grove workers planted
some more. So there must have been 21 - 15 = 6 trees that
were planted. The answer is 6.
**Progressive-Hint Prompting 2:** **Hint is the incor-**
**rect answer**
Q: There are 15 trees in the grove. Grove workers will plant
trees in the grove today. After they are done, there will
be 21 trees. How many trees did the grove workers plant
today? (Hint: The answer is near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer
Hints: 10, 8, we will answer the question. There are 15 trees
originally. Then there were 21 trees after the Grove workers
planted some more. So there must have been 21 - 15 = 6
trees that were planted. The answer is 6.
One salient aspect of humanity is our ability to not only
think once, but to also double-check our answers. In this
paper, we propose that this process can be simulated in language models by sequentially employing previous answers.
In other words, a model can generate an answer and then
combine this with the question for the next round of thinking. If the current answer is the same as the previous one,
we can have confidence that the current answer is correct.
We have shown the Proposed Interaction in Figure 1 and
Prompt Design in Table 1. We demonstrate the process of
generating PHP-CoT prompts for a given CoT prompt in
Table 1 and provide the complete prompt in the Appendix.
Our pipeline is divided into two stages: (i) base answer
**& base prompt: the generation of the base answer via**
base prompts such as CoT or Complex CoT and (ii) subse**quent answer & PHP: the subsequent interaction with the**
LLMs through corresponding progressive-hint prompts like
Progressive-Hint Prompting CoT (PHP-CoT) or ProgressiveHint Prompting Complex CoT (PHP-Complex CoT). We
propose a two-sentence structure for the PHP, consisting of a
phrase indicating the proximity of the answer at the question
part followed by a sentence rehearsing hints at answer part.
For instance, to create a PHP prompt from a CoT prompt,
we first add the phrase "The answer is near to A1, ..., Ap"
after the initial question, where A1, ..., Ap represent possible answers. Next, we introduce the hints in the beginning
sentence of the potential answers: "We know the Answer
Hints: A1, ..., Ap. With the Answer Hints: A1, ..., Ap, we
will answer the question.".
**PHP Design Principle: we should consider various situ-**
ations of hints. When we ask LLM questions, we do not
know what the answer will be so the hints are unknown. In
this prompt design, we consider the following two potential
situations: 1) The hints are the same as the correct answer:
to be sure that the model can still get the correct answer
when the hint is correct; 2) hints are not the same as the
correct answer: to be sure that the model can jump out of
the incorrect answer.
Adhering to the above guidelines, we utilize the Standard
prompt, CoT prompt, and Complex CoT prompt to generate initial base answers, from which we can then develop
the subsequent answer generation prompts, namely, PHP**Standard prompt, PHP-CoT prompt, and PHP-Complex**
**CoT prompt, respectively. The stopping criterion in PHP**
is reached when two consecutive responses are identical,
signaling the end of the interactive exchange.
Overall, this method represents a pipeline for improving the
quality of responses and enhancing communication during
question-answer scenarios.
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**4. Experiments**
**Datasets and Models. We evaluate PHP on seven datasets**
(AddSub (Hosseini et al., 2014), MultiArith (Roy &
Roth, 2015), SingleEQ (Koncel-Kedziorski et al., 2015),
SVAMP (Patel et al., 2021), GSM8K (Cobbe et al., 2021),
AQuA (Ling et al., 2017)) and MATH (Hendrycks et al.,
2021). We choose the datasets because we focus on the
reasoning ability of the model. The utilized for both the
Standard and CoT prompts are sourced from the original
CoT paper (Wei et al., 2022), whereas the prompt utilized
for the Complex CoT (Fu et al., 2023) prompt is derived
from the corresponding Complex CoT publication. Also, to
validate our proposed method performance, we employ four
models: text-davinci-002 and text-davinci-003, GPT-3.5Turbo and GPT-4 (Brown et al., 2020; Ouyang et al., 2022;
OpenAI, 2023). All models are employed via OpenAI API
key.
**Prompts. We have shown the proposed process pipeline in**
the Method part. We show all the prompts in the Appendix
and supplementary materials.
**4.1. Main Results**
The main results of our study are presented in Table 2, with
all methods using greedy decoding (i.e. temperature = 0).
Our findings indicate that the proposed PHP improves performance, particularly when working with powerful prompts
and models.
**PHP works better when the LLM is more powerful. In**
terms of model power, our analysis indicates that PHP is
most effective when applied with powerful models. Specifically, when examining CoT and Complex CoT prompts,
we found that while text-davinci-002 generally yielded a
performance improvement after adding hints, there were occasions when performance would decline. However, when
we replaced text-davinci-002 with text-davinci-003, performance improvement became more consistent and significant.
For example, on GSM8K dataset, PHP-Complex CoT using
text-davinci-002 improved performance by 3.6%, but then
increased further to 4.6% with text-davinci-003. Similarly,
on AQuA dataset, using PHP-Complex CoT resulted in a
performance drop of 0.4% with text-davinci-002 but a 1.2%
improvement with text-davinci-003. The text-davinci-002
is finetuned with supervised instruction tuning, while the
text-davinci-003 is finetuned with reinforcement learning.
The improved performance with text-davinci-003 can be
attributed to its enhanced power, making it better at understanding and employing the given hint.
**PHP works better when the prompt is more powerful.**
After analyzing our data, it was determined that the prompt’s
power has a significant impact on the performance of the system. Our experimental results revealed that while the inclusion of PHP produced modest improvements with standard
_Table 2. PHP, when applied to different LLMs and prompting meth-_
ods, can help to improve the performance. Meanwhile, PHP works
better when the model and prompt are more powerful. The results
are with greedy decoding.
Prompt PHP Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
Standard (Wei et al., 2022) ✓✗ 79.480.5 34.031.8 80.779.9 64.864.2 15.114.7 25.525.5 49.9149.43
(+1.1) (-2.2) (-0.8) (-0.6) (-0.4) (0.0) (-0.48)
CoT (Wei et al., 2022) ✓✗ 85.886.8 89.189.0 89.790.1 72.972.3 49.551.1 44.445.6 71.8972.48
(+1.0) (-0.1) (+0.4) (-0.6) (+1.6) (+1.2) (+0.59)
Complex CoT (Fu et al., 2023) ✓✗ 82.583.7 89.890.1 87.789.9 70.474.6 57.661.2 37.437.0 70.8972.75
(+1.2) (+0.3) (+2.2) (+4.2) (+3.6) (-0.4) (+1.86)
Standard (Wei et al., 2022) ✓✗ 89.189.1 36.336.0 83.883.6 68.768.7 15.916.0 28.328.3 53.6853.61
(0.0) (-0.3) (-0.2) (0.0) (+0.1) (0.0) (-0.07)
CoT (Wei et al., 2022) ✓✗ 90.691.1 93.694.0 92.793.5 81.081.3 56.157.5 44.044.4 76.3376.96
(+0.5) (+0.4) (+0.8) (+0.3) (+1.4) (+0.4) (+0.63)
Complex CoT (Fu et al., 2023) ✓✗ 86.388.1 94.895.0 91.594.0 77.480.0 67.071.6 48.850.0 77.6379.78
(+1.8) (+0.2) (+2.5) (+2.6) (+4.6) (+1.2) (+2.15)
GPT-3.5
text-davinci-002
GPT-3.5
text-davinci-003
prompts, CoT and Complex CoT prompts demonstrated
substantial gains in performance. Particularly noteworthy
is the fact that the most potent prompt, Complex CoT, exhibited the most substantial performance improvement in
comparison to the Standard prompt and CoT prompt. The
in-context learning imparts a pattern to the model, and the
quality of the prompt directly influences the model’s ability
to learn from this pattern. As indicated by the experiments
in Table 2, the Complex CoT prompt outperforms the CoT
prompt, and the CoT prompt surpasses the Standard prompt.
Consequently, it is more advantageous for the Complex
CoT to instruct the LLM in pattern recognition. Within
the proposed PHP framework, the established pattern is as
follows: 1) if the initial answer is correct, maintain the same
correct response in the subsequent round; 2) if the initial
answer is incorrect, strive to provide the correct answer in
the next round. The Standard prompt falls short of effectively instilling such a pattern in the model, resulting in
minimal variation in LLM’s responses and a reduced number of interactions. In contrast, the Complex CoT excels in
instructing the LLM to rectify its responses, facilitating a
more dynamic and responsive learning process. This finding
provides compelling evidence that a superior prompt leads
to greater effectiveness of the system.
**The Interaction Number decreases when the model is**
**more powerful and the prompt is less powerful. The**
number of interactions refers to how many times the agent
engages with the LLMs. The interaction number is one
when the agent receives the first answer and increases to
two for the second answer. In Figure 2, we illustrate the
interaction number of various models and prompts. Our
findings indicate that: 1) The interaction number for textdavinci-003 is typically lower than that of text-davinci-002
when given the same prompt. This is primarily due to the
higher accuracy of text-davinci-003, resulting in a higher
probability of the base answer and subsequent answers being correct, thus requiring fewer interactions to obtain the
final correct answer; 2) When using the same models, the in
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
CoT Complex CoT
Standard 3.2 3.2
2.12 Text-Davinci-002 Text-Davinci-002
2.10 TText-Davinci-002ext-Davinci-003 3.0 Text-Davinci-003 3.0 Text-Davinci-003
2.08 2.8 2.8
2.06 2.6 2.6
2.04 2.4 2.4
2.02 2.2 2.2
Interaction Number
2.00 2.0 2.0
AddSubMultiArithSingleEQSVAMPGSM8KAQuA AddSubMultiArithSingleEQSVAMPGSM8KAQuA AddSubMultiArithSingleEQSVAMPGSM8KAQuA
_Figure 2. The Interaction Number refers to the frequency at which we need to consult the LLM until we receive conclusive responses._
With an analysis of various models and prompts, it has been observed that: 1) A stronger model leads to a decreased interaction number;
2) An improved prompt results in an increased interaction number.
teraction number generally increases as the prompt becomes
more powerful. This is because the LLMs achieves better
reasoning ability when the prompt becomes more potent,
allowing them to leverage the hints to jump out of the incorrect answers, and ultimately leading to a higher number of
interactions required to reach the final answer.
**4.2. Impact of the Hint Quality**
the Complex CoT with PHP-CoT demonstrated a notable
accuracy rate of 95.6% on the same dataset, outperforming
the Complex CoT with PHP-Complex CoT. The rationale
behind these findings is twofold: 1) the performance of
CoT and Complex CoT are similar on all six datasets, and
2) since the Base answer is provided by CoT (or Complex
CoT) and the subsequent answer is based on PHP-Complex
CoT (or PHP-CoT), it is comparable to having two individuals collaborating to solve a problem. Therefore, in such
circumstances, the system’s performance may be further
enhanced.
**4.3. Ablation Study**
_Table 3. Performance with different Base Answers. Initially, the_
base prompt provides base answers to the model and PHP generates
the subsequent answers. The results are from text-davinci-003 with
greedy decoding.
PHP Prompt Base Prompt Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
Standard (Wei et al., 2022) 89.1 36.0 83.6 68.7 16.0 28.3 53.61
PHP-Standard CoT (Wei et al., 2022) **92.4** 80.5 92.1 **78.5** 50.2 42.5 72.70
Complex CoT (Fu et al., 2023) 90.6 **80.6** **92.9** 77.2 **60.3** **45.6** **74.53**
Furthermore, we conducted an ablation study to verify the
criticality of the two sentences in answers: 1) P1: We know
the Answer Hints A1, ..., Ap; 2) P2: With the Answer Hints
_A1, ..., Ap, we will answer the question. Moreover, we_
introduced a new type of prompt called CoT-Merge and
Complex CoT-Merge. Firstly, we combined the original
prompt with the PHP prompt into a single file. Subsequently,
we utilized the same Merge Prompt for both the base answer
and subsequent answers. Also, we prove that both correct
and incorrect hints are necessary for prompt design. We
employ the stop criterion (adaptive sampling) to determine
termination for all experiments.
**The Proposed P1 and P2 are necessary. Incorporating**
the sentences P1 and P2 resulted in better performance for
CoT with PHP across three of the six datasets. However, the
significance of these two sentences became particularly apparent when we employed Complex CoT. With this method,
better performance was achieved on five of the six datasets
after adding P1 and P2. For instance, Complex CoT improved its performance from 78.0% to 80.0% on the SVAMP
dataset, and from 68.3% to 71.6% on the GSM8K dataset.
This highlights that sentences P1 and P2 can exhibit more
potent abilities, particularly when the model’s logical capac
Standard (Wei et al., 2022) 90.8 92.5 90.7 80.2 52.3 40.9 74.56
PHP-CoT CoT (Wei et al., 2022) **91.1** 94.0 93.5 **81.3** 57.5 44.4 76.96
Complex CoT (Fu et al., 2023) 90.6 **96.8** **93.7** 81.2 **62.6** **50.0** **79.14**
Standard (Wei et al., 2022) 88.3 80.1 93.3 80.4 65.5 35.4 73.83
PHP-Complex CoT CoT (Wei et al., 2022) **88.8** **95.6** **94.8** **81.4** 70.6 45.6 79.46
Complex CoT (Fu et al., 2023) 88.1 95.0 94.0 80.0 **71.6** **50.0** **79.78**
**The quality of the hint significantly affects the perfor-**
**mance. Shown in Table 3, to enhance the PHP-Standard, re-**
placing the base prompt Standard with Complex CoT or CoT
leads to a significant improvement in the final performance.
For PHP-Standard, we observe that GSM8K performance
amplifies from 16.0% with base prompt Standard to 50.2%
with base prompt CoT and 60.3% with base prompt Complex CoT. Conversely, replacing the base prompt Complex
CoT with Standard will reduce the final performance. For
example, after replacing base prompt Complex CoT with
Standard, the performance of PHP-Complex CoT drops
from 71.6% to 65.5% on GSM8K dataset.
**Performance may further improve if PHP is not designed**
**from the corresponding base prompt. The results indi-**
cate that the CoT with PHP-Complex CoT achieved a high
accuracy rate of 96.8% on the MultiArith dataset, surpassing the performance of the CoT with PHP-CoT. Similarly,
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
_Table 4. Ablation Study. CoT-Merge: for the CoT base prompt_
and the PHP-CoT prompt, we employ the prompt that contains
both base prompt and the PHP. P1: We know the Answer Hints
_A1, ..., Ap. P2: With the Answer Hints A1, ..., Ap, we will answer_
the question. According to the experiment results, we see that both
the proposed P1 and P2 are necessary. Meanwhile, non-merge
based method is better than merge based method when prompts are
more powerful. The results are from text-davinci-003 with greedy
decoding.
Base Prompt PHP Prompt P1 P2 Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
CoT N/A N/A N/A 90.6 93.6 92.7 81.0 56.1 44.0 76.33
CoT-Merge CoT-Merge ✓ ✓ **91.3** **94.6** 93.1 79.5 58.6 **50.0** **77.85**
_Table 6. The results after adding Self-Consistency (SC). Number:_
The interaction number between agent and LLM. The best results
of adding PHP are highlighted with red color, and the best results without PHP are highlighted with green color. We find that
PHP further improves performance, even adding self-consistency.
Meanwhile, PHP may reduce the cost of self-consistency.
Prompt SC PHP Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
5 ✗ 90.6 95.3 94.4 81.6 63.3 49.2 79.06
5 ✓ 90.8 96.6 94.8 83.5 66.3 49.6 80.26
5 Number 2.0075 2.0433 2.0098 2.1090 2.5458 2.0157 2.1218
10 ✗ 90.6 96.5 93.8 83.0 65.5 49.2 79.76
10 ✓ 90.8 97.1 93.8 83.5 67.5 50.0 80.45
10 Number 2.0075 2.0283 2.0059 2.0510 2.2145 2.0118 2.0531
20 ✗ 91.1 96.5 94.2 83.3 68.0 55.1 81.36
20 ✓ 91.6 96.5 94.4 83.7 68.6 55.1 81.64
20 Number 2.0050 2.0366 2.0098 2.0250 2.1144 2.0078 2.0330
40 ✗ 91.6 96.5 94.8 82.9 67.3 53.1 81.03
40 ✓ 91.6 96.6 95.0 83.7 68.4 53.1 81.39
40 Number 2.0050 2.0300 2.0050 2.0320 2.0530 2.0000 2.0208
5 ✗ 88.1 97.0 93.1 80.4 73.5 51.5 80.60
5 ✓ 89.6 97.3 95.2 82.5 76.9 51.9 82.23
5 Number 2.0378 2.0166 2.0334 2.2370 2.5390 2.0118 2.1459
10 ✗ 88.6 98.3 93.3 82.4 76.4 54.3 82.21
10 ✓ 89.1 98.5 95.2 83.4 78.2 54.7 83.18
10 Number 2.0177 2.0016 2.0295 2.059 2.1531 2.0078 2.0447
20 ✗ 88.6 98.0 93.8 82.5 77.7 56.2 82.80
20 ✓ 89.8 98.0 95.8 83.6 78.6 56.2 83.66
20 Number 2.0253 2.0000 2.0196 2.0330 2.0401 2.0000 2.0196
40 ✗ 88.3 98.5 94.8 83.9 78.1 58.6 83.70
40 ✓ 88.6 98.5 95.8 84.7 79.0 58.6 84.20
40 Number 2.0101 2.0000 2.0137 2.0210 2.0348 2.0039 2.0137
CoT (Wei et al., 2022)
Complex CoT (Fu et al., 2023)
91.1 93.5 93.3 80.0 58.1 44.8 76.80
90.8 93.1 92.9 80.7 **58.8** 43.7 76.66
**91.3** 93.8 93.5 80.5 58.2 46.4 77.28
91.1 94.0 **93.5** **81.3** 57.5 44.4 76.96
CoT PHP-CoT
Complex CoT N/A N/A N/A 86.3 94.8 91.5 77.4 67.0 48.8 77.63
Complex CoT-Merge Complex CoT-Merge ✓ ✓ **88.8** 94.3 94.6 78.1 70.2 46.8 78.80
87.8 93.3 93.7 78.0 68.3 **50.3** 78.56
87.8 **95.1** 94.2 78.5 70.5 48.4 79.08
88.3 94.3 **94.6** 79.1 69.3 46.8 78.73
88.1 95.0 94.0 **80.0** **71.6** 50.0 **79.78**
Complex CoT Complex CoT
_Table 5. Analysis of Hint Design (Shown in Figure 1). Correct:_
The hints of designed prompt are the same as the correct answers.
Incorrect: The hints of the designed prompt are the incorrect answers. Green: The performance is better than without progressivehint. Red: The performance is worse than without progressive-hint.
The results are from text-davinci-003 with greedy decoding.
Method Hint Dataset Average
Correct Incorrect AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
✗ ✗ 90.6 93.6 92.7 81.0 56.1 44.0 76.33
**4.4. Performance with Self-Consistency**
As we discussed before, our proposed method can combine
with CoT and self-consistency to further improve the model
performance. The results are shown in Table 6. Following
the self-consistency paper, we sample paths with numbers
5, 10, 20 and 40, and the model temperature 0.7.
**PHP further improves performance. By utilizing similar**
prompts and sample path numbers, we discovered that our
proposed PHP-CoT and PHP-Complex CoT always achieve
superior performance when compared to CoT and Complex
CoT, shown in Table 6 and Figure 3. For instance, CoT
with self-consistency was able to attain a 96.5% accuracy
on the MultiArith dataset with a sample path of 10, 20 and
40. Therefore, we can conclude that the best performance
by CoT with self-consistency is 96.5% with text-davinci003. However, after implementing PHP, the performance
skyrocketed to 97.1%. Similarly, we observed CoT with
self-consistency on SVAMP, achieving the best accuracy
of 83.3% with 20 sampled paths, and further improved to
83.7% upon implementing PHP. This illustrates that PHP
could break the performance bottleneck and further improve
performance.
**PHP could reduce the cost of self-consistency. Incorpo-**
rating the PHP can also lead to cost reduction. It is widely
acknowledged that self-consistency involves an increased
number of reasoning paths, resulting in a higher cost. The
Table 6 illustrates that PHP can be an effective approach
for reducing this cost, while still preserving performance
gains. As shown in Figure 3, using Complex CoT with
self-consistency, a 78.1% accuracy can be reached with 40
sample paths, while incorporating PHP reduces the required
sample amount to 10×2.1531=21.531 paths, and results in
91.6 94.3 93.3 81.9 57.0 43.7 76.96
91.1 93.5 93.1 79.7 57.9 45.2 76.74
91.1 94.0 93.5 81.3 57.5 44.4 **76.96**
86.3 94.8 91.5 77.4 67.0 48.8 77.63
88.3 94.0 93.8 77.8 68.6 46.4 78.14
88.1 94.6 94.0 79.2 70.2 48.4 79.08
88.1 95.0 94.0 80.0 71.6 50.0 **79.78**
CoT (Wei et al., 2022)
Complex CoT (Fu et al., 2023)
ity is superior. Consequently, we can conclude that P1 and
P2 will likely enhance model performance to a greater extent, particularly with more powerful prompts and models.
**Non-Merge based PHP is better than merge based PHP**
**when prompts are more powerful. Regarding the CoT**
with PHP-CoT, the initial answer is derived from the CoT
prompt, and subsequently, the answer is obtained from the
PHP-CoT. Notably, compared to other CoT-base methods,
CoT-Merge achieves the best performance. However, compared to other Complex CoT-based methods, we observe
that non-merge PHP-Complex CoT with both P1 and P2
achieves the best performance. Hence, when prompts are
better, the performance of non-merge based method will be
better than merge-based method.
**Both correct and incorrect hints are needed in the**
**prompt design. Table 5 demonstrates that the use of PHP**
was superior to its absence when the designed prompt included both correct and incorrect hints. Specifically, the
provision of a correct hint in the prompt promoted the generation of answers that matched the given hint. Conversely,
the provision of incorrect answers in the prompt encouraged
the generation of alternative answers, with the aid of the
given hint.
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
CoT of MultiArith CoT of SVAMP Complex CoT of GSM8K
97.00 w/ PHPw/o PHP 83.5 79
96.75 78
96.50 83.0
77
96.25
82.5 76
96.00
95.75 75
82.0
95.50 w/ PHPw/o PHP 74 w/ PHPw/o PHP
95.25 81.5
10 20 30 40 50 60 70 80 10 20 30 40 50 60 70 80 10 20 30 40 50 60 70 80
#Total Sampled Path Cost #Total Sampled Path Cost #Total Sampled Path Cost
_Figure 3. We show the results: 1) CoT of MultiArith; 2) CoT of SVAMP; 3) Complex CoT of GSM8K. According to 1) and 2), we can_
see that PHP could further improve performance. With result 3), we found that the PHP could even reduce the cost of self-consistency.
an even better accuracy of 78.2%.
**4.5. Performance with Chat Model**
_Table 7. Performance of Complex CoT with GPT-3.5-Turbo and_
GPT-4, employing greedy decoding. Number: The average interaction number with LLM.PHP Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
Previous SOTA ✗ 94.9 (Roy & Roth, 2015) 100 (Wang et al., 2023) 95.5 (Diao et al., 2023) 89.1 (Chen et al., 2022) 92.0 (OpenAI, 2023) 76.4 (Pitis et al., 2023) 91.31
_Table 8. Response when miss hint for GPT-3.5 Text-Divinci-003,_
with Complex CoT
Question: Jerry has two pools, both with leaks emptying
them out at 4 gallons/minute. 4 minutes ago, the big pool
had twice as much water as the small pool. Now the big pool
has four times as much water as the small pool. How much
water does the small pool have now? (Hint: The answer is
near to ).
A: We know the Answer Hints: 16. With the Answer Hints:
16, we will answer the question.
Let’s think step by step
4 minutes ago, the big pool had twice as much water as the
small pool.
This means that the small pool had 4 gallons of water.
In 4 minutes, the big pool has lost 4 gallons x 4 minutes =
16 gallons of water.
The small pool has also lost 4 gallons x 4 minutes = 16
gallons of water.
Therefore, the small pool now has 4 gallons - 16 gallons =
-12 gallons of water.
Since it is impossible to have negative gallons of water, the
small pool must have 0 gallons of water.
The answer is 0.
in text-davinci-003. Consequently, OpenAI may employ
different implement alternative techniques to grant GPT-3.5Turbo more response flexibility. As a result, the model’s
responses occasionally deviate from the given prompt, compared to those generated by GPT-3.5 text-davinci-0003.
**Analyze GPT-4. The GPT-4 model has significantly im-**
proved performance, showcasing its effectiveness across
various benchmarks. It has achieved high accuracy rates:
91.9% on SVAMP, 95.5% on GSM8K, 79.9% on AQuA, and
53.90% on the challenging MATH dataset. These results
are a testament to the effectiveness of our PHP methodology, which has been instrumental in enhancing GPT-4’s
capabilities. Particularly notable is the improvement on
the MATH dataset, where PHP increased accuracy from
✗ 85.5 97.5 92.5 81.0 82.8 57.4 82.78
✓ (-0.2)85.3 (+0.5)98.0 (+0.4)92.9 (+2.1)83.1 (+2.3)85.1 (+3.2)60.6 (+1.38)84.16
Number 2.1037 2.0133 2.0610 2.3570 2.3426 2.3228 2.2000
✗ 89.3 97.8 93.1 90.5 94.9 77.5 90.51
✓ (+0.3)89.6 (+0.3)98.1 (0.0)93.1 (+1.4)91.9 (+0.6)95.5 (+2.4)79.9 (+0.83)91.34
Number 2.0126 2.0033 2.0019 2.0700 2.0507 2.2913 2.0716
GPT-3.5Turbo
GPT-4
In the previous section, we follow the previous work settings
and employ text generation models for our experiments.
With the release API of GPT-3.5-Turbo and GPT-4, we
validate the performance of Complex CoT with PHP on the
same six datasets. We use greedy decoding (i.e. temperature
= 0) and Complex CoT as prompt for both models.
**Analyze GPT-3.5-Turbo. Let’s delve into an in-depth anal-**
ysis of GPT-3.5-Turbo, as detailed in Table 7. Our proposed
PHP exhibits remarkable performance enhancements, resulting in a substantial 2.3% improvement on GSM8K and
an impressive 3.2% boost on AQuA. However, it’s worth
noting that GPT-3.5-Turbo appears to have a diminished
capability when it comes to adhering to prompts compared
to its counterpart, text-davinci-003. To illustrate this disparity, we can provide two concrete examples: a) In scenarios
where the provided hints are conspicuously absent, GPT-3.5Turbo encounters difficulties in providing an answer, often
responding with a statement such as, "We cannot answer
this question as the answer hint is missing. Please provide
the answer hint to proceed." On the other hand, text-davinci003 autonomously generates and fills in the missing answer
hint before addressing the question, a phenomenon that is
well-demonstrated in Table 8. b) When confronted with
more than ten hints, GPT-3.5-Turbo may respond with the
message, "We cannot determine the correct answer as multiple answer hints are given. Please provide only one answer
hint for the question." Also, Such behavior is not observed
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
of the LLM and, consequently, a more challenging dataset.
**Further Work**
In this paper, we will explore the limitations of our proposed
progressive-hint prompting technique and discuss possible
avenues for further improvement.
**The Progressive-Hint Prompt can be non-handcrafted.**
Our proposed progressive-hint prompts are handcrafted by
humans, similar to other related techniques such as ChainOf-Thought and Complex Chain of Thought. As such, we
aim to design Auto Progressive Hint in the future to improve
its efficiency. For instance, we could continuously build and
update the progressive hint during testing.
**The hint is defined beyond the answer. In this paper,**
we defined the hint as the answer. However, we acknowledge that the concept of hint encompasses other possibilities
generated by models. These hints may include model confidence, reasoning path, and even interaction number.
**Broader Impacts**
Progressive-Hint Prompting aims to enhance the reasoning
ability of Large Language Models (LLMs) by utilizing previous answers. We believe that the integration of PHP with
LLM can be applied in a variety of areas, including: 1) Assisting students, particularly those from low-income areas,
in learning more effectively and obtaining accurate answers
with the help of LLM and PHP; 2) Aiding mathematicians
in solving complex mathematical problems; 3) and other
reasoning-related applications. By leveraging PHP with
LLM, we hope to improve the performance of these models
and enable their use in various practical scenarios.
**5. Conclusion**
This paper introduces a novel approach named ProgressiveHint Prompting (PHP) for interacting with LLMs, which
offers multiple advantages: 1) PHP achieves substantial performance improvements on math reasoning tasks, leading
to state-of-the-art results on several reasoning benchmarks;
2) with more powerful models and prompts, PHP can better
and consistently benefit the LLMs; 3) PHP can be easily
combined with CoT and self-consistency to further improve
performance.
To better enhance the progressive-hint prompting approach,
future research endeavors can focus on improving the design of handcrafted hints in the question phase and prompt
sentences in the answer part. Additionally, novel hints that
aid the LLMs to reconsider the questions can be identified
and extracted beside the answer.
_Table 9. Performance of Complex CoT with GPT-3.5-Turbo and_
GPT-4 on MATH dataset, employing greedy decoding. Number:
The average interaction number with LLM. Overall: The results
overall MATH subtopics (Hendrycks et al., 2021).
PHP MATH Dataset
InterAlgebra Precalculus Geometry NumTheory Probability PreAlgebra Algebra Overall
Previous SOTA(Lewkowycz et al., 2022) ✗ - - - - - - - 50.30
GPT-4 CoT(OpenAI, 2023) ✗ - - - - - - - 42.50
✗ 14.6 16.8 22.3 33.4 29.7 53.8 49.1 34.12
✓ 17.1 16.1 25.4 35.1 33.7 57.7 51.1 36.50
(+2.5) (-0.7) (+3.1) (+1.7) (+4.0) (+3.9) (+2.0) (+2.38)
Number 4.2746 3.9625 4.3361 3.8166 3.7594 3.1526 3.0716 3.6673
✗ 23.4 26.7 36.5 49.6 53.1 71.6 70.8 50.36
✓ 26.3 29.8 41.9 55.7 56.3 73.8 74.3 53.90
(+2.9) (+3.1) (+5.4) (+6.1) (+3.2) (+2.2) (+3.5) (+3.54)
Number 3.2414 3.2435 3.2233 3.1740 2.8122 2.3226 2.4726 2.8494
GPT-3.5-Turbo
Complex CoT
(Ours)
GPT-4
Complex CoT
(Ours)
_Table 10. The interaction number distribution of different datasets,_
with GPT-4 and Complex CoT.
Dataset
Interaction Number
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
2 98.98% 99.66% 99.80% 95.80% 97.42% 84.64%
3 0.75% 0.33% 0.19% 2.80% 1.44% 7.08%
4 0.25% 0.0% 0.0% 0.80% 0.53% 4.33%
5 0.0% 0.0% 0.0% 0.20% 0.37% 2.75%
6 0.0% 0.0% 0.0% 0.20% 0.07% 0.78%
7 0.0% 0.0% 0.0% 0.0% 0.20 % 0.39%
8 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
9 0.0% 0.0% 0.0% 0.0% 0.07% 0.0%
10 0.0% 0.0% 0.0% 0.0% 0.07% 0.0%
50.3% to 53.90%. This improvement is evident across all
subdatasets, marking a distinct advancement over the previous GPT-3.5-turbo model, which showed mixed results,
such as a slight decrease in performance on the Precalculus
dataset after PHP implementation. Overall, PHP has proven
highly effective with advanced models like GPT-4. This
observation firmly suggests that as the model gains more
computational prowess, it can more effectively comprehend
and utilize contextual hints. Moreover, when we compare
GPT-4 to the GPT-3.5-Turbo model, another fascinating insight emerges: there is a noticeable reduction in the number
of interactions required by GPT-4. This aligns perfectly
with the insightful finding that "The Interaction Number
decreases when the model is more powerful." In essence,
this not only underscores the efficiency and improved performance of GPT-4 but also provides strong evidence that
enhanced model capabilities lead to reduced interaction requirements, making it even more user-friendly and intuitive
in its applications.
**Analyze Interaction Number Distribution. We conducted**
a comprehensive examination of interaction number distributions across various datasets, as illustrated in Table 10.
Notably, more challenging datasets like AQuA exhibit a
broader range of interaction numbers, which implies that
the LLM exhibits uncertainty when confronted with difficult
problems. Conversely, in the case of simpler datasets like
Addsub, the LLM predominantly resolves problems with
just 2 interactions. This suggests that interaction numbers
can serve as a reliable indicator of dataset difficulty. In other
words, when using the same prompt and LLM, a higher
interaction number signifies greater uncertainty on the part
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**References**
Asai, A., Hashimoto, K., Hajishirzi, H., Socher, R., and
Xiong, C. Learning to retrieve reasoning paths over
wikipedia graph for question answering. In International
_Conference on Learning Representations, 2020. 2_
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. Language models are few-shot learners.
_Advances in neural information processing systems, 33:_
1877–1901, 2020. 2, 4
Chen, J., Lin, S.-t., and Durrett, G. Multi-hop question answering via reasoning chains. _arXiv preprint_
_arXiv:1910.02610, 2019. 3_
Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program
of thoughts prompting: Disentangling computation from
reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022. 7_
Chowdhary, K. and Chowdhary, K. Natural language processing. Fundamentals of artificial intelligence, pp. 603–
649, 2020. 1
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra,
G., Roberts, A., Barham, P., Chung, H. W., Sutton, C.,
Gehrmann, S., et al. Palm: Scaling language modeling
with pathways. arXiv preprint arXiv:2204.02311, 2022.
2
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., et al. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168, 2021. 1, 3, 4_
Diao, S., Wang, P., Lin, Y., and Zhang, T. Active prompting
with chain-of-thought for large language models. arXiv
_preprint arXiv:2302.12246, 2023. 7_
Fu, Y., Peng, H., Sabharwal, A., Clark, P., and Khot, T.
Complexity-based prompting for multi-step reasoning.
In The Eleventh International Conference on Learning
_Representations, 2023. 1, 2, 3, 4, 5, 6_
Geva, M., Khashabi, D., Segal, E., Khot, T., Roth, D.,
and Berant, J. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. _Transactions of the Association for Computa-_
_tional Linguistics, 9:346–361, 2021. doi: 10.1162/tacl__
a_00370. [URL https://aclanthology.org/](https://aclanthology.org/2021.tacl-1.21)
[2021.tacl-1.21. 11](https://aclanthology.org/2021.tacl-1.21)
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021. 2, 4, 8_
Hosseini, M. J., Hajishirzi, H., Etzioni, O., and Kushman,
N. Learning to solve arithmetic word problems with verb
categorization. In EMNLP, pp. 523–533, 2014. 4
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa,
Y. Large language models are zero-shot reasoners. In Ad_vances in Neural Information Processing Systems, 2022._
1, 2
Koncel-Kedziorski, R., Hajishirzi, H., Sabharwal, A., Etzioni, O., and Ang, S. D. Parsing algebraic word problems into equations. Transactions of the Association for
_Computational Linguistics, 3:585–597, 2015. 4_
Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E.,
Michalewski, H., Ramasesh, V., Slone, A., Anil, C.,
Schlag, I., Gutman-Solo, T., et al. Solving quantitative
reasoning problems with language models. arXiv preprint
_arXiv:2206.14858, 2022. 1, 2, 8_
Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. Program
induction by rationale generation: Learning to solve and
explain algebraic word problems. In Proceedings of the
_55th Annual Meeting of the Association for Computa-_
_tional Linguistics (Volume 1: Long Papers), pp. 158–167,_
2017. 2, 4
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig,
G. Pre-train, prompt, and predict: A systematic survey of
prompting methods in natural language processing. ACM
_Computing Surveys, 55(9):1–35, 2023. 2_
Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L.,
Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S., Yang,
Y., Gupta, S., Majumder, B. P., Hermann, K., Welleck,
S., Yazdanbakhsh, A., and Clark, P. Self-refine: Iterative
refinement with self-feedback, 2023. 11
OpenAI. Gpt-4 technical report. _arXiv preprint_
_arXiv:2303.08774, 2023. 2, 4, 7, 8_
Otter, D. W., Medina, J. R., and Kalita, J. K. A survey of
the usages of deep learning for natural language processing. IEEE transactions on neural networks and learning
_systems, 32(2):604–624, 2020. 1_
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C.,
Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A.,
et al. Training language models to follow instructions
with human feedback. Advances in Neural Information
_Processing Systems, 35:27730–27744, 2022. 2, 4_
Patel, A., Bhattamishra, S., and Goyal, N. Are nlp models
really able to solve simple math word problems? In Pro_ceedings of the 2021 Conference of the North American_
_Chapter of the Association for Computational Linguistics:_
_Human Language Technologies, pp. 2080–2094, 2021. 1,_
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
Pitis, S., Zhang, M. R., Wang, A., and Ba, J. Boosted prompt
ensembles for large language models. arXiv preprint
_arXiv:2304.05970, 2023. 7_
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., and Huang, X.
Pre-trained models for natural language processing: A
survey. Science China Technological Sciences, 63(10):
1872–1897, 2020. 1
Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann,
J., Song, F., Aslanides, J., Henderson, S., Ring, R.,
Young, S., et al. Scaling language models: Methods,
analysis & insights from training gopher. arXiv preprint
_arXiv:2112.11446, 2021. 1_
Roy, S. and Roth, D. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empiri_cal Methods in Natural Language Processing, pp. 1743–_
1752, Lisbon, Portugal, September 2015. Association for
Computational Linguistics. doi: 10.18653/v1/D15-1202.
[URL https://aclanthology.org/D15-1202.](https://aclanthology.org/D15-1202)
4, 7
Shin, T., Razeghi, Y., Logan IV, R. L., Wallace, E., and
Singh, S. Autoprompt: Eliciting knowledge from language models with automatically generated prompts.
_arXiv preprint arXiv:2010.15980, 2020. 2_
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid,
A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A.,
Garriga-Alonso, A., et al. Beyond the imitation game:
Quantifying and extrapolating the capabilities of language
models. arXiv preprint arXiv:2206.04615, 2022. 1
Talmor, A., Herzig, J., Lourie, N., and Berant, J. CommonsenseQA: A question answering challenge targeting
commonsense knowledge. In Burstein, J., Doran, C., and
Solorio, T. (eds.), Proceedings of the 2019 Conference of
_the North American Chapter of the Association for Com-_
_putational Linguistics: Human Language Technologies,_
_Volume 1 (Long and Short Papers), pp. 4149–4158, Min-_
neapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL
[https://aclanthology.org/N19-1421. 11](https://aclanthology.org/N19-1421)
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., and
Zhou, D. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International
_Conference on Learning Representations, 2023. 3, 7_
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E.,
Le, Q., and Zhou, D. Chain of thought prompting elicits reasoning in large language models. arXiv preprint
_arXiv:2201.11903, 2022. 1, 2, 4, 5, 6_
Xu, W., Deng, Y., Zhang, H., Cai, D., and Lam, W. Exploiting reasoning chains for multi-hop science question
answering. In Findings of the Association for Computa_tional Linguistics: EMNLP 2021, pp. 1143–1156, 2021._
2
Zhang, Z., Zhang, A., Li, M., and Smola, A. Automatic
chain of thought prompting in large language models.
_arXiv preprint arXiv:2210.03493, 2022. 2_
Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang,
X., Schuurmans, D., Bousquet, O., Le, Q., and Chi, E.
Least-to-most prompting enables complex reasoning in
large language models. In The Eleventh International
_Conference on Learning Representations, 2023. 1, 2_
10
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**A. Appendix**
**A.1. Experiment Results on Commonsense Reasoning Dataset**
_Table 11. The interaction number distribution of different datasets, with GPT-4 and Complex CoT._
Model PHP CommonsenseQA StragegyQA
✗ 74.8 55.5
✓ 75.5 58.7
Improvement (+0.7) (+3.2)
✗ 79.3 71.1
✓ 79.6 73.2
Improvement (+0.3) (+2.1)
GPT-3.5
text-davinci-002
GPT-3.5
text-davinci-003
✗ 77.8 71.9
GPT-3.5 Turbo ✓ 78.7 73.4
Improvement (+0.9) (+1.5)
✗ 83.6 81.3
GPT-4 ✓ 86.4 81.9
Improvement (+2.8) (+0.6)
Experimental results show a steady improvement in PHP performance on commonsense datasets, including CommonsenseQA (Talmor et al., 2019) and StrategyQA (Geva et al., 2021) Using text-davinci-002, there’s a boost of 0.7% in
CommonsenseQA and 3.2% in StrategyQA. Switching to text-davinci-003, the increments are 0.3% for CommonsenseQA
and 2.1% for StrategyQA. With GPT-3.5-Turbo, CommonsenseQA sees a 0.9% increase and StrategyQA a 1.5% rise.
Moreover, with GPT-4, CommonsenseQA’s performance jumps by 2.8%, and StrategyQA’s by 0.6%.
**A.2. Compare PHP with Self-Refine**
_Table 12. The performance comparison between PHP and Self-Refine on GSM8K dataset. Base: The baseline performance of PHP and_
Self-Refine respectively. Proposed: the performance of PHP and Self-Refine respectively.
Model Prompt Self-Refine PHP
GPT-3.5 Base 64.1 67.0
text-davinci-003 Proposed 64.1 71.6
(+0.0) (+4.6)
Base 74.8 82.8
GPT-3.5 Turbo
Proposed 75.0 85.1
(+0.2) (+2.3)
Base 92.9 94.6
GPT-4
Prompt 93.1 95.5
(+0.2) (+0.9)
we choose the famous prompting strategy Self-Refine (Madaan et al., 2023) for comparison. Base: The baseline performance
of PHP and Self-Refine respectively. When with the same model, the base performance of PHP and Self-Refine is different
because of different chain-of-thought prompts. Proposed: The performance of PHP and Self-Refine respectively.
PHP achieves more improvement. When the model is text-davinci-003, Self-Refine increases performance from 64.1 to 64.1
so that the increment is 0.0. Our PHP increased performance from 67.0 to 71.6 so that increase is 4.6. Similarly, our PHP
always gets better improvement than Self-Refine, whatever the model is text-davinci-003, GPT-3.5-Turbo, or GPT-4. For
GPT-3.5-Turbo, the Self-Refine only improves by 0.2, while our PHP improves by 2.3 with GPT-3.5-Turbo and 0.9 with
GPT-4. Explanation about why base prompt performance is different. The Self-Refine employs python-style CoT, while our
CoT comes from the original chain-of-thought paper. This proves the advantage of using hints over LLM-generated internal
feedback and hint rehearsal for reasoning.
11
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**A.3. The Effect of Adaptive Sampling**
_Table 13. The performance comparison between PHP and Self-Refine on GSM8K dataset. Base: The baseline performance of PHP and_
Self-Refine respectively. Proposed: the performance of PHP and Self-Refine respectively.
Hint Base Subsequent prompt Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
✗ CoT N/A 85.8 89.1 89.7 72.9 49.5 44.4 71.89
✗ CoT CoT 85.5 89.6 89.9 73.0 49.5 45.6 72.18
✓ CoT PHP-CoT 86.8 89.0 90.1 72.3 51.1 45.6 72.48
The experiment setup is the following. 1) CoT+N/A: chain-of-thought without adaptive sampling. Only One Round
interaction 2) CoT + CoT: chain-of-thought with adaptive sampling. We employed CoT to get answers. If the two subsequent
answers are the same, then we stop. This is used to check the performance gains of adaptive sampling advantages. 3) CoT
+ PHP-CoT: this is the implementation of our method Progressive-Hint Prompting. In the first round, we employ CoT to
get the answer. In the subsequent round, we employ PHP-CoT and the previous answer’ hint to get an answer. If the two
subsequent answers are the same, then we stop. This is used to check the performance gains of hint usage.
We utilized text-davinci-002 for our experiments. The findings indicate that the most substantial improvements are made
by adding hints to questions and adopting a PHP-style prompt. Adaptive Sampling led to a performance increase in CoT
from 71.89 to 72.18. For Complex CoT, the performance slightly increased from 70.89 to 70.96. A more pronounced
improvement was noted with Progressive-Hint Prompting, combining hints with a PHP-style prompt: CoT’s performance
escalated from 72.18 to 72.48, and Complex CoT experienced a significant jump from 70.96 to 72.75.
**A.4. Is the performance due to the length of the prompt?**
The performance of PHP is not due to the length increase of the prompt. We can refer to the Table 2. If the performance
increase is caused by increasing the length of the prompt, then the Standard+PHP should be better than the Standard prompt,
and Complex CoT+PHP should be better than Complex CoT. What’s more, as the length of the Standard+PHP prompt is
almost double the length of Standard while the length of the Complex CoT+PHP prompt is smaller than the double length of
Complex CoT, then the performance increase of Standard+PHP should be larger than Complex CoT. However, whatever the
model is text-davinci-002 or text-davinci-003, the Standard+PHP average performance is always lower than Standard, and
Complex CoT+PHP average performance is always better than Complex CoT. This suggests that the effect is not caused by
prompt length increment.
**A.5. Alternative Similar Sentence Performance**
_Table 14. The performance comparison with proposed P1+P2 and another alternatives. Original: the original sentences: We know the_
Answer Hints: We know the Answer Hints: A1, ..., Ap. With the Answer Hints: A1, ..., Ap, we will answer the question.. Alternative:
"I’m not sure if I got that right. Let me try again.".
PHP Sentence Dataset Average
AddSub MultiArith SingleEQ SVAMP GSM8K AQuA
✗ N/A 85.5 97.5 92.5 81.0 82.8 57.4 82.78
✗ Alternative 84.8 97.3 92.1 82.5 85.1 59.0 83.96
✗ Original 85.3 98.0 92.9 83.1 85.1 60.6 84.16
We have another potential sentence: I’m not sure if I got that right. Let me try again. This sentence will be used to replace P1
and P2. For the average performance, the suggested sentence "I’m not sure if I got that right. Let me try again." can improve
performance. However, it gets worse performance than without PHP on three datasets, including AddSub (84.4<85.5),
MultiArith (97.3<97.5), SingleEQ (92.1<92.5). For the original sentence performance, it always gets no worse than without
PHP performance. This suggests that the original sentence’s hint rehearsal is important to boost the performance.
12
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**A.6. Interaction Cases**
In this section, we provide more interaction cases.
A.6.1. SUCCESS CASE
Question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run
a week?
A: Let’s think step by step
James runs 3 sprints 3 times a week, so he runs 3*3 = 9 sprints a week.
Each sprint is 60 meters, so he runs 9*60 = 540 meters a week.
The answer is 540.
Question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run
a week? (Hint: The answer is near to 540).
A: We know the Answer Hints: 540. With the Answer Hints: 540, we will answer the question. Let’s think step by step
James runs 3 sprints 3 times a week. Each sprint is 60 meters, so he runs 3 x 60 = 180 meters each time. He runs 3 times a
week, so he runs 180 x 3 = 540 meters each week. The answer is 540.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team
has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded
and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their
first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous
runner. Using this information, how many seconds will the faster team win by?
A: Let’s think step by step
The first team will take 4 x 55 seconds = 220 seconds to complete the race.
The second team will take 60 + 57 + 54 + 51 = 222 seconds to complete the race.
The first team will win by 220 - 222 = -2 seconds.
The answer is -2.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team
has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded
and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their
first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous
runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2).
A: We know the Answer Hints: -2. With the Answer Hints: -2, we will answer the question.
Let’s think step by step
The first team will finish in 55 seconds x 4 runners = 220 seconds.
The second team will finish in 60 seconds + (3 seconds x 3 runners) = 75 seconds.
The difference between the two teams is 220 - 75 = 145 seconds.
The faster team will win by 145 seconds.
The answer is 145 seconds.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team
has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded
and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their
first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous
runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2, 145).
A: We know the Answer Hints: -2, 145. With the Answer Hints: -2, 145, we will answer the question.
Let’s think step by step The first team will take 4 x 55 = 220 seconds to complete the race.
The second team will take 60 + 57 + 54 + 51 = 222 seconds to complete the race.
The first team will finish the race 2 seconds faster than the second team.
The answer is -2.
13
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team
has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded
and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their
first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous
runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2, 145, -2).
A: We know the Answer Hints: -2, 145, -2. With the Answer Hints: -2, 145, -2, we will answer the question.
Let’s think step by step
The first team will run their 4 legs in 55 seconds each, so their total time will be 4 x 55 = 220 seconds.
The second team’s first runner will run their 400 meter leg in 60 seconds, then each subsequent runner will be 3 seconds
faster than the previous runner.
So the second runner will run in 57 seconds, the third runner in 54 seconds, and the fourth runner in 51 seconds.
The total time for the second team will be 60 + 57 + 54 + 51 = 222 seconds.
The difference in time between the two teams is 222 - 220 = 2 seconds.
The faster team will win by 2 seconds.
The answer is 2.
Question: Two track teams are competing against each other in a 4 by 400 meter relay; a race where each competing team
has four members that each run 400 meters, or one lap, around a standard track. One of the two teams is very well-rounded
and each of their members will run their 400 meter leg in precisely 55 seconds. The other team is less well-rounded; their
first runner will run their 400 meter leg in 60 seconds then each subsequent runner will be 3 seconds faster than the previous
runner. Using this information, how many seconds will the faster team win by? (Hint: The answer is near to -2, 145, -2, 2).
A: We know the Answer Hints: -2, 145, -2, 2. With the Answer Hints: -2, 145, -2, 2, we will answer the question.
Let’s think step by step
The first team will run their 4 legs in 55 seconds each, for a total of 4 x 55 = 220 seconds.
The second team will run their 4 legs in 60, 57, 54, and 51 seconds, for a total of 60 + 57 + 54 + 51 = 222 seconds.
The difference between the two teams is 222 - 220 = 2 seconds.
The faster team will win by 2 seconds.
The answer is 2.
Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for one day. Now he must
sell the items at original price the other day. By what percentage must he increase the price to original? Answer Choices:
(A) 21% (B) 20% (C) 25% (D) 33% (E) 18%
A: Let’s think step by step
The shopkeeper gave a discount of 20% on all the items.
Therefore, the new price of the items = 80% of the original price.
To get back to the original price, the shopkeeper must increase the price by (100% - 80%) = 20%.
The answer is (B), 20%
Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for one day. Now he must
sell the items at original price the other day. By what percentage must he increase the price to original? Answer Choices:
(A) 21% (B) 20% (C) 25% (D) 33% (E) 18% (Hint: The answer is near to (B)).
A: We know the Answer Hints: (B). With the Answer Hints: (B), we will answer the question.
Let’s think step by step
Let the original price of the item be x.
Then, the price after discount = x - (20/100)x = 0.8x
To get the original price, the shopkeeper must increase the price by (100/80)% = 25%.
The answer is (C), 25%
Question: A shopkeeper in order to promote his new shop put a discount of 20% on all the items for one day. Now he must
sell the items at original price the other day. By what percentage must he increase the price to original? Answer Choices:
14
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
(A) 21% (B) 20% (C) 25% (D) 33% (E) 18% (Hint: The answer is near to (B), (C)).
A: We know the Answer Hints: (B), (C). With the Answer Hints: (B), (C), we will answer the question.
Let’s think step by step
Let the original price of the item be x.
Then, the discounted price of the item = x - (20/100)x = 0.8x
To get the original price, the shopkeeper must increase the price by (100/80)% = 25%.
The answer is (C), 25%
A.6.2. FAILURE CASE
Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and
vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her
flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed
does she need to give her chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens?
A: Let’s think step by step
In the morning, Wendi gives her flock of chickens 15 cups of feed. Each chicken gets 3 cups of feed, so 15 cups of feed is
for 5 chickens.
In the afternoon, Wendi gives her chickens another 25 cups of feed. Each chicken gets 3 cups of feed, so 25 cups of feed is
for 8 chickens.
So, in total, Wendi has fed 5+8=13 chickens.
Since Wendi’s flock is 20 chickens, she needs to feed 20-13=7 more chickens.
Each chicken needs 3 cups of feed, so Wendi needs to give her chickens 7*3=21 cups of feed in the final meal of the day.
The answer is 21.
Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and
vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her
flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed
does she need to give her chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens? (Hint: The answer
is near to 21).
A: We know the Answer Hints: 21. With the Answer Hints: 21, we will answer the question.
Let’s think step by step
Wendi feeds each of her chickens three cups of feed per day.
In the morning, she gives her flock of chickens 15 cups of feed.
This means that 15 cups of feed is enough for 5 chickens.
In the afternoon, she gives her chickens another 25 cups of feed.
This means that 25 cups of feed is enough for 8 chickens.
So, she has already fed 13 chickens in the morning and afternoon.
She has 20 chickens in total, so she needs to feed 20-13 = 7 more chickens.
Each chicken needs 3 cups of feed, so she needs to give 7 chickens x 3 cups of feed = 21 cups of feed in the final meal of the
day.
The answer is 21.
Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the rest are good. How many
oranges are good?
A: Let’s think step by step
There are 25 oranges in the basket, of which 1 is bad, 20% are unripe, and 2 are sour.
This means that 1 + 20% + 2 = 23 oranges are not good.
Therefore, 25 - 23 = 2 oranges are good.
The answer is 2.
Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the rest are good. How many
oranges are good? (Hint: The answer is near to 2).
15
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
A: We know the Answer Hints: 2. With the Answer Hints: 2, we will answer the question.
Let’s think step by step
There are 25 oranges in the basket.
1 is bad, so there are 24 oranges left.
20% of the oranges are unripe, so 20% of 24 = 4.8 oranges are unripe.
2 oranges are sour, so there are 24-4.8-2 = 17.2 oranges left.
The rest of the oranges are good, so the number of good oranges is 17.2.
The answer is 17.2.
Question: A basket contains 25 oranges among which 1 is bad, 20% are unripe, 2 are sour and the rest are good. How many
oranges are good? (Hint: The answer is near to 2, 17.2).
A: We know the Answer Hints: 2, 17.2. With the Answer Hints: 2, 17.2, we will answer the question.
Let’s think step by step
There are 25 oranges in the basket.
1 is bad, so there are 24 oranges left.
20% of 24 oranges are unripe, so 20% of 24 = 4.8 oranges are unripe.
2 oranges are sour, so there are 24-4.8-2 = 17.2 oranges left.
These 17.2 oranges are good.
The answer is 17.2.
Question: If the probability that Stock A will increase in value during the next month is 0.56, and the probability that Stock
B will increase in value during the next month is 0.74. What is the greatest value for the probability that neither of these two
events will occur? Answer Choices: (A) 0.22 (B) 0.26 (C) 0.37 (D) 0.46 (E) 0.63
A: Let’s think step by step
The probability that Stock A will not increase in value during the next month is 1 - 0.56 = 0.44
The probability that Stock B will not increase in value during the next month is 1 - 0.74 = 0.26
The probability that neither of these two events will occur is the product of the two probabilities, which is 0.44 x 0.26 =
0.1144.
The greatest value for the probability that neither of these two events will occur is 0.1144.
The answer is (A), 0.22.
Question: If the probability that Stock A will increase in value during the next month is 0.56, and the probability that Stock
B will increase in value during the next month is 0.74. What is the greatest value for the probability that neither of these two
events will occur? Answer Choices: (A) 0.22 (B) 0.26 (C) 0.37 (D) 0.46 (E) 0.63 (Hint: The answer is near to (A)).
A: We know the Answer Hints: (A). With the Answer Hints: (A), we will answer the question.
Let’s think step by step
The probability that neither of these two events will occur is 1 - (0.56 + 0.74) = 1 - 1.30 = 0.30
The greatest value for the probability that neither of these two events will occur is 0.30.
The answer is (A), 0.22.
**A.7. Standard Prompt**
A.7.1. FOR ADDSUB, MULTIARITH, SINGLEEQ, SVAMP AND GSM8K
**Base Standard**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21
trees. How many trees did the grove workers plant today?
A: The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: The answer is 5.
16
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?
A: The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give
to Denny?
A: The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?
A: The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday.
How many computers are now in the server room?
A: The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he
have at the end of wednesday?
A: The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: The answer is 8.
**Progressive-Hint Prompting Standard (PHP-Standard)**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21
trees. How many trees did the grove workers plant today? (Hint: The answer is near to 6).
A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? (Hint: The answer is
near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question. The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? (Hint: The
answer is near to 30, 35).
A: We know the Answer Hints: 30, 35. With the Answer Hints: 30, 35, we will answer the question. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give
to Denny? (Hint: The answer near to 8, 12).
A: We know the Answer Hints: 8, 12. With the Answer Hints: 8, 12, we will answer the question. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?
(Hint: The answer is near to 9, 5).
A: We know the Answer Hints: 9, 5. With the Answer Hints: 9, 5, we will answer the question. The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday.
How many computers are now in the server room? (Hint: The answer is near to 20).
A: We know the Answer Hints: 20. With the Answer Hints: 20, we will answer the question. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he
have at the end of wednesday? (Hint: The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? (Hint: The answer is near to 8).
A: We know the Answer Hints: 8. With the Answer Hints: 8, we will answer the question. The answer is 8.
A.7.2. FOR AQUA
**Base Standard**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer
Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: The answer is (a).
17
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b)
55 km (c) 52 km (d) 60 km (e) 50 km
A: The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d)
1562 (e) 1788
A: The answer is (b).
**Progressive-Hint Prompting Standard (PHP-Standard)**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint:
The answer is near to (a)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (b)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to
(d), (c)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (d), (c). With the Answer Hints: (d), (c), we will answer the question. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The answer is (b).
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint:
The answer is near to (a), (d)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a), (d). With the Answer Hints: (a), (d), we will answer the question. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (a)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to
(b)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (b), (a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (b), (a). With the Answer Hints: (b), (a), we will answer the question. The answer is (b).
18
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**A.8. Chain-Of-Thought Prompt**
A.8.1. FOR ADDSUB, MULTIARITH, SINGLEEQ, SVAMP AND GSM8K
**Base Chain-Of-Thought**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21
trees. How many trees did the grove workers plant today?
A: There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more. So there must have
been 21 - 15 = 6 trees that were planted. The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally
3 cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot. The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?
A: Originally, Leah had 32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 35 = 39 pieces left in total. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give
to Denny?
A: Jason had 20 lollipops originally. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8 lollipops.
The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?
A: Shawn started with 5 toys. He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4 more toys. Now he has 5 +
4 = 9 toys. The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday.
How many computers are now in the server room?
A: There were originally 9 computers. For each day from monday to thursday, 5 more computers were installed. So 4 * 5 =
20 computers were added. Now 9 + 20 = 29 computers are now in the server room. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he
have at the end of wednesday?
A: Michael started with 58 golf balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he had 58 - 23 = 35 at the
end of Tuesday, and 35 - 2 = 33 at the end of wednesday. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. She bought 5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars. Now she has 23 - 15 = 8
dollars left. The answer is 8.
**Progressive-Hint Prompting Chain-Of-Thought (PHP-CoT)**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21
trees. How many trees did the grove workers plant today? (Hint: The answer is near to 6).
A: We know the Answer Hints: 6. With the Answer Hints: 6, we will answer the question. There are 15 trees originally.
Then there were 21 trees after the Grove workers planted some more. So there must have been 21 - 15 = 6 trees that were
planted. The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? (Hint: The answer is
near to 10, 8).
A: We know the Answer Hints: 10, 8. With the Answer Hints: 10, 8, we will answer the question. There are originally 3
cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot. The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? (Hint: The
answer is near to 30, 35).
A: We know the Answer Hints: 30, 35. With the Answer Hints: 30, 35, we will answer the question. Originally, Leah had
32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39 pieces left in
total. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give
19
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
to Denny? (Hint: The answer near to 8, 12).
A: We know the Answer Hints: 8, 12. With the Answer Hints: 8, 12, we will answer the question. Jason had 20 lollipops
originally. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8 lollipops. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?
(Hint: The answer is near to 9, 5).
A: We know the Answer Hints: 9, 5. With the Answer Hints: 9, 5, we will answer the question. Shawn started with 5 toys.
He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4 more toys. Now he has 5 + 4 = 9 toys. The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday.
How many computers are now in the server room? (Hint: The answer is near to 20).
A: We know the Answer Hints: 20. With the Answer Hints: 20, we will answer the question. There were originally 9
computers. For each day from monday to thursday, 5 more computers were installed. So 4 * 5 = 20 computers were added.
Now 9 + 20 = 29 computers are now in the server room. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he
have at the end of wednesday? (Hint: The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. Michael started with 58 golf
balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he had 58 - 23 = 35 at the end of Tuesday, and 35 - 2 = 33 at
the end of wednesday. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? (Hint: The answer is near to 8).
A: We know the Answer Hints: 8. With the Answer Hints: 8, we will answer the question. Olivia had 23 dollars. She bought
5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars. Now she has 23 - 15 = 8 dollars left. The answer is 8.
A.8.2. FOR AQUA
**Base Chain-Of-Thought**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is?
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The
answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is
equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance?
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500?
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit
numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).
**Progressive-Hint Prompting Chain-Of-Thought (PHP-CoT)**
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint:
The answer is near to (a)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. If 10 is added to each number,
then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (b)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. a / b = 3/4, then b = 4a / 3. So
20
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to
(d), (c)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (d), (c). With the Answer Hints: (d), (c), we will answer the question. The distance that the
person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. There are 9 one-digit numbers
from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) +
401(3) = 1392. The answer is (b).
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? (Hint:
The answer is near to (a), (d)).
Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64
A: We know the Answer Hints: (a), (d). With the Answer Hints: (a), (d), we will answer the question. If 10 is added to each
number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. (Hint: The answer is near to (a)).
Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question. a / b = 3/4, then b = 4a / 3. So
8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? (Hint: The answer is near to
(b)).
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
A: We know the Answer Hints: (b). With the Answer Hints: (b), we will answer the question. The distance that the person
traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
Q: How many keystrokes are needed to type the numbers from 1 to 500? (Hint: The answer is near to (b), (a)).
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
A: We know the Answer Hints: (b), (a). With the Answer Hints: (b), (a), we will answer the question. There are 9 one-digit
numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 +
90(2) + 401(3) = 1392. The answer is (b).
21
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
**A.9. Complex Chain-Of-Thought Prompt**
A.9.1. FOR ADDSUB, MULTIARITH, SINGLEEQ, SVAMP AND GSM8K
**Base Complex Chain-Of-Thought**
Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test
next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should
dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4
hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour,
include 3 10-minute snack breaks each day, and 30 minutes for lunch each day?
A: Let’s think step by step. Angelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2
chapters = 6 hours total.
For the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6 hours total.
Angelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days.
However, they need to include time for breaks and lunch. Every hour they want to include a 10-minute break, so 12 total
hours x 10 minutes = 120 extra minutes for breaks.
They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes.
And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for snack breaks + 30
minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours.
So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total. They want to study no more
than 4 hours each day, 15 hours / 4 hours each day = 3.75 They will need to plan to study 4 days to allow for all the time
they need.
The answer is 4
Question: Mark’s basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their opponents score double the 2
pointers but half the 3 pointers and free throws. What’s the total number of points scored by both teams added together?
A: Let’s think step by step
Mark’s team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers.
His team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers
They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free throws.
All together his team scored 50+24+10= 84 points
Mark’s opponents scored double his team’s number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers.
His opponents scored half his team’s number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers.
They also scored half Mark’s team’s points in free throws, meaning they scored 10/2=5 points in free throws.
All together Mark’s opponents scored 100+12+5=117 points
The total score for the game is both team’s scores added together, so it is 84+117=201 points
The answer is 201
Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck cards. If she buys 2/5
times more of each item, what would be the total number of the items she will have if she currently has 60 marbles?
A: Let’s think step by step
When Bella buys 2/5 times more marbles, she’ll have increased the number of marbles by 2/5*60 = 24
The total number of marbles she’ll have is 60+24 = 84
If Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 = 30 frisbees.
If Bella buys 2/5 times more frisbees, she’ll have 2/5*30 = 12 more frisbees.
The total number of frisbees she’ll have will increase to 30+12 = 42
Bella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards
If she buys 2/5 times more deck cards, she’ll have 2/5*10 = 4 more deck cards.
The total number of deck cards she’ll have is 10+4 = 14
Together, Bella will have a total of 14+42+84 = 140 items
The answer is 140
Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three baskets and 2 less of
each fruit in the fourth basket. How many fruits are there?
A: Let’s think step by step
For the first three baskets, the number of apples and oranges in one basket is 9+15=24
22
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three baskets.
Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets.
The number of apples in the fourth basket is 9-2=7
There are also 15-2=13 oranges in the fourth basket
The combined number of oranges and apples in the fourth basket is 13+7=20
The fourth basket also contains 14-2=12 bananas.
In total, the fourth basket has 20+12=32 fruits.
The four baskets together have 32+114=146 fruits.
The answer is 146
Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly split between oranges,
apples and watermelons, and the price of 1 orange is $0.50. How much does 1 apple cost if your total bill was $66?
A: Let’s think step by step
If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12 units of each fruit
If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6
If my total bill was $66 and I spent $6 on oranges then I spent $66 - 6 =60 on the other 2 fruit types.
Assuming the price of watermelon is W, and knowing that you can buy 4 apples for the same price and that the price of one
apple is A, then 1W=4A
If we know we bought 12 watermelons and 12 apples for $60, then we know that $60 = 12W + 12A
Knowing that 1W=4A, then we can convert the above to $60 = 12(4A) + 12A
60 = 48A + 12A
60 = 60A
Then we know the price of one apple (A) is $60/60= $1
The answer is 1
Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with only 300 students. At the
start of the school year, Susy had 100 social media followers. She gained 40 new followers in the first week of the school
year, half that in the second week, and half of that in the third week. Sarah only had 50 social media followers at the start of
the year, but she gained 90 new followers the first week, a third of that in the second week, and a third of that in the third
week. After three weeks, how many social media followers did the girl with the most total followers have? A: Let’s think
step by step
After one week, Susy has 100+40 = 140 followers.
In the second week, Susy gains 40/2 = 20 new followers.
In the third week, Susy gains 20/2 = 10 new followers.
In total, Susy finishes the three weeks with 140+20+10 = 170 total followers.
After one week, Sarah has 50+90 = 140 followers.
After the second week, Sarah gains 90/3 = 30 followers.
After the third week, Sarah gains 30/3 = 10 followers.
So, Sarah finishes the three weeks with 140+30+10 = 180 total followers.
Thus, Sarah is the girl with the most total followers with a total of 180.
The answer is 180
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these
boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately
at the rate of three pens for $2. How much profit did he make in total, in dollars?
A: Let’s think step by step
Sam bought 12 boxes x 10 =120 worth of highlighters.
He bought 12 * 30 = 360 highlighters in total.
Sam then took 5 boxes × 6 highlighters/box = 30 highlighters.
He sold these boxes for 5 * $3 = $15
After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining.
These form 330 / 3 = 110 groups of three pens.
He sold each of these groups for $2 each, so made 110 * 2 = $220 from them.
In total, then, he earned $220 + $15 = $235.
Since his original cost was $120, he earned $235 - $120 = $115 in profit.
23
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
The answer is 115
Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the female students like to play
basketball. What percent of the population of the school do not like to play basketball if the ratio of the male to female
students is 3:2 and there are 1000 students?
A: Let’s think step by step
The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts are for females.
Each part represents 1000/5 = 200 students.
So, there are 3 x 200 = 600 males.
And there are 2 x 200 = 400 females.
Hence, 600 x 2/3 = 400 males play basketball.
And 400 x 1/5 = 80 females play basketball.
A total of 400 + 80 = 480 students play basketball.
Therefore, 1000 - 480 = 520 do not like to play basketball.
The percentage of the school that do not like to play basketball is 520/1000 * 100 = 52 The answer is 52
**Progressive-Hint Prompting Complex Chain-Of-Thought (PHP-Complex CoT)**
Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test
next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should
dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4
hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour,
include 3 10-minute snack breaks each day, and 30 minutes for lunch each day? (Hint: The answer is near to 4).
A: We know the Answer Hints: 4. With the Answer Hints: 4, we will answer the question.
Let’s think step by step.
Angelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2 chapters = 6 hours total.
For the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6 hours total.
Angelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days.
However, they need to include time for breaks and lunch. Every hour they want to include a 10-minute break, so 12 total
hours x 10 minutes = 120 extra minutes for breaks.
They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes.
And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for snack breaks + 30
minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours.
So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total.
They want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75
They will need to plan to study 4 days to allow for all the time they need.
The answer is 4
Question: Mark’s basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their opponents score double the 2
pointers but half the 3 pointers and free throws. What’s the total number of points scored by both teams added together?(Hint:
The answer is near to 201).
A: We know the Answer Hints: 201. With the Answer Hints: 201, we will answer the question.
Let’s think step by step
Mark’s team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers.
His team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers
They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free throws.
All together his team scored 50+24+10= 84 points
Mark’s opponents scored double his team’s number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers.
His opponents scored half his team’s number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers.
They also scored half Mark’s team’s points in free throws, meaning they scored 10/2=5 points in free throws
All together Mark’s opponents scored 100+12+5=117 points
The total score for the game is both team’s scores added together, so it is 84+117=201 points
The answer is 201
Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck cards. If she buys 2/5
24
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
times more of each item, what would be the total number of the items she will have if she currently has 60 marbles? (Hint:
The answer is near to 160,145).
A: We know the Answer Hints: 160, 145. With the Answer Hints: 160, 145, we will answer the question.
Let’s think step by step
When Bella buys 2/5 times more marbles, she’ll have increased the number of marbles by 2/5*60 = 24
The total number of marbles she’ll have is 60+24 = 84
If Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 = 30 frisbees.
If Bella buys 2/5 times more frisbees, she’ll have 2/5*30 = 12 more frisbees.
The total number of frisbees she’ll have will increase to 30+12 = 42
Bella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards
If she buys 2/5 times more deck cards, she’ll have 2/5*10 = 4 more deck cards.
The total number of deck cards she’ll have is 10+4 = 14
Together, Bella will have a total of 14+42+84 = 140 items
The answer is 140
Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three baskets and 2 less of
each fruit in the fourth basket. How many fruits are there? (Hint: The answer is near to 93, 141).
A: We know the Answer Hints: 93, 141. With the Answer Hints: 93, 141, we will answer the question.
Let’s think step by step
For the first three baskets, the number of apples and oranges in one basket is 9+15=24
In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three baskets.
Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets.
The number of apples in the fourth basket is 9-2=7
There are also 15-2=13 oranges in the fourth basket
The combined number of oranges and apples in the fourth basket is 13+7=20
The fourth basket also contains 14-2=12 bananas.
In total, the fourth basket has 20+12=32 fruits.
The four baskets together have 32+114=146 fruits.
The answer is 146
Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly split between oranges,
apples and watermelons, and the price of 1 orange is $0.50. How much does 1 apple cost if your total bill was $66? (Hint:
The answer is near to 1, 5).
A: We know the Answer Hints: 1, 5. With the Answer Hints: 1, 5, we will answer the question.
Let’s think step by step
If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12 units of each fruit
If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6
If my total bill was $66 and I spent $6 on oranges then I spent $66 - $6 = $60 on the other 2 fruit types.
Assuming the price of watermelon is W, and knowing that you can buy 4 apples for the same price and that the price of one
apple is A, then 1W=4A
If we know we bought 12 watermelons and 12 apples for $60, then we know that $60 = 12W + 12A
Knowing that 1W=4A, then we can convert the above to $60 = 12(4A) + 12A
$60 = 48A + 12A
$60 = 60A
Then we know the price of one apple (A) is $60/60= $1
The answer is 1
Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with only 300 students. At the
start of the school year, Susy had 100 social media followers. She gained 40 new followers in the first week of the school
year, half that in the second week, and half of that in the third week. Sarah only had 50 social media followers at the start of
the year, but she gained 90 new followers the first week, a third of that in the second week, and a third of that in the third
week. After three weeks, how many social media followers did the girl with the most total followers have? (Hint: The
answer is near to 180, 160).
A: We know the Answer Hints: 180, 160. With the Answer Hints: 180, 160, we will answer the question.
Let’s think step by step
25
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
After one week, Susy has 100+40 = 140 followers.
In the second week, Susy gains 40/2 = 20 new followers.
In the third week, Susy gains 20/2 = 10 new followers.
In total, Susy finishes the three weeks with 140+20+10 = 170 total followers.
After one week, Sarah has 50+90 = 140 followers.
After the second week, Sarah gains 90/3 = 30 followers.
After the third week, Sarah gains 30/3 = 10 followers.
So, Sarah finishes the three weeks with 140+30+10 = 180 total followers.
Thus, Sarah is the girl with the most total followers with a total of 180.
The answer is 180
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these
boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately
at the rate of three pens for $2. How much profit did he make in total, in dollars? (Hint: The answer is near to 120).
A: We know the Answer Hints: 120. With the Answer Hints: 120, we will answer the question.
Let’s think step by step
Sam bought 12 boxes x $10 = $120 worth of highlighters.
He bought 12 * 30 = 360 highlighters in total.
Sam then took 5 boxes × 6 highlighters/box = 30 highlighters.
He sold these boxes for 5 * $3 = $15
After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining.
These form 330 / 3 = 110 groups of three pens.
He sold each of these groups for $2 each, so made 110 * 2 = $220 from them.
In total, then, he earned $220 + $15 = $235.
Since his original cost was $120, he earned $235 - $120 = $115 in profit.
The answer is 115
Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the female students like to play
basketball. What percent of the population of the school do not like to play basketball if the ratio of the male to female
students is 3:2 and there are 1000 students? (Hint: The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question.
Let’s think step by step
The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts are for females.
Each part represents 1000/5 = 200 students.
So, there are 3 x 200 = 600 males.
And there are 2 x 200 = 400 females.
Hence, 600 x 2/3 = 400 males play basketball.
And 400 x 1/5 = 80 females play basketball.
A total of 400 + 80 = 480 students play basketball.
Therefore, 1000 - 480 = 520 do not like to play basketball.
The percentage of the school that do not like to play basketball is 520/1000 * 100 = 52
The answer is 52
26
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
A.9.2. FOR AQUA
**Base Complex Chain-Of-Thought**
Question: there were 35 students in a hostel . due to the admission of 7 new students the expenses of the mess were increased
by rs . 84 per day while the average expenditure per head diminished by re 1 . what was the original expenditure of the mess
? Options: (a) rs 450 (b) rs 920 (c) rs 550 (d) rs . 630 (e) none of these
A: Let’s think step by step let the original average expenditure be rs . x then, 42 ( x - 1 ) - 35 x = 84 7 x = 126 x = 18
therefore original expenditure = rs . ( 35 * 18 ) = rs . 630. The answer is (d), rs . 630
Question: a train 200 m long passes a man, running at 5 km / hr in the same direction in which the train is going, in 10
seconds . the speed of the train is ? Options: (a) 28 (b) 50 (c) 77 (d) 22 (e) 12
A: Let’s think step by step speed of the train relative to man = ( 200 / 10 ) m / sec = ( 20 ) m / sec. [ ( 20 ) * ( 18 / 5 ) ] km /
hr = 72 km / hr. let the speed of the train be x km / hr. then, relative speed = ( x - 5 ) km / hr. x - 5 = 72, x = 77 km / hr . The
answer is (c), 77
Question: solution x contains 20 % of material a and 80 % of material b . solution y contains 30 % of material a and 70 %
of material b . a mixture of both these solutions contains 22 % of material a in the final product . how much solution x is
present in the mixture ?
Options: (a) 40 % (b) 60 % (c) 80 % (d) 100 % (e) 110 %
A: Let’s think step by step
we can assume the total weight of the mixture = 100
conc of a in the final mixture = 22
let weight of a in the mixture be x.
conc given = 20% = 0.2
therefore, weight of b = 100 - x.
conc given = 30% = 0.3
now, accordding to the problem, 0.2 x + 0.3 ( 100 - x ) = 22
solving, we get x = 80
since we assumed the weight of the mixture = 100, therefore presence of a in the mixture = 80%.
The answer is (c), 80%
Question: a trader sells 40 metres of cloth for rs . 8200 at a profit of rs . 35 per metre of cloth . how much profit will the
trder earn on 40 metres of cloth ?
Options: (a) rs . 950 (b) rs . 1500 (c) rs . 1000 (d) rs . 1400 (e) none of these
A: Let’s think step by step
price of 1 metre cloth = 8200 / 40 = rs 205
cost of 1 metre cloth = rs 205 – 35 = rs 170
cost on 40 metres = 170 x 40 = rs . 6800
profit earned on 40 metres cloth = rs . 8200 – rs . 6800 = rs . 1400
The answer is (d), rs . 1400
Question: if x < y < z and y - x > 5, where x is an even integer and y and z are odd integers, what is the least possible value
s of z - x ?
Options: (a) 6 (b) 7 (c) 8 (d) 9 (e) 10
A: Let’s think step by step
We know x < y < z
to find the least possible value for z - x, we need to find the values for z and x that can be closest to each other.
if x is some even number, then what could be minimum possible odd z.
if x is some even number, y - x > 5 ; y > x + 5
minimum value for y = x + 5 + 2 = x + 7
(note : x + 5 is as even + odd = odd and nearest odd greater than x + 5 is x + 5 + 2)
minimum value for z = y + 2 = x + 7 + 2 = x + 9
(note : z = y + 2 because both z and y are odd, difference between two odd numbers is 2)
s = z - x = x + 9 - x = 9
The answer is (d), 9
27
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
Question: what is the difference between the c . i . on rs . 6000 for 1 1 / 2 years at 4 % per annum compounded yearly and
half - yearly ?
Options: (a) s . 2.04 (b) s . 2.08 (c) s . 2.02 (d) s . 2.83 (e) s . 2.45
A: Let’s think step by step
c . i . when interest is compounded yearly = [ 6000 * ( 1 + 4 / 100 ) * ( 1 + ( 1 / 2 * 4 ) / 100 ] = 6000 * 26 / 25 * 51 / 50 = rs
. 6364.8
c . i . when interest is compounded half - yearly = [ 6000 * ( 1 + 2 / 100 ) 2 ] = ( 6000 * 51 / 50 * 51 / 50 * 51 / 50 ) = rs .
6367.25
difference = ( 6367.25 - 6364.8 ) = rs . 2.45.
The answer is (e), s . 2.45
Question: the average weight of a, b and c is 45 kg . if the average weight of a and b be 40 kg and that of b and c be 45 kg,
then the weight of b is :
Options: (a) 31 kg (b) 32 kg (c) 33 kg (d) 35 kg (e) none of these
A: Let’s think step by step
let a, b, c represent their respective weights.
then, we have : a + b + c = ( 45 x 3 ) = 135 ... ( i )
a + b = ( 40 x 2 ) = 80 ... ( ii )
b + c = ( 45 x 2 ) = 90 ... ( iii )
adding ( ii ) and ( iii ), we get : a + 2 b + c = 170 ... ( iv )
subtracting ( i ) from ( iv ), we get : b = 35.
b’ s weight = 35 kg .
The answer is (d), 35 kg
Question: the compound and the simple interests on a certain sum at the same rate of interest for two years are rs . 11730
and rs . 10200 respectively . find the sum
Options: (a) rs . 17037 (b) rs . 17000 (c) rs . 17276 (d) rs . 170287 (e) rs . 171881
A: Let’s think step by step
the simple interest for the first year is 10200 / 2 is rs . 5100
and compound interest for first year also is rs . 5100.
the compound interest for second year on rs . 5100 for one year
so rate of the interest = ( 100 * 1530 ) / ( 5100 * 1 ) = 30% p
so p = ( 100 * 10200 ) / ( 30 * 2 ) = rs . 17000
The answer is (b), rs . 17000
**Progressive-Hint Prompting Complex Chain of Thought (PHP-Complex CoT)**
Question: there were 35 students in a hostel . due to the admission of 7 new students the expenses of the mess were
increased by rs . 84 per day while the average expenditure per head diminished by re 1 . what was the original expenditure
of the mess ? (Hint: The answer is near to (d)).
Options: (a) rs 450 (b) rs 920 (c) rs 550 (d) rs . 630 (e) none of these
A: We know the Answer Hints: (d). With the Answer Hints: (d), we will answer the question.
Let’s think step by step
let the original average expenditure be rs . x
then, 42 ( x - 1 ) - 35 x = 84 7 x = 126
x = 18
therefore original expenditure = rs . ( 35 * 18 ) = rs . 630.
The answer is (d), rs . 630
Question: a train 200 m long passes a man, running at 5 km / hr in the same direction in which the train is going, in 10
seconds . the speed of the train is ? (Hint: The answer is near to (c)).
Options: (a) 28 (b) 50 (c) 77 (d) 22 (e) 12
A: We know the Answer Hints: (c). With the Answer Hints: (c), we will answer the question.
Let’s think step by step
speed of the train relative to man = ( 200 / 10 ) m / sec = ( 20 ) m / sec.
28
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
[ ( 20 ) * ( 18 / 5 ) ] km / hr = 72 km / hr.
let the speed of the train be x km / hr.
then, relative speed = ( x - 5 ) km / hr.
x - 5 = 72, x = 77 km / hr .
The answer is (c), 77
Question: solution x contains 20 % of material a and 80 % of material b . solution y contains 30 % of material a and 70 %
of material b . a mixture of both these solutions contains 22 % of material a in the final product . how much solution x is
present in the mixture ? (Hint: The answer is near to (a)).
Options: (a) 40 % (b) 60 % (c) 80 % (d) 100 % (e) 110 %
A: We know the Answer Hints: (a). With the Answer Hints: (a), we will answer the question.
Let’s think step by step
we can assume the total weight of the mixture = 100
conc of a in the final mixture = 22
let weight of a in the mixture be x.
conc given = 20% = 0.2
therefore, weight of b = 100 - x.
conc given = 30% = 0.3
now, accordding to the problem, 0.2 x + 0.3 ( 100 - x ) = 22
solving, we get x = 80
since we assumed the weight of the mixture = 100, therefore presence of a in the mixture = 80%.
The answer is (c), 80%
Question: a trader sells 40 metres of cloth for rs . 8200 at a profit of rs . 35 per metre of cloth . how much profit will the
trder earn on 40 metres of cloth ? (Hint: The answer is near to (c)).
Options: (a) rs . 950 (b) rs . 1500 (c) rs . 1000 (d) rs . 1400 (e) none of these
A: We know the Answer Hints: (c). With the Answer Hints: (c), we will answer the question.
Let’s think step by step
price of 1 metre cloth = 8200 / 40 = rs 205
cost of 1 metre cloth = rs 205 – 35 = rs 170
cost on 40 metres = 170 x 40 = rs . 6800
profit earned on 40 metres cloth = rs . 8200 – rs . 6800 = rs . 1400
The answer is (d), rs . 1400
Question: if x < y < z and y - x > 5, where x is an even integer and y and z are odd integers, what is the least possible value
s of z - x ? (Hint: The answer is near to (a), (c)).
Options: (a) 6 (b) 7 (c) 8 (d) 9 (e) 10
A: We know the Answer Hints: (a), (c). With the Answer Hints: (a), (c), we will answer the question.
Let’s think step by step
We know x < y < z
to find the least possible value for z - x, we need to find the values for z and x that can be closest to each other.
if x is some even number, then what could be minimum possible odd z.
if x is some even number, y - x > 5 ; y > x + 5
minimum value for y = x + 5 + 2 = x + 7
(note : x + 5 is as even + odd = odd and nearest odd greater than x + 5 is x + 5 + 2)
minimum value for z = y + 2 = x + 7 + 2 = x + 9
(note : z = y + 2 because both z and y are odd, difference between two odd numbers is 2)
s = z - x = x + 9 - x = 9
The answer is (d), 9
Question: what is the difference between the c . i . on rs . 6000 for 1 1 / 2 years at 4 % per annum compounded yearly and
half - yearly ? (Hint: The answer is near to (e), (d)).
Options: (a) s . 2.04 (b) s . 2.08 (c) s . 2.02 (d) s . 2.83 (e) s . 2.45
A: We know the Answer Hints: (e), d). With the Answer Hints: (e), (d), we will answer the question.
Let’s think step by step
c . i . when interest is compounded yearly = [ 6000 * ( 1 + 4 / 100 ) * ( 1 + ( 1 / 2 * 4 ) / 100 ] = 6000 * 26 / 25 * 51 / 50 = rs
29
-----
**Progressive-Hint Prompting Improves Reasoning in Large Language Models**
. 6364.8
c . i . when interest is compounded half - yearly = [ 6000 * ( 1 + 2 / 100 ) 2 ] = ( 6000 * 51 / 50 * 51 / 50 * 51 / 50 ) = rs .
6367.25
difference = ( 6367.25 - 6364.8 ) = rs . 2.45.
The answer is (e), s . 2.45
Question: the average weight of a, b and c is 45 kg . if the average weight of a and b be 40 kg and that of b and c be 45 kg,
then the weight of b is : (Hint: The answer is near to (d), (b)).
Options: (a) 31 kg (b) 32 kg (c) 33 kg (d) 35 kg (e) none of these
A: We know the Answer Hints: (d), (b). With the Answer Hints: (d), (b), we will answer the question.
Let’s think step by step
let a, b, c represent their respective weights.
then, we have : a + b + c = ( 45 x 3 ) = 135 ... ( i )
a + b = ( 40 x 2 ) = 80 ... ( ii )
b + c = ( 45 x 2 ) = 90 ... ( iii )
adding ( ii ) and ( iii ), we get : a + 2 b + c = 170 ... ( iv )
subtracting ( i ) from ( iv ), we get : b = 35.
b’ s weight = 35 kg .
The answer is (d), 35 kg
Question: the compound and the simple interests on a certain sum at the same rate of interest for two years are rs . 11730
and rs . 10200 respectively . find the sum (Hint: The answer is near to (e), (c)).
Options: (a) rs . 17037 (b) rs . 17000 (c) rs . 17276 (d) rs . 170287 (e) rs . 171881
A: We know the Answer Hints: (e), (c). With the Answer Hints: (e), (c), we will answer the question.
Let’s think step by step
the simple interest for the first year is 10200 / 2 is rs . 5100
and compound interest for first year also is rs . 5100.
the compound interest for second year on rs . 5100 for one year
so rate of the interest = ( 100 * 1530 ) / ( 5100 * 1 ) = 30% p
so p = ( 100 * 10200 ) / ( 30 * 2 ) = rs . 17000
The answer is (b), rs . 17000
You can have as much text here as you want. The main body must be at most 8 pages long. For the final version, one more
page can be added. If you want, you can use an appendix like this one.
The \onecolumn command above can be kept in place if you prefer a one-column appendix, or can be removed if you
prefer a two-column appendix. Apart from this possible change, the style (font size, spacing, margins, page numbering, etc.)
should be kept the same as the main body.
30
-----
| [
"Chuanyang, Zheng",
"Zhengying, Liu",
"Enze, Xie",
"Zhenguo, Li",
"Yu, Li"
] | 2023-08-09T00:00:00 | null | false | 86 | 14 | null | http://arxiv.org/abs/2304.09797 | https://arxiv.org/abs/2304.09797 | https://www.semanticscholar.org/paper/261549439aebdda72b648ecc462448fd24857ac1 |
HyperTree Proof Search for Neural Theorem Proving | We propose an online training procedure for a transformer-based automated theorem prover. Our approach leverages a new search algorithm, HyperTree Proof Search (HTPS), that learns from previous proof searches through online training, allowing it to generalize to domains far from the training distribution. We report detailed ablations of our pipeline’s main components by studying performance on three environments of increasing complexity. In particular, we show that with HTPS alone, a model trained on annotated proofs manages to prove 65.4% of a held-out set of Metamath theorems, significantly outperforming the previous state of the art of 56.5% by GPT-f. Online training on these unproved theorems increases accuracy to 82.6%. With a similar computational budget, we improve the state of the art on the Lean-based miniF2F-curriculum dataset from 31% to 42% proving accuracy. | This work shows that with HTPS alone, a model trained on annotated proofs manages to prove 65.4% of a held-out set of Metamath theorems, significantly outperforming the previous state of the art of 56.5% by GPT-f. | ## HyperTree Proof Search for Neural Theorem Proving
**Guillaume Lample[∗†]** **Marie-Anne Lachaux[∗†]** **Thibaut Lavril[∗†]** **Xavier Martinet[∗†]**
**Amaury Hayat[§]** **Gabriel Ebner[‡]** **Aurélien Rodriguez[†]** **Timothée Lacroix[∗†]**
**Abstract**
We propose an online training procedure for a transformer-based automated theorem prover. Our approach leverages a new search algorithm, HyperTree Proof
Search (HTPS), inspired by the recent success of AlphaZero. Our model learns
from previous proof searches through online training, allowing it to generalize
to domains far from the training distribution. We report detailed ablations of our
pipeline’s main components by studying performance on three environments of increasing complexity. In particular, we show that with HTPS alone, a model trained
on annotated proofs manages to prove 65.4% of a held-out set of Metamath theorems, significantly outperforming the previous state of the art of 56.5% by GPT-f.
Online training on these unproved theorems increases accuracy to 82.6%. With a
similar computational budget, we improve the state of the art on the Lean-based
miniF2F-curriculum dataset from 31% to 42% proving accuracy.
**1** **Introduction**
Over the course of history, the complexity of mathematical proofs has increased dramatically. The
nineteenth century saw the emergence of proofs so involved that they could only be verified by a
handful of specialists. This limited peer review process inevitably led to invalid proofs, with mistakes
sometimes remaining undiscovered for years (e.g. the erroneous proof of the Four Colour Conjecture [1]). Some mathematicians argue that the frontier of mathematics has reached such a level of complexity that the traditional review process is no longer sufficient, envisioning a future where research
articles are submitted with formal proofs so that the correctness can be delegated to a computer [2].
Unfortunately, very few mathematicians have adopted formal systems in their work, and as of today,
only a fraction of existing mathematics has been formalized. Several obstacles have hindered the
widespread adoption of formal systems. First, formalized mathematics are quite dissimilar from
traditional mathematics, rather closer to source code written in a programming language, which
makes formal systems difficult to use, especially for newcomers. Second, formalizing an existing
proof still involves significant effort and expertise (the formalization of the Kepler conjecture took
over 20 person years to complete [3]) and even seemingly simple statements sometimes remain
frustratingly challenging to formalize.
To write a formal proof, mathematicians typically work with Interactive Theorem Provers (ITPs). The
most popular ITPs provide high-level “tactics” that can be applied on an input theorem (e.g. the initial
goal) to generate a set of subgoals, with the guarantee that proving all subgoals will result in a proof
of the initial goal (reaching an empty set means the tactic solves the goal). An example of a proof in
Lean [4], an interactive theorem prover, is given in Figure 1 and the corresponding proof hypertree
is illustrated in Figure 3. A tactic, induction k, is applied on the root goal (n + k ≤ _m + k)_
_∗Equal contribution. Corresponding authors: {glample,malachaux,tlacroix}@fb.com_
_†Meta AI Research_ _‡Vrije Universiteit Amsterdam_ _§ CERMICS École des Ponts ParisTech_
Preprint. Under review.
-----
to start a proof by induction [3]. The formal system returns two subgoals: n + 0 _m + 0 (the_
_≤_
initial case) and n + k _m + k_ _n + k + 1_ _m + k + 1 (the induction step). As the first_
_≤_ _⇒_ _≤_
subgoal is our initial hypothesis, it can be solved using the exact tactic. To prove the second
subgoal, we first rewrite it using nat.succ_le_succ_iff, a theorem from the Lean library stating
that m + 1 _n + 1_ _m_ _n. The new subgoal then becomes our induction hypothesis_
_≤_ _⇐⇒_ _≤_
_n + k ≤_ _m + k, and can again be solved using the exact tactic, thereby solving the last remaining_
subgoal and completing the proof of the initial statement. Starting from a goal and reducing it to
subgoals until these can be solved, is commonly referred to as backward proving.
First subgoal : n + 0 ≤𝑚+ 0
Second subgoal :
n + 𝑘≤𝑚+ 𝑘⇒𝑛+ 𝑘+ 1 ≤𝑚+ 𝑘+ 1
Figure 1: **A simple proof of the statement n ≤** **m ⇒** **n + k ≤** **m + k in Lean. The induction tactic**
reduces the initial statement to two subgoals, that can be solved independently.
In this paper, we aim at creating a prover that can automatically solve input theorems by generating
a sequence of suitable tactics without human interaction. Such a prover could significantly reduce
the effort required to formalize existing mathematics. The backward procedure naturally suggests
a simple approach where a machine learning model trained to map goals to tactics interacts with
an ITP to build the proof of an input goal in a backward fashion. The automated prover builds a
hypergraph with the theorem to be proved as the root node, tactics as edges and subgoals as nodes.
The prover recursively expands leaves by generating tactics with our model until we find a proof of
the initial theorem. A proof in this setup is a hypertree rooted in the initial theorem whose leaves are
empty sets. As many different tactics can be applied to a goal, and each tactic application can result
in multiple subgoals, the number of goals in the graph grows exponentially and it is critical to reduce
the search to the most promising branches. This can be done through techniques like alpha-beta
pruning [5] or Monte Carlo Tree Search (MCTS) [6], known for its recent successes in difficult two
player games [7]. However, challenges arise in search algorithms for theorem proving that do not
occur in two player games. For instance:
- The action space, i.e. the amount of possible “moves” in a given state, is infinite (there is an
unlimited number of tactics that can be applied to a given theorem). This requires sampling
possible actions from a language model for which training data is scarce. Moreover, if all
tactics sampled at a goal fail, we have no information on what region of the probability
space to sample next.
- In the context of theorem proving, we need to provide a proof of all subgoals created by
a tactic, whereas AlphaZero for two player games is allowed to focus on the most likely
adversary moves.
- In Chess or Go, playing a sub-optimal move does not necessarily lead to losing the game,
thus exploring these branches can provide information. In theorem proving, it is frequent to
generate tactics that result in subgoals that can no longer be proved and on which the model
will waste significant search budget.
This paper presents an in-depth study of our approach to overcome these difficulties and the resulting
model, Evariste. In particular, we make the following contributions:
3A hypergraph is a graph where an edge leads to a set of nodes that is potentially empty in our set-up. A
hypertree is a hypergraph without cycles. More formal definitions can be found in Appendix A.1
-----
- A new MCTS-inspired search algorithm for finding proofs in unbalanced hypergraphs.
- A new environment (Equations) to easily prototype and understand the behavior of the
models we train and our proof search.
- A detailed ablation study and analysis of the different components used in our approach on
three different theorem proving environments. We study how data is selected for training the
policy model after a successful or failed proof-search, what target should be used to train
the critic model, and the impact of online training vs. expert iteration.
- State-of-the-art performance on all analyzed environments. In particular, our model manages
to prove over 82.6% of proofs in a held-out set of theorems from set.mm in Metamath, as
well as 58.6% on miniF2F-valid [8] in Lean.
We begin by introducing related work in Section 2 and present the three theorem proving environments
that we study in Section 3. Then, we present our proof-search algorithm in Section 4, our online
training pipeline in Section 5, and all experimental details in Section 6. Finally, we describe our main
results and ablation studies in Section 7 before concluding in Section 8.
**2** **Related work**
Automated theorem proving has been a long-standing goal of artificial intelligence, with the earliest
work dating from the 1950s [9, 10]. Early approaches focused on simpler logics, culminating in
extremely efficient first-order provers such as E [11] or Vampire [12]. However, these approaches
are insufficient when it comes to theorems written in modern proof assistants such as Isabelle [13],
Coq [14], or Lean [4]. Recently, the rising success of deep language models [15] and model-guided
search methods [7] has spurred a renewed interest in the problem of automated theorem proving.
**Neural theorem provers.** Recent work applying deep learning methods to theorem proving [16–
18] are the closest to this work and obtained impressive results on difficult held-out sets for Metamath
and Lean. The main differences between their approach and ours are the proof-search algorithm we
propose, the training data we extract from proof-searches and our use of online training compared
to their expert iterations. We validate experimentally that these differences lead to improved performances as well as faster training times. Another similar approach is Holophrasm [19], which is
based on a different tree exploration technique which expands paths in an AND/OR tree, while we
expand entire proof subtrees in a proof hypergraph. Their model is only trained once from supervised
data and does not benefit from online training or expert iteration, which we found to be critical.
DeepHOL [20] focuses on the HOL-Light environment [21]. Their model relies on a classifier that
can select among a restricted set of tactics and arguments, while we rely on a seq2seq model that can
generate arbitrary tactics. The suggested tactics are then used in a breadth-first search. TacticToe [22]
uses an MCTS without learned components, using ranking on predefined features to guide the search.
Machine learning has also been used to improve classical provers by re-ranking clauses [23]. Overall,
previous studies always focus on a single proving environment (e.g. Metamath, Lean, or HOL-Light).
**Reasoning abilities of language models.** Impressive performance of large language models in one
or few shot learning [15], machine translation [24] or more recently code generation [25] spurred
interest into the reasoning capabilities of large transformers. These model perform quite well on
formal tasks such as expression simplification [26], solving differential equations [27], symbolic
regression [28, 29], or predicting complex properties of mathematical objects [30]. These studies
suggest that deep neural networks are well adapted to complex tasks, especially when coupled with a
formal system for verification.
**MCTS and two player games.** Recently, AlphaZero [7] demonstrated good performances on two
player games, replacing the Monte-Carlo evaluations of MCTS [6] with evaluations from a deep neural
network and guiding the search with an additional deep policy. This recent success follows extensive
literature into search methods for two player games, notably alpha-beta search [5]. Theorem proving
can be thought of as computing game-theoretic value for positions in a min/max tree: for a goal to be
proven, we need one move (max) that leads to subgoals that are all proven (min). Noticing heterogeneity in the arities of min or max nodes, we propose a search method that goes down simultaneously
in all children of min nodes, such that every simulation could potentially result in a full proof-tree.
-----
**3** **Proving environments**
In this paper, we develop and test our methods on three theorem proving environments: a) Metamath,
b) Lean and c) Equations. Metamath [31] comes with a database of 30k human-written theorems
called set.mm. We also evaluate our methods on the Lean proving environment, which provides a
level of automation that is helpful to solve more complex theorems. Lean comes with a human-written
library of 27k theorems called Mathlib [32]. Finally, since Metamath proofs can be quite difficult to
understand and Lean requires more computing resources, we developed our own environment called
Equations, restricted to proofs of mathematical identities. Its simplicity makes it ideal for prototyping
and debugging. We briefly introduce these environments below.
**3.1** **Metamath**
Metamath’s only rule is string substitution. Starting from a theorem to be proven, variables are
substituted until we reach axioms. In our setup, we consider a tactic to be the label of a theorem in
```
set.mm, along with the necessary substitutions. As shown in Figure 2, to show that 2 + 2 = 4, the
```
first step uses eqtr4i which states that A = B ∧ _C = B ⇒_ _A = C with substitutions: A = (2 + 2),_
_B = (2 + (1 + 1)), and C = 4. We are then left with two subgoals to prove: (2 + 2) = (2 + (1 + 1))_
and 4 = (2 + (1 + 1)).
䎶¥ÃÛæÞÅ
HTWUL
䎶¥ÃÛæޥÃÛ¥ÂÛ¦¦ 䎶ÅÞ¥ÃÛ¥ÂÛ¦¦
RYHTL HTWUL
䎶¥ÄÛ¦ޥ¥ÃÛ¦Û¦ 䎶¥¥ÃÛ¦Û¦ޥÃÛ¥ÂÛ¦¦
RYHTL DGGDVVL
Figure 2: A visualization of the proof-tree for 2 + 2 = 4 in Metamath.
The simplicity of Metamath makes it a great test bed for our algorithms. However, its lack of
automation leads to larger proof sizes and its syntax and naming conventions make each step difficult
to interpret for neophytes. Similar to GPT-f, we implement a parser for Metamath in order to
automatically prove the syntactic correctness of statements. Moreover, we use this parser to allow
generating only substitutions that cannot be inferred from the goal.
**3.2** **Lean**
Lean is a full-fledged programming language and benefits from more powerful automation than Metamath, with tactics such as ring (able to prove goals using manipulations in semirings), norm_num
(able to prove numerical goals) or linarith (able to find contradictions in a set of inequalities). An
example of a Lean proof-tree is shown in Figure 3.
States are more complex in Lean than in Metamath: metavariables can appear which are holes in
the proof to be filled later. Subgoals sharing a metavariable cannot be solved in isolation. This is
addressed in Polu and Sutskever [16] by using as input the entire tactic state. Instead, we inspect
tactic states to detect dependencies between subgoals, and split the tactic state into different subgoals
where possible in order to maximize state re-use and parallelization in the proof search algorithm.
Finally, Lean’s kernel type checker has to be called after each tactic application as tactics sometimes
generate incorrect proofs and rely on the kernel for correctness. For every goal in the previous
tactic state, we type check the proof term inserted by the tactic. Since the kernel does not support
metavariables, we replace every metavariable by a lambda abstraction.
-----
¥Z[X¦¥P䈹[Z¦[ÛXZÛX
LQGXFWLRQN
䎶[ÛÁZÛÁ ¥X¢QP[ÛXZÛX¦[ÛX¢[hkEEZÛX¢[hkEE
H[DFWK UZQDWVXFFBOHBVXFFBLII
¥X¢QP[ÛXZÛX¦[ÛX¢[ZÛX¢[
H[DFWNBLK
Figure 3: A visualization of the proof-tree for the proof discussed in the introduction in Lean.
**3.3** **Equations**
We developed the Equations environment as a simpler analogue to existing proving environments. Its
expressivity is restricted to manipulating mathematical expressions (e.g. equalities or inequalities)
with simple rules (e.g. A + B = B + A, or A < B _B <_ _A). This reduced expressivity_
_⇒−_ _−_
makes goals and tactics easy to understand, helping with interpretability and debugging: plotting
the set of goals explored during a Metamath proof search does not give a lot of insights on whether
it is on track to find a proof. In Section B of the appendix, we give an in-depth presentation of
this environment, of how we represent goals (Section B.1), tactics (Section B.2) and how we prove
statements (Section B.3).
Unlike in Metamath or Lean, we do not have access to a training set of human annotated proofs
for this environment. Instead, we create a training set composed of randomly generated synthetic
theorems and their proofs (see Sections B.6 and B.6 for details), and manually create an out-ofdomain set of non-trivial mathematical identities for which we do not provide proofs, e.g. cosh(3x) =
4 cosh(x)[3] 3 cosh(x) or sin(4x) = (4 sin(x) 8 sin(x)[3]) cos(x). We refer to this evaluation split
_−_ _−_
as Identities, a set of 160 mathematical expressions.
As synthetic theorems randomly generated are much simpler and significantly differ from statements
in the Identities split, we can evaluate the ability of our model to generalize to complex and out of
domains data. An example proof-tree in Equations is shown in Figure 4.
䎶E]h¥r¦ÛI[r]ßÂÛÃI[r]
$&%'ۂ$% &'
䎶E]h¥r¦[] 䎶I[r]ßÃI[r]
FRV$ %$!ۂ$ %$
Âßà 䎶I[r]àÁ
QRUPBQXP H[$]!
Figure 4: A visualization of the proof-tree for cos(x) + e[x] _< 1 + 2e[x]_ **in Equations.**
**4** **HyperTree Proof Search**
Given a main goal g to automatically prove, proof search is the algorithm that interacts with our
learned model and the theorem proving environment to find a proof hypertree for g. Proof search
progressively grows a hypergraph starting from g. A proof is found when there exists a hypertree
from the root to leaves that are empty sets.
-----
Expansion Back-propagation
Selection
|g5|Col2|
|---|---|
||N( W(|
|||
gg `vT(g)=(1×0.1)×0.4`
```
N(g,t1)=2
W(g,t1)=0.5+(1×0.1)×0.4
```
`vT(g0)=1×0.1` g0 g1 `vT(g1)=0.4`
```
N(g0,t0)=1 N(g1,t0)=1
W(g0,t0)=1x0.1 W(g1,t0)=0.4
```
`vvTT(g(g23)=1)=0.1` ggg222 g3 g4 `vT(g4)=0.4`
g gg
```
N(g,t1) =1 N(g,t2)=0
W(g,t1) =0.5 W(g,t2)=0.1
```
5 g0 g1 g6 g7 g0 g1
```
N(g0,t0)=0
W(g0,t0)=0
```
g2 g3 g4 gg22 g3 gg44
g4 `N(g4,t1`
```
W(g4,t1
```
Figure 5: HyperTree Proof Search. We aim at finding a proof of the root theorem g with HTPS. Proving
either {g5}, {g0, g1}, or {g6, g7} would lead to a proof of g by tactic t0, t1, or t2. The figure represents the
three steps of HTPS that are repeated until a proof is found. Guided by the search policy, we select a hypertree
whose leaves are unexpanded nodes. The selected nodes are then expanded, adding new tactics and nodes to the
hypergraph. Finally, during back-propagation we evaluate the node values of the hypertree, starting from the
leaves back to the root, and update the visit counts and total action values.
In this section, we assume a policy model Pθ and critic model cθ. Conditioned on a goal, the policy
model allows the sampling of tactics, whereas the critic model estimates our ability to find a proof for
this goal. Our proof search algorithm will be guided by these two models. Additionally, and similar
to MCTS, we store the visit count N (g, t) (the number of times the tactic t has been selected at node
_g) and the total action value W_ (g, t) for each tactic t of a goal g. These statistics will be used in the
selection phase and accumulated during the back-propagation phase of the proof search described in
Section 4.1 and Section 4.3.
The algorithm iteratively repeats the three steps described below to grow the hypergraph until either
a proof is found or we exceed our expansion budget. We show an example of these three steps of
proof search in Figure 5. We refer to this algorithm as HyperTree Proof Search (HTPS) throughout
this work. A more detailed comparison between HTPS, MCTS, and the best-first search algorithm of
Polu and Sutskever [16] can be found in Appendix A.4.
**4.1** **Selection**
The number of nodes in the proof hypergraph grows exponentially with the distance to the root. Thus,
naive breadth-first search is infeasible to find deep proofs and some prioritization criteria is required
to balance depth and breadth. This is the family of best-first search algorithms, of which A* and
_MCTS are instances. Similar to MCTS, we balance the policy model’s prior with current estimates_
from the critic. In particular, we experiment with two different search policies: PUCT [33] and
Regularized Policy (RP) [34], detailed in Appendix A.2. These search policies use the tactic prior
from the policy model, the visit count N, and the estimated value of the tactic given by Q = W/N .
A higher visit count will lead to a higher confidence in the estimated value than in the prior policy
model, and vice-versa for low visit counts.
The key difference between previous work and ours is that our proof search operates on a hypergraph.
Thus, whereas an algorithm like MCTS will go down a path from the root to an unexpanded node
during its selection phase, our algorithm will instead create a partial proof hypertree, leading to a set
of either solved or unexpanded nodes. The selection phase algorithm, described in more details in
Appendix A.3, consists in recursively following the search policy from the root until we find leaves
of the current hypergraph.
In Figure 5, we illustrate the selection step. We start at the root node g, which has three tactics
_t0, t1, t2. The search policy selects t2, leading to the set of subgoals {g0, g1}. Then, for both g0_
and g1, we again select the best tactic according to the search policy and finally reach the sets
of unexpanded subgoals {g2, g3} and {g4}. The final selected proof hypertree T is composed of
_g, {g0, g1}, {g2, g3}, {g4} and is colored in dark blue in Figure 5._
-----
In order to batch calls to the policy and critic models over more nodes to expand, we run several
selections sequentially, using a virtual loss [35, 7] to produce different partial proof-trees. Note that
solving all unexpanded leaves of any of these trees would immediately lead to a full proof of the root.
In the next section, we describe how nodes are expanded.
**4.2** **Expansion**
To expand a node g, we use the policy model to suggest tactics that would make progress on the goal,
then evaluate these tactic suggestions in the theorem proving environment. Each valid tactic will lead
to a set of new subgoals to solve, or to an empty set if the tactic solves the goal. Finally, we add a
hyperedge for each valid tactic ti from the expanded node g to its (potentially empty) set of children
for this tactic {gi[0][, ..., g]i[k][}][. Note that these children might already be part of the hypergraph. For new]
nodes, visit counts N (g, t) and total action values W (g, t) are initialized to zero. There are three
types of nodes in the hypergraph:
- Solved: at least one tactic leads to an empty set, or has all its children solved.
- Invalid: all tactics sampled from the policy model were rejected by the environment, or lead
to invalid nodes.
- Unsolved: neither solved nor invalid, some tactics have unexpanded descendants.
Note that the definitions for Solved or Invalid are recursive. These status are updated throughout the
hypergraph anytime a hyperedge is added. Tactics leading to invalid nodes are removed to prevent
simulations from reaching infeasible nodes. Once this is done, we back-propagate values from the
expanded nodes up to the root, as described in the next section.
In Figure 5, we show an example of expansion. After selecting a hypertree T during the selection
step, for each unexpanded leaf goal of T, we generate B = 2 tactics with our policy model and keep
only the valid ones. This results in two tactics for g2 and g4 and one for g3. We apply these tactics
in our formal environment and obtain new sets of subgoals and add them to the hypergraph. One
tactic of g2 solves g2, resulting in an empty set of subgoals. Note that because g3 is not solved yet, g0
remains unsolved.
**4.3** **Back-propagation**
For each expanded goal g in a simulated proof tree T, its value is set to vT (g) = 1 if it is solved,
and vT (g) = 0 if it is invalid. Otherwise, its value is estimated by the critic model: vT (g) = cθ(g).
This provides vT for all leaves of T and we can then back-propagate in topographic order (children
before parents) through all nodes of T . Interpreting the value of a node as the probability that it can
be solved, since solving a goal requires solving all of its children subgoals, the value of a parent is
the product of values of its children (we assume that the solvability of subgoals is independent, for
simplicity):
Y
_vT (g) =_ _vT (c)_
_c∈children(g,t)_
In particular, the value of a goal g is the product of the values of all leaves in T that remain to be
solved to obtain a proof of g. Once all values in T are computed, we increment the corresponding
visit count N (g, t) in the hypergraph as well as the total action values: W (g, t) += vT (g). For a
goal g, the estimated value for tactic t is then the mean of the total action value:
_Q(g, t) =_ _[W]_ [(][g, t][)]
_N_ (g, t)
We give an example of back-propagation in Figure 5. First, we evaluate the values of the leaf
nodes of T . Because g2 is solved, we set vT (g2) = 1. The values of g3 and g4 are estimated with
the critic model, e.g. vT (g3) = cθ(g3) = 0.1. The values of the internal nodes are obtained by
computing the product of their children values. Thus, we first compute vT (g0) = vT (g2) × vT (g3)
and vT (g1) = vT (g4), then vT (g) = vT (g0) × vT (g1) = (vT (g2) × vT (g3)) × (vT (g4)). Then, for
every (goal, tactic) pair (g, t) in T, we increment the visit count, N (g, t) += 1 and update the total
action value: W (g, t) += vT (g).
-----
|3URYHU 'LVWULEXWHG 7UDLQVDPSOHV 3URYHU WUDLQHUV 3URYHU 6WDWLVWLFV 6WDWHPHQWV &RQWUROOHU|Col2|3URYHU|
|---|---|---|
||3URYHU||
0RGHOZHLJKWV
Figure 6: An overview of our online training architecture. The controller sends statements to asynchronous
HTPS provers and gathers training and proving statistics. The provers send training samples to the distributed
trainers and periodically synchronize their copy of the models.
**5** **Online training from proof searches**
In the previous section, we considered the policy and critic models as given. In this section, we explain
how proof search is used to create training data for these two models. Provers are asynchronously
running proof searches using a version of the models synchronized with the trainers, coupling training
and data extraction in an online procedure that leads to continuous performance improvements.
**5.1** **Training objectives**
Both the policy model Pθ and the critic model cθ are encoder-decoder transformers [36] with shared
weights θ, which are trained with a tactic objective and a critic objective respectively.
**Tactic objective.** The policy model Pθ takes as input a tokenized goal and generates tactics. It is
trained with a standard seq2seq objective [24], where we minimize the cross-entropy loss of predicted
tactic tokens conditioned on an input goal.
**Critic objective.** In order to decode a floating point value with our seq2seq critic model cθ, we
start decoding with a special token, restrict the output vocabulary to the two tokens PROVABLE and
`UNPROVABLE, and evaluate the critic with cθ(g) = P` (PROVABLE|g, CRITIC). The critic objective is
identical to a seq2seq objective where the cross-entropy is minimized over the two special tokens.
**5.2** **Online training**
We use a distributed learning architecture reminiscent of AlphaZero [33] or distributed reinforcement
learning setups [37, 7]. A distributed data parallel trainer receives training data from a set of
asynchronous provers that run proof searches on tasks chosen by a controller that also centralizes
statistics. Provers, in turn, continuously retrieve the latest model versions produced by the trainers
in order to improve the quality of their proof search. This set-up is represented in Figure 6. Once a
prover finishes a proof-search, we extract two types of training samples from its hypergraph:
**Tactic samples.** At the end of a successful proof search, we extract (goal, tactic) pairs of a minimal
proof hypertree of the root node as training samples for the policy model. This selection has a large
impact on performances, other options such as selecting all solved nodes are investigated in Section 7.2.1. We use a different minimality criteria depending on the environment: total number of proof
steps for Metamath and Equations, and total tactic CPU time for Lean (see Appendix E for details).
**Critic samples.** In the proof search hypergraph, we select all nodes that are either solved, invalid,
or with a visit count higher than a threshold. Then, we use c(g) = 1 as the training target for solved
nodes. For internal nodes, we use the final estimated action value c(g) = W (g, t[∗])/N (g, t[∗]) where
_t[∗]_ is the tactic that maximizes the search policy at g. Finally, for invalid nodes, we use c(g) = 0.
The trainers receive training samples that are stored into two separate finite-size queues, one for each
objective. When a queue is full, appending a new sample discards the oldest one. In order to create a
batch for a task, we uniformly select samples in the corresponding queue. The two training objectives
are weighted equally. Additionally, during online training, we continue sampling from the supervised
tasks which provide high-quality data.
-----
# train theorems # train proof steps Avg. goal length
Equations 33.7
_∞_ _∞_
Metamath 35k 1M 120.1
Lean 24k 144k 169.3
Table 1: Dataset statistics for supervised training.
Our proof-search depends on many hyper-parameters, and the optimal settings might not be the same
for all statements, making tuning impractical. Thus, the controller samples these hyper-parameters
from pre-defined ranges (see Appendix C for details) for each different proof-search attempt.
**5.3** **Full training pipeline**
In order to bootstrap our online learning procedure we require a policy model Pθ that outputs coherent
tactics. While the critic is left untrained, the policy model is fine-tuned on a pretrained transformer
using a supervised dataset specific to the target environment. Overall, the full training pipeline can be
summarized as follows:
- Pretraining of the encoder-decoder model on a large unsupervised corpus (c.f. Section 6.2).
- Fine-tuning of the policy model on supervised datasets detailed in (c.f. Section 6.1).
- Online training of both the policy and critic models on data extracted from proof search.
**6** **Experiments**
In this section, we provide details about our experimental training and evaluation protocols. We
first describe the supervised datasets used to fine-tune our policy models, as well as the tokenization
used. We then give practical details on pretraining and the model architecture. Finally, we discuss the
evaluation datasets and methodology.
**6.1** **Model fine-tuning and supervised datasets**
Starting the HTPS procedure described in Section 5 from a randomly initialized model would be
sub-optimal, as no valid tactic would ever be sampled from the policy model. Thus, starting the
online training from a non-trivial model is critical. To this end, we first fine-tune our policy model Pθ
on a supervised dataset of theorems specific to each environment.
**Metamath** In Metamath, we extract all proofs from the set.mm library, composed of 37091
theorems (c.f. Section D for the version of set.mm). We first derive a graph of dependencies between
statements, and generate a random train-valid-test split of theorems, with 1000 valid and test theorems.
We use the DAG to ensure that each theorem in the valid or test set is not used to prove another
theorem. Moreover, this DAG is used to build a table of forbidden tokens: if the proof of A depends
on B, we set to zero the probability of generating the token A during a proof-search of B. We use a
seq2seq training objective, where the model is conditioned on a goal to prove, and is trained to output
a sequence of the following format:
```
LABEL MANDATORY_SUBSTS <EOU> LABEL_STATEMENT PREDICTABLE_SUBSTS <EOS>
LABEL is the label of the rule to apply, MANDATORY_SUBSTS is a serialized version of the substitutions
```
in the rule that cannot be inferred from syntactic parsing of the input goal and the theorem statement.
During proof-search, decoding is stopped at the <EOU> (End Of Useful) token and we do not generate
predictable substitutions, as this would unnecessarily increase decoding time and the probability that
our model generates invalid substitutions. Training the model to output predictable substitutions
and the rule statement serves as a co-training task and helps reduce overfitting. The training set is
composed of around 1M goal-tactic pairs; more statistics about the training data are provided in
Table 6.1. Tokenization in Metamath is trivial, as statements are composed of space-separated tokens.
-----
**Lean** Following [18], we extract a supervised dataset from the Mathlib library. The training
set is composed of 24k theorems and 144k goal-tactic pairs. In addition, we co-train with the
dataset of proof-artifacts of Han et al. [17] to reduce overfitting. To facilitate experimentation and
reproducibility, we use fixed versions of Lean, Mathlib, and miniF2F (c.f. Appendix D). Finally,
we add another supervised co-training task by converting to Lean a synthetic dataset of theorems
generated by the Equations environment (c.f. Appendix B.7). In order to avoid hooking into the
Lean parser, we tokenize goals and tactics using byte-pair encoding (BPE [38]) following previous
work [16, 18]. Statistics about the training set are available in Table 6.1.
**Equations** Unlike Metamath or Lean, the Equations environment does not come with with a dataset
of manually annotated proofs of theorems. Instead, we generate supervised data on the fly using the
random graph generator described in Appendix B.6. As the model quickly reaches a 100% proving
accuracy on these synthetic theorems, there would be no benefit in using them during online training.
Thus, we fine-tune on the synthetic dataset, and only leverage statements from the Identities split
during online training. As in Metamath, tokenization of statements for this environment is natural, as
each statement can be tokenized using the list of symbols from its prefix decomposition [27].
**6.2** **Model pretraining**
Model pretraining can be critical in low-resource scenarios where the amount of supervised data is
limited [39, 40]. Thus, we do not immediately fine-tune our model but first pretrain it on a large
dataset to reduce overfitting and improve generalization. In particular, we pretrain our model with a
masked seq2seq objective (MASS [41]) on the LaTeX source code of papers from the mathematical
section of arXiv. After tokenization, our filtered arXiv dataset contains around 6 billion tokens
for 40GB of data. Similar to Polu and Sutskever [16], we observed large performance gains using
pretraining. However, we found that arXiv alone provides a better pretraining than when it is
combined with other sources of data (e.g. GitHub, Math StackExchange, or CommonCrawl).
**6.3** **Model Architecture and Training**
**Model architecture.** Our transformer architecture uses a 12-layer encoder and a 6-layer decoder in
all experiments. We use an embedding dimension of 1600 in the encoder and 1024 in the decoder
for both Metamath and Lean. For Equations, where we expect the model to require less decoding
capacity, the decoding dimension is lowered to 512. We found that reducing the decoder capacity
increases the decoding speed without impacting the performance, as previously observed by Kasai
et al. [42] in the context of machine translation. Our models are composed of 440M parameters
for Equations and 600M parameters for Metamath and Lean (for comparison, GPT-f uses a 770M
parameter, 36-layer model).
**Supervised fine-tuning.** During fine-tuning, we train our models with the Adam optimizer [43]
and an inverse square-root learning rate scheduler [36]. We use a dropout of 0.2 [44] to reduce the
overfitting of our models. We also apply layer-dropout [45] with a dropout rate of 0.1 to further
reduce overfitting and stabilize training. We implement our models in PyTorch [46] and use float16
operations to speed up training and to reduce the memory usage of our models.
**Online training.** During online training, we alternate between the goal-tactic objective, used during
fine-tuning on the supervised dataset, and the goal-tactic and goal-critic objectives on data generated
by the provers. As the model and the data generated by the provers are constantly evolving, we do
not want the learning rate to decrease to 0, and we fix it to 3 10[−][5] after the warm-up phase. Unless
_×_
mentioned otherwise (e.g. for large experiments), we run all Metamath and Equations experiments
with 16 trainers and 32 provers for a total of 48 V100 GPUs.
**6.4** **Evaluation settings and protocol**
In Polu et al. [18], the model is fine-tuned on theorems from the training set and expert iteration is done
on theorems from different sources: train theorems, synthetic statements, and an extra curriculum of
statements without proofs (miniF2F-curriculum). The produced model is then evaluated on unseen
statements, namely the validation and test splits of the miniF2F dataset [8].
-----
In this work, we also consider the transductive setup: on a corpus of unproved statements available
at train time, how many proofs can our method learn to generate? This protocol is also sensible, as
allowing the model to learn from a failed proof-search can lead to more focused exploration on the
next attempt, proving more statements overall than a model that would not be trained online.
Following [16], we also evaluate the pass@k by running k proof searches on the evaluated statements
with the policy and critic obtained by online training. In the transductive setup, we also report the
_cumulative pass rate, i.e. the proportion of theorems solved at least once during online training._
**7** **Results**
In this section, we present our results and study the moving parts of our pipeline through ablations.
Each experiment is run on a single environment (e.g. Lean, Metamath, or Equations). We compare
our model with GPT-f[16–18] which represents the state of the art on Metamath and Lean.
**7.1** **Main Results**
|Online training statements|Supervised -|GPT-f Evariste-1d Evariste-7d miniF2F-curriculum|Evariste miniF2F-valid|
|---|---|---|---|
|miniF2F-valid miniF2F-test miniF2F-curriculum|38.5 35.3 20.8|47.3 46.7 47.5 36.6 38.9 40.6 30.6 33.6† 42.5†|58.6† 41.0 32.1|
|---|---|---|---|
|Train time (A100 days)|50|2000 230 1620|1360|
|---|---|---|---|
Table 2: Pass rate on Lean environment using 64 trials (pass@64). Numbers with a _[†]_ exponent correspond
to the cumulative pass-rate since the evaluated statements are part of the online training. Evariste refers to the
method described in this paper.
**7.1.1** **Lean**
In Lean, we run our experiments on A100 GPUs with 32 trainers and 200 provers. Each prover
runs our Lean API on 48 CPU cores. Unlike Polu et al. [18], we sample statements equally from
mathlib-train and miniF2F-curriculum, to avoid giving too much importance to statements from a
different domain than the target. After 1 day of training (i.e. (200 + 32) A100 days of compute), each
statement from miniF2F-curriculum has been sampled on average 250 times, and 110 out of the 327
statements have been solved. Our model outperforms GPT-f on miniF2F-test, with an approximately
10 training time speed-up. After 7 days, we solve 139 statements of miniF2F-curriculum (100 for
_×_
GPT-f), and observe further improvements on miniF2F-valid or miniF2F-test.
For other evaluations, we depart from the set-up of Polu et al. [18], directly using the statements
from the miniF2F-valid split in our online training, obtaining 58.6% cumulative pass rate. We then
evaluate the final model on miniF2F-test, reaching 41% pass@64, against 36.6% for GPT-f.
Without the synthetic data co-training task, the performance drops to 54.9% cumulative pass rate on
the miniF2F-valid split, and 38.5% pass@64 on the miniF2F-test split. Examples of proofs found by
our model can be found in Appendix F.
**7.1.2** **Metamath**
On Metamath, we train our model on V100 GPUs, with 128 trainers and 256 provers, whereas
ablations are run on 16 trainers and 32 provers. We report our results in Table 3 for the supervised
model and for a model trained with online training. During online training, we sample equally
statements from the training and from the validation splits of set.mm.
Online training dramatically improves performances on valid statements, going from a 61% pass@8 to
a cumulative pass rate of 82.6% on this split. This improvement cannot solely be explained by the high
number of attempts on validation theorems during training. Indeed, the ablation in Figure 7 (right)
shows that Evariste significantly outperforms a supervised model with the same number of attempts.
-----
Figure 7: Comparison between online setup, expert iteration, and fixed model. We report the cumulative
pass rate on the Identities (resp. valid) split for the Equations (resp. Metamath) environment. Reloading the
model more frequently converges faster and to a better performance. When “No training” is done (i.e. the model
is the supervised one), the final performance is much lower despite using as many attempts. This shows that
online training is able to learn from previous proof searches.
The supervised model plateaus at 66% while Evariste keeps improving beyond 74% after 7 days of
training, showing that the model is able to learn from previous proof searches through online training.
On test theorems, for which statements were not provided during online training, the pass@32
accuracy increased by 10% compared to the supervised model, from 55.8% to 65.6%. Note that the
supervised model already obtains an accuracy of 65.4% (resp. 61.2%) on the validation (resp. test)
split, compared to GPT-f’s 56.5% (resp. 56.2%) after expert iteration, showing the benefits of HTPS.
|Col1|Valid cumulative pass@8 pass@32|Test pass@8 pass@32|
|---|---|---|
|Supervised Evariste|N/A 61.0% 65.4% 82.6% 81.0% 81.2%|55.8% 61.2% 65.6% 72.4%|
|---|---|---|
Table 3: Results on Metamath for a supervised model and Evariste. We report the pass@8 and pass@32
scores on the validation and test splits. Additionally, for Evariste we also report the cumulative score on the
validation set, i.e. the fraction of theorems proved at least one time during online training. Note that for Evariste
on Valid, the cumulative and pass@k performances are close since these statements were seen during training.
**7.1.3** **Equations**
In Equations, we run our main experiment with 32 trainers and 64 provers, whereas ablations are run
on 16 trainers and 32 provers. In this environment, the model easily learns the training distribution of
our random generator, and solves all synthetically generated problems. Thus, online training is run
on the Identities statements only. Our main experiment reaches a cumulative pass rate of 91.3% on
the Identities split, while a supervised model never exceeds 36% even after a similar number of proof
attempts. In Appendix B.8, we give examples of Identities statements proved during online training,
as well as the size and depth of proofs found by the model.
In particular, Evariste managed to find the proof of complex mathematical statements,
such as sinh(x/2) = sinh(x)/p2(1 + cosh(x)) and tan(3x)(1 3(tan(x))[2]) = 3 tan(x)
_−_ _−_
(tan(x))[2] tan(x) that required 82 and 117 proof steps respectively, showing the abilities of HTPS to
prioritize subgoals and guide the search in very large proof graphs. This shows that online training is
able to adapt our policy and critic models to a completely new domain, going from automatically
generated statements to identities found in math books. Examples to understand the gap between
these two domains can be found in Appendix B.
-----
**7.2** **Ablation study**
In this section, we present an ablation study on several components of our system. Since Lean
experiments are CPU intensive, we run most of our ablations on the Equations and Metamath
environments. On Lean, we ran experiments on a smaller subset of hyper-parameters that consistently
performed well on the other environments.
**7.2.1** **Online training data for tactic objective**
|Proof Of Type of Proof|All Solved Root All Min All Min|All Nodes -|
|---|---|---|
|Metamath (valid) Metamath (test)|61.2 65 57.4 68.6 57.2 58.8 54.8 57.4|51.6 54.4|
|---|---|---|
|Equations (Identities)|40.6 78.1 37.5 71.3|37.5|
|---|---|---|
Table 4: **Performance of our model for different online training data for tactic objective We report the**
pass@8 score for Metamath and cumulative pass rate for Equations. We try to keep all nodes and sample the
tactics of these nodes according to the policy. We also try to extract proofs or minimal proofs of solved nodes, or
the proofs or minimal proofs of the root theorem only. Selecting minimal proofs always improves performances
and gives the best results in both environments.
The way we filter tactics sent to the trainers has a large impact on final performances. We investigated
several filtering methods and report the results in Table 4. The first method is similar to the one used
in AlphaZero and exposed in [33]: we select all nodes of the proof search hypergraph where the visit
count is above a certain threshold and we filter tactics above a given search policy score. At training
time, tactics are sampled according to the filtered search policy. With this method the model reaches
51.6% pass@8 on the valid set of Metamath and 37.5% cumulative pass rate on Equations.
We then experimented with other filtering criteria, selecting only goal-tactic pairs that are part of
proofs: either a proof of the root node, or of any solved node in the hypergraph. Then, we learn from
all possible proofs, or only from proofs that are minimal according to a criteria (number of proof
steps for Equations and Metamath, cumulative CPU time for Lean).
Learning only from the minimal proofs always leads to improved performance, regardless of the selected roots. Learning from the minimal proofs of all solved nodes, we reach a cumulative pass rate of
78.1% on Equations, compared to 40.6% when learning from all proofs. On Metamath, only learning
from the root’s minimal proof gives the best result on the validation set, reaching a pass@8 of 68.6%.
**7.2.2** **Critic**
|Col1|Evariste|No critic Hard critic|Fixed search params|
|---|---|---|---|
|Metamath (valid) Metamath (test)|68.6 57.4|64.8 67.6 52.2 57.4|69.8 56.2|
|---|---|---|---|
|Equations (Identities)|78.1|65.6 63.1|73.8|
|---|---|---|---|
Table 5: Ablation study on the critic and search hyper-parameters in HTPS. We report the pass@8 score
for Metamath, and the cumulative pass rate for Equations. Evariste, trained with a soft critic and stochastic
hyper-parameters, obtains the best performance in both environments. Removing the critic, or using a hard critic
(e.g. when the critic is trained to predict 1 on solved nodes and 0 on others) leads to reduced performances.
In Equations, adding stochasticity in the proof search hyper-parameters increases the performance by 4.3% in
Equations, and slightly improves performance in Metamath.
To measure the impact of our critic model, we run an experiment where the proof search is only guided
by the policy model. During the back-propagation phase, we set vT (g) to 0.5 for the leaves of T . In
that context, our model is no longer trained with a critic objective. We run this experiment for both
Equations and Metamath, and report the results in Table 5. In both environments, using a critic model
improved the performance significantly, by 5.2% and 12.5% on Metamath and Equations respectively.
-----
As mentioned in Section 5, to train the critic objective, we set the training targets as c(g) = 1 for
solved nodes, c(g) = 0 for invalid nodes and c(g) = W (g, t[∗])/N (g, t[∗]) where t[∗] is the tactic that
maximizes the search policy at g., for internal nodes. We also tested a hard critic estimation of the
target values, following Polu and Sutskever [16], where c(g) = 1 for solved nodes and c(g) = 0 for
both invalid and internal nodes. We report results in Table 5. For both Metamath and Equations,
estimating the critic target of internal nodes with the HTPS action value estimate allows Evariste
to reach its best performance. In Equations, the model reaches a cumulative pass rate of 78.1%,
compared to 63.1% with hard critic estimates. In Equations, using hard critic targets gives worse
performances than having no critic model at all, showing that these targets are a bad estimation:
setting all internal nodes to zero is too pessimistic.
**7.2.3** **Fixed proof search parameters**
We study the impact of sampling HTPS hyper-parameters for each attempt during online training. We
run experiments with fixed, chosen search parameters for Equations and Metamath to compare with
random sampling, and report results in Table 5. Evariste achieves better performances than the model
trained with fixed search parameters on Metamath test set and Equations Identities, reaching 78.1%
pass rate compared to 73.8% in Equations Identities.
**7.2.4** **Model update frequency during online training**
In our online training procedure, the policy and critic models are updated every five minutes on the
provers. We measure the impact of the frequency of these updates by trying different refresh rates: 5
minutes, 1 hour, 6 hours for Equations, and no updates at all for both Equations and Metamath. We
report the cumulative pass rate over training hours in Figure 7. The higher the refresh rate, the better
the cumulative pass rate over time, confirming the benefits of online training over expert iteration.
**8** **Conclusion**
In this work, we introduce HTPS, an AlphaZero-inspired proof search algorithm for automated
theorem proving, along with an online training procedure. We run an extensive study of our pipeline,
and present state-of-the-art results on multiple proving environments. We show that online training
provides large speed-ups over expert iteration, and allows generalization of the policy and critic
models to completely new domains. Despite large number of attempts per theorem, proving the
entirety of datasets like miniF2F remains elusive, and generating data from proof-search on the
currently available corpora will likely be insufficient in the long term. As manually annotated formal
datasets are limited, another way of providing exploration and additional training data (in the spirit
of self-play for two player games) is required. Automated generation of new theorems is likely to
be one of the future milestones.
**Acknowledgments**
We thank the Meta AI and FLARE teams for useful comments and discussions throughout this work,
notably, Faisal Azhar, Antoine Bordes, Quentin Carbonneaux, Maxime Darrin, Alexander Miller,
Vincent Siles, Joe Spisak and Pierre-Yves Strub. We also thank the members of the Lean community
for their help, notably Fabian Glöckle for valuable feedback on this project.
**References**
[1] Alfred B Kempe. On the geographical problem of the four colours. American journal of
_mathematics, 2(3):193–200, 1879._
[2] Vladimir Voevodsky. Univalent foundations of mathematics. In International Workshop on
_Logic, Language, Information, and Computation, pages 4–4. Springer, 2011._
[3] Thomas Hales, Mark Adams, Gertrud Bauer, Dat Dang, John Harrison, Truong Hoang, Cezary
Kaliszyk, Victor Magron, Sean McLaughlin, Thang Nguyen, Truong Nguyen, Tobias Nipkow,
Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, An Ta, Trân Trung, Diep Trieu, and
Roland Zumkeller. A formal proof of the kepler conjecture. Forum of Mathematics, Pi, 5, 01
2017. doi: 10.1017/fmp.2017.1.
-----
[4] Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer.
The lean theorem prover (system description). In International Conference on Automated
_Deduction, pages 378–388. Springer, 2015._
[5] Donald E Knuth and Ronald W Moore. An analysis of alpha-beta pruning. Artificial intelligence,
6(4):293–326, 1975.
[6] Bruce Abramson and Richard E Korf. A model of two-player evaluation functions. In AAAI,
pages 90–94, 1987.
[7] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general
reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science,
362(6419):1140–1144, 2018.
[8] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
[9] P. C. Gilmore. A proof method for quantification theory: Its justification and realization.
_IBM J. Res. Dev., 4(1):28–35, jan 1960. ISSN 0018-8646. doi: 10.1147/rd.41.0028. URL_
```
https://doi.org/10.1147/rd.41.0028.
```
[10] Martin Davis and Hilary Putnam. A computing procedure for quantification theory. J. ACM, 7
[(3):201–215, jul 1960. ISSN 0004-5411. doi: 10.1145/321033.321034. URL https://doi.](https://doi.org/10.1145/321033.321034)
```
org/10.1145/321033.321034.
```
[11] Stephan Schulz. E—a brainiac theorem prover. AI Commun., 15(2–3):111–126, 2002.
[12] Alexandre Riazanov and Andrei Voronkov. Vampire 1.1 (system description). In IJCAR, 2001.
[13] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL: A Proof Assistant
_for Higher-Order Logic, volume 2283 of LNCS. Springer, 2002._
[14] Yves Bertot and Pierre Castéran. Interactive theorem proving and program development:
_Coq’Art: the calculus of inductive constructions. Springer Science & Business Media, 2013._
[15] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[16] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
[17] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203, 2021.
[18] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344,
2022.
[19] Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv
_preprint arXiv:1608.02644, 2016._
[20] Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An
environment for machine learning of higher order logic theorem proving. In International
_Conference on Machine Learning, pages 454–463. PMLR, 2019._
[21] John Harrison. Hol light: A tutorial introduction. In International Conference on Formal
_Methods in Computer-Aided Design, pages 265–269. Springer, 1996._
[22] Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. Tactictoe: learning to prove with tactics. Journal of Automated Reasoning, 65(2):257–286, 2021.
[23] Karel Chvalovsky, Jan Jakub˚` uv, Miroslav Olšák, and Josef Urban. Learning theorem proving
components. In International Conference on Automated Reasoning with Analytic Tableaux and
_Related Methods, pages 266–278. Springer, 2021._
-----
[24] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural
networks. Advances in neural information processing systems, 27, 2014.
[25] Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, and Guillaume Lample. Unsupervised translation of programming languages. arXiv preprint arXiv:2006.03511, 2020.
[26] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In International Conference on Learning Representations,
[2019. URL https://openreview.net/forum?id=H1gR5iR5FX.](https://openreview.net/forum?id=H1gR5iR5FX)
[27] Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In Inter_[national Conference on Learning Representations, 2020. URL https://openreview.net/](https://openreview.net/forum?id=S1eZYeHFDS)_
```
forum?id=S1eZYeHFDS.
```
[28] Stéphane d’Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton.
Deep symbolic regression for recurrent sequences. arXiv preprint arXiv:2201.04600, 2022.
[29] Brenden K Petersen, Mikel Landajuela Larma, Terrell N. Mundhenk, Claudio Prata Santiago,
Soo Kyung Kim, and Joanne Taery Kim. Deep symbolic regression: Recovering mathematical
expressions from data via risk-seeking policy gradients. In International Conference on Learning
_[Representations, 2021. URL https://openreview.net/forum?id=m5Qsh0kBQG.](https://openreview.net/forum?id=m5Qsh0kBQG)_
[30] François Charton, Amaury Hayat, and Guillaume Lample. Learning advanced mathematical
computations from examples. arXiv preprint arXiv:2006.06462, 2020.
[31] Norman D. Megill and David A. Wheeler. _Metamath:_ _A_ _Computer_ _Lan-_
_guage for Mathematical Proofs._ Lulu Press, Morrisville, North Carolina, 2019.
```
http://us.metamath.org/downloads/metamath.pdf.
```
[32] The mathlib Community. The lean mathematical library. In Proceedings of the 9th ACM
_SIGPLAN International Conference on Certified Programs and Proofs. ACM, jan 2020. doi:_
[10.1145/3372885.3373824. URL https://doi.org/10.1145%2F3372885.3373824.](https://doi.org/10.1145%2F3372885.3373824)
[33] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering
chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint
_arXiv:1712.01815, 2017._
[34] Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis
Antonoglou, and Rémi Munos. Monte-carlo tree search as regularized policy optimization. In
_International Conference on Machine Learning, pages 3769–3778. PMLR, 2020._
[35] Guillaume MJ-B Chaslot, Mark HM Winands, and HJVD Herik. Parallel monte-carlo tree
search. In International Conference on Computers and Games, pages 60–71. Springer, 2008.
[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information
_processing systems, pages 5998–6008, 2017._
[37] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro
De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al.
Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296,
2015.
[38] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare
words with subword units. In Proceedings of the 54th Annual Meeting of the Association for
_Computational Linguistics, pages 1715–1725, 2015._
[39] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of
deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
[40] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv
_preprint arXiv:1901.07291, 2019._
-----
[41] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. Mass: Masked sequence to
sequence pre-training for language generation. arXiv preprint arXiv:1905.02450, 2019.
[42] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A Smith. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. arXiv preprint
_arXiv:2006.10369, 2020._
[43] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
_arXiv:1412.6980, 2014._
[44] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: a simple way to prevent neural networks from overfitting. The journal of machine
_learning research, 15(1):1929–1958, 2014._
[45] Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with
structured dropout. In International Conference on Learning Representations, 2020. URL
```
https://openreview.net/forum?id=SylO2yStDr.
```
[46] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito,
Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in
pytorch. NIPS 2017 Autodiff Workshop, 2017.
[47] Yizao Wang and Sylvain Gelly. Modifications of uct and sequence-like simulations for montecarlo go. In 2007 IEEE Symposium on Computational Intelligence and Games, pages 175–182.
IEEE, 2007.
[48] Mark HM Winands, Yngvi Björnsson, and Jahn-Takeshi Saito. Monte-carlo tree search solver.
In International Conference on Computers and Games, pages 25–36. Springer, 2008.
-----
**A** **Proof search in more details**
**A.1** **Hypergraph and definitions**
We begin with some useful notations and concepts for our hypergraphs.
Formally, let be a set of nodes, and a set of tactics. A hypergraph is a tuple H = (G, r, T, U )
_G_ _T_
with G the nodes, r _G the root, and T_ _G_ (G) the admissible tactics. An element
_⊂G_ _∈_ _⊂_ _× T × P_
of T is written (g, t, c) where g is the start goal, t is the applied tactic and c is the potentially empty
set of children that the tactic creates when applied to g in the proving environment.
A hypertree is a hypergraph without cycles, i.e, such that we cannot find a path g0, . . ., gℓ = g0 with
_ℓ> 0 and with gi+1 in the children of gi for all i’s._
Let S _G be the set of solved nodes. A node g_ _G_ _U is solved if one of its tactic leads to no_
_⊂_ _∈_ _\_
subgoals, or one of its tactics leads to only solved nodes. Formally: (g, t, ) _T or_ (g, t, c) _T_
_∃_ _∅_ _∈_ _∃_ _∈_
such that c _S. We say that a tactic t is solving for g if all the children it leads to are solved._
_⊂_
Conversely, let U _G be the set of invalid nodes. A node g_ _G_ _U is invalid if it has been_
_⊂_ _∈_ _\_
expanded but has no tactics in the hypergraph, or all of its tactics have an invalid child. Formally:
(g, t, c) _T_ = or (g, t, c) _T_, c _I_ = .
_{_ _∈_ _}_ _∅_ _∀_ _∈_ _∩_ _̸_ _∅_
These recursive definitions naturally lead to algorithms MaintainSolved and MaintainStatus to
maintain sets S and I when elements are added to H.
A sub-hypertree HT of H is a connected hypertree rooted at some goal of H. Its leaves leaves(HT )
are its subgoals without children (either elements of U or S). The set of proofs of g in H, Proofs(g, H)
are all the hypertrees rooted at g that have all their leaves in S. Similarly, the expandable subtrees
of H rooted in g, Expandable(g, H) are the subtrees with at least one leaf in U . A tactic is said
to be expandable if it is part of an expandable subtree, this can be computed with a graph-search
```
ComputeExpandable.
```
We can now reformulate the process of proof-search. Starting from a hypergraph that contains only
the root theorem r, we produce a sequence of expandable subtrees. The unexpanded leaves of these
subtrees are expanded in the hypergraph, then the new value estimates are backed-up. The hypergraph
grows until we use all our expansion budget, or we find a proof of r.
**A.2** **Policies**
When a goal g is added to the hypergraph, its visit count N (g, t) and total action value W (g, t) are
initialized to zero. Its virtual visit count V C(g, t) are updated during proof search. Let C(g, t) =
_N_ (g, t) + V C(g, t) be the total counts. These values are used to define the value estimate with a
constant first play urgency [47]:
max(1max(1,N,C((g,tg,t)))) if t is solving for g
max(10,C.5(g,t)) if N (g, t) = 0
_WC( (g,tg,t))_ otherwise.
_Q(g, t) =_
Notice that the value of solving tactics decreases with virtual counts, allowing exploration of already
solved subtrees.
Given the visit count N, the total counts C, value estimates Q, the model prior Pθ and an exploration
constant c. The policy used in Alpha-Zero is PUCT:
pP N (g, )
_·_
1 + C(g, t)
PUCT(g) = arg max
_t∈A_
_Q(g, t) + c · Pθ(t|g) ·_
Notice that more weight is given to the value estimate Q as N grows which decreases the second term.
Another work [34] obtains good performances using as search policy the greedy policy regularized
by the prior policy.
pP C(g, )
_·_
P(C(g, ) + 1) _KL(πθ, y)_
_·_
_Q(g)[T]_ _y_ _c_
_−_ _·_
_πRP (g) = arg max_
_y∈S_
with the policy simplex at g
_S_
-----
Again, note that this policy balances the prior with the value estimates as the count grows, but does
not account for individual disparities of visits of each tactics. In our experiments, we obtained better
performances with πRP on Equations, and better performances with PUCT on Metamath and Lean.
**A.3** **Algorithms**
**Simulation** During simulation, we only consider subtrees that could become proofs once expanded.
This means we cannot consider any invalid nodes or consider subgraphs containing cycles. If we encounter a tactic that creates a cycle during a simulation, this tactic is removed from the hypergraph, virtual counts from this simulation are removed and we restart the search from the root. This may remove
some valid proofs, but does not require a backup through the entire partial subtree which would lead
to underestimating the value of all ancestors. Removing tactics from the hypergraph also invalidates
computations of expandable tactics. This is dealt with by periodically calling MaintainExpandable
if no valid simulation can be found. A full description of the algorithm that finds one expandable
subtree is available in Algorithm 1. Selection of nodes to expand requires finding expandable subtrees
until a maximum number of simulations is reached, or no expandable tactic exists at the root.
**Algorithm 1 Finding an expandable subtree**
**Input: A hypergraph H and its root**
**Output: A partial proof tree with unexpanded leaves**
_:start_
T: hypertree(root)
to_explore: list = [root]
**while to_explore do**
g = to_explore.pop()
**if g is internal then**
**if expandable(g) ̸= ∅** **then**
tactic = arg maxt π expandable(g)[π][(][g, t][)]
**else**
continue { expandable nodes are in a sibling branch }
**end if**
**if tactic leads to cycle then**
kill tactic
remove virtual counts for elements of T
goto start
**end if**
_V C(g, tactic) += 1_
T.add(g, tactic, children(g, tactic))
to_explore += [children(g, tactic)]
**end if**
**end while**
**Expansion** The policy model produces tactics for an unexpanded node g. These tactics are evaluated
in the proving environments. Valid tactics are filtered to keep a unique tactic (e.g. the fastest in
Lean) among those leading to the same set of children. Finally, we add the filtered tactics and
their children to the hypergraph. If no tactics are valid, the node is marked as invalid and we call
```
MaintainInvalid. If a tactic solves g, the node is marked as solved and we call MaintainSolved.
```
**Backup** The backup follows topological order from the leaves of a simulated partial proof-tree T,
updates W and N, and removes the added virtual count. The algorithm is described in Algorithm 2
**A.4** **Comparison with other search algorithms**
**Best First Search [16].** This best-first search expands goals one at a time according to a priorityqueue of either a value model or the cumulative log-prior from the language model. Since the priority
is equal among siblings but strictly decreasing with depth, this means siblings will always be expanded
together. However, nothing prevents the algorithm from jumping from one potential proof-tree to
-----
**Algorithm 2 Back-propagation of total action value W**
**Input: Partial proof-tree T and value estimates cθ(g) of its leaves.**
to_backup = []
**for g in leaves of T do**
_vT (g) = cθ(g)_
to_backup.append(parentT (g))
**end for**
**while to_backup do**
_g = to_backup.pop()_
to_update = [Q]c∈childrenT (g) _[v][T][ (][c][)]_
_W_ (g, t) += to_update
_N_ (g, t) += 1
_V C(g, t)_ = 1
_−_
_v(g, t) = to_update_
_g.is_prop = true_
**if all c.is_prop for c in siblingsT (g) then**
to_backup.append(parentT (g))
**end if**
**end while**
another, and potentially favoring breadth over depth. In comparison, depth does not appear in the
value estimate we compute, but rather the remaining number of nodes to solve a particular proof-tree.
Moreover, our algorithm leads to value estimates that can be used to train our critic, which performs
better than 0-1 estimates provided by best-first search (c.f. Section 7.2.2).
**Monte Carlo Tree Search [6].** MCTS has been famously used as part of AlphaZero [33] to obtain
great performances on two player games. This two player set-up can be mapped to theorem-proving
by assigning one player to choosing the best tactics while the other player picks the most difficult goal
to solve (a method explored in Holophrasm [19]). However, since we need to provide a proof of the
root theorem, we need to ensure that we can solve all goals that a tactic leads to. This set-up has been
studied for two player games when attempting to compute the game-theoretical value of positions.
Using MCTS in this set-up is suboptimal [48], ignoring unlikely but critical moves from the opponent
(in our case, a subgoal that looks easy but is impossible to solve). We decided to exploit the highly
asymmetrical arities of our two players (most tactics lead to one or two goals) which makes simulating
partial proof-trees computationally feasible. Thus, the values we back-propagate always take into
account all possible moves from the opponent, while only requiring a few expansions per simulation.
**B** **Equations environment**
In this section, we give additional details about the environment Equations. First, we described its
main elements, theorems (resp. tactics) in Section B.1 (resp. B.2). Then, we describe a proof in
this environment in Section B.3, how numerical expressions are evaluated in Section B.4 and what
vulnerabilities this can lead to in Section B.5. Finally, we describe our random theorem generator in
Section B.6 and how theorems and their proofs can be translated to Lean in Section B.7.
**B.1** **Theorems**
Each theorem in Equations consists in proving mathematical expressions composed of functions of
real numbers, by manipulating and rewriting expressions. A theorem to prove can be an inequality or
an equality, conditioned to a set (potentially empty) of initial assumptions. For instance:
_x[2]_ + 1 2x or _x > y =_ _e[y][−][x]_ 1 < 0
_≥_ _⇒_ _−_
In the first example, the goal does not have any hypothesis and consists in proving that for every
_x ∈_ R, x[2] + 1 ≥ 2x. In the second example, the goal consists in proving that e[y][−][x] _−_ 1 < 0 for every
_x, y ∈_ R that satisfy the hypothesis x > y.
-----
Equalities and inequalities are represented as trees with the three following elements:
- Leaves: represent a variable, an integer, or a constant (e.g. π).
- Internal nodes: represent unary or binary operators, e.g. +, −, /, ×, exp, ln, cos, sin,
sinh, cosh, etc. More advanced operators such as gcd, lcm, mod (the rest of an euclidean
division) are possible when dealing with integers.
- A root node: represents a comparison operator, e.g. =, ≤, <, ≥, >, ̸=. More advanced
comparison operators such as (divides) are possible when dealing with integers.
_|_
**B.2** **Tactics**
Equations allows to deduce equalities and inequalities from simpler subgoals, using elementary
rules (i.e. tactics). The environment contains two types of rules: transformations, which consist in
matching a pattern in an expression and replacing it by an equivalent expression; and assertions,
which consist in asserting that an expression is true. Both types of rules can have assumptions.
**Transformation rules** A transformation rule (TRule) consists in a set of two expressions, L and
_R, equivalent under a set of assumptions S. For instance TRule(A_ + B, B + A) is the transformation
rule stating the commutativity of the addition, namely that A + B = B + A for any expressions A
and B. Note that in this case, the set of assumption S is empty as the equality always holds. Another
_√_ _√_
example is TRule( _A[2], A, [A ≥_ 0]) that states that _A[2]_ = A provided that A ≥ 0.
Applying such a rule to an existing equation works as follows:
- matching a term T in the expression that has the pattern of L
- identifying the matching variables and substituting them in R
- replacing T by R in the input equation
- return the resulting equation with the set of hypotheses required for the transformation
For instance, if the input goal is:
p
(e[x])[2] = e[x]
Applying TRule(
_A[2], A, [A_ 0]) on this expression will result in two subgoals:
_≥_
_A[2]_ has been replaced by A: e[x] = e[x]
- The same expression, where
- The hypothesis required for the assumption to hold: e[x] 0
_≥_
More generally, a transformation rule will result in N + 1 subgoals, where N is the number of
hypotheses required by the rule.
**Assertion rules** An assertion rule (ARule) expresses the fact that an expression is true, provided
some hypotheses. It is represented by a main expression, and a set of assumptions sufficient for
the main expression to hold. For instance, the rule ARule(A ≤ _C, [A ≤_ _B, B ≤_ _C]) states the_
transitivity of the partial order, i.e. A _C provided that there exists an expression B such that_
_≤_ _≤_
_A_ _B and B_ _C._
_≤_ _≤_
Assertion rules do not always have hypotheses, for instance the reflexivity rule ARule(A = A), or
the rule ARule(e[A] _> 0) stating that e[A]_ is positive, for any real value A. Note that the two subgoals
generated in the previous paragraph (e[x] = e[x] and e[x] _> 0) can be respectively solved by these two_
assertion rules (i.e. by matching A = e[x] and A = x).
Unlike transformation rules that always result in at least one subgoal (the initial expression on which
we applied the transformation), assertion rules will only generate N subgoals, where N is the number
of hypotheses. As a result, being able to apply an assertion rule without hypotheses to an expression
is enough to close (e.g. solve) the goal. Assertion rules are in fact very similar to rules in Metamath.
In Table 6, we provide the number of Equations rules in different categories. Some examples of
transformation and assertion rules are given in Table 7.
-----
Rule type Basic Exponential Trigonometry Hyperbolic All
Transformation 74 18 9 8 109
Assertion 90 11 9 0 110
Total 171 29 18 11 219
Table 6: Number of Equations rules in each category.
Transformation rules Assertion rules
sin(0) = 0 _| cos(A)| ≤_ 1
cos(0) = 1 _| sin(A)| ≤_ 1
sin( _[π]2_ [) = 1] _| sin(A)| ≤|A|_
cos( _[π]2_ [) = 0] _A = B =⇒_ sin(A) = sin(B)
sin(−A) = − sin(A) _A = B =⇒_ cos(A) = cos(B)
cos(−A) = cos(A) sin(A) ̸= sin(B) =⇒ _A ̸= B_
cos(A) ̸= 0 =⇒ tan(A) = cos([sin(][A]A[)]) cos(A) ̸= cos(B) =⇒ _A ̸= B_
sin(A + B) = sin(B) cos(A) + sin(A) cos(B) _A = B, cos(A) ̸= 0 =⇒_ tan(A) = tan(B)
cos(A + B) = cos(A) cos(B) − sin(A) sin(B) tan(A) ̸= tan(B), cos(A) cos(B) ̸= 0 =⇒ _A ̸= B_
Table 7: Trigonometric rules accessible by the model. The model only has access to these elementary
rules when proving statements from Identities. In particular, it cannot use more involved theorems
such as cos[2](x) + sin[2](x) = 1 or sin(π) = 0.
**B.3** **Proving a statement with Equations**
In order to prove a theorem with Equations, the user (or automated prover) has to apply tactics on the
current expression. A tactic can correspond either to a transformation rule, or to an assertion rule.
For transformation rules, the model needs to provide:
- the rule (using a token identifier)
- the direction in which the rule is applied (a Boolean symbol, for forward or backward)
- an integer that represents the position where the rule is applied
- an optional list of variables to specify (c.f. paragraph below)
The direction of the rule indicates whether we want to transform L by R or R by L (e.g. replace A
_√_
by _A[2], or the opposite). The position where the rule is applied is given by the prefix decomposition_
of the input expression. For instance, the prefix notation of (x + y) + 1 is given by + + x y 1.
Applying the commutativity rule A + B = B + A to the expression in position 0 will result in
1 + (x + y). Applying it in position 1 will result in (y + x) + 1, since the rule was applied to (x + y).
Note that for the commutativity rule, the direction in which we apply the rule does not matter. The
list of variables to specify is required when variables in the target patterns are absent from the source
pattern. For instance, applying the transformation rule TRule(A,A+B-B) in the forward direction
will require to provide the value of B.
For assertion rules, the format is simpler. We no longer need to specify a direction or a position
(the position is always 0 as the assertion statement must match the expression to prove), but only:
- the rule (using a token identifier)
- an optional list of variables to specify
In this case, the list of variables to specify corresponds to variables that appear in hypotheses
and that cannot be inferred from the main expression. For instance, to apply the assertion rule
_A_ _B_ _B_ _C =_ _A_ _C, we need to specify the value of B. We will then be left with two_
_≤_ _∧_ _≤_ _⇒_ _≤_
subgoals: A _B and B_ _C._
_≤_ _≤_
Proving a statement in Equations requires to recursively apply tactics to unproved subgoals, until
we are left with no subgoals to prove. An example of proof-tree in Equations is shown in Figure 4.
-----
Figure 8 shows an example of proof of the statement (x **y)** (x + y) + 2y = 0 using rules from
_−_ _−_
the environment. Although simple, this statement already requires 22 proof steps and highlights
the difficulty proving complex mathematical identities when using elementary proof steps. In the
rest of this appendix, we give more details about how we represent expressions in Equations, how
we generate random theorems to provide initial training data, the list of rules we provided to the
environment, and the set of expressions we use to evaluate the model.
Statement to prove Rule used
(x _y)_ (x + y) + 2y = 0 _A_ _B = A + (_ _B)_
_−_ _−_ _−_ _−_
(x _y) + (_ (x + y)) + 2y = 0 (A + B) = ( _A) + (_ _B)_
_−_ _−_ _−_ _−_ _−_
(x _y) + ((_ _x) + (_ _y)) + 2y = 0_ _A + (B + C) = A + B + C_
_−_ _−_ _−_
(x _y) + (_ _x) + (_ _y) + 2y = 0_ _A + (_ _B) = A_ _B_
_−_ _−_ _−_ _−_ _−_
(x _y) + (_ _x)_ _y + 2y = 0_ _A + (_ _B) = A_ _B_
_−_ _−_ _−_ _−_ _−_
(x _y)_ _x_ _y + 2y = 0_ int(a + b) = int(a) + int(b)
_−_ _−_ _−_
(x _y)_ _x_ _y + (1 + 1)_ _y = 0_ _A_ _B = B_ _A_
_−_ _−_ _−_ _×_ _×_ _×_
(x _y)_ _x_ _y + y_ (1 + 1) = 0 _A_ (B + C) = A _B + A_ _C_
_−_ _−_ _−_ _×_ _×_ _×_ _×_
(x _y)_ _x_ _y + y_ 1 + y 1 = 0 _A_ 1 = A
_−_ _−_ _−_ _×_ _×_ _×_
(x _y)_ _x_ _y + y + y_ 1 = 0 _A_ _B = A + (_ _B)_
_−_ _−_ _−_ _×_ _−_ _−_
(x _y)_ _x + (_ _y) + y + y_ 1 = 0 _A + B = B + A_
_−_ _−_ _−_ _×_
(x _y)_ _x + y + (_ _y) + y_ 1 = 0 _A + (_ _B) = A_ _B_
_−_ _−_ _−_ _×_ _−_ _−_
(x _y)_ _x + y_ _y + y_ 1 = 0 _A_ _A = 0_
_−_ _−_ _−_ _×_ _−_
(x _y)_ _x + 0 + y_ 1 = 0 _A + 0 = A_
_−_ _−_ _×_
(x _y)_ _x + y_ 1 = 0 _A_ _B = A + (_ _B)_
_−_ _−_ _×_ _−_ _−_
_x + (_ _y)_ _x + y_ 1 = 0 _A + B = B + A_
_−_ _−_ _×_
( _y) + x_ _x + y_ 1 = 0 _A_ _A = 0_
_−_ _−_ _×_ _−_
( _y) + 0 + y_ 1 = 0 _A + 0 = A_
_−_ _×_
( _y) + y_ 1 = 0 _A + B = B + A_
_−_ _×_
_y_ 1 + ( _y) = 0_ _A + (_ _B) = A_ _B_
_×_ _−_ _−_ _−_
_y_ 1 _y = 0_ _A_ 1 = A
_×_ _−_ _×_
_y_ _y = 0_ _A_ _A = 0_
_−_ _−_
0 = 0
Figure 8: **Proof of the identity (x −** **y) −** (x + y) + 2y = 0 with elementary rules. In this
example, we provide at each step the current goal and the rule that is used to obtain the next goal.
This example shows how difficult it can be to prove even simple statements in Equations as they may
require a significant number of proof steps (22 in that case). This explains that proving more involved
statements from Identities such as cosh(3x) = 4 cosh(x)[3] 3 cosh(x) or even sin(2π + x) = sin(x)
_−_
can require to generate very large proof trees.
**B.4** **True expressions and numerical evaluation**
Some theorems are trivial, either because their statements match the pattern of an assertion rule that
has no assumptions (e.g. x[2] 0 or e[y][−][x] = 0), or because they do not contain any variable and an
_≥_ _̸_
exact numerical evaluation can attest that they are true (e.g ( 1)/2 < 6 or 1 7/4 = 6/8).
_−_ _−_ _−_
To prevent the model from wasting budget in “uninteresting” branches, we automatically discard
generated subgoals that can be trivially verified. However, we only perform numerical verification of
expressions without variables when they exclusively involve rational numbers. For instance, we will
automatically close subgoals such as 5 < (−3)[2] or [1]2 _[>][ 1]4_ [, but not][ e][1][ < e][2][ or][ cos(3)][ ̸][= 0][. To prove]
-----
that e[1] _< e[2]_ the model will need to use, for instance, an assertion rule such as A < B = _e[A]_ _< e[B]_
_⇒_
(1 < 2 will then be closed automatically).
**B.5** **Environment vulnerabilities due to initial numerical approximations**
In early implementations of the Equations environment, we found that the model was able to leverage
vulnerabilities in the environment to reach a 100% accuracy and to prove any statement. These
issues where coming from numerical approximations that were initially allowed during the numerical
verification of constant expressions (c.f. Section B.4). To prevent these vulnerabilities, we restricted
the numerical verification to rational expressions, in order to have an exact numerical evaluation and
to avoid errors due to approximations. We give two examples of vulnerabilities found by the model
when expressions were verified with an approximate numerical evaluation.
In Figure 9, the model manages to prove that 2 = 3 by using the injectivity of the exponential
function, and the fact that for NumPy, exp( exp(exp(2))) = exp( exp(exp(3))). Evaluating
_−_ _−_
the left and the right-hand side both numerically evaluate to 0.0, and the environment incorrectly
considered the expression to be valid.
In Figure 10, the model manages to prove that 0 = 0 by first proving that cos(π/2) = 0, and combin_̸_ _̸_
ing this result with the fact that cos(π/2) = 0. The imprecision came from the NumPy approximation
of cos(π/2) in 6.123 10[−][17], and in particular the fact that (((cos(π/2)[0][.][5])[0][.][5])[0][.][5]) 9.4 10[−][3],
_×_ _≈_ _×_
which was considered large enough by our threshold to be considered non-zero. By using this
_√_
approximation, and the assertion rule _A_ = 0 = _A_ = 0, the model was able to conclude that
_̸_ _⇒_ _̸_
(((cos(π/2)[0][.][5])[0][.][5])[0][.][5]) = 0 = cos(π/2) = 0 = 0 = 0.
_̸_ _⇒_ _̸_ _⇒_ _̸_
2 = 3 Statement to prove
_e[2]_ = e[3] Rule: A = B _e[A]_ = e[B],
_⇐⇒_ _⇐⇒_
_e[e][2]_ = e[e][3] Rule: A = B _e[A]_ = e[B],
_⇐⇒_ _⇐⇒_
_e[e][2]_ = _e[e][3]_ Rule: A = B _A =_ _B,_
_⇐⇒−_ _−_ _⇐⇒−_ _−_
_e[−][e][e][2]_ = e[−][e][e][3] Rule: A = B _e[A]_ = e[B],
_⇐⇒_ _⇐⇒_
0 = 0 Numerical evaluation
_⇐⇒_
Figure 9: False “proof” of 2 = 3 found by the model when allowing numerical approximation
**to verify constant expressions. The model noticed that exp(−e[e][2]** ) = exp(−e[e][3] ) is considered true
by NumPy (as the left and the right hand side are both approximated to 0.0) to conclude that 2 = 3
using the injectivity of the exponential function.
0.5
0 = 0 cos _[π]_ cos _[π]_ = 0
_̸_ _⇐⇒_ _⇒_ _̸_
2 2
_[̸][= 0][ ⇐]_
0.5[][0][.][5] 0.5[][...][][0][.][5]
= 0 _. . ._ cos _[π]_ = 0
_̸_ _⇐⇒_ _⇐⇒_ _̸_
2
cos _[π]_
Figure 10: False “proof” that 0 ̸= 0 found by the model when allowing numerical approxima**tion to verify constant expressions. Since cos(** _[π]2_ [)][ evaluates to][ 6][.][123][ ×][ 10][−][17][ in NumPy (and]
not exactly to 0), the model found that for any tolerance threshold applying the assertion rule
_√_
_A_ = 0 = _A_ = 0 enough times lead to an expression where the left-hand side evaluates numeri_̸_ _⇒_ _̸_
cally to a strictly positive value. In particular, (((cos( _[π]2_ [)][2][−][3] [)][ ≈] [9][.][4][ ×][ 10][−][3][, which was considered]
large enough by our threshold to be considered non-zero. After that, any expressions A and B can be
shown to be equal using the assertion rule (A _C = B_ _C_ _C_ = 0) = _A = B where C is_
_×_ _×_ _∧_ _̸_ _⇒_
chosen to be 0 since 0 = 0.
_̸_
-----
**B.6** **Random theorem generator**
While Metamath and Lean come with a collection of annotated theorems that can be used for training,
Equations does not have an equivalent of manually proved statements. Instead, we generate a
supervised training set of theorems to pretrain the model before we start the online training. We
propose two simple generation procedures: a random walk, and a graph generation approach.
**Random walk generation** The random walk is the simplest way to generate a theorem. We start
from an initial expression A0 and a set of initial hypotheses, both randomly generated following the
method of Lample and Charton [27]. Then, we randomly apply an admissible transformation rule on
_A0 to get an equivalent expression A1. The process is repeated, to get a sequence A0, A1, . . ., AN_
of equivalent expressions. The final theorem consists in proving that A0 = AN, and the proof
corresponds to the sequence of rules sequentially applied. To increase the diversity of generations,
and to avoid sampling only rules without or with simple assumptions, we add a bias in the random
sampling of rules to over-sample the underrepresented ones.
**Graph generation** Because of the simplicity of the random walk approach, the generated theorems
usually tend to be very easy to prove, and the model quickly reaches a perfect accuracy on the generated theorems. Moreover, proofs generated by the random walk are only composed of transformation
rules. To generate a more diverse set of theorems, we also use a graph generation procedure, that
creates a large acyclic graph of theorems, where each node is connected to its children by a rule in
the environment. To create such a graph, we proceed as follows. We first generate a set of initial
hypotheses, and initialize the graph with a node for each hypothesis. We then randomly apply a
transformation or assertion rule on nodes already in the graph.
For instance, if A _B and B_ _C are two nodes in the graph, then we can add the node A_ _C_
_≤_ _≤_ _≤_
using the assertion rule A _B_ _B_ _C =_ _A_ _C. If x = y_ (z 1) is a node in the graph,
_≤_ _∧_ _≤_ _⇒_ _≤_ _×_ _−_
we can use the transformation rule B = 0 = _A/B = C_ _A = B_ _C to add the node_
_̸_ _⇒_ _⇐⇒_ _×_
_x/y = z_ 1, provided that the node y = 0 is also in the graph. Required hypotheses that are trivially
_−_ _̸_
verifiable (e.g. 2 > 0 or e[−][x] _> 0) are automatically added to the graph._
**B.7** **Translating Equations theorems to Lean**
**Exporting theorems to Lean.** To enrich the existing Lean supervised dataset with synthetic data,
we built a translator from Equations to Lean. Although Equations statements are easy to translate,
proofs can only be translated if they involve rules that also exist in Lean. Since Equations is a modular
environment where rules can be specified by the user, we created a collection of Equations rules from
existing Mathlib statements. Synthetic theorems can then be generated using the random walk or
random graph approaches described in Section B.6, and converted into Lean to augment the existing
supervised dataset. Examples of randomly generated Lean proofs are provided in Figure 11.
**Importing rules from Mathlib.** To allow interfacing Equations and Lean, we automatically parsed
Mathlib statements from the Lean library, and extracted theorems with a statement compatible with
the Equations environment. Compatible theorems are converted into Equations transformation or
assertion rules. Overall, we converted 1702 theorems from the Lean Library into our Equations
environment. Details about the number of converted theorems are provided in Table 8.
Rule type Natural numbers Integers Real numbers
Transformation 304 452 799
Assertion 314 292 407
**Total** 618 744 1206
Table 8: Number of Equations rules converted from Lean. The converted Lean theorems can be
used to generate synthetic theorems within the Equations environment. The generated theorems can
then in turn be converted back to Lean, along with their proofs. Some theorems are generic and can
be applied to different types of variables (e.g. add_comm), and will appear in different categories.
Overall, we automatically converted 1702 different Lean rules in our Equations environment.
-----
1 `theorem SYNTHETIC_0`
2 `(x1 x3 x4 : R) :`
3 `((0:R) ≤` `((real.cos (real.cos ((-6:R) / ((x1 - x4) / x3)))) / (2:R))) :=`
4 `begin`
5 `apply norm_num.nonneg_pos,`
6 `apply half_pos,`
7 `apply real.cos_pos_of_le_one,`
8 `apply real.abs_cos_le_one,`
9 `end`
10
11 `theorem SYNTHETIC_1`
12 `(x2 : R)`
13 `(h0 : ((abs (5:R)) < x2)) :`
14 `((1:R) < (real.exp ((5:R) + x2))) :=`
15 `begin`
16 `rw real.one_lt_exp_iff,`
17 `rw ←` `neg_lt_iff_pos_add,`
18 `apply neg_lt_of_abs_lt h0,`
19 `end`
20
21 `theorem SYNTHETIC_2`
22 `(x1 x4 : R)`
23 `(h0 : ((x4 * (real.exp x1)) < 10)) :`
24 `((-((abs ((x4 * (real.exp x1)) - 10)) / 2)) < ((abs (10 - (x4 * (real.exp x1)))) / 2)) :=`
25 `begin`
26 `have h1 : ((0:R) < ((abs ((x4 * (real.exp x1)) - 10)) / 2)),`
27 `apply half_pos,`
28 `apply abs_pos_of_neg,`
29 `apply sub_neg_of_lt h0,`
30 `apply norm_num.lt_neg_pos _ _ h1,`
31 `rw ←` `abs_sub_comm,`
32 `apply half_pos,`
33 `apply abs_pos_of_neg,`
34 `apply sub_neg_of_lt h0,`
35 `end`
Figure 11: Example of a randomly generated theorems in Lean. The theorems were initially
generated in the Equations environment using rules from the Mathlib library, and converted to Lean.
**B.8** **Examples of identities solved by the model on Equations**
In this section, we give some examples of identities solved by the model. For each statement, we
indicate the proof size and the proof depth, for the first proof found by the model, and for the optimal
proof. We observe that the first proofs are sometimes very large, with more than 100 nodes, and that
the model later manages to find shorter proofs as it improves.
-----
|Identity|Proof size Proof depth First Best First Best|
|---|---|
|exp(−x) exp(x −y) = exp(−y) cosh(−x) = cosh(x) sin(π/2 + x) = cos(x) √ 0 < x =⇒ 2 ln( x) = ln(x) cos(π/2 −x) = sin(x) sin(π/2 −x) = cos(x) cos(x)2 + sin(x)2 = 1 cos(x) = cos(x/2)2 −sin(x/2)2 sin(x + y) −sin(x −y) = 2 sin(y) cos(x) 0 < x =⇒ 2x cosh(ln(x)) = x2 + 1 tanh(x) = (exp(x) −exp(−x))/(exp(x) + exp(−x)) cos(x −y) + cos(x + y) = 2 cos(x) cos(y) cosh(x) −sinh(x) = exp(−x) cosh(x) −sinh(x) = sinh(x)+1 cosh(x) sin(2x) = 2 sin(x) cos(x) cos(2x) = 1 −2 sin(x)2 cosh(x −y) + cosh(x + y) = 2 cosh(x) cosh(y) tanh(x) = (exp(2x) −1)/(exp(2x) + 1) sin(x) = 2 sin(x/2) cos(x/2) cos(2x) = 2 cos(x)2 −1 cos(x)2 = 1 + cos(2x)/2 sinh(x) = 2 sinh(x/2) cosh(x/2) sinh(2x) = 2 sinh(x) cosh(x) sinh(x + y) = sinh(x) cosh(y) + cosh(x) sinh(y) cosh(x −y) = cosh(x) cosh(y) −sinh(x) sinh(y) cos(x + y) cos(x −y) = cos(x)2 −sin(y)2 sin(x + y) sin(y −x) = cos(x)2 −cos(y)2 |sinh(x/2)| =p(cosh(x) −1)/2 sin(x + y) sin(x −y) = sin(x)2 −sin(y)2 cosh(x)2 = 1 + cosh(2x)/2 cosh(2x) = 2 cosh(x)2 −1 cosh(2x) = cosh(x)2 + sinh(x)2 tanh(x) −tanh(y) = sinh(x −y)/(cosh(x) cosh(y)) tanh(x) + tanh(y) = sinh(x + y)/(cosh(x) cosh(y)) p1 + sinh(x)2 = cosh(x) sin(x)3 = (3 sin(x) −sin(3x))/4 sin(3x) = 3 sin(x) −4 sin(x)3 cosh(3x) = 4 cosh(x)3 −3 cosh(x) cosh(x)3 = (3 cosh(x) + cosh(3x))/4 sin(4x) = cos(x)(4 sin(x) −8 sin(x)3) cos(π + x) = −cos(x) sin(π −x) = sin(x) cos(π/3) = sin(π/6) cos(π/4) = sin(π/4) cos(π/6) = sin(π/3) cos(2π + x) = cos(x) sin(2π + x) = sin(x)|6 6 4 4 8 8 16 3 19 11 14 10 13 11 16 11 24 14 20 14 46 23 33 19 27 20 55 38 27 15 130 27 84 31 205 65 29 17 72 26 71 30 64 37 71 34 130 77 90 66 117 64 118 64 86 53 183 66 87 40 78 42 97 72 154 135 162 144 82 70 72 58 80 56 204 105 162 106 73 73 148 28 73 28 26 17 24 17 22 17 125 70 353 69|6 6 4 4 8 7 7 3 19 10 14 10 13 10 16 7 23 14 18 12 30 11 33 13 27 19 40 20 19 8 118 21 84 29 176 39 21 8 68 19 61 16 51 25 61 24 121 63 75 56 117 64 118 63 61 36 183 65 71 32 62 33 80 64 85 81 95 91 76 62 63 49 71 47 176 79 137 79 60 60 118 9 45 11 26 17 24 17 22 17 37 18 62 16|
|---|---|---|
Table 9: Examples of identities solved. Some of the 144 identities found by our model, in the order
they were first solved. For each identity, we provide the size and the depth, both the for first proof, and
for the minimal proof (i.e. the proof with the smaller number of steps) found during online training.
The model found proofs with over 350 steps, some exceeding a depth of 100. After additional proof
search, the model is often able to find shorter proofs. The proof of sin(2π + x) = sin(x) requires a
large number of steps, as the model can only use simple rules (e.g. the trigonometric rules provided
in Table 7), and it does not have access to the value of sin(2π) or sin(π).
-----
**C** **Proof Search Hyper-Parameters**
HTPS depends on many hyper-parameters: the decoding hyper-parameters of the policy model and
the search hyper-parameters. Selecting their optimal values would be difficult in practice, if not
impractical, for several reasons. First, the model is constantly evolving over time, and the optimal
parameters may evolve as well. For instance, if the model becomes too confident about its predictions,
we may want to increase the decoding temperature to ensure a large diversity of tactics. Second, even
for a fixed model, the ideal parameters may be goal-specific. If an input statement can only be proved
with very deep proofs, we will favor depth over breadth, and a small number of tactics per node. If the
proof is expected to be shallow and to use rare tactics, we will want to penalize the exploration in depth
and increase the number of tactics sampled per node. Finally, there are simply too many parameters
to tune and running each experiment is expensive. Thus, we do not set HTPS hyper-parameters to
a fixed value, but sample them from pre-defined ranges at the beginning of each proof.
The decoding parameters and the chosen distribution are the following:
- Number of samples: the number of tactics sampled from the policy model when a node is
expanded. Distribution: uniform on discrete values [8, 16, 32, 48].
- Temperature: temperature used for decoding. Distribution: uniform on range [0.8, 2.0].
- Length penalty: length penalty used for decoding. Distribution: uniform on range [0, 1.2].
Also, for the search parameters we have:
- Number of expansions: the search budget, i.e. the maximum number or nodes in the proof
graph before we stop the search. Distribution: log-uniform with range [1000, 10000].
- Depth penalty: an exponential value decay during the backup-phase, decaying with depth
to favor breadth or depth. Distribution: uniform on discrete values [0.8, 0.9, 0.95, 1].
- Exploration: the exploration constant c in the policy (PUCT or RT). Distribution: loguniform with range [0.01, 100].
When sampling proof search parameters during evaluation, we use the same distributions than at training time, with two differences: we fix the number of expansions to 5k in Lean and 10k in Metamath.
**D** **Metamath Lean versions**
To compare our models in the same setup while working on this project, we ran all our experiments
with a fixed version of Metamath and Lean. In particular, all experiments were run with the following
GitHub commits of set.mm, Lean, miniF2F, and Mathlib:
[• https://github.com/metamath/set.mm: 861bd3552636dcdb9cbc8df59d01b14520c72f82](https://github.com/metamath/set.mm)
[• https://github.com/leanprover/lean/: tag/v3.3.0](https://github.com/leanprover/lean/)
[• https://github.com/openai/miniF2F: 21723db70bbd030e034ed374db74cea4be1bf681](https://github.com/openai/miniF2F)
[• https://github.com/openai/miniF2F/tree/statement_curriculum_learning:](https://github.com/openai/miniF2F/tree/statement_curriculum_learning)
c9d827c871aff2ab0f5ec64a0d72e61111a7f072
[• https://github.com/leanprover-community/mathlib:](https://github.com/leanprover-community/mathlib)
9a8dcb9be408e7ae8af9f6832c08c021007f40ec
-----
**Time vs Depth minimization**
In Lean, powerful tactics like ring, linarith, or simp are helpful when it comes to proving
theorems, but the model might become over-reliant on these automations and never learn what the
underlying arguments of the proofs were. Since these tactics are usually slower than rewrite or
```
apply, we experimented with minimizing the total Lean CPU time of the proof. After 3 days of
```
training, the cumulative pass-rate is higher when minimizing this objective, with 50.4% problems
solved on miniF2F-valid, compared to 47.5% with the depth objective. Besides, we found that a
model trained to minimize the execution time will reduce the fraction of tactics that timeout by
40%. In Table 10 we provide a comparison of the tactic usage for two models trained with different
minimization objectives, and we show the average Lean proof execution time in Figure 12.
1100
1000
900
800
700
600
|Col1|Col2|
|---|---|
|||
|||
|||
||Time Depth|
25 50 75 100 125 150 175
Epoch
Figure 12: **Average CPU time of proofs found by our model using different minimization**
**objectives. When the model is trained to generate proofs that minimize the running time in Lean**
(red), the average proof time is significantly lower.
Depth Time
ring 21 20
norm_num 58 55
linarith 66 66
simp **62** 38
refl 3 **7**
exact 4 **17**
apply 11 **35**
rewrite 20 **65**
Table 10: Qualitative comparison of the frequency of specific Lean tactics when using different
**minimization objective. Tactics in the upper half are slow automated tactics, while tactics in the**
bottom half are faster and simpler.
-----
**F** **Example Lean proofs**
In this section, we show examples of proofs found by our model.
1 `theorem imo_1964_p1_2 (n : N) : ¬7|2[n]` + 1 :=
2 `begin`
3 `rw nat.dvd_iff_mod_eq_zero,`
4 `rewrite [nat.add_mod, nat.mod_eq_of_lt],`
5 `obviously,`
6 `apply nat.strong_induction_on n,`
7 `induction n,`
8 `{`
9 `intros n IH,`
10 `cases n,`
11 `norm_num,`
12 `cases n,`
13 `norm_num,`
14 `rw [nat.succ_eq_add_one, pow_succ],`
15 `rw [nat.succ_eq_add_one, pow_succ],`
16 `induction n,`
17 `norm_num,`
18 `rw [nat.succ_eq_add_one, pow_succ],`
19 `norm_num [nat.mul_mod, ←mul_assoc],`
20 `contrapose! IH,`
21 `refine ⟨n_n, nat.lt_succ_iff.mpr _, IH⟩,`
22 `exact nat.le_succ_of_le (nat.le_succ _),`
23 `},`
24 `exact n_ih,`
25 `end`
Figure 13: A proof of the imo_1964_p1_2 problem found by our model. The model shows that for any
value of n ∈ N, 2[n] + 1 is not divisible by 7, by showing that 2[n] mod 7 + 1 ̸= 0, and 2[n] mod 7 + 1 < 7. The
second part of the proof uses strong induction and the fact that 2[n] _≡_ 2[n][+3] mod 7. We provide a version of the
proof that was automatically cleaned by removing unnecessary tactics and tactic arguments.
1 `theorem imo_2001_p6`
2 `(a b c d : N)`
3 `(h0 : 0 < a ∧` `0 < b ∧` `0 < c ∧` `0 < d)`
4 `(h1 : d < c)`
5 `(h2 : c < b)`
6 `(h3 : b < a)`
7 `(h4 : a * c + b * d = (b + d + a - c) * (b + d - a + c)) :`
8 _¬ nat.prime (a * b + c * d) :=_
9 `begin`
10 `contrapose h4,`
11 `rw mul_comm,`
12 `simp [nat.prime, not_le_of_gt h0.1, not_forall, not_le_of_gt h3,`
13 `nat.mul_sub_right_distrib, nat.add_comm],`
14 `contrapose! h4,`
15 `contrapose! h4,`
16 `apply has_lt.lt.ne,`
17 `apply nat.lt_sub_right_of_add_lt,`
18 `nlinarith,`
19 `end`
Figure 14: A proof found by our model of another IMO problem in miniF2F. Although the proof is valid,
the statement is erroneous. The hypothesis h4 : b + d − _a + c actually represents max(b + d −_ _a, 0) + c. This_
is due to Lean’s nat type behaviour where (a : N) − (b : N) = (0 : N) if b ≥ _a. This makes the exercise easier_
than it should be, and the proof is no longer valid on the fixed statement.
-----
**A** **Proving environments**
In this section we present the three proving environments used in this paper briefly. For each
environment, we show a proof hypertree representation of a theorem, and give an example of
tokenized goal and tactic from the training set.
**A.1** **Metamath**
Metamath’s only rule is string substitution. Starting from a theorem to be proven, variables are
substituted until we reach axioms. In our setup, we consider a tactic to be the label of a theorem in
```
set.mm, along with the necessary substitutions. For instance, to show that 2 + 2 = 4, we can use
```
the rule eqtr4i which states that A = B ∧ _C = B ⇒_ _A = C with substitutions: A = (2 + 2),_
_B = (2 + (1 + 1)), and C = 4. We are then left with two subgoals to prove: (2 + 2) = (2 + (1 + 1))_
and 4 = (2 + (1 + 1)). The corresponding proof-tree can be found in Figure 4.
䎶¥ÃÛæÞÅ
HTWUL
䎶¥ÃÛæޥÃÛ¥ÂÛ¦¦ 䎶ÅÞ¥ÃÛ¥ÂÛ¦¦
RYHTL HTWUL
䎶¥ÄÛ¦ޥ¥ÃÛ¦Û¦ 䎶¥¥ÃÛ¦Û¦ޥÃÛ¥ÂÛ¦¦
RYHTL DGGDVVL
Figure 4: A visualization of the proof-tree for 2 + 2 = 4 in Metamath.
The simplicity of Metamath makes it a great test bed for our algorithms. However, its lack of
automation leads to larger proof sizes and its syntax and naming conventions make each step difficult
to interpret for neophytes. Similar to GPT-f[9], we implement a parser for Metamath in order to
automatically prove the syntactic correctness of statements. Moreover, we use this parser to allow
generating only substitutions that cannot be inferred from the goal. The model is conditioned on a
goal to prove, and is trained to output a sequence of the following format:
```
LABEL MANDATORY_SUBSTS <EOU> PREDICTABLE_SUBSTS
```
Below is a concrete example of tokenized goal along with its corresponding tactic. The applied rule
is ee10an[4]. Since the values of ph and th can be directly inferred from the goal, we do not need
generate them at training time, but we still use them during training to reduce overfitting.
```
<GOAL> |- ( A e. RR -> ( ( A - 2 ) + 2 ) = A ) </GOAL>
<TACTIC>
ee10an
<VAR> ps = A e. CC </VAR>
<VAR> ch = 2 e. CC </VAR>
<EOU>
<VAR> ph = A e. RR </VAR>
<VAR> th = ( ( A - 2 ) + 2 ) = A </VAR>
</TACTIC>
```
In order to speed-up decoding, we use a maximum decoding length of 512 tokens for Metamath
which covers over 99% of the human tactics in the supervised dataset.
[4https://us.metamath.org/mpeuni/ee10an.html](https://us.metamath.org/mpeuni/ee10an.html)
-----
**A.2** **Lean**
Lean is a full-fledged programming language and benefits from more powerful automation than Meta
**A.3** **Equations**
math, with tactics such as ring (able to prove goals using manipulations in semirings), norm_num
(able to prove numerical goals) or linarith (able to find contradictions in a set of inequalities).
Unlike Polu and Sutskever [9], our Lean API attempts to split tactic states into separate subgoals
when no metavariable is shared. More details about our API and an example proof-tree can be found
in Appendix D.4 and Figure 5.
¥Z[X¦¥P䈹[Z¦[ÛXZÛX
LQGXFWLRQN
䎶[ÛÁZÛÁ ¥X¢QP[ÛXZÛX¦[ÛX¢[hkEEZÛX¢[hkEE
H[DFWK UZQDWVXFFBOHBVXFFBLII
¥X¢QP[ÛXZÛX¦[ÛX¢[ZÛX¢[
H[DFWNBLK
Figure 5: A visualization of the proof-tree for the proof discussed in the introduction in Lean.
Similar to Metamath, we use a maximum decoding length of 128 tokens which covers over 99% of
the supervised human tactic dataset.
We developed the Equations environment as a simpler analogue to existing proving environments. Its
expressivity is restricted to manipulating mathematical expressions (e.g. equalities or inequalities)
with simple rules (e.g. A + B = B + A, or A < B ⇒−B < −A). This reduced expressivity makes
goals and tactics easy to understand, helping with interpretability and debugging: plotting the set of
goals explored during a Metamath proof search does not give a lot of insights on whether it is on
track to find a proof. In Section E, we give an in-depth presentation of this environment.
Unlike in Metamath or Lean, we do not have access to a training set of human annotated proofs
for this environment. Instead, we create a training set composed of randomly generated synthetic
theorems and their proofs (see Section E.5 for details), and manually create an out-of-domain set of
non-trivial mathematical identities for which we do not provide proofs. We refer to this evaluation
split as Identities, a set of 160 mathematical expressions. As synthetic theorems randomly generated
are much simpler and significantly differ from statements in the Identities split, we can evaluate the
ability of our model to generalize to complex and out of domains data. An example proof-tree in
Equations is shown in Figure 6.
|ÂßÃ|Col2|䎶IràÁ|
|---|---|---|
|Col1|������ ���|Col3|
|---|---|---|
||QRUPBQXP||
||||
䎶E]h¥r¦ÛI[r]ßÂÛÃI[r]
$&%'ۂ$% &'
䎶E]h¥r¦[] 䎶I[r]ßÃI[r]
FRV$ %$!ۂ$ %$
Âßà 䎶I[r]àÁ
Figure 6: A visualization of the proof-tree for cos(x) + e[x] _< 1 + 2e[x]_ **in Equations.**
Below is an example tokenized goal with its tactic. The goal to prove is ((−(−((4 × x4) + (x1 ×
_x4)))) <= (−(−((4 + x1) × x4)))), and is tokenized using the reverse Polish notation of the_
expression. The tactic factorizes the term in position 3, i.e. (4 × x4) + (x1 × x4):
-----
```
<GOAL> <= neg neg add mul + 4 x4 mul x1 x4 neg neg mul add + 4 x1 x4 <GOAL>
<TACTIC> ((A_*_C)_+_(B_*_C))|((A_+_B)_*_C) 3 <TACTIC>
```
Similar to Metamath and Lean, we limit the decoder size to 32 tokens to speed-up decoding.
**B** **Related works**
Early approaches focused on simpler logics, culminating in extremely efficient first-order provers
such as E [28] or Vampire [29]. However, these approaches are insufficient when it comes to theorems
written in modern proof assistants such as Isabelle [30], Coq [31], or Lean [4]. Recently, the rising
success of deep language models [32] and model-guided search methods [5] has spurred a renewed
interest in the problem of automated theorem proving.
**Neural theorem provers** DeepHOL [33] focuses on the HOL-Light environment [34]. Their model
relies on a classifier that can select among a restricted set of tactics and arguments, while we rely
on a seq2seq model that can generate arbitrary tactics. The suggested tactics are then used in a
breadth-first search. TacticToe [35] uses an MCTS without learned components, using ranking on
predefined features to guide the search. Machine learning has also been used to improve classical
provers by re-ranking clauses [36]. Finally, [37] uses a neural theorem generator to add data for
policy and value training in Holophrasm [12].
**Reasoning abilities of language models.** Impressive performance of large language models in one
or few shot learning [32], machine translation [22] or more recently code generation [38] spurred
interest into the reasoning capabilities of large transformers. These model perform quite well on
formal tasks such as expression simplification [39], solving differential equations [40], symbolic
regression [41, 42], or predicting complex properties of mathematical objects [43]. These studies
suggest that deep neural networks are well adapted to complex tasks, especially when coupled with a
formal system for verification.
**C** **Hypertree Proof Search**
**C.1** **Hypergraph and definitions**
We begin with some useful notations and concepts for our hypergraphs.
Formally, let G be a set of nodes, and T a set of tactics. A hypergraph is a tuple H = (G, r, T, U )
with G ⊂G the nodes, r ∈ _G the root, and T ⊂_ _G × T × P(G) the admissible tactics. An element_
of T is written (g, t, c) where g is the start goal, t is the applied tactic and c is the potentially empty
set of children that the tactic creates when applied to g in the proving environment.
A hypertree is a hypergraph without cycles, i.e, such that we cannot find a path g0, . . ., gℓ = g0 with
_ℓ> 0 and with gi+1 in the children of gi for all i’s._
Let S ⊂ _G be the set of solved nodes. A node g ∈_ _G \ U is solved if one of its tactic leads to no_
subgoals, or one of its tactics leads to only solved nodes. Formally: ∃(g, t, ∅) ∈ _T or ∃(g, t, c) ∈_ _T_
such that c ⊂ _S. We say that a tactic t is solving for g if all the children it leads to are solved._
Conversely, let U ⊂ _G be the set of invalid nodes. A node g ∈_ _G \ U is invalid if it has been_
expanded but has no tactics in the hypergraph, or all of its tactics have an invalid child. Formally:
_{(g, t, c) ∈_ _T_ _} = ∅_ or ∀(g, t, c) ∈ _T_, c ∩ _I ̸= ∅._
These recursive definitions naturally lead to algorithms MaintainSolved and MaintainStatus to
maintain sets S and I when elements are added to H.
A sub-hypertree HT of H is a connected hypertree rooted at some goal of H. Its leaves leaves(HT )
are its subgoals without children (either elements of U or S). The set of proofs of g in H, Proofs(g, H)
are all the hypertrees rooted at g that have all their leaves in S. Similarly, the expandable subtrees
of H rooted in g, Expandable(g, H) are the subtrees with at least one leaf in U . A tactic is said
to be expandable if it is part of an expandable subtree, this can be computed with a graph-search
```
ComputeExpandable.
```
-----
We can now reformulate the process of proof-search. Starting from a hypergraph that contains only
the root theorem r, we produce a sequence of expandable subtrees. The unexpanded leaves of these
subtrees are expanded in the hypergraph, then the new value estimates are backed-up. The hypergraph
grows until we use all our expansion budget, or we find a proof of r.
**C.2** **Algorithm**
**C.3** **Policies**
When a goal g is added to the hypergraph, its visit count N (g, t) and total action value W (g, t) are
initialized to zero. Its virtual visit count V C(g, t) are updated during proof search. Let C(g, t) =
_N_ (g, t) + V C(g, t) be the total counts. These values are used to define the value estimate with a
constant first play urgency [44]:
max(1max(1,N,C((g,tg,t)))) if t is solving for g
max(10,C.5(g,t)) if N (g, t) = 0
_WC( (g,tg,t))_ otherwise.
_Q(g, t) =_
Notice that the value of solving tactics decreases with virtual counts, allowing exploration of already
solved subtrees.
Given the visit count N, the total counts C, value estimates Q, the model prior Pθ and an exploration
constant c. The policy used in Alpha-Zero is PUCT:
_N_ (g, ·)
1 + C(g, t)
pP
PUCT(g) = arg max
_t∈A_
_Q(g, t) + c_ _Pθ(t_ _g)_
_·_ _|_ _·_
Notice that more weight is given to the value estimate Q as N grows which decreases the second term.
Another work [20] obtains good performances using as search policy the greedy policy regularized
by the prior policy.
_C(g,_ )
_πRP (g) = arg max_ _Q(g)[T]_ _y_ _c_ _·_ _KL(πθ, y)_ with the policy simplex at g
_y∈S_ " _−_ _·_ pP(C(g, ·) + 1) # _S_
Again, note that this policy balances the prior with the value estimates as the count grows, but doesP
not account for individual disparities of visits of each tactics. In our experiments, we obtained better
performances with πRP on Equations, and better performances with PUCT on Metamath and Lean.
**C.4** **Implementation details**
**Simulation** During simulation, we only consider subtrees that could become proofs once expanded.
This means we cannot consider any invalid nodes or consider subgraphs containing cycles. If we
encounter a tactic that creates a cycle during a simulation, this tactic is removed from the hypergraph,
virtual counts from this simulation are removed and we restart the search from the root. This may
remove some valid proofs, but does not require a backup through the entire partial subtree which
would lead to underestimating the value of all ancestors. Removing tactics from the hypergraph
also invalidates computations of expandable tactics. This is dealt with by periodically calling
```
MaintainExpandable if no valid simulation can be found. A full description of the algorithm that
```
finds one expandable subtree is available in Algorithm 1. Selection of nodes to expand requires
finding expandable subtrees until a maximum number of simulations is reached, or no expandable
tactic exists at the root. In addition to W, N and vT, we maintain a virtual loss counter V C following
Chaslot et al. [21], Silver et al. [5], so that successive simulations select different leaf subsets. This
counter is initialized to zero for all nodes.
**Expansion** The policy model produces tactics for an unexpanded node g. These tactics are evaluated
in the proving environments. Valid tactics are filtered to keep a unique tactic (e.g. the fastest in
Lean) among those leading to the same set of children. Finally, we add the filtered tactics and
their children to the hypergraph. If no tactics are valid, the node is marked as invalid and we call
```
MaintainInvalid. If a tactic solves g, the node is marked as solved and we call MaintainSolved.
```
-----
**Algorithm 1 Finding an expandable subtree**
**Input: A hypergraph H and its root**
**Output: A partial proof tree with unexpanded leaves**
_:start_
T: hypertree(root)
to_explore: list = [root]
**while to_explore do**
g = to_explore.pop()
**if g is internal then**
**if expandable(g) ̸= ∅** **then**
tactic = arg maxt π expandable(g)[π][(][g, t][)]
**else**
continue { expandable nodes are in a sibling branch }
**end if**
**if tactic leads to cycle then**
kill tactic
remove virtual counts for elements of T
goto start
**end if**
_V C(g, tactic) += 1_
T.add(g, tactic, children(g, tactic))
to_explore += [children(g, tactic)]
**end if**
**end while**
**Algorithm 2 Back-propagation of total action value W**
**Input: Partial proof-tree T and value estimates cθ(g) of its leaves.**
to_backup = []
**for g in leaves of T do**
_vT (g) = cθ(g)_
to_backup.append((parentT (g), parent_tacticT (g)))
**end for**
**while to_backup do**
_g, t = to_backup.pop()_
to_update = _c∈childrenT (g)_ _[v][T][ (][c][)]_
_W_ (g, t) += to_update
_N_ (g, t) += 1[Q]
_V C(g, t) −= 1_
_vT (g) = to_update_
_g.is_prop = true_
**if all c.is_prop for c in siblingsT (g) then**
to_backup.append((parentT (g), parent_tacticT (g)))
**end if**
**end while**
-----
**Backup** The backup follows topological order from the leaves of a simulated partial proof-tree T,
updates W and N, and removes the added virtual count. The algorithm is described in Algorithm 2
**C.5** **Comparison with other search algorithms**
**Best First Search** Several best-first search variations have been suggested for theorem proving.
Proof number search (PNS) [17] is a best-first search algorithm that maintains the minimum number
of expansion required to prove or disprove a node. Recent extensions have been proposed, including
using a model for estimating the remaining proof / disproof number of a newly expanded node [18].
Similarly, Polu et al. [11] prioritize estimated proof-size in their best-first search objective.
Our node selection heuristic differs slightly: our critic gives the probability of solving a leaf, but
our updates to W (g) sums these log-probabilities and thus includes the proof-number information.
Moreover, the arities at AND and OR nodes in our cases are highly unbalanced (up to 32 tactics at
OR nodes, but very few children per tactic), selecting all children at AND nodes is computationally
feasible (which is not the case in games where this would lead to an exponential growth of states to
expand). Thus, we depart from the standard best-first search by expanding full candidate proof-trees
at once.
**Monte Carlo Tree Search [15].** MCTS has been famously used as part of AlphaZero [19] to obtain
great performances on two player games. This two player set-up can be mapped to theorem-proving
by assigning one player to choosing the best tactics while the other player picks the most difficult goal
to solve (a method explored in Holophrasm [12]). However, since we need to provide a proof of the
root theorem, we need to ensure that we can solve all goals that a tactic leads to. This set-up has been
studied for two player games when attempting to compute the game-theoretical value of positions.
Using MCTS in this set-up is suboptimal [45], ignoring unlikely but critical moves from the opponent
(in our case, a subgoal that looks easy but is impossible to solve). We decided to exploit the highly
asymmetrical arities of our two players (most tactics lead to one or two goals) which makes simulating
partial proof-trees computationally feasible. Thus, the values we back-propagate always take into
account all possible moves from the opponent, while only requiring a few expansions per simulation.
**Polu and Sutskever [9]** This best-first search expands goals one at a time according to a priorityqueue of either a value model or the cumulative log-prior from the language model. Since the priority
is equal among siblings but strictly decreasing with depth, this means siblings will always be expanded
together. However, nothing prevents the algorithm from jumping from one potential proof-tree to
another, and potentially favoring breadth over depth. In comparison, depth does not appear in the
value estimate we compute, but rather the remaining number of nodes to solve a particular proof-tree.
Moreover, our algorithm leads to value estimates that can be used to train our critic, which performs
better than 0-1 estimates provided by best-first search (c.f. Section 5.2.2).
**D** **Training details**
**D.1** **Full training pipeline**
In order to bootstrap our online learning procedure we require a policy model Pθ that outputs coherent
tactics. While the critic is left untrained, the policy model is fine-tuned on a pretrained transformer
using a supervised dataset specific to the target environment. The full training pipeline can be
summarized as follows:
- Pretraining of the encoder-decoder model on a large unsupervised corpus (c.f. Section D.2).
- Fine-tuning of the policy model on supervised datasets detailed in (c.f. Section 4.1).
- Online training of both the policy and critic models on data extracted from proof search
(illustrated in Figure 7).
**D.2** **Model architecture and training**
**Model architecture.** Our transformer architecture uses a 12-layer encoder and a 6-layer decoder in
all experiments. We use an embedding dimension of 1600 in the encoder and 1024 in the decoder
-----
|3URYHU 'LVWULEXWHG 7UDLQVDPSOHV 3URYHU WUDLQHUV 3URYHU 6WDWLVWLFV 6WDWHPHQWV &RQWUROOHU|Col2|3URYHU|
|---|---|---|
||3URYHU||
0RGHOZHLJKWV
Figure 7: An overview of our online training architecture. The controller sends statements to asynchronous
HTPS provers and gathers training and proving statistics. The provers send training samples to the distributed
trainers and periodically synchronize their copy of the models.
for both Metamath and Lean. For Equations, where we expect the model to require less decoding
capacity, the decoding dimension is lowered to 512. We found that reducing the decoder capacity
increases the decoding speed without impacting the performance, as previously observed by Kasai
et al. [46] in the context of machine translation. This observation led us to use “encoder-decoder”
architecture and not a “decoder only” model (as in Polu and Sutskever [9]), in order to store most of
the model capacity in the encoder and to sample tactics efficiently with a small decoder. Our models
are composed of 440M parameters for Equations and 600M parameters for Metamath and Lean (for
comparison, GPT-f uses a 770M parameter, 36-layer model).
**Model pretraining.** Model pretraining can be critical in low-resource scenarios where the amount
of supervised data is limited [47, 48]. Thus, we do not immediately fine-tune our model but first
pretrain it on a large dataset to reduce overfitting and improve generalization. In particular, we pretrain
our model with a masked seq2seq objective (MASS [49]) on the LaTeX source code of papers from
the mathematical section of arXiv. After tokenization, our filtered arXiv dataset contains around 6
billion tokens for 40GB of data. Similar to Polu and Sutskever [9], we observed large performance
gains using pretraining. However, we found that arXiv alone provides a better pretraining than when
it is combined with other sources of data (e.g. GitHub, Math StackExchange, or CommonCrawl).
**Supervised fine-tuning.** During fine-tuning, we train our models with the Adam optimizer [50]
and an inverse square-root learning rate scheduler [23]. We use a dropout of 0.2 [51] to reduce the
overfitting of our models. We also apply layer-dropout [52] with a dropout rate of 0.1 to further
reduce overfitting and stabilize training. We implement our models in PyTorch [53] and use float16
operations to speed up training and to reduce the memory usage of our models.
**Online training.** During online training, we alternate between the goal-tactic objective, used during
fine-tuning on the supervised dataset, and the goal-tactic and goal-critic objectives on data generated
by the provers. As the model and the data generated by the provers are constantly evolving, we do
not want the learning rate to decrease to 0, and we fix it to 3 × 10[−][5] after the warm-up phase. Unless
mentioned otherwise (e.g. for large experiments), we run all Metamath and Equations experiments
with 16 trainers and 32 provers for a total of 48 V100 GPUs.
**D.3** **Proof search hyper-parameters**
HTPS depends on many hyper-parameters: the decoding hyper-parameters of the policy model and
the search hyper-parameters. Selecting their optimal values would be difficult in practice, if not
impractical, for several reasons. First, the model is constantly evolving over time, and the optimal
parameters may evolve as well. For instance, if the model becomes too confident about its predictions,
we may want to increase the decoding temperature to ensure a large diversity of tactics. Second, even
for a fixed model, the ideal parameters may be goal-specific. If an input statement can only be proved
with deep proofs, we should favor depth over breadth, and a small number of tactics per node. If the
proof is expected to be shallow and to use rare tactics, we will want to penalize the exploration in
depth and increase the number of tactics sampled per node. Finally, there are too many parameters
to tune and running each experiment is expensive. Thus, we do not set HTPS hyper-parameters
to a fixed value, but sample them from pre-defined ranges at the beginning of each proof. These
pre-defined ranges were set a priori and were not tuned over the course of the experiments.
-----
The decoding parameters and the chosen distribution are the following:
- Number of samples: the number of tactics sampled from the policy model when a node is
expanded. Distribution: uniform on discrete values [8, 16, 32, 48].
- Temperature: sampling temperature used during decoding. Distribution: uniform on range
[0.8, 2.0].
- Length penalty: penalty on the length of generated sequence. Distribution: uniform on
range [0, 1.2].
For the search parameters we have:
- Number of expansions: the search budget, i.e. the maximum number or nodes in the proof
graph before we stop the search. Distribution: log-uniform with range [1000, 10000].
- Depth penalty: an exponential value decay during the backup-phase, decaying with depth
to favor breadth or depth. Distribution: uniform on discrete values [0.8, 0.9, 0.95, 1].
- Exploration: the exploration constant c in the policy (PUCT or RT). Distribution: loguniform with range [0.01, 100].
When sampling proof search parameters during evaluation, we use the same distributions than at training time, with two differences: we fix the number of expansions to 5k in Lean and 10k in Metamath.
**D.4** **Details on our Lean API**
States are more complex in Lean than in Metamath: metavariables can appear which are holes in
the proof to be filled later. Subgoals sharing a metavariable cannot be solved in isolation. This is
addressed in Polu and Sutskever [9] by using as input the entire tactic state. Instead, we inspect tactic
states to detect dependencies between subgoals, and split the tactic state into different subgoals where
possible in order to maximize state re-use and parallelization in the proof search algorithm. We only
ever split the tactic state into contiguous lists of subgoals to make exporting the final proof easier.
Lean’s kernel type checker has to be called after each tactic application as tactics sometimes generate
incorrect proofs and rely on the kernel for correctness. For every goal in the previous tactic state, we
type check the proof term inserted by the tactic. Since the kernel does not support metavariables, we
replace every metavariable by a lambda abstraction.
**D.5** **Metamath Lean versions**
To compare our models in the same setup while working on this project, we ran all our experiments
with a fixed version of Metamath and Lean. In particular, all experiments were run with the following
GitHub commits of set.mm, Lean, miniF2F, and Mathlib:
[• https://github.com/metamath/set.mm: 861bd3552636dcdb9cbc8df59d01b14520c72f82](https://github.com/metamath/set.mm)
[• https://github.com/leanprover/lean/: tag/v3.3.0](https://github.com/leanprover/lean/)
[• https://github.com/openai/miniF2F: 21723db70bbd030e034ed374db74cea4be1bf681](https://github.com/openai/miniF2F)
[• https://github.com/openai/miniF2F/tree/statement_curriculum_learning:](https://github.com/openai/miniF2F/tree/statement_curriculum_learning)
c9d827c871aff2ab0f5ec64a0d72e61111a7f072
[• https://github.com/leanprover-community/mathlib:](https://github.com/leanprover-community/mathlib)
9a8dcb9be408e7ae8af9f6832c08c021007f40ec
**E** **Equations environment**
In this section, we give additional details about the environment Equations. First, we described its
main elements, theorems (resp. tactics) in Section E.1 (resp. E.2). Then, we describe a proof in this
environment in Section E.3, how numerical expressions are evaluated and what vulnerabilities this
can lead to in Section E.4. Finally, we describe our random theorem generator in Section E.5 and
how theorems and their proofs can be translated to Lean in Section E.6.
-----
**E.1** **Theorems**
Each theorem in Equations consists in proving mathematical expressions composed of functions of
real numbers, by manipulating and rewriting expressions. A theorem to prove can be an inequality or
an equality, conditioned to a set of (potentially empty) initial assumptions. For instance:
_x[2]_ + 1 ≥ 2x or _x > y =⇒_ _e[y][−][x]_ _−_ 1 < 0
In the first example, the goal does not have any hypothesis and consists in proving that for every
_x ∈_ R, x[2] + 1 ≥ 2x. In the second example, the goal consists in proving that e[y][−][x] _−_ 1 < 0 for every
_x, y ∈_ R that satisfy the hypothesis x > y.
Equalities and inequalities are represented as trees with the three following elements:
- Leaves: represent a variable, an integer, or a constant (e.g. π).
- Internal nodes: represent unary or binary operators, e.g. +, −, /, ×, exp, ln, cos, sin,
sinh, cosh, etc. More advanced operators such as gcd, lcm, mod (the rest of an euclidean
division) are possible when dealing with integers.
- A root node: represents a comparison operator, e.g. =, ≤, <, ≥, >, ̸=. More advanced
comparison operators such as | (divides) are possible when dealing with integers.
**E.2** **Tactics**
Equations allows to deduce equalities and inequalities from simpler subgoals, using elementary
rules (i.e. tactics). The environment contains two types of rules: transformations, which consist in
matching a pattern in an expression and replacing it by an equivalent expression; and assertions,
which consist in asserting that an expression is true. Both types of rules can have assumptions.
**Transformation rules** A transformation rule (TRule) consists in a set of two expressions, L and R,
equivalent under a set of assumptions S. For instance TRule(A + B, B + A) is the transformation
rule stating the commutativity of the addition, namely that A + B = B + A for any expressions A
and B. Note that in this case, the set of assumption S is empty as the equality always holds. Another
example is TRule(√A[2], A, [A ≥ 0]) that states that _√A[2]_ = A provided that A ≥ 0.
Applying such a rule to an existing equation works as follows:
- matching a term T in the expression that has the pattern of L
- identifying the matching variables and substituting them in R
- replacing T by R in the input equation
- return the resulting equation with the set of hypotheses required for the transformation
For instance, if the input goal is:
(e[x])[2] = e[x]
p
Applying TRule(
_A[2], A, [A ≥_ 0]) on this expression will result in two subgoals:
_A[2]_ has been replaced by A: e[x] = e[x]
- The same expression, where
- The hypothesis required for the assumption to hold: e[x] _≥_ 0
More generally, a transformation rule will result in N + 1 subgoals, where N is the number of
hypotheses required by the rule.
**Assertion rules** An assertion rule (ARule) expresses the fact that an expression is true, provided
some hypotheses. It is represented by a main expression, and a set of assumptions sufficient for
the main expression to hold. For instance, the rule ARule(A ≤ _C, [A ≤_ _B, B ≤_ _C]) states the_
transitivity of the partial order ≤, i.e. A ≤ _C provided that there exists an expression B such that_
_A ≤_ _B and B ≤_ _C._
-----
Assertion rules do not always have hypotheses, for instance the reflexivity rule ARule(A = A), or
the rule ARule(e[A] _> 0) stating that e[A]_ is positive, for any real value A. Note that the two subgoals
generated in the previous paragraph (e[x] = e[x] and e[x] _> 0) can be respectively solved by these two_
assertion rules (i.e. by matching A = e[x] and A = x).
Unlike transformation rules that always result in at least one subgoal (the initial expression on which
we applied the transformation), assertion rules will only generate N subgoals, where N is the number
of hypotheses. As a result, being able to apply an assertion rule without hypotheses to an expression
is enough to close (e.g. solve) the goal. Assertion rules are in fact very similar to rules in Metamath.
In Table 6, we provide the number of Equations rules in different categories. Some examples of
transformation and assertion rules are given in Table 7.
Table 6: Number of Equations rules in each category.
Rule type Basic Exponential Trigonometry Hyperbolic All
Transformation 74 18 9 8 109
Assertion 90 11 9 0 110
Total 171 29 18 11 219
Table 7: Trigonometric rules accessible by the model. The model only has access to these elementary
rules when proving statements from Identities. In particular, it cannot use more involved theorems
such as cos[2](x) + sin[2](x) = 1.
Transformation rules Assertion rules
sin(0) = 0 _| cos(A)| ≤_ 1
cos(0) = 1sin(cos([π]2[π]2[) = 1][) = 0] _||A sin( sin( = BAA)) =| ≤| ≤|⇒1Asin(|_ _A) = sin(B)_
sin(−A) = − sin(A) _A = B =⇒_ cos(A) = cos(B)
cos(−A) = cos(A) sin(A) ̸= sin(B) =⇒ _A ̸= B_
cos(A) = 0 = tan(A) = cos([sin(][A]A[)]) cos(A) = cos(B) = _A_ = B
_̸_ _⇒_ _̸_ _⇒_ _̸_
sin(A + B) = sin(B) cos(A) + sin(A) cos(B) _A = B, cos(A) ̸= 0 =⇒_ tan(A) = tan(B)
cos(A + B) = cos(A) cos(B) − sin(A) sin(B) tan(A) ̸= tan(B), cos(A) cos(B) ̸= 0 =⇒ _A ̸= B_
**E.3** **Proving a statement with Equations**
In order to prove a theorem with Equations, the user (or automated prover) has to apply tactics on the
current expression. A tactic can correspond either to a transformation rule, or to an assertion rule.
For transformation rules, the model needs to provide:
- the rule (using a token identifier)
- the direction in which the rule is applied (a Boolean symbol, for forward or backward)
- an integer that represents the position where the rule is applied
- an optional list of variables to specify (c.f. paragraph below)
The direction of the rule indicates whether we want to transform L by R or R by L (e.g. replace A
by _√A[2], or the opposite). The position where the rule is applied is given by the prefix decomposition_
of the input expression. For instance, the prefix notation of (x + y) + 1 is given by + + x y 1.
Applying the commutativity rule A + B = B + A to the expression in position 0 will result in
1 + (x + y). Applying it in position 1 will result in (y + x) + 1, since the rule was applied to (x + y).
Note that for the commutativity rule, the direction in which we apply the rule does not matter. The
list of variables to specify is required when variables in the target patterns are absent from the source
pattern. For instance, applying the transformation rule TRule(A,A+B-B) in the forward direction
will require to provide the value of B.
-----
For assertion rules, the format is simpler. We no longer need to specify a direction or a position (the
position is always 0 as the assertion statement must match the expression to prove). We just need to
provide:
- the rule (using a token identifier)
- an optional list of variables to specify
In this case, the list of variables to specify corresponds to variables that appear in hypotheses and
cannot be inferred from the main expression. For instance, to apply the assertion rule A ≤ _B, B ≤_
_C =⇒_ _A ≤_ _C, we need to specify the value of B. We will then be left with two subgoals: A ≤_ _B_
and B ≤ _C._
Proving a statement in Equations requires to recursively apply tactics to unproved subgoals, until we
are left with no subgoals to prove.
An example of proof-tree in Equations is shown in Figure 6. Figure 8 shows an example proof of
the statement (x − **y) −** (x + y) + 2y = 0 using rules from the environment. Although simple, this
statement requires 22 proof steps and highlights the depth required to prove complex mathematical
identities when using elementary proof steps.
Statement to prove Rule used
(x − _y) −_ (x + y) + 2y = 0 _A −_ _B = A + (−B)_
(x − _y) + (−(x + y)) + 2y = 0_ _−_ (A + B) = (−A) + (−B)
(x − _y) + ((−x) + (−y)) + 2y = 0_ _A + (B + C) = A + B + C_
(x − _y) + (−x) + (−y) + 2y = 0_ _A + (−B) = A −_ _B_
(x − _y) + (−x) −_ _y + 2y = 0_ _A + (−B) = A −_ _B_
(x − _y) −_ _x −_ _y + 2y = 0_ int(a + b) = int(a) + int(b)
(x − _y) −_ _x −_ _y + (1 + 1) × y = 0_ _A × B = B × A_
(x − _y) −_ _x −_ _y + y × (1 + 1) = 0_ _A × (B + C) = A × B + A × C_
(x − _y) −_ _x −_ _y + y × 1 + y × 1 = 0_ _A × 1 = A_
(x − _y) −_ _x −_ _y + y + y × 1 = 0_ _A −_ _B = A + (−B)_
(x − _y) −_ _x + (−y) + y + y × 1 = 0_ _A + B = B + A_
(x − _y) −_ _x + y + (−y) + y × 1 = 0_ _A + (−B) = A −_ _B_
(x − _y) −_ _x + y −_ _y + y × 1 = 0_ _A −_ _A = 0_
(x − _y) −_ _x + 0 + y × 1 = 0_ _A + 0 = A_
(x − _y) −_ _x + y × 1 = 0_ _A −_ _B = A + (−B)_
_x + (−y) −_ _x + y × 1 = 0_ _A + B = B + A_
(−y) + x − _x + y × 1 = 0_ _A −_ _A = 0_
(−y) + 0 + y × 1 = 0 _A + 0 = A_
(−y) + y × 1 = 0 _A + B = B + A_
_y × 1 + (−y) = 0_ _A + (−B) = A −_ _B_
_y × 1 −_ _y = 0_ _A × 1 = A_
_y −_ _y = 0_ _A −_ _A = 0_
0 = 0
Figure 8: **Proof of the identity (x −** **y) −** (x + y) + 2y = 0 with elementary rules. In this
example we provide at each step the current goal and the rule that is used to obtain the next goal.
This example shows how difficult it can be to prove even simple statements in Equations as they may
require a significant number of proof steps (22 in that case). This explains that proving more involved
statements from Identities such as cosh(3x) = 4 cosh(x)[3] _−_ 3 cosh(x) can require to generate very
large proof trees.
-----
**E.4** **True expressions and numerical evaluation**
Some theorems are trivial, either because their statements match the pattern of an assertion rule that
has no assumptions (e.g. x[2] _≥_ 0 or e[y][−][x] = 0̸ ), or because they do not contain any variable and an
exact numerical evaluation can attest that they are true (e.g (−1)/2 < 6 or 1 − 7/4 = −6/8).
To prevent the model from wasting budget in “uninteresting” branches, we automatically discard
generated subgoals that can be trivially verified. However, we only perform numerical verification of
expressions without variables when they exclusively involve rational numbers. For instance, we will
automatically close subgoals such as 5 < (−3)[2] or [1]2 _[>][ 1]4_ [, but not][ e][1][ < e][2][ or][ cos(3)][ ̸][= 0][. To prove]
that e[1] _< e[2]_ the model will need to use, for instance, an assertion rule such as A < B =⇒ _e[A]_ _< e[B]_
(1 < 2 will then be closed automatically).
In early implementations of the Equations environment, we found that the model was able to leverage
vulnerabilities in the environment to reach a 100% accuracy and to prove any statement. These
issues where coming from numerical approximations that were initially allowed during the numerical
verification of constant expressions (c.f. Section E.4). To prevent these vulnerabilities, we restricted
the numerical verification to rational expressions, in order to have an exact numerical evaluation and
to avoid errors due to approximations. We give two examples of vulnerabilities found by the model
when expressions were verified with an approximate numerical evaluation.
In Figure 9, the model manages to prove that 2 = 3 by using the injectivity of the exponential
function, and the fact that for NumPy, exp(− exp(exp(2))) = exp(− exp(exp(3))). Evaluating
the left and the right-hand side both numerically evaluate to 0.0, and the environment incorrectly
considered the expression to be valid.
In Figure 10, the model manages to prove that 0 ̸= 0 by first proving that cos(π/2) ̸= 0, and combining this result with the fact that cos(π/2) = 0. The imprecision came from the NumPy approximation
of cos(π/2) in 6.123 × 10[−][17], and in particular the fact that (((cos(π/2)[0][.][5])[0][.][5])[0][.][5]) ≈ 9.4 × 10[−][3],
which was considered large enough by our threshold to be considered non-zero. By using this
approximation, and the assertion rule _√A ̸= 0 =⇒_ _A ̸= 0, the model was able to conclude that_
(((cos(π/2)[0][.][5])[0][.][5])[0][.][5]) ̸= 0 =⇒ cos(π/2) ̸= 0 =⇒ 0 ̸= 0.
2 = 3 Statement to prove
_⇐⇒_ _e[2]_ = e[3] Rule: A = B ⇐⇒ _e[A]_ = e[B],
_⇐⇒_ _e[e][2]_ = e[e][3] Rule: A = B ⇐⇒ _e[A]_ = e[B],
_⇐⇒−e[e][2]_ = −e[e][3] Rule: A = B ⇐⇒−A = −B,
_⇐⇒_ _e[−][e][e][2]_ = e[−][e][e][3] Rule: A = B ⇐⇒ _e[A]_ = e[B],
_⇐⇒_ 0 = 0 Numerical evaluation
Figure 9: False “proof” of 2 = 3 found by the model when allowing numerical approximation
**to verify constant expressions. The model noticed that exp(−e[e][2]** ) = exp(−e[e][3] ) is considered true
by NumPy (as the left and the right hand side are both approximated to 0.0) to conclude that 2 = 3
using the injectivity of the exponential function.
**E.5** **Random theorem generator**
While Metamath and Lean come with a collection of annotated theorems that can be used for training,
Equations does not have an equivalent of manually proved statements. Instead, we generate a
supervised training set of theorems to pretrain the model before we start the online training. We
propose two simple generation procedures: a random walk, and a graph generation approach.
**Random walk generation** The random walk is the simplest way to generate a theorem. We start
from an initial expression A0 and a set of initial hypotheses, both randomly generated following the
method of Lample and Charton [40]. Then, we randomly apply an admissible transformation rule on
-----
0.5
0 = 0 cos _[π]_ cos _[π]_ = 0
0.5[][0][.][5] ̸ _⇐⇒_ 2 _[̸][= 0][ ⇐]⇒_ 2 _̸_ 0.5[][...][][0][.][5]
= 0 _. . ._ cos _[π]_ = 0
_̸_ _⇐⇒_ _⇐⇒_ 2 _̸_
cos _[π]_
2
Figure 10: False “proof” that 0 ̸= 0 found by the model when allowing numerical approx**imation to verify constant expressions. Since cos(** _[π]2_ [)][ evaluates to][ 6][.][123][ ×][ 10][−][17][ in NumPy]
(and not exactly to 0), the model found that for any tolerance threshold applying the assertion rule
_√A ̸= 0 =⇒_ _A ̸= 0 enough times lead to an expression where the left-hand side evaluates numeri-_
cally to a strictly positive value. In particular, (((cos( _[π]2_ [)][2][−][3] [)][ ≈] [9][.][4][ ×][ 10][−][3][, which was considered]
large enough by our threshold to be considered non-zero. After that, any expressions A and B can be
shown to be equal using the assertion rule (A × C = B × C ∧ _C ̸= 0) =⇒_ _A = B where C is_
chosen to be 0 since 0 ̸= 0.
_A0 to get an equivalent expression A1. The process is repeated, to get a sequence A0, A1, . . ., AN_
of equivalent expressions. The final theorem consists in proving that A0 = AN, and the proof
corresponds to the sequence of rules sequentially applied. To increase the diversity of generations,
and to avoid sampling only rules without or with simple assumptions, we add a bias in the random
sampling of rules to over-sample the underrepresented ones.
**Graph generation** Because of the simplicity of the random walk approach, the generated theorems
tend to be easy to prove, and the model quickly reaches a perfect accuracy on the generated theorems.
Moreover, proofs generated by the random walk are only composed of transformation rules. To
generate a more diverse set of theorems, we also use a graph generation procedure, that creates a large
acyclic graph of theorems, where each node is connected to its children by a rule in the environment.
To create such a graph, we proceed as follows. We first generate a set of initial hypotheses, and
initialize the graph with a node for each hypothesis. We then randomly apply a transformation or
assertion rule on nodes already in the graph.
For instance, if A ≤ _B and B ≤_ _C are two nodes in the graph, then we can add the node A ≤_ _C_
using the assertion rule A ≤ _B ∧_ _B ≤_ _C =⇒_ _A ≤_ _C. If x = y × (z −_ 1) is a node in the graph,
we can use the transformation rule B ̸= 0 =⇒ _A/B = C ⇐⇒_ _A = B × C to add the node_
_x/y = z −_ 1, provided that the node y ̸= 0 is also in the graph. Required hypotheses that are trivially
verifiable (e.g. 2 > 0 or e[−][x] _> 0) are automatically added to the graph._
**E.6** **Translating Equations theorems to Lean**
**Exporting theorems to Lean.** To enrich the existing Lean supervised dataset with synthetic data,
we built a translator from Equations to Lean. Although Equations statements are easy to translate,
proofs can only be translated if they involve rules that also exist in Lean. Since Equations is a modular
environment where rules can be specified by the user, we created a collection of Equations rules from
existing Mathlib statements. Synthetic theorems can then be generated using the random walk or
random graph approaches described in Section E.5, and converted into Lean to augment the existing
supervised dataset. Examples of randomly generated Lean proofs are provided in Figure 11.
-----
1 `theorem SYNTHETIC_0`
2 `(x1 x3 x4 : R) :`
3 `((0:R) ≤` `((real.cos (real.cos ((-6:R) / ((x1 - x4) / x3)))) / (2:R))) :=`
4 `begin`
5 `apply norm_num.nonneg_pos,`
6 `apply half_pos,`
7 `apply real.cos_pos_of_le_one,`
8 `apply real.abs_cos_le_one,`
9 `end`
10
11 `theorem SYNTHETIC_1`
12 `(x1 x4 : R)`
13 `(h0 : ((x4 * (real.exp x1)) < 10)) :`
14 `((-((abs ((x4 * (real.exp x1)) - 10)) / 2)) < ((abs (10 - (x4 * (real.exp x1)))) / 2)) :=`
15 `begin`
16 `have h1 : ((0:R) < ((abs ((x4 * (real.exp x1)) - 10)) / 2)),`
17 `apply half_pos,`
18 `apply abs_pos_of_neg,`
19 `apply sub_neg_of_lt h0,`
20 `apply norm_num.lt_neg_pos _ _ h1,`
21 `rw ←` `abs_sub_comm,`
22 `apply half_pos,`
23 `apply abs_pos_of_neg,`
24 `apply sub_neg_of_lt h0,`
25 `end`
Figure 11: Example of a randomly generated theorems in Lean. The theorems were initially
generated in the Equations environment using rules from the Mathlib library, and converted to Lean.
**Importing rules from Mathlib.** To allow interfacing Equations and Lean, we automatically parsed
Mathlib statements from the Lean library, and extracted theorems with a statement compatible with
the Equations environment. Compatible theorems are converted into Equations transformation or
assertion rules. Overall, we converted 1702 theorems from the Lean Library into our Equations
environment. Details about the number of converted theorems are provided in Table 8.
Table 8: Number of Equations rules converted from Lean. The converted Lean theorems can be
used to generate synthetic theorems within the Equations environment. The generated theorems can
then in turn be converted back to Lean, along with their proofs. Some theorems are generic and can
be applied to different types of variables (e.g. add_comm), and will appear in different categories.
Overall, we automatically converted 1702 different Lean rules in our Equations environment.
Rule type Natural numbers Integers Real numbers
Transformation 304 452 799
Assertion 314 292 407
**Total** 618 744 1206
**E.7** **Examples of identities solved by the model on Equations**
In Table 9, we give some examples of identities solved by the model. For each statement, we indicate
the proof size and the proof depth, for the first proof found by the model, and for the optimal proof.
We observe that the first proofs are sometimes very large, with more than 100 nodes, and that the
model later manages to find shorter proofs as it improves.
-----
Table 9: Examples of identities solved. Some of the 144 identities found by our model, in the order
they were first solved. For each identity, we provide the size and the depth, both the for first proof, and
for the minimal proof (i.e. the proof with the smaller number of steps) found during online training.
The model found proofs with over 350 steps, some exceeding a depth of 100. After additional proof
search, the model is often able to find shorter proofs. The proof of sin(2π + x) = sin(x) requires a
large number of steps, as the model can only use simple rules (e.g. the trigonometric rules provided
in Table 7), and it does not have access to the value of sin(2π) or sin(π).
|Identity|Proof size Proof depth First Best First Best|
|---|---|
|exp(−x) exp(x −y) = exp(−y) cosh(−x) = cosh(x) sin(π/2 + x) = cos(x) √ 0 < x =⇒ 2 ln( x) = ln(x) cos(π/2 −x) = sin(x) sin(π/2 −x) = cos(x) cos(x)2 + sin(x)2 = 1 cos(x) = cos(x/2)2 −sin(x/2)2 sin(x + y) −sin(x −y) = 2 sin(y) cos(x) 0 < x =⇒ 2x cosh(ln(x)) = x2 + 1 tanh(x) = (exp(x) −exp(−x))/(exp(x) + exp(−x)) cos(x −y) + cos(x + y) = 2 cos(x) cos(y) cosh(x) −sinh(x) = exp(−x) cosh(x) −sinh(x) = sinh(x)+1 cosh(x) sin(2x) = 2 sin(x) cos(x) cos(2x) = 1 −2 sin(x)2 cosh(x −y) + cosh(x + y) = 2 cosh(x) cosh(y) tanh(x) = (exp(2x) −1)/(exp(2x) + 1) sin(x) = 2 sin(x/2) cos(x/2) cos(2x) = 2 cos(x)2 −1 cos(x)2 = 1 + cos(2x)/2 sinh(x) = 2 sinh(x/2) cosh(x/2) sinh(2x) = 2 sinh(x) cosh(x) sinh(x + y) = sinh(x) cosh(y) + cosh(x) sinh(y) cosh(x −y) = cosh(x) cosh(y) −sinh(x) sinh(y) cos(x + y) cos(x −y) = cos(x)2 −sin(y)2 sin(x + y) sin(y −x) = cos(x)2 −cos(y)2 |sinh(x/2)| =p(cosh(x) −1)/2 sin(x + y) sin(x −y) = sin(x)2 −sin(y)2 cosh(x)2 = 1 + cosh(2x)/2 cosh(2x) = 2 cosh(x)2 −1 cosh(2x) = cosh(x)2 + sinh(x)2 tanh(x) −tanh(y) = sinh(x −y)/(cosh(x) cosh(y)) tanh(x) + tanh(y) = sinh(x + y)/(cosh(x) cosh(y)) p1 + sinh(x)2 = cosh(x) sin(x)3 = (3 sin(x) −sin(3x))/4 sin(3x) = 3 sin(x) −4 sin(x)3 cosh(3x) = 4 cosh(x)3 −3 cosh(x) cosh(x)3 = (3 cosh(x) + cosh(3x))/4 sin(4x) = cos(x)(4 sin(x) −8 sin(x)3) cos(π + x) = −cos(x) sin(π −x) = sin(x) cos(π/3) = sin(π/6) cos(π/4) = sin(π/4) cos(π/6) = sin(π/3) cos(2π + x) = cos(x) sin(2π + x) = sin(x)|6 6 4 4 8 8 16 3 19 11 14 10 13 11 16 11 24 14 20 14 46 23 33 19 27 20 55 38 27 15 130 27 84 31 205 65 29 17 72 26 71 30 64 37 71 34 130 77 90 66 117 64 118 64 86 53 183 66 87 40 78 42 97 72 154 135 162 144 82 70 72 58 80 56 204 105 162 106 73 73 148 28 73 28 26 17 24 17 22 17 125 70 353 69|6 6 4 4 8 7 7 3 19 10 14 10 13 10 16 7 23 14 18 12 30 11 33 13 27 19 40 20 19 8 118 21 84 29 176 39 21 8 68 19 61 16 51 25 61 24 121 63 75 56 117 64 118 63 61 36 183 65 71 32 62 33 80 64 85 81 95 91 76 62 63 49 71 47 176 79 137 79 60 60 118 9 45 11 26 17 24 17 22 17 37 18 62 16|
|---|---|---|
-----
**F** **Example Lean proofs**
In this section, we show examples of proofs found by our model.
1 `theorem imo_1964_p1_2 (n : N) : ¬7|2[n]` + 1 :=
2 `begin`
3 `rw nat.dvd_iff_mod_eq_zero,`
4 `rewrite [nat.add_mod, nat.mod_eq_of_lt],`
5 `obviously,`
6 `apply nat.strong_induction_on n,`
7 `induction n,`
8 `{`
9 `intros n IH,`
10 `cases n,`
11 `norm_num,`
12 `cases n,`
13 `norm_num,`
14 `rw [nat.succ_eq_add_one, pow_succ],`
15 `rw [nat.succ_eq_add_one, pow_succ],`
16 `induction n,`
17 `norm_num,`
18 `rw [nat.succ_eq_add_one, pow_succ],`
19 `norm_num [nat.mul_mod, ←mul_assoc],`
20 `contrapose! IH,`
21 `refine ⟨n_n, nat.lt_succ_iff.mpr _, IH⟩,`
22 `exact nat.le_succ_of_le (nat.le_succ _),`
23 `},`
24 `exact n_ih,`
25 `end`
Figure 12: A proof of the imo_1964_p1_2 problem found by our model. The model shows that for any
value of n ∈ N, 2[n] + 1 is not divisible by 7, by showing that 2[n] mod 7 + 1 ̸= 0, and 2[n] mod 7 + 1 < 7. The
second part of the proof uses strong induction and the fact that 2[n] _≡_ 2[n][+3] mod 7. We provide a version of the
proof that was automatically cleaned by removing unnecessary tactics and tactic arguments.
1 `theorem imo_2001_p6`
2 `(a b c d : N)`
3 `(h0 : 0 < a ∧` `0 < b ∧` `0 < c ∧` `0 < d)`
4 `(h1 : d < c)`
5 `(h2 : c < b)`
6 `(h3 : b < a)`
7 `(h4 : a * c + b * d = (b + d + a - c) * (b + d - a + c)) :`
8 _¬ nat.prime (a * b + c * d) :=_
9 `begin`
10 `contrapose h4,`
11 `rw mul_comm,`
12 `simp [nat.prime, not_le_of_gt h0.1, not_forall, not_le_of_gt h3,`
13 `nat.mul_sub_right_distrib, nat.add_comm],`
14 `contrapose! h4,`
15 `contrapose! h4,`
16 `apply has_lt.lt.ne,`
17 `apply nat.lt_sub_right_of_add_lt,`
18 `nlinarith,`
19 `end`
Figure 13: A proof found by our model of another IMO problem in miniF2F. Although the proof is valid,
the statement is erroneous. The hypothesis h4 : b + d − _a + c actually represents max(b + d −_ _a, 0) + c. This_
is due to Lean’s nat type behaviour where (a : N) − (b : N) = (0 : N) if b ≥ _a. This makes the exercise easier_
than it should be, and the proof is no longer valid on the fixed statement.
-----
# HyperTree Proof Search for Neural Theorem Proving
**Guillaume Lample[∗†]** **Marie-Anne Lachaux[∗†]** **Thibaut Lavril[∗†]** **Xavier Martinet[∗†]**
**Amaury Hayat[§]** **Gabriel Ebner[‡]** **Aurélien Rodriguez[†]** **Timothée Lacroix[∗†]**
**Abstract**
We propose an online training procedure for a transformer-based automated theorem prover. Our approach leverages a new search algorithm, HyperTree Proof
Search (HTPS), that learns from previous proof searches through online training,
allowing it to generalize to domains far from the training distribution. We report
detailed ablations of our pipeline’s main components by studying performance on
three environments of increasing complexity. In particular, we show that with HTPS
alone, a model trained on annotated proofs manages to prove 65.4% of a held-out
set of Metamath theorems, significantly outperforming the previous state of the art
of 56.5% by GPT-f. Online training on these unproved theorems increases accuracy
to 82.6%. With a similar computational budget, we improve the state of the art on
the Lean-based miniF2F-curriculum dataset from 31% to 42% proving accuracy.
**1** **Introduction**
Over the course of history, the complexity of mathematical proofs has increased dramatically. The
nineteenth century saw the emergence of proofs so involved that they could only be verified by
a handful of specialists. This limited peer review process inevitably led to invalid proofs, with
mistakes sometimes remaining undiscovered for years (e.g. the erroneous proof of the Four Colour
Conjecture [1]). Some mathematicians argue that the frontier of mathematics has reached such a level
of complexity that the traditional review process is no longer sufficient, envisioning a future where
articles are submitted with formal proofs so that the correctness can be delegated to a computer [2].
Unfortunately, very few mathematicians have adopted formal systems in their work, and as of today,
only a fraction of existing mathematics has been formalized. Several obstacles have hindered the
widespread adoption of formal systems. First, formalized mathematics are quite dissimilar from
traditional mathematics, rather closer to source code written in a programming language, which
makes formal systems difficult to use, especially for newcomers. Second, formalizing an existing
proof still involves significant effort and expertise (the formalization of the Kepler conjecture took
over 20 person years to complete [3]) and even seemingly simple statements sometimes remain
frustratingly challenging to formalize.
To write a formal proof, mathematicians typically work with Interactive Theorem Provers (ITPs).
The most popular ITPs provide high-level “tactics” that can be applied on an input theorem (e.g. the
initial goal) to generate a set of subgoals, with the guarantee that proving all subgoals will result in a
proof of the initial goal (reaching an empty set means the tactic solves the goal). An example of a
proof in Lean [4], an interactive theorem prover, is given in Figure 1 and the corresponding proof
hypertree[3] is illustrated in Figure 5 of the Appendix.
_∗Equal contribution. Corresponding authors: {glample,malachaux,tlacroix}@fb.com_
_†Meta AI Research_ _‡Vrije Universiteit Amsterdam_ _§ CERMICS École des Ponts ParisTech_
3A hypergraph is a graph where an edge leads to a set of nodes that is potentially empty in our set-up. A
hypertree, in this work, is a hypergraph without cycles. Formal definitions can be found in Appendix C.1
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
-----
First subgoal : n + 0 ≤𝑚+ 0
Second subgoal :
n + 𝑘≤𝑚+ 𝑘⇒𝑛+ 𝑘+ 1 ≤𝑚+ 𝑘+ 1
Figure 1: A simple proof of the statement n ≤ **m ⇒** **n + k ≤** **m + k in Lean. The induction**
tactic reduces the initial statement to two subgoals, that can be solved independently.
In this paper, we aim at creating a prover that can automatically solve input theorems by generating a
sequence of suitable tactics without human interaction. The backward procedure naturally suggests a
simple approach where a machine learning model trained to map goals to tactics interacts with an ITP
to build the proof of an input goal in a backward fashion. The automated prover builds a hypergraph
with the theorem to be proved as the root node, tactics as edges and subgoals as nodes. The prover
recursively expands leaves by generating tactics with our model until we find a proof of the initial
theorem. A proof is then a hypertree rooted in the initial theorem whose leaves are empty sets.
Unlike Chess or Go, particular challenges arise for tree-search in theorem proving. First, the action
space, i.e. the amount of possible “moves” in a given state, is infinite (there is an unlimited number
of tactics that can be applied to a given theorem). This requires sampling possible actions from a
language model for which training data is scarce. Moreover, if all tactics sampled at a goal fail, we
have no information on what region of the probability space to sample next. Second, in the context of
theorem proving, we need to provide a proof of all subgoals created by a tactic, whereas AlphaZero[5]
for two player games is allowed to focus on the most likely adversary moves.
This paper presents an in-depth study of our approach to overcome these difficulties and the resulting
model, Evariste. In particular, we make the following contributions:
- A new MCTS-inspired search algorithm for finding proofs in unbalanced hypergraphs.
- A new environment (Equations) to easily prototype and understand the behavior of the
models we train and our proof search.
- A detailed ablation study and analysis of the different components used in our approach on
three different theorem proving environments. We study how data is selected for training the
policy model after a successful or failed proof-search, what target should be used to train
the critic model, and the impact of online training vs. expert iteration.
- State-of-the-art performance on all analyzed environments. In particular, our model manages
to prove over 82.6% of proofs in a held-out set of theorems from set.mm in Metamath, as
well as 58.6% on miniF2F-valid [6] in Lean.
**2** **Related work**
Automated theorem proving has been a long-standing goal of artificial intelligence, with the earliest
work dating from the 1950s [7, 8]. We focus here on recent work closest to ours and defer additional
related work to Appendix B.
**Neural theorem provers.** Recent work applying deep learning methods to theorem proving [9–11]
are the closest to this work and obtained impressive results on difficult held-out sets for Metamath
and Lean. The main differences between their approach and ours are the proof-search algorithm we
propose, the training data we extract from proof-searches and our use of online training compared
to their expert iterations. Another similar approach, Holophrasm [12], uses a different tree-search
algorithm while others [13, 14] learn the search policy along with the tactic model. Unlike previous
studies that focus on a single proving environment (e.g. Metamath, Lean, or HOL-Light), we
extensively study the performance of our prover on three different formal languages, and found that
some conclusions significantly vary based on the considered environment.
-----
**MCTS and two player games.** AlphaZero [5] demonstrated good performances on two player
games, replacing the Monte-Carlo evaluations of MCTS [15] with evaluations from a deep neural
network and guiding the search with an additional deep policy. These ideas have been applied to first
order logic proving in Kaliszyk et al. [16] with gradient boosted trees as policy and value models.
Theorem proving can be thought of as computing game-theoretic value for positions in a min/max
tree: to prove a goal, we need one move (max) that leads to subgoals that are all proven (min). This
has led to other tree search algorithms such as Proof Number Search [17] or a more recent version,
using a neural estimate of proof size: Wu et al. [18]. Noticing heterogeneity in the arities of min or
max nodes, we propose a search method that goes down simultaneously in all children of min nodes,
such that every simulation can potentially result in a full proof-tree.
**3** **Online training from proof searches**
In this section, we introduce our Hypertree Proof Search (HTPS) algorithm and describe how it is
used to generate training data for our model. We then detail our online training method.
**3.1** **Hypertree Proof Search**
Given a main goal g to automatically prove, HTPS is the algorithm that interacts with a policy
model Pθ and a critic model cθ, and the theorem proving environment to find a proof hypertree for g.
Proof search progressively grows a hypergraph starting from g, iteratively repeating the three steps
illustrated in Figure 2: selection, expansion and back-propagation. The main difference with other
search algorithms is our parallel descent in all subgoals of a tactic. A proof is found when there exists
a hypertree from the root to leaves that are empty sets.
Expansion Back-propagation
Selection
|g5|Col2|
|---|---|
||N( W(|
|||
gg `vT(g)=(1×0.1)×0.4`
```
N(g,t1)=2
W(g,t1)=0.5+(1×0.1)×0.4
```
`vT(g0)=1×0.1` g0 g1 `vT(g1)=0.4`
```
N(g0,t0)=1 N(g1,t0)=1
W(g0,t0)=1x0.1 W(g1,t0)=0.4
```
`vvTT(g(g23)=1)=0.1` ggg222 g3 g4 `vT(g4)=0.4`
g gg
```
N(g,t1) =1 N(g,t2)=0
W(g,t1) =0.5 W(g,t2)=0.1
```
5 g0 g1 g6 g7 g0 g1
```
N(g0,t0)=0
W(g0,t0)=0
```
g2 g3 g4 gg22 g3 gg44
g4 `N(g4,t1`
```
W(g4,t1
```
Figure 2: HyperTree Proof Search. We aim at finding a proof of the root theorem g. The figure represents the
three steps of HTPS that are repeated until a proof is found. Proving either {g5}, {g0, g1}, or {g6, g7} would
lead to a proof of g by tactic t0, t1, or t2. Guided by the search policy, we select a hypertree whose leaves are
unexpanded nodes. Selected nodes are then expanded, adding new tactics and nodes to the hypergraph. Finally,
we evaluate the node values vT of the hypertree starting from the leaves, using the critic, and back-propagating
to the root, updating the visit counts N and total action values W .
We assume a policy model Pθ and critic model cθ. Conditioned on a goal, the policy model allows
the sampling of tactics, whereas the critic model estimates our ability to find a proof for this goal.
Our proof search algorithm will be guided by these two models. Additionally, and similar to MCTS,
we store the visit count N (g, t) (the number of times the tactic t has been selected at node g) and the
total action value W (g, t) for each tactic t of a goal g. These statistics will be used in the selection
phase and accumulated during the back-propagation phase of the proof search described below.
**Selection** The number of nodes in the proof hypergraph grows exponentially with the distance to
the root. Thus, naive breadth-first search is infeasible to find deep proofs and some prioritization
criteria is required to balance depth and breadth. Similar to MCTS, we balance the policy model’s
prior with current estimates from the critic. In particular, we experiment with two different search
policies: PUCT [19] and Regularized Policy (RP) [20] (algorithms are detailed in Appendix C.3).
-----
A key difference between previous work and ours is that our proof search operates on a hypergraph.
Thus, whereas MCTS goes down a path from the root to an unexpanded node during its selection
phase, our algorithm will instead create a partial proof hypertree, leading to a set of either solved
or unexpanded nodes. To do so, we recursively follow the arg-max of the search policy from the
root, until we reach the leaves of the current hypergraph (the detailed pseudo-code can be found in
Section C.4). This selection step is illustrated in Figure 2.
In order to batch calls to the policy and critic models over more nodes to expand, we run several
selections sequentially, using a virtual loss [21, 5] to produce different partial proof-trees. Note that
solving all unexpanded leaves of any of these trees would immediately lead to a full proof of the root.
In the next section, we describe how nodes are expanded.
**Expansion** To expand a node g, we use the policy model to sample tactics that would make progress
on the goal. Tactics are sampled in an auto-regressive fashion (token by token) by the decoder [22],
based on the previously generated tokens, and on a representation of the goal provided by the encoder.
The generated tactics are then evaluated in the theorem proving environment. Each valid tactic will
lead to a set of new subgoals to solve, or to an empty set if the tactic solves the goal. Finally, we
add a hyperedge for each valid tactic ti from the expanded node g to its (potentially empty) set of
children for this tactic {gi[0][, ..., g]i[k][}][. Note that these children might already be part of the hypergraph.]
For new nodes, visit counts N (g, t) and total action values W (g, t) are initialized to zero. There are
three types of nodes in the hypergraph:
- Solved: at least one tactic leads to an empty set, or has all its children solved.
- Invalid: all tactics sampled from the policy model were rejected by the environment, or lead
to invalid nodes.
- Unsolved: neither solved nor invalid, some tactics have unexpanded descendants.
Note that the definitions for Solved or Invalid are recursive. These status are updated throughout the
hypergraph anytime a hyperedge is added. Tactics leading to invalid nodes are removed to prevent
simulations from reaching infeasible nodes. Once this is done, we back-propagate values from the
expanded nodes up to the root, as described in the next section.
**Back-propagation** For each expanded goal g in a simulated proof tree T, its value is set to
_vT (g) = 1 if it is solved, and vT (g) = 0 if it is invalid. Otherwise, its value is estimated by the_
critic model: vT (g) = cθ(g). This provides vT for all leaves of T and we can then back-propagate
in topographic order (children before parents) through all nodes of T . Interpreting the value of a
node as the probability that it can be solved, since solving a goal requires solving all of its children
subgoals, the value of a parent is the product of values of its children (we assume that the solvability
of subgoals is independent, for simplicity):
_vT (g) =_
Y
_vT (c)_
_c∈children(g,t)_
In particular, the value of a goal g is the product of the values of all leaves in T that remain to be
solved to obtain a proof of g. Once all values in T are computed, we increment the corresponding
visit count N (g, t) in the hypergraph as well as the total action values: W (g, t) += vT (g). For a
goal g, the estimated value for tactic t is then the mean of the total action value:
_Q(g, t) =_ _[W]_ [(][g, t][)]
_N_ (g, t)
A fully detailed back-propagation step is illustrated in Figure 2.
**3.2** **Online training**
Both the policy model Pθ and the critic model cθ are encoder-decoder transformers [23] with shared
weights θ, which are trained online on two different objectives. The policy model Pθ takes as input a
tokenized goal and generates tactics. It is trained with a standard seq2seq objective [22], where we
minimize the cross-entropy loss of predicted tactic tokens conditioned on the input goal.
-----
Our critic model cθ is used to predict floating point values representing how likely a goal is to be
solved. We start decoding with a special token, restrict the output vocabulary to two tokens PROVABLE
and UNPROVABLE, and evaluate the critic with cθ(g) = P (PROVABLE|g, CRITIC). This objective is
identical to a seq2seq objective where the cross-entropy is minimized over the two special tokens.
Our online training uses a distributed learning architecture reminiscent of AlphaZero [19] or distributed reinforcement learning setups [24, 5]. A distributed data parallel trainer receives training
data from a set of asynchronous provers that run proof searches on theorems chosen by a controller.
Provers, in turn, continuously retrieve the latest model versions produced by the trainers in order to
improve the quality of their proof search. This set-up is represented in Figure 7 of the Appendix.
Once a prover finishes a proof-search, we extract two types of training samples from its hypergraph:
**Tactic samples.** At the end of a successful proof search, we extract (goal, tactic) pairs of a minimal
proof hypertree of the root node as training samples for the policy model. We use a different
minimality criterion depending on the environment: number of proof steps for Metamath and
Equations and total tactic CPU time for Lean. We show that this selection has a large impact on
performances, other options such as selecting all solved nodes are investigated in Section 5.2.1. The
policy model is trained with a standard seq2seq objective [22], where we minimize the cross-entropy
loss of predicted tactic tokens conditioned on an input goal.
**Critic samples.** In the proof search hypergraph, we select all nodes that are either solved, invalid
(all tactics failed or led to invalid nodes), or with a visit count higher than a threshold. Then, we use
_c(g) = 1 as the training target for solved nodes. For internal nodes, we use the final estimated action_
value c(g) = W (g, t[∗])/N (g, t[∗]) where t[∗] is the tactic that maximizes the search policy at g. Finally,
for invalid nodes, we use c(g) = 0.
The trainers receive training samples that are stored into two separate finite-size queues, one for each
objective. When a queue is full, appending a new sample discards the oldest one. In order to create a
batch for a task, we uniformly select samples in the corresponding queue.
During online training, in addition from this generated data, we also sample from the supervised
datasets used for fine-tuning our models (see 4.1) which provide high-quality data. All training
objectives are weighted equally. An overview of our full training pipeline is given in Appendix D.1.
Our proof-search depends on many hyper-parameters, and the optimal settings might not be the same
for all statements, making tuning impractical. Thus, the controller samples these hyper-parameters
from pre-defined ranges (see Appendix D.3 for details) for each different proof-search attempt.
**4** **Experiments**
In this section, we provide details about our experimental training and evaluation protocols. We first
describe the supervised datasets used to fine-tune our policy models, as well as the tokenization used.
Then, we discuss the evaluation datasets and methodology. In Appendix D.2, we provide additional
details about our model pretraining and architecture.
We develop and test our methods on three theorem proving environments: Metamath, Lean and
Equations. Metamath [25] comes with a database of 35k human-written theorems called set.mm. We
also evaluate our methods on the Lean proving environment, which provides a level of automation
that is helpful to solve more complex theorems. Lean comes with a human-written library of 27k
theorems called Mathlib [26].
**4.1** **Model fine-tuning and supervised datasets**
Starting the HTPS procedure described in Section 3 from a randomly initialized model would be suboptimal, as no valid tactic would ever be sampled from the policy model. Thus, starting the online training from a non-trivial model is critical. To this end, we first fine-tune our policy model Pθ on a supervised dataset of theorems specific to each environment. We refer to this model as the supervised model.
**Metamath** In Metamath, we extract all proofs from the set.mm library, composed of 37091
theorems (c.f. Section D.5 for the version of set.mm). The training set is composed of around 1M
-----
Table 1: Dataset statistics for supervised training.
# train theorems # train proof steps Avg. goal length
Equations 33.7
_∞_ _∞_
Metamath 35k 1M 120.1
Lean 24k 144k 169.3
goal-tactic pairs; more statistics about the training data are provided in Table 1. Tokenization in
Metamath is trivial, as statements are composed of space-separated tokens.
**Lean** Following [11], we extract a supervised dataset from the Mathlib library and co-train with the
dataset of proof-artifacts of Han et al. [10] to reduce overfitting. To facilitate experimentation and
reproducibility, we use fixed versions of Lean, Mathlib, and miniF2F (c.f. Appendix D.5). Finally,
we add another supervised co-training task by converting to Lean a synthetic dataset of theorems
generated by the Equations environment (c.f. Appendix E.6). Statistics about the training set are
available in Table 1.
**Equations** Finally, we developed a new environment, Equations, in the spirit of INT [27], as a
simpler analogue to existing proving environments. Its expressivity is restricted to manipulating mathematical expressions (e.g. equalities or inequalities) with simple rules (e.g. A + B = B + A, or A <
_B_ _B <_ _A). Unlike Metamath or Lean, the Equations environment does not come with a dataset_
_⇒−_ _−_
of manually annotated proofs of theorems. Instead, we generate supervised data on the fly using the
random graph generator described in Appendix E.5. As the model quickly reaches perfect accuracy on
these synthetic theorems, we only leverage statements from the Identities split during online training.
**4.2** **Evaluation settings and protocol**
In Polu et al. [11], the model is fine-tuned on theorems from the training set and expert iteration is done
on theorems from different sources: train theorems, synthetic statements, and an extra curriculum of
statements without proofs (miniF2F-curriculum). The produced model is then evaluated on unseen
statements, namely the validation and test splits of the miniF2F dataset [6].
In this work, we also consider the transductive setup: on a corpus of unproved statements available
at train time, how many proofs can our method learn to generate? This protocol is also sensible, as
allowing the model to learn from a failed proof-search can lead to more focused exploration on the
next attempt, proving more statements overall than a model that would not be trained online.
Following [9], we also evaluate the pass@k by running k proof searches on the evaluated statements
with the policy and critic obtained by online training. Given the many evaluations presented in this
work, we only run them once. We give more details on the hyper-parameters used in Appendix D.3.
In the transductive setup, we also report the cumulative pass rate, i.e. the proportion of theorems
solved at least once during online training.
**5** **Results**
In this section, we present our results and study the moving parts of our pipeline through ablations.
We compare our results with GPT-f which represents the state of the art on Metamath and Lean.
Table 2: Pass rate on Lean environment using 64 trials (pass@64) Numbers with a _[†]_ exponent correspond to
the cumulative pass-rate since the evaluated statements are part of the online training.
|Online training statements|Supervised -|GPT-f Evariste-1d Evariste-7d miniF2F-curriculum|Evariste miniF2F-valid|
|---|---|---|---|
|miniF2F-valid miniF2F-test miniF2F-curriculum|38.5 35.3 20.8|47.3 46.7 47.5 36.6 38.9 40.6 30.6 33.6† 42.5†|58.6† 41.0 32.1|
|---|---|---|---|
|Train time (A100 days)|50|2000 230 1620|1360|
|---|---|---|---|
-----
**5.1** **Main results**
**5.1.1** **Lean**
In Lean, we run our experiments on A100 GPUs with 32 trainers and 200 provers. Each prover
runs our Lean API on 48 CPU cores. Unlike Polu et al. [11], we sample statements equally from
mathlib-train and miniF2F-curriculum, to avoid giving too much importance to statements from a
different domain than the target. Results can be found in Table 2. After 1 day of training, each
statement from miniF2F-curriculum has been sampled on average 250 times, and 110 out of the 327
statements have been solved. Our model outperforms GPT-f on miniF2F-test, with an approximately
10 training time speed-up. After 7 days, we solve 139 statements of miniF2F-curriculum (100 for
_×_
GPT-f), and observe further improvements on miniF2F-valid or miniF2F-test.
For other evaluations, we depart from the set-up of Polu et al. [11], directly using the statements
from the miniF2F-valid split in our online training, obtaining 58.6% cumulative pass rate. We then
evaluate the final model on miniF2F-test, reaching 41% pass@64, against 36.6% for GPT-f. Without
the synthetic data co-training task, the performance drops to 54.9% cumulative pass rate on the
miniF2F-valid split, and 38.5% pass@64 on the miniF2F-test split. Examples of proofs found by our
model can be found in Appendix F.
**5.1.2** **Metamath**
On Metamath, we train our model on V100 GPUs, with 128 trainers and 256 provers, whereas
ablations are run on 16 trainers and 32 provers. We report our results in Table 3 for the supervised
model and for a model trained with online training. During online training, we sample statements
from the training and from the validation splits of set.mm equally.
Online training dramatically improves performances on valid statements, going from a 61% pass@8
to a cumulative pass rate of 82.6%. This improvement cannot solely be explained by the high number
of attempts on validation theorems during training. Indeed, the ablation in Figure 3 (right) shows
that Evariste significantly outperforms a supervised model with the same number of attempts. The
supervised model plateaus at 66% while Evariste keeps improving beyond 74% after 7 days of
training, showing that the model is able to learn from previous proof searches through online training.
On test theorems, for which statements were not provided during online training, the accuracy
increased by 10% compared to the supervised model, from 55.8% to 65.6% accuracy. The supervised
model obtains a pass@32 accuracy of 65.4% (resp. 61.2%) on the validation (resp. test) splits,
compared to GPT-f’s 56.5% (resp. 56.2%) after expert iteration.
Table 3: Results on Metamath for a supervised model and Evariste. We report the pass@8 and pass@32
scores on the validation and test splits. Additionally, for Evariste we also report the cumulative score on the
validation set, i.e. the fraction of theorems proved at least one time during online training. Note that for Evariste
on Valid, the cumulative and pass@k performances are close since these statements were seen during training.
|Col1|Valid cumulative pass@8 pass@32|Test pass@8 pass@32|
|---|---|---|
|Supervised Evariste|N/A 61.0% 65.4% 82.6% 81.0% 81.2%|55.8% 61.2% 65.6% 72.4%|
|---|---|---|
**5.1.3** **Equations**
In Equations, we run our main experiment with 32 trainers and 64 provers, whereas ablations are run
on 16 trainers and 32 provers. In this environment, the model easily learns the training distribution of
our random generator, and solves all synthetically generated problems. Thus, online training is run
on the Identities statements only. Our main experiment reaches a cumulative pass rate of 91.3% on
the Identities split, while a supervised model never exceeds 36% even after a similar number of proof
attempts. In Appendix 9, we give examples of Identities statements proved during online training, as
well as the size and depth of proofs found by the model.
In particular, Evariste managed to find the proof of complex mathematical statements,
such as sinh(x/2) = sinh(x)/p2(1 + cosh(x)) and tan(3x)(1 3(tan(x))[2]) = 3 tan(x)
_−_ _−_
(tan(x))[2] tan(x) that required 82 and 117 proof steps respectively, showing the abilities of HTPS to
-----
Figure 3: Comparison between online setup, expert iteration, and fixed model. We report the cumulative
pass rate on the Identities (resp. valid) split on Equations (resp. Metamath). Reloading the model more frequently
converges faster and to a better performance. When no training is done, the final performance is much lower
despite using as many attempts, showing that online training is able to learn from previous proof searches.
prioritize subgoals and guide the search in very large proof graphs. This shows that online training is
able to adapt our policy and critic models to a completely new domain, going from automatically
generated statements to identities found in math books. Examples to understand the gap between
these two domains can be found in Appendix E.
**5.2** **Ablation study**
In this section, we present an ablation study on several components of our system. Since Lean
experiments are CPU intensive, we run most of our ablations on the Equations and Metamath
environments. On Lean, we ran experiments on a smaller subset of hyper-parameters that consistently
performed well on the other environments.
**5.2.1** **Online training data for tactic objective**
Table 4: Performance of our model for different online training data for tactic objective. We report the
pass@8 score for Metamath and cumulative pass rate for Equations. We either keep all nodes and sample tactics
according to the policy, or, extract (minimal) proofs of solved nodes, or (minimal) proofs of the root theorem
only. Selecting minimal proofs always improves performance.
|Proof Of Type of Proof|All Solved Root All Min All Min|All Nodes|
|---|---|---|
|Metamath (valid) Metamath (test)|61.2 65 57.4 68.6 57.2 58.8 54.8 57.4|51.6 54.4|
|---|---|---|
|Equations (Identities)|40.6 78.1 37.5 71.3|37.5|
|---|---|---|
The way we filter tactics sent to the trainers has a large impact on final performances. We investigated
several filtering methods and report the results in Table 4. The first method is similar to the one used
in AlphaZero and exposed in [19]: we select all nodes of the proof search hypergraph where the visit
count is above a certain threshold and we filter tactics above a given search policy score. At training
time, tactics are sampled according to the filtered search policy. With this method the model reaches
51.6% pass@8 on the valid set of Metamath and 37.5% cumulative pass rate on Equations.
We then experimented with other filtering criteria, selecting only goal-tactic pairs that are part of
proofs: either a proof of the root node, or of any solved node in the hypergraph. Then, we learn from
all possible proofs, or only from proofs that are minimal according to a criteria (number of proof
steps for Equations and Metamath, cumulative CPU time for Lean).
Learning only from minimal proofs always leads to improved performance, regardless of the selected
roots. Learning from the minimal proofs of all solved nodes, we reach a cumulative pass rate of
78.1% on Equations, compared to 40.6% when learning from all proofs. On Metamath, only learning
from the root’s minimal proof gives the best result on the valid set, reaching a pass@8 of 68.6%.
-----
**5.2.2** **Critic**
Table 5: Ablation study on the critic and search hyper-parameters in HTPS. We report the pass@8 score
for Metamath, and the cumulative pass rate for Equations. Evariste, trained with a soft critic and stochastic
hyper-parameters, obtains the best performance in both environments. Removing the critic, or using a hard critic
leads to reduced performances. In Equations, adding stochasticity in the proof search hyper-parameters increases
the performance by 4.3% in Equations, and slightly improves performance in Metamath.
|Col1|Evariste|No critic Hard critic|Fixed search params|
|---|---|---|---|
|Metamath (valid) Metamath (test)|68.6 57.4|64.8 67.6 52.2 57.4|69.8 56.2|
|---|---|---|---|
|Equations (Identities)|78.1|65.6 63.1|73.8|
|---|---|---|---|
To measure the impact of our critic model, we run an experiment where the proof search is only
guided by the policy model. In particular, during the back-propagation phase, we set vT (g) = 0.5
for leaves of T . In that context, our model is no longer trained with a critic objective. We report the
results in Table 5. We find that using a critic model improves the performance significantly, by 5.2%
and 12.5% on Metamath and Equations respectively.
As mentioned in Section 3, to train the critic objective, we set the training targets as c(g) = 1 for
solved nodes, c(g) = 0 for invalid nodes and c(g) = W (g, t[∗])/N (g, t[∗]) where t[∗] is the tactic that
maximizes the search policy at g, for internal nodes. We also tested a hard critic estimation of the
target values, following Polu and Sutskever [9], where c(g) = 1 for solved nodes and c(g) = 0 for
both invalid and internal nodes. We report results in Table 5. For both Metamath and Equations,
HTPS critic targets allow Evariste to reach its best performance. In Equations, the model reaches a
cumulative pass rate of 78.1%, compared to 63.1% with hard critic estimates. In Equations, using
hard critic targets gives worse performances than having no critic model at all, showing that these
targets are a bad estimation: setting all internal nodes to zero is too pessimistic.
**5.2.3** **Fixed proof search parameters**
We study the impact of sampling HTPS hyper-parameters for each attempt during online training. We
run experiments with fixed, chosen search parameters for Equations and Metamath to compare with
random sampling, and report results in Table 5. Evariste achieves better performances than the model
trained with fixed search parameters on Metamath test set and Equations Identities, reaching 78.1%
pass rate compared to 73.8% in Equations Identities.
**5.2.4** **Model update frequency during online training**
In our online training procedure, the policy and critic models are updated every five minutes on the
provers. We measure the impact of the frequency of these updates by trying different refresh rates: 5
minutes, 1 hour, 6 hours for Equations, and no updates at all for both Equations and Metamath. We
report the cumulative pass rate over training hours in Figure 3. The higher the refresh rate, the better
the cumulative pass rate over time, confirming the benefits of online training over expert iteration.
**6** **Conclusion**
In this work, we introduce HTPS, an AlphaZero-inspired proof search algorithm for automated
theorem proving, along with an online training procedure. We run an extensive study of our pipeline,
and present state-of-the-art results on multiple proving environments. We show that online training
provides large speed-ups over expert iteration, and allows generalization of the policy and critic
models to completely new domains. Despite large number of attempts per theorem, proving the
entirety of datasets like miniF2F remains elusive, and generating data from proof-search on the
currently available corpora will likely be insufficient in the long term. As manually annotated formal
datasets are limited, another way of providing exploration and additional training data (in the spirit of
self-play for two player games) is required. Automated generation of new theorems is likely to be
one of the future milestones.
-----
**Acknowledgments**
We thank the Meta AI and FLARE teams for useful comments and discussions throughout this work,
notably, Baptiste Rozière, Faisal Azhar, Antoine Bordes, Quentin Carbonneaux, Maxime Darrin,
Alexander Miller, Vincent Siles, Joe Spisak and Pierre-Yves Strub. We thank François Charton for
his initial contributions to the Equations environment. We also thank Tristan Cazenave for interesting
discussions around search algorithms, as well as the members of the Lean community for their help,
notably Fabian Glöckle for valuable feedback on this project.
**References**
[1] Alfred B Kempe. On the geographical problem of the four colours. American journal of
_mathematics, 2(3):193–200, 1879._
[2] Vladimir Voevodsky. Univalent foundations of mathematics. In International Workshop on
_Logic, Language, Information, and Computation, pages 4–4. Springer, 2011._
[3] Thomas Hales, Mark Adams, Gertrud Bauer, Dat Dang, John Harrison, Truong Hoang, Cezary
Kaliszyk, Victor Magron, Sean McLaughlin, Thang Nguyen, Truong Nguyen, Tobias Nipkow,
Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, An Ta, Trân Trung, Diep Trieu, and
Roland Zumkeller. A formal proof of the kepler conjecture. Forum of Mathematics, Pi, 5, 01
2017. doi: 10.1017/fmp.2017.1.
[4] Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer.
The lean theorem prover (system description). In International Conference on Automated
_Deduction, pages 378–388. Springer, 2015._
[5] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general
reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science,
362(6419):1140–1144, 2018.
[6] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
[7] P. C. Gilmore. A proof method for quantification theory: Its justification and realization.
_IBM J. Res. Dev., 4(1):28–35, jan 1960. ISSN 0018-8646. doi: 10.1147/rd.41.0028. URL_
```
https://doi.org/10.1147/rd.41.0028.
```
[8] Martin Davis and Hilary Putnam. A computing procedure for quantification theory. J. ACM, 7
[(3):201–215, jul 1960. ISSN 0004-5411. doi: 10.1145/321033.321034. URL https://doi.](https://doi.org/10.1145/321033.321034)
```
org/10.1145/321033.321034.
```
[9] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
[10] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203, 2021.
[11] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344,
2022.
[12] Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv
_preprint arXiv:1608.02644, 2016._
[13] Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. Tacticzero: Learning to
prove theorems from scratch with deep reinforcement learning. Advances in Neural Information
_Processing Systems, 34:9330–9342, 2021._
[14] Maxwell Crouse, Ibrahim Abdelaziz, Bassem Makni, Spencer Whitehead, Cristina Cornelio,
Pavan Kapanipathi, Kavitha Srinivas, Veronika Thost, Michael Witbrock, and Achille Fokoue.
A deep reinforcement learning approach to first-order logic theorem proving. In Proceedings of
_the AAAI Conference on Artificial Intelligence, volume 35, pages 6279–6287, 2021._
-----
[15] Bruce Abramson and Richard E Korf. A model of two-player evaluation functions. In AAAI,
pages 90–94, 1987.
[16] Cezary Kaliszyk, Josef Urban, Henryk Michalewski, and Miroslav Olšák. Reinforcement
learning of theorem proving. Advances in Neural Information Processing Systems, 31, 2018.
[17] L Victor Allis, Maarten van der Meulen, and H Jaap Van Den Herik. Proof-number search.
_Artificial Intelligence, 66(1):91–124, 1994._
[18] Ti-Rong Wu, Chung-Chin Shih, Ting Han Wei, Meng-Yu Tsai, Wei-Yuan Hsu, and I-Chen
Wu. Alphazero-based proof cost network to aid game solving. In International Conference on
_Learning Representations, 2021._
[19] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering
chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint
_arXiv:1712.01815, 2017._
[20] Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis
Antonoglou, and Rémi Munos. Monte-carlo tree search as regularized policy optimization. In
_International Conference on Machine Learning, pages 3769–3778. PMLR, 2020._
[21] Guillaume MJ-B Chaslot, Mark HM Winands, and HJVD Herik. Parallel monte-carlo tree
search. In International Conference on Computers and Games, pages 60–71. Springer, 2008.
[22] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural
networks. Advances in neural information processing systems, 27, 2014.
[23] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information
_processing systems, pages 5998–6008, 2017._
[24] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro
De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al.
Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296,
2015.
[25] Norman D. Megill and David A. Wheeler. _Metamath:_ _A_ _Computer_ _Lan-_
_guage for Mathematical Proofs._ Lulu Press, Morrisville, North Carolina, 2019.
```
http://us.metamath.org/downloads/metamath.pdf.
```
[26] The mathlib Community. The lean mathematical library. In Proceedings of the 9th ACM
_SIGPLAN International Conference on Certified Programs and Proofs. ACM, jan 2020. doi:_
[10.1145/3372885.3373824. URL https://doi.org/10.1145%2F3372885.3373824.](https://doi.org/10.1145%2F3372885.3373824)
[27] Yuhuai Wu, Albert Qiaochu Jiang, Jimmy Ba, and Roger Grosse. Int: An inequality benchmark
for evaluating generalization in theorem proving. arXiv preprint arXiv:2007.02924, 2020.
[28] Stephan Schulz. E—a brainiac theorem prover. AI Commun., 15(2–3):111–126, 2002.
[29] Alexandre Riazanov and Andrei Voronkov. Vampire 1.1 (system description). In IJCAR, 2001.
[30] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL: A Proof Assistant
_for Higher-Order Logic, volume 2283 of LNCS. Springer, 2002._
[31] Yves Bertot and Pierre Castéran. Interactive theorem proving and program development:
_Coq’Art: the calculus of inductive constructions. Springer Science & Business Media, 2013._
[32] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[33] Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An
environment for machine learning of higher order logic theorem proving. In International
_Conference on Machine Learning, pages 454–463. PMLR, 2019._
-----
[34] John Harrison. Hol light: A tutorial introduction. In International Conference on Formal
_Methods in Computer-Aided Design, pages 265–269. Springer, 1996._
[35] Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. Tactictoe: learning to prove with tactics. Journal of Automated Reasoning, 65(2):257–286, 2021.
[36] Karel Chvalovsky, Jan Jakub˚` uv, Miroslav Olšák, and Josef Urban. Learning theorem proving
components. In International Conference on Automated Reasoning with Analytic Tableaux and
_Related Methods, pages 266–278. Springer, 2021._
[37] Mingzhe Wang and Jia Deng. Learning to prove theorems by learning to generate theorems.
_Advances in Neural Information Processing Systems, 33:18146–18157, 2020._
[38] Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, and Guillaume Lample. Unsupervised translation of programming languages. arXiv preprint arXiv:2006.03511, 2020.
[39] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In International Conference on Learning Representations,
[2019. URL https://openreview.net/forum?id=H1gR5iR5FX.](https://openreview.net/forum?id=H1gR5iR5FX)
[40] Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In Inter_[national Conference on Learning Representations, 2020. URL https://openreview.net/](https://openreview.net/forum?id=S1eZYeHFDS)_
```
forum?id=S1eZYeHFDS.
```
[41] Stéphane d’Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton.
Deep symbolic regression for recurrent sequences. arXiv preprint arXiv:2201.04600, 2022.
[42] Brenden K Petersen, Mikel Landajuela Larma, Terrell N. Mundhenk, Claudio Prata Santiago,
Soo Kyung Kim, and Joanne Taery Kim. Deep symbolic regression: Recovering mathematical
expressions from data via risk-seeking policy gradients. In International Conference on Learning
_[Representations, 2021. URL https://openreview.net/forum?id=m5Qsh0kBQG.](https://openreview.net/forum?id=m5Qsh0kBQG)_
[43] François Charton, Amaury Hayat, and Guillaume Lample. Learning advanced mathematical
computations from examples. arXiv preprint arXiv:2006.06462, 2020.
[44] Yizao Wang and Sylvain Gelly. Modifications of uct and sequence-like simulations for montecarlo go. In 2007 IEEE Symposium on Computational Intelligence and Games, pages 175–182.
IEEE, 2007.
[45] Mark HM Winands, Yngvi Björnsson, and Jahn-Takeshi Saito. Monte-carlo tree search solver.
In International Conference on Computers and Games, pages 25–36. Springer, 2008.
[46] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A Smith. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. arXiv preprint
_arXiv:2006.10369, 2020._
[47] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of
deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
[48] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv
_preprint arXiv:1901.07291, 2019._
[49] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. Mass: Masked sequence to
sequence pre-training for language generation. arXiv preprint arXiv:1905.02450, 2019.
[50] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
_arXiv:1412.6980, 2014._
[51] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: a simple way to prevent neural networks from overfitting. The journal of machine
_learning research, 15(1):1929–1958, 2014._
[52] Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with
structured dropout. In International Conference on Learning Representations, 2020. URL
```
https://openreview.net/forum?id=SylO2yStDr.
```
-----
[53] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito,
Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in
pytorch. NIPS 2017 Autodiff Workshop, 2017.
**7** **Checklist**
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s
contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] Section 5.1 includes our results
as well as their limitations.
(c) Did you discuss any potential negative societal impacts of your work? [No] Beyond
environmental cost of training models, we do not see any direct path from the research
presented in this paper to negative applications.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to
them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The code for
the Equations environment will be open-sourced. We also plan to make our trained
model publicly available to help people in the formal community. Code for the overall
distributed training architecture is tied to our infrastructure and will be difficult to
open-source.
(b) Did you specify all the training details (e.g., data splits, hyper-parameters, how they
were chosen)? [Yes] We provide all training details in Section D.2 of the Appendix.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] As stated in Section 4.2, we report the result of one
training and one evaluation as running all evaluations multiple times would be too
costly.
(d) Did you include the total amount of compute and the type of resources used (e.g., type
of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [No] License is available on the repositories
associated with each asset.
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
No new assets in this work.
(d) Did you discuss whether and how consent was obtained from people whose data you’re
using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable
information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if
applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review
Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount
spent on participant compensation? [N/A]
-----
| [
"Guillaume, Lample",
"Marie-Anne, Lachaux",
"Thibaut, Lavril",
"Xavier, Martinet",
"Amaury, Hayat",
"Gabriel, Ebner",
"Aurélien, Rodriguez",
"Timothée, Lacroix"
] | 2022-05-23T00:00:00 | NeurIPS 2022 | true | 84 | 19 | [
"Lean",
"MetaMath"
] | http://arxiv.org/abs/2205.11491 | https://arxiv.org/abs/2205.11491 | https://www.semanticscholar.org/paper/65b4b25272c50dc376f5c018338931bfd349e532 |
miniF2F: a cross-system benchmark for formal Olympiad-level mathematics | We present $\textsf{miniF2F}$, a dataset of formal Olympiad-level mathematics problems statements intended to provide a unified cross-system benchmark for neural theorem proving. The $\textsf{miniF2F}$ benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad (IMO), as well as material from high-school and undergraduate mathematics courses. We report baseline results using GPT-f, a neural theorem prover based on GPT-3 and provide an analysis of its performance. We intend for $\textsf{miniF2F}$ to be a community-driven effort and hope that our benchmark will help spur advances in neural theorem proving. | The miniF2F benchmark currently targets Metamath, Lean, Isabelle, and HOL Light and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad, as well as material from high-school and undergraduate mathematics courses. | ### MINIF2F: A CROSS-SYSTEM BENCHMARK FOR FORMAL OLYMPIAD-LEVEL MATHEMATICS
**Kunhao Zheng**
Ecole Polytechnique´
[email protected]
**Stanislas Polu**
OpenAI
[email protected]
**Jesse Michael Han**
OpenAI
University of Pittsburgh
[email protected]
ABSTRACT
We present miniF2F, a dataset of formal Olympiad-level mathematics problems
statements intended to provide a unified cross-system benchmark for neural theorem proving. The miniF2F benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad
(IMO), as well as material from high-school and undergraduate mathematics
courses. We report baseline results using GPT-f (Polu & Sutskever, 2020), a
neural theorem prover based on GPT-3 (Brown et al., 2020) and provide an analysis of its performance. We intend for miniF2F to be a community-driven effort
and hope that our benchmark will help spur advances in neural theorem proving.
1 INTRODUCTION
Shared benchmarks and datasets have historically played a crucial role in driving advances in largescale applications of deep learning, e.g. in computer vision (Deng et al., 2009) and natural language
processing (Wang et al., 2019; Rajpurkar et al., 2016; Paperno et al., 2016). Neural theorem prov_ing is a rapidly developing area which aims to apply techniques from deep learning to interactive_
theorem proving. To date, most contributions in this area have focused on individual theorem proving systems, each with a separately-implemented mathematics library and with results reported on
a dataset-specific test split; examples include the HOList (Bansal et al., 2019a), CoqGym (Yang
& Deng, 2019) and LeanStep (Han et al., 2021) theorem proving environments and benchmarks.
However, benchmarks from this paradigm are not ideal for measuring the mathematical reasoning
ability of neural theorem provers for several reasons. Library-specific train/test splits are siloed by
construction, dependent on how theorems and lemmas are split in these libraries, and as such are not
directly comparable across systems. Moreover, formal mathematics libraries are closer to software
repositories than informal mathematical exposition, and many lemmas are implementation-specific
artifacts without precise informal mathematical or cross-system translations.
To date, the neural theorem proving community has not organized its efforts around a cross-system
benchmark. To address this need and to provide a common resource to research groups working on
formal theorem proving, we present miniF2F, a unified cross-system benchmark of formal mathematics of progressively increasing difficulty, centering around Olympiad-level problem statements
(AMC, AIME, IMO) as well as high-school and undergraduate maths classes. Both the content and
name of miniF2F are inspired by the IMO Grand Challenge (Selsam et al., 2019): to build an AI
that can win a gold medal in the International Mathematical Olympiad in a formal-to-formal (F2F)
format. More precisely, the agent must receive IMO problems written in a formal mathematical
format, and must produce a formal (i.e. machine-checkable) proof for that problem.
We intend for miniF2F to serve as a stepping stone for different formal systems towards the IMO
Grand Challenge (Selsam et al., 2019), as it is end-to-end verifiable, cross-platform and spans a wide
range of difficulty. While we report baseline results on miniF2F using GPT-f, a language model
-----
based on GPT-3 which has been finetuned for theorem proving, language models are not a mandatory
approach for Olympiad problems and this assumption is not reflected in miniF2F, preserving the
generality and widespread applicability of the benchmark to systems similar to DeepHOL (Bansal
et al., 2019a) or Holophrasm (Whalen, 2016).
2 BACKGROUND AND RELATED WORK
BENCHMARKS
In the closely related field of (first-order) automated theorem proving (ATP), the TPTP (Sutcliffe,
2017) benchmark is a library of test problems in a unified format for ATP systems. In interactive
theorem proving, the ”Freek 100” (Wiedijk, 2008) tracks progress across various interactive theorem provers on a list of 100 mathematical theorems. Wu et al. (2021) built a simplified formal proof
environment INT with an associated synthetic inequality benchmark. Competitions and communal challenges have also spurred development in formal theorem proving. The CADE ATP System
Competition (CASC) (Sutcliffe, 2016) is a competition that evaluates the performance of first-order
automated theorem proving systems. Proof Ground (Haslbeck et al., 2019), part of the ITP conference, is an interactive proving contest (for humans) that supports Coq, Isabelle, and Lean, which
focuses on evaluating the formalization effort of proof to given problems within limited time. Finally, the IMO Grand Challenge (Selsam et al., 2019), a proposal from researchers working on the
interactive proof assistant Lean, aims to build a system capable of solving IMO problems in the
formal-to-formal format.
Due to its convenient framing as a natural language processing task, the domain of informal mathematical reasoning has received more attention than the formal one. MATH (Hendrycks et al.,
2021) is a mathematics benchmark comprising 12,500 statements in natural language where exercises are classified into 5 levels of difficulty across various domains. Each exercise is combined
with a detailed step-by-step proof in natural language. Scaling state-of-the-art models shows little
amelioration on MATH, which requires advanced mathematical reasoning capabilities. miniF2F
includes a number of formalized statements from MATH. NaturalProofs (Welleck et al., 2021) is
another benchmark of natural proof in mathematics, containing 32k theorem statements and proofs.
It essentially contains the proofs in ProofWiki and other resources. While MATH is more oriented
towards mathematics exercises, NaturalProofs is focused on proofs of general mathematics theorems. Saxton et al. (2019) built a mathematics dataset with 2 × 10[6] training data and 10[4] test data,
presented in a question-answering format where each statement is paired with a question written in
natural language and a direct answer without proof.
NEURAL THEOREM PROVING
HOList (Bansal et al., 2019a;b; Paliwal et al., 2020) provides an environment as well as a benchmark
for HOL Light. They also proposes various deep reinforcement learning approaches for theorem
proving and report a pass rate of 59.91% on their benchmark. Yang & Deng (2019) built CoqGym, a
large-scale dataset, which comes also with a learning environment, of 71k human-written proofs in
Coq proof assistant. They report a 30.0% pass rate on the held-out test theorems in CoqGym. Polu
& Sutskever (2020) applied a decoder-only transformer similar to GPT-3 (Brown et al., 2020) to
proof steps prediction in Metamath combined with a log-probability based proof search. They also
proposed a methodology to train a value function to further guide proof search, achieving a 56.22%
pass rate on the held-out test set. Large language models were applied to Lean by Han et al. (2021).
They created an environment around the Lean prover targeted to machine learning and propose a
dataset extracted from low level proof artifacts that is shown to boost performance when used as
a self-supervised co-training objective. They report a 48.4% pass rate on held-out test statements
from mathlib, Lean’s mathematical library (mathlib Community, 2020).
3 MINIF2F BENCHMARK
miniF2F is a dataset of manually formalized statements of Olympiad type problems, aligned in
Lean, Metamath, and Isabelle (partial at the time of writing), providing a cross-platform benchmark
for formal mathematical reasoning. Olympiad type problems are of particular interest to compare
-----
Table 1: Number of statements and their provenance in miniF2F v1
|Col1|Col2|Col3|Test Set Validation Set|
|---|---|---|---|
|TOTAL|||244 244|
|IMO AIME AMC|||20 20 15 15 45 45 14 14 14 14 14 14 14 14 14 14 16 16 11 11 11 11 11 11 11 11 18 18 8 8 8 8|
|MATH|Algebra|Level 5 Level 4 Level 3 Level 2 Level 1||
||Number Theory|Level 5 Level 4 Level 3 Level 2 Level 1||
|CUSTOM|Algebra Number Theory Induction|||
automated provers across different formal systems as the theories required to solve them are well
identified and they generally do not require the definition of new mathematical concepts (a capability
that remains beyond the current neural theorem proving state of the art).
The formalized statements in miniF2F are drawn from multiple sources, ranging from high school
and undergraduate level exercises to Olympiad problems. miniF2F also covers different subsubjects in mathematics as well as proof strategies, focusing on the types of exercises whose statements are expressible in most formal systems. This leads to a systemic focus on algebra, number
theory and inequalities because, for example, geometry and combinatorial problems are generally
challenging to formalize due to only nascent efforts in these areas in most formal systems. The statements in miniF2F are all manually formalized and selected to cover a variety of difficulty levels for
both humans and machines. Formal proofs for these statements are optionally attached.
miniF2F draws from AIME, AMC, IMO problems as well as problems from the MATH (Hendrycks
et al., 2021) informal dataset. Formalizing problems from the MATH dataset serves two purposes.
First, problems in MATH are segmented by difficulty level (from 1 to 5), randomly selecting a subset
from each of these difficulty levels allows miniF2F to cover a wider range of difficulty. Second, it
provides the community an opportunity to compare capabilities of formal automated prover to their
informal counter-parts as discussed in later sections.
miniF2F comprises a test set and a validation set, which are a stratified random split from the
statements we formalized such that each set equally covers each problem type and difficulty (when
available). Table 1 shows a detailed distribution of these statements.
**Versioning** miniF2F is an evolving effort and new statements will continuously be added. Periodically, we will freeze versions of the benchmark. The current version of the benchmark is v1[1] and
results in this paper are reported using this version. v1 comprises 244 test and 244 valid statements.
The set of statements of each version is guaranteed to remain stable, only allowing fixes in case
errors are later discovered.
**Rules of engagement and License** miniF2F is meant to serve as a shared resource for research
groups working on applying deep learning to formal theorem proving. There is no formal process to
submit evaluation results and researchers are simply invited to cite miniF2F indicating the version
used in their evaluations. We also encourage them to contribute proofs found by their approaches
back to the benchmark. The parts of the benchmark associated with each theorem prover (Metamath,
[1https://github.com/openai/miniF2F/tree/v1](https://github.com/openai/miniF2F/tree/v1)
-----
Lean, Isabelle) are meant to be licensed in a way that is aligned with the licensing usage associated
with the theorem prover’s main library. As a result, the Metamath version of the benchmark is
released under the MIT License, while the Lean and Isabelle versions are released under the Apache
License.
**Formalization effort and challenges** We found that, for trained practitioners (but not necessarily
experts, including students recently introduced to formal systems), formalizing a statement takes
about 15 minutes on average, and reviewing a formalized statement, about half of that on average.
Note that not all exercises are directly or naturally formalizable. In particular, multi-choice questions, word problems, and exercises that require to explicit a witness or a set as part of the answer
present interesting challenges:
_multi-choice questions[2]_ these problems are generally straightforwardly formalizable by reformulating the statement using the right answer only, and could be made “fair” in a competitive
setup by formalizing all possible choices and running automated provers on all of them,
attributing points only if a proof of the correct answer is provided.
_word problems[3]_ where significant information is presented in natural language generally require
non-trivial efforts to be formalized. We generally formalized them by explicitly modeling
the mathematics concepts and expression presented in natural language while attempting as
best as possible to preserve the mathematical difficulty of the original problem. Sometime
the formalization work is most of the difficulty associated with the original question; in
such cases we would discard the problem entirely.
_problems that require to explicit a set or witness[4]_ (e.g. find all ... such that ...) are not directly
formalizable. The best approximation we relied on for these was to formalize the statement
with the witness or answer provided, turning such exercises into the generation of a proof
that the answer is correct, and if needed, that it is the unique one–which is, at times, a much
easier exercise. A non negligible portion of IMO problems are as such, which we foresee
could become a challenge in the future, to fairly compare humans to automated proving
systems in a competitive setup.
**Porting effort** In addition to Metamath, Lean, Isabelle (work in progress) and HOL Light (work
in progress), we are eager to extend the coverage of miniF2F to Coq, and will welcome any effort
in that direction or to extend miniF2F to further systems.
4 EXPERIMENTS
In this section, in order to study baseline performances associated with existing systems, we report
pass rates achieved by GPT-f (Polu & Sutskever, 2020) applied to Metamath, GPT-f /PACT (Polu
& Sutskever, 2020; Han et al., 2021) applied to Lean as well as a baseline prover implemented in
Lean denoted as the tidy baseline. Pass rates are reported as Pass@N where N is the number of
proof search attempts per statement. Pass@N is computed by running more attempts per statement,
averaged to get an unbiased, low-variance estimate.
4.1 METAMATH
Metamath is powered by a meta logic system based on a single substitution rule. It’s characterized
by its simplicity which makes it convenient to study machine learning. Proofs in Metamath are, as a
consequence of the low-level proofsteps, much longer than in other systems as there is no assistance
from high-level tactics. Proofs which are trivial in other systems (e.g. n-digit addition or simple
ring arithmetic transformations) can be quite tedious in Metamath. The absence of tactics is both
2Example: [amc12a 2020 p10 in https://github.com/openai/miniF2F/blob/main/](https://github.com/openai/miniF2F/blob/main/lean/src/test.lean)
[lean/src/test.lean](https://github.com/openai/miniF2F/blob/main/lean/src/test.lean)
[3Example: mathd algebra 398 in https://github.com/openai/miniF2F/blob/main/](https://github.com/openai/miniF2F/blob/main/lean/src/test.lean)
[lean/src/test.lean](https://github.com/openai/miniF2F/blob/main/lean/src/test.lean)
4Example: [imo 1997 p5 in https://github.com/openai/miniF2F/blob/main/lean/](https://github.com/openai/miniF2F/blob/main/lean/src/test.lean)
[src/test.lean](https://github.com/openai/miniF2F/blob/main/lean/src/test.lean)
-----
Figure 1: Counts of successfully proved statements in miniF2F. Green bar: results from Lean GPT-f.
Red bar: best result from the tidy baseline. Blue bar: results from Metamath GPT-f.
a benefit, as the models sees and learns on everything, and a challenge, as proofs of even simple
exercises require hundreds of proofsteps.
4.1.1 GPT-F
We report the pass rate of GPT-f applied to Metamath as described in Polu & Sutskever (2020).
We use a model with 700m learnable parameters. The model is trained on an updated dump of the
set.mm library (but similar synthetic datasets), using the log-probability based search as reported in
Table 8 of the GPT-f paper (Polu & Sutskever, 2020).
The model achieves a Pass@1 of 1.3% and a Pass@8 of 1.6% on miniF2F-test. As expected, these
numbers are quite low due to the length of typical proofs for even simple math exercises. The
average proof length is also reported in Table 3.
4.2 LEAN
In comparison to Metamath, Lean benefits from a large number of powerful tactics to assist formalization efforts. Typical Lean proofs are much shorter than Metamath’s. This is also a formal system
of interest as it has received a lot of attention from the mathematical community as recent theories
have successfully been formalized in Lean (Perfectoid Spaces (Buzzard et al., 2019), Liquid Tensor
experiment (Scholze, 2020)).
Lean is also associated with the IMO Grand Challenge (Selsam et al., 2019) which aims to organize
a formal-to-formal challenge during the upcoming IMO competitions.
4.2.1 TIDY BASELINE
We use the generic best-first search algorithm presented in PACT (Han et al., 2021). The algorithm
works as follows: Given a list of tactics L with priority, we maintain a priority queue Q of tactic
states whose priority is given by the priority of the last applied tactic in L that led to it. While Q is
not empty, we pop the top tactic state t from Q. We iterate through L and apply each tactic to t. If
no error is raised, we capture the returned tactic states from Lean and insert them back into Q.
We use the same terminology as in PACT (Han et al., 2021): maximum queue size ωmax, depth limit
_dmax. We also enforce a budget of imax iterations of the outer loop. When Q’s size reach qmax, all_
the tactic states to be inserted are discarded. We do not expand the next tactic state when the depth
is beyond dmax. This loop is run until a proof is found or the iterations budget is exhausted.
For consistency checking, we run the tidy baseline under the same settings and on the same test
set as in PACT (Han et al., 2021) except that we don’t set a global timeout. Our implementation
-----
achieved a 10.5% pass rate on mathlib’s test split. This result is comparable to the reported 9.9% in
PACT given the waived global timeout.
In addition to the curated list of tactics L used in PACT (Han et al., 2021), we added 4 high-level
tactics HL =[nlinarith, linarith, ring nf, norm num] to L with higher priorities
than the others. We report our pass rate on miniF2F in Table 2.
Table 2: The table shows the number of solved statement in miniF2F when running the tidy
baseline with different values of imax as well Lean’s built-in tidy tactic. All tidy baseline experiments are run with ωmax = 128, dmax = 8 using L + HL. Despite the tidy baseline being
deterministic, it is still subject to per-tactic application timeouts, explaining the number 43 reported
on miniF2F-test for imax = 32.
parameters miniF2F-valid miniF2F-test
Lean’s tidy tactic 12 / 244 13 / 244
_imax = 1_ 21 / 244 23 / 244
_imax = 2_ 31 / 244 29 / 244
_imax = 4_ 38 / 244 41 / 244
_imax = 8_ 41 / 244 44 / 244
_imax = 16_ 41 / 244 44 / 244
_imax = 32_ 41 / 244 43 / 244
_imax = 64_ 41 / 244 44 / 244
_imax = 128_ 41 / 244 44 / 244
4.2.2 GPT-F/PACT
We report the pass rate of GPT-f /PACT as described in Han et al. (2021). We use a model with
700M learnable parameters. The model is trained on an updated dump[56] of the mathlib library using
the PACT methodology denoted in the paper as mix2 > mix1 + tactic in Figure 6.
The model achieves a Pass@1 of 24.6% and a Pass@8 of 29.2% on miniF2F-test. The average proof
length is also reported in Table 3.
Table 3: Baseline performance on Metamath and Lean. All proof searches are provided with a 128
expansions budget. GPT-f attempts e = 16 tactics per expansion while the tidy baseline attempts
_e = 17 tactics per expansion (L + HL, see section 4.2.1). Reported proof lengths are averages over_
all the proofs found in each run. Note that the tidy baseline being deterministic, there is no point
attempting a proof search more than once.
miniF2F-valid miniF2F-test
Formal Proof Pass rate Proof Pass rate
Model
System Length Pass@1 Pass@8 Length Pass@1 Pass@8
Metamath GPT-f 16.2 1.0% 2.0% 20.3 1.3% 1.6%
Lean tidy 1.7 16.8% - 1.8 18.0% -
Lean GPT-f 2.6 23.9% 29.3% 2.5 24.6% 29.2%
4.3 DISCUSSION
4.3.1 ACCESS TO HIGH-LEVEL TACTICS
One goal of miniF2F is to study the comparison of performance across formal systems. In this
section we reported the performance of the same methodology (GPT-f (Polu & Sutskever, 2020))
[5https://github.com/jasonrute/lean_proof_recording/commit/](https://github.com/jasonrute/lean_proof_recording/commit/8499f10c2e10dd533152070ed933c4f0b21ecdc0)
[8499f10c2e10dd533152070ed933c4f0b21ecdc0](https://github.com/jasonrute/lean_proof_recording/commit/8499f10c2e10dd533152070ed933c4f0b21ecdc0)
[6https://github.com/jesse-michael-han/lean-step-public/commit/](https://github.com/jesse-michael-han/lean-step-public/commit/a2b83c237bfe4d6f1c48bb48bc0769b5940e614a)
[a2b83c237bfe4d6f1c48bb48bc0769b5940e614a](https://github.com/jesse-michael-han/lean-step-public/commit/a2b83c237bfe4d6f1c48bb48bc0769b5940e614a)
-----
applied to both Lean and Metamath. Both models are pre-trained on WebMath (Polu & Sutskever,
2020) and respectively trained on datasets extracted from Lean (Han et al., 2021) and Metamath (Polu & Sutskever, 2020). The overall compute deployed at training is comparable in both
setup and exactly equivalent at test-time, yet the achieved performance appears drastically superior
when applied to Lean. We hypothesize that this is mainly explained by the model’s access to highlevel tactics when applied to Lean, enabling the model to learn how to guide Lean’s automation in
an effective way.
An example of this high-level guidance behavior is well exemplified by the following proof of the
statement algebra_sqineq_2unitcircatblt1 where the model heavily relies on Lean’s
nlinarith solver but provides it with essential premises to successfully guide the search.
theorem algebra_sqineq_2unitcircatblt1
(a b : R)
(h0 : aˆ2 + bˆ2 = 2) :
a * b ≤ 1 :=
begin
nlinarith [sq_nonneg a,sq_nonneg b,sq_nonneg (a - b)]
end
(The statement above (algebra_sqineq_2unitcircatblt1) requires to prove the assertion
_∀a, b ∈_ R, a[2] + b[2] = 2 → _a · b ≤_ 1).
In Metamath, GPT-f fails to find a proof as it requires a very large number of steps to appropriately
rewrite the goal in a way that is amenable to the use of set.mm’s existing theorems. The tidy
baseline also fails to find a proof of that statement as nlinarith is not capable of solving the goal
without being passed extraneous premises.
These results motivate the use of neural theorem proving with formal systems that expose powerful
high level tactics and also suggest the potential of a closer collaboration between formal systems and
machine learning practitioners. It also motivates the use of generative models in that setup as the
arguments required by high-level tactics to succeed on non trivial problems generally do not exist in
the context of the statement and therefore have to be generated ex-nihilo.
4.3.2 COMPARISON OF INFORMAL AND FORMAL SETUPS
The use of formal systems for neural theorem proving is often motivated by the role of the formal
system as a verifier, enabling more advanced neural search strategies than possible in a fully informal
setup where the generation of a model can’t be verified automatically, as well as the access to
powerful tactics. Our formalization of a subset of the MATH (Hendrycks et al., 2021) informal
dataset provides an interesting approximate quantification of the benefit of having access to a formal
system in the context of neural theorem proving. Approximate, because we only formalized a small
subset of the MATH statements, but nonetheless useful since we drew uniformly from the 5 difficulty
levels.
In Hendrycks et al. (2021), the performance of GPT-3 (which is a larger model than the GPT-f model
studied here) is reported to be 6.0% in the algebra category and 3.9% in the number theory category.
GPT-f applied to Lean by comparison achieves 51.4% in the algebra category and 41.7% in the
number theory category. It is also worthwhile to note that the tidy baseline also highly outperforms
(31.4% in algebra and 30.0% in number theory) GPT-3 in an informal setup demonstrating the
benefit of proof automation alone.
4.3.3 LIMITATION
With miniF2F being cross-system as the goal, types of problems that are less expressible in certain
systems such as geometry and combinatorial problems are less covered. The shift of distribution
of problem types may result in skewing the research direction of models when benchmarking on
miniF2F. Directionally we aim to fix it and extend the coverage of miniF2F as we grow the benchmark. However, works and efforts on the corresponding library of other systems are required as
well.
-----
5 CONCLUSION
We presented miniF2F, a dataset of formal Olympiad-level mathematics problem statements, meant
to serve as an initial effort towards cross-system benchmarking of neural mathematical reasoning
capabilities in formal environments. We reported the performance of the neural theorem prover
GPT-f (Polu & Sutskever, 2020) on both the Lean and Metamath parts of miniF2F as well as the
performance of our non-neural tidy baseline applied to Lean. Then, we discussed these baselines and put them in perspective with previously reported comparable results in informal environments (Hendrycks et al., 2021).
Finally, we hope that miniF2F will prove to be useful to the scientific community working on neural
theorem proving and spur advances in this domain.
ACKNOWLEDGMENTS
We are grateful to Wenda Li and Xavier Martinet for contributing the Isabelle and HOL Light statements currently available in miniF2F, paving the way towards a full support of Isabelle and HOL
Light, as well as their feedback and encouragement in the process. We thank Harri Edwards for his
comments that greatly improved the manuscript.
REFERENCES
Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An environment for machine learning of higher order logic theorem proving. In International Conference
_on Machine Learning, pp. 454–463. PMLR, 2019a._
Kshitij Bansal, Christian Szegedy, Markus N Rabe, Sarah M Loos, and Viktor Toman. Learning to
reason in large theories without imitation. arXiv preprint arXiv:1905.10501, 2019b.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot
learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Ad_vances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Asso-_
[ciates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
[1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
Kevin Buzzard, Johan Commelin, and Patrick Massot. Lean perfectoid spaces. [https://](https://leanprover-community.github.io/lean-perfectoid-spaces/)
[leanprover-community.github.io/lean-perfectoid-spaces/, 2019.](https://leanprover-community.github.io/lean-perfectoid-spaces/)
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition,
pp. 248–255. Ieee, 2009.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203, 2021.
Maximilian P. L. Haslbeck, Tobias Nipkow, and Simon Wimmer. Proof ground. [https:](https://www21.in.tum.de/~wimmers/proofground/)
[//www21.in.tum.de/˜wimmers/proofground/, 2019.](https://www21.in.tum.de/~wimmers/proofground/)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021._
The mathlib Community. The lean mathematical library. In Jasmin Blanchette and Catalin Hritcu
(eds.), Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs
_and Proofs, CPP 2020, New Orleans, LA, USA, January 20-21, 2020, pp. 367–381. ACM,_
2020. doi: 10.1145/3372885.3373824. [URL https://doi.org/10.1145/3372885.](https://doi.org/10.1145/3372885.3373824)
[3373824.](https://doi.org/10.1145/3372885.3373824)
-----
Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, and Christian Szegedy. Graph representations for higher-order logic and theorem proving. In Proceedings of the AAAI Conference on
_Artificial Intelligence, volume 34, pp. 2967–2974, 2020._
Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi,
Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez. The LAMBADA dataset:
Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting
_of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany,_
_Volume 1: Long Papers. The Association for Computer Linguistics, 2016. doi: 10.18653/v1/_
[p16-1144. URL https://doi.org/10.18653/v1/p16-1144.](https://doi.org/10.18653/v1/p16-1144)
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100, 000+ questions for
machine comprehension of text. In Jian Su, Xavier Carreras, and Kevin Duh (eds.), Proceedings
_of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016,_
_Austin, Texas, USA, November 1-4, 2016, pp. 2383–2392. The Association for Computational_
[Linguistics, 2016. doi: 10.18653/v1/d16-1264. URL https://doi.org/10.18653/v1/](https://doi.org/10.18653/v1/d16-1264)
[d16-1264.](https://doi.org/10.18653/v1/d16-1264)
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In 7th International Conference on Learning Represen_tations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019._ URL
[https://openreview.net/forum?id=H1gR5iR5FX.](https://openreview.net/forum?id=H1gR5iR5FX)
[Peter Scholze. Liquid tensor experiment. https://xenaproject.wordpress.com/2020/](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
[12/05/liquid-tensor-experiment/, 2020.](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
Daniel Selsam, Kevin Buzzard, Reid Barton, Percey Liang, Sarah Loss, and Freek Wiedijk. Imo
[grand challenge. https://imo-grand-challenge.github.io/, 2019.](https://imo-grand-challenge.github.io/)
G. Sutcliffe. The CADE ATP System Competition - CASC. AI Magazine, 37(2):99–101, 2016.
G. Sutcliffe. The TPTP Problem Library and Associated Infrastructure. From CNF to TH0, TPTP
v6.4.0. Journal of Automated Reasoning, 59(4):483–502, 2017.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In
_7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA,_
_May 6-9, 2019. OpenReview.net, 2019._ [URL https://openreview.net/forum?id=](https://openreview.net/forum?id=rJ4km2R5t7)
[rJ4km2R5t7.](https://openreview.net/forum?id=rJ4km2R5t7)
Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun
Cho. Naturalproofs: Mathematical theorem proving in natural language. _arXiv preprint_
_arXiv:2104.01112, 2021._
Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. CoRR,
[abs/1608.02644, 2016. URL http://arxiv.org/abs/1608.02644.](http://arxiv.org/abs/1608.02644)
[Freek Wiedijk. Formalizing 100 theorems. https://www.cs.ru.nl/˜freek/100/, 2008.](https://www.cs.ru.nl/~freek/100/)
Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Baker Grosse. INT: an inequality benchmark
for evaluating generalization in theorem proving. In 9th International Conference on Learning
_Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL_
[https://openreview.net/forum?id=O6LPudowNQm.](https://openreview.net/forum?id=O6LPudowNQm)
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
_International Conference on Machine Learning, pp. 6984–6994. PMLR, 2019._
-----
A EXAMPLE OF STATEMENT IN MINIF2F
Table 4: Problem 11 of 2000 AMC 12 is formalized with proof in different languages in miniF2F.
The proof is optionally attached thus not part of the benchmark. The proof in Metamath is too long
to be fully displayed.
|Natural Language|Two non-zero real numbers, a and b, satisfy ab = a b. Which of the − following is a possible value of a + ab −ab? (A) -2 (B) − 21 (C) 1 (D) 21 b 3 (E) 2|
|---|---|
|Metamath|${ amc12-2000-p11.0 $e |- ( ph -> A e. RR ) $. amc12-2000-p11.1 $e |- ( ph -> B e. RR ) $. amc12-2000-p11.2 $e |- ( ph -> A =/= 0 ) $. amc12-2000-p11.3 $e |- ( ph -> B =/= 0 ) $. amc12-2000-p11.4 $e |- ( ph -> ( A x. B ) = ( A - B ) ) $. amc12-2000-p11 $p |- ( ph -> ( ( ( A / B ) + ( B / A ) ) - ( A x. B ) ) = 2 ) $= ( cdiv co caddc cmul cmin c2 cexp eqcomd ... $. $}|
|Lean|theorem amc12 2000 p11 (a b : R) (h : a ̸= 0 b ̸= 0) 0 ∧ (h 1 : a * b = a - b) : a / b + b / a - a * b = 2 := begin field simp [h .1, h .2], 0 0 simp only [h , mul comm, mul sub], 1 ring, end|
|Isabelle|theorem amc12 2000 p11: fixes a b::real assumes "a <noteq> 0" "b <noteq> 0" \ \ and "a * b = a - b" shows "a / b + b / a - a * b = 2" using assms by (smt (verit, ccfv threshold) diff divide distrib div self divide divide times eq eq divide imp nonzero mult div cancel left) end|
-----
B PERFORMANCE BY DIFFICULTY ON STATEMENTS FORMALIZED FROM
MATH DATASET
The MATH dataset assigns a difficulty ranging from 1 to 5 to each of its problem. Tables 5 and
6 report the number of proved statement split by difficulty level on the algebra and number theory
categories.
Table 5: Counts of successfully proved statements formalized from MATH-Algebra in miniF2F v1
split by difficulty. This table corresponds to “MATH Algebra” in Figure 1.
miniF2F-valid miniF2F-test
Difficulty Level 1 2 3 4 5 1 2 3 4 5
Metamath/GPT-f 1 0 0 0 0 2 0 1 0 1
Lean/tidy 6 4 2 2 1 6 4 7 3 1
Lean/GPT-f 9 7 8 6 2 8 7 10 7 3
Table 6: Counts of successfully proved statements formalized from MATH-Number theory in
miniF2F v1 split by difficulty. This table corresponds to “MATH Number Theory” in Figure 1.
miniF2F-valid miniF2F-test
Difficulty Level 1 2 3 4 5 1 2 3 4 5
Metamath/GPT-f 0 0 0 0 0 0 0 0 0 0
Lean/tidy 8 3 2 2 2 7 4 3 2 2
Lean/GPT-f 9 5 5 4 2 10 5 5 3 2
More broadly, Lean GPT-f is capable of solving any problem that the tidy baseline or Metamath
GPT-f can solve in MiniF2F. Qualitatively, the problems on which it fail either require multiple nontrivial reasoning steps (outside a few exceptions, problems requiring more than 2 non-trivial steps of
mathematical reasoning are generally out of reach of these baselines) or require a cut introduction
that is hard to generate, such as generating a non trivial witness.
-----
| [
"Stanislas, Polu",
"Jesse Michael, Han",
"Kunhao, Zheng"
] | 2021-09-01T00:00:00 | ICLR 2022 | true | 84 | 23 | [
"Lean",
"Isabelle",
"HOL Light",
"MetaMath"
] | https://arxiv.org/abs/2109.00110 | https://arxiv.org/abs/2109.00110 | https://www.semanticscholar.org/paper/7ba98b00a224094c09676090f5d6d69498f5b299 |
EQUATE: A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference | Quantitative reasoning is a higher-order reasoning skill that any intelligent natural language understanding system can reasonably be expected to handle. We present EQUATE (Evaluating Quantitative Understanding Aptitude in Textual Entailment), a new framework for quantitative reasoning in textual entailment. We benchmark the performance of 9 published NLI models on EQUATE, and find that on average, state-of-the-art methods do not achieve an absolute improvement over a majority-class baseline, suggesting that they do not implicitly learn to reason with quantities. We establish a new baseline Q-REAS that manipulates quantities symbolically. In comparison to the best performing NLI model, it achieves success on numerical reasoning tests (+24.2 %), but has limited verbal reasoning capabilities (-8.1 %). We hope our evaluation framework will support the development of models of quantitative reasoning in language understanding. | Equal (Evaluating Quantitative Understanding Aptitude in Textual Entailment), a new framework for quantitative reasoning in textual entailment, is presented and it is found that on average, state-of-the-art methods do not achieve an absolute improvement over a majority-class baseline, suggesting that they do not implicitly learn to reason with quantities. | # EQUATE : A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference
**Abhilasha Ravichander[∗], Aakanksha Naik[∗],**
**Carolyn Rose, Eduard Hovy**
Language Technologies Institute, Carnegie Mellon University
_{aravicha, anaik, cprose, hovy}@cs.cmu.edu_
**Abstract**
Quantitative reasoning is a higher-order reasoning skill that any intelligent natural language
understanding system can reasonably be expected to handle. We present EQUATE[1] (Evaluating Quantitative Understanding Aptitude
in Textual Entailment), a new framework for
quantitative reasoning in textual entailment.
We benchmark the performance of 9 published
NLI models on EQUATE, and find that on average, state-of-the-art methods do not achieve
an absolute improvement over a majority-class
baseline, suggesting that they do not implicitly
learn to reason with quantities. We establish a
new baseline Q-REAS that manipulates quantities symbolically. In comparison to the best
performing NLI model, it achieves success on
numerical reasoning tests (+24.2%), but has
limited verbal reasoning capabilities (-8.1%).
We hope our evaluation framework will support the development of models of quantitative
reasoning in language understanding.
**1** **Introduction**
Numbers play a vital role in our lives. We reason with numbers in day-to-day tasks ranging
from handling currency to reading news articles to
understanding sports results, elections and stock
markets. As numbers are used to communicate information accurately, reasoning with them is an
essential core competence in understanding natural language (Levinson, 2001; Frank et al., 2008;
Dehaene, 2011). A benchmark task in natural language understanding is natural language inference
(NLI)(or recognizing textual entailment (RTE))
(Cooper et al., 1996; Condoravdi et al., 2003; Bos
and Markert, 2005; Dagan et al., 2006), wherein a
model determines if a natural language hypothesis
_∗*The first two authors contributed equally to this work._
[1Code and data available at https://github.com/](https://github.com/AbhilashaRavichander/EQUATE)
[AbhilashaRavichander/EQUATE.](https://github.com/AbhilashaRavichander/EQUATE)
349
**RTE-QUANT**
**P: After the deal closes, Teva will generate sales of about**
$ 7 billion a year, the company said.
**H: Teva earns $ 7 billion a year.**
**AWP-NLI**
**P: Each of farmer Cunningham’s 6048 lambs is either**
black or white and there are 193 white ones.
**H: 5855 of Farmer Cunningham’s lambs are black.**
**NEWSNLI**
**P: Emmanuel Miller, 16, and Zachary Watson, 17, are**
charged as adults, police said.
**H: Two teen suspects charged as adults.**
**REDDITNLI**
**P: Oxfam says richest one percent to own more than rest**
by 2016.
**H: Richest 1% To Own More Than Half Worlds Wealth**
By 2016 Oxfam.
Table 1: Examples from evaluation sets in EQUATE
can be justifiably inferred from a given premise[2].
Making such inferences often necessitates reasoning about quantities.
Consider the following example from Table 1,
P: With 99.6% of precincts counted, Dewhurst
held 48% of the vote to 30% for Cruz .
H: Lt. Gov. David Dewhurst fails to get 50% of
primary vote.
To conclude the hypothesis is inferable, a model
must reason that since 99.6% precincts are counted, even if all remaining precincts vote for Dewhurst, he would fail to get 50% of the primary
vote. Scant attention has been paid to building datasets to evaluate this reasoning ability. To address
this gap, we present EQUATE (Evaluating Quantity Understanding Aptitude in Textual Entailment)
(§3), which consists of five evaluation sets, each
2Often, this is posed as a three-way decision where the
hypothesis can be inferred to be true (entailment), false (contradiction) or cannot be determined.
-----
featuring different facets of quantitative reasoning
in textual entailment (Table 1) (including verbal
reasoning with quantities, basic arithmetic computation, dealing with approximations and range
comparisons.).
We evaluate the ability of existing state-of-theart NLI models to perform quantitative reasoning
(§4.1), by benchmarking 9 published models on
EQUATE. Our results show that most models are
incapable of quantitative reasoning, instead relying on lexical cues for prediction. Additionally,
we build Q-REAS, a shallow semantic reasoning
baseline for quantitative reasoning in NLI (§4.2).
Q-REAS is effective on synthetic test sets which
contain more quantity-based inference, but shows
limited success on natural test sets which require
deeper linguistic reasoning. However, the hardest
cases require a complex interplay between linguistic and numerical reasoning. The EQUATE evaluation framework makes it clear where this new
challenge area for textual entailment stands.
**2** **Related Work**
NLI has attracted community-wide interest as a
stringent test for natural language understanding
(Cooper et al., 1996; Fyodorov; Glickman et al.,
2005; Haghighi et al., 2005; Harabagiu and Hickl,
2006; Romano et al., 2006; Dagan et al., 2006;
Giampiccolo et al., 2007; Zanzotto et al., 2006;
Malakasiotis and Androutsopoulos, 2007; MacCartney, 2009; de2; Dagan et al., 2010; Angeli
and Manning, 2014; Marelli et al., 2014). Recently, the creation of large-scale datasets (Bowman
et al., 2015; wil; Khot et al., 2018) spurred the development of many neural models (Parikh et al.,
2016; Nie and Bansal, 2017; Conneau et al., 2017;
Balazs et al., 2017; Chen et al., 2017a; Radford
et al., 2018; Devlin et al., 2018).
However, state-of-the-art models for NLI treat
the task like a matching problem, which appears
to work in many cases, but breaks down in others.
As the field moves past current models of the matching variety to ones that embody more of the
reasoning we know is part of the task, we need
benchmarks that will enable us to mark progress
in the field. Prior work on challenge tasks has already made headway in defining tasks for subproblems such as lexical inference with hypernymy,
co-hyponymy, antonymy (Glockner et al., 2018;
Naik et al., 2018). In this work, we specifically
probe into quantitative reasoning.
350
De Marneffe et al. (2008) find that in a corpus
of real-life contradiction pairs collected from Wikipedia and Google News, 29% contradictions arise from numeric discrepancies, and in the RTE3 (Recognizing Textual Entailment) development
set, numeric contradictions make up 8.8% of contradictory pairs. Naik et al. (2018) find that model
inability to do numerical reasoning causes 4% of
errors made by state-of-the-art models. Sammons
et al. (2010); Clark (2018) argue for a systematic knowledge-oriented approach in NLI by evaluating specific semantic analysis tasks, identifying quantitative reasoning in particular as a focus
area. Bentivogli et al. (2010) propose creating specialized datasets, but feature only 6 examples with
quantitative reasoning. Our work bridges this gap
by providing a more comprehensive examination
of quantitative reasoning in NLI.
While to the best of our knowledge, prior work
has not studied quantitative reasoning in NLI, Roy
(2017) propose a model for a related subtask called quantity entailment, which aims to determine
if a given quantity can be inferred from a sentence. In contrast, our work is concerned with
general-purpose textual entailment which considers if a given sentence can be inferred from another. Our work also relates to solving arithmetic word problems (Hosseini et al., 2014; Mitra
and Baral, 2016; Zhou et al., 2015; Upadhyay
et al., 2016; Huang et al., 2017; Kushman et al.,
2014a; Koncel-Kedziorski et al., 2015; roy; Roy,
2017; Ling et al., 2017a). A key difference is
that word problems focus on arithmetic reasoning,
while the requirement for linguistic reasoning and
world knowledge is limited as the text is concise, straightforward, and self-contained (Hosseini
et al., 2014; Kushman et al., 2014b). Our work
provides a testbed that evaluates basic arithmetic
reasoning while incorporating the complexity of
natural language.
Recently, Dua et al. (2019) also recognize the
importance of quantitative reasoning for text understanding. They propose DROP, a reading comprehension dataset focused on a limited set of
discrete operations such as counting, comparison,
sorting and arithmetic. In contrast, EQUATE features diverse phenomena that occur naturally in
text, including reasoning with approximation, ordinals, implicit quantities and quantifiers, requiring NLI models to reason comprehensively about
the interplay between quantities and language. Ad
-----
ditionally, through EQUATE we suggest the inclusion of controlled synthetic tests in evaluation
benchmarks. Controlled tests act as basic validation of model behaviour, isolating model ability to
reason about a property of interest.
**3** **Quantitative Reasoning in NLI**
Our interpretation of “quantitative reasoning”
draws from cognitive testing and education (Stafford, 1972; Ekstrom et al., 1976), which considers
it “verbal problem-solving ability”. While inextricably linked to mathematics, it is an inclusive skill
involving everyday language rather than a specialized lexicon. To excel at quantitative reasoning,
one must interpret quantities expressed in language, perform basic calculations, judge their accuracy, and justify quantitative claims using verbal and
numeric reasoning. These requirements show a reciprocity: NLI lends itself as a test bed for quantitative reasoning, which conversely, is important
for NLI (Sammons et al., 2010; Clark, 2018). Motivated by this, we present the EQUATE (Evaluating Quantity Understanding Aptitude in Textual
Entailment) framework.
**3.1** **The EQUATE Dataset**
EQUATE consists of five NLI test sets featuring
quantities. Three of these tests for quantitative reasoning feature language from real-world sources
such as news articles and social media (§3.2; §3.3;
_§3.4). We focus on sentences containing quantities_
with numerical values, and consider an entailment
pair to feature quantitative reasoning if it is at least
one component of the reasoning required to determine the entailment label (but not necessarily
the only reasoning component). Quantitative reasoning features quantity matching, quantity comparison, quantity conversion, arithmetic, qualitative processes, ordinality and quantifiers, quantity
noun and adverb resolution[3] as well as verbal reasoning with the quantity’s textual context[4]. Appendix B gives some examples for these quantitative phenomena. We further filter sentence pairs
which require only temporal reasoning, since specialized knowledge is needed to reason about time.
These three test sets contain pairs which conflate
multiple lexical and quantitative reasoning phenomena. In order to study aspects of quantitative rea
3Such as the quantities represented in dozen, twice, teena_gers._
4For example, ⟨Obama cuts tax rate to 28%, Obama wants
to cut tax rate to 28% as part of overhaul⟩.
351
soning in isolation, EQUATE further features two
controlled synthetic tests (§3.5; §3.6).
**3.2** **RTE-Quant**
This test set is constructed from the RTE subcorpus for quantity entailment (Roy, 2017), originally drawn from the RTE2-RTE4 datasets (Dagan et al., 2006). The original sub-corpus conflates
temporal and quantitative reasoning. We discarded
pairs requiring temporal reasoning, obtaining a set
of 166 entailment pairs.
**3.3** **NewsNLI**
This test set is created from the CNN corpus (Hermann et al., 2015) of news articles with abstractive summaries. We identify summary points with
quantities, filtering out temporal expressions. For
a summary point, the two most similar sentences[5]
from the article are chosen, flipping pairs where the premise begins with a first-person pronoun
(eg:⟨“He had nine pears”, “Bob had nine pears”⟩
becomes ⟨“Bob had nine pears”, “He had nine
pears”⟩). The top 50% of similar pairs are retained to avoid lexical overlap bias. We crowdsource
annotations for a subset of this data from Amazon Mechanical Turk. Crowdworkers[6] are shown
two sentences and asked to determine whether the
second sentence is definitely true, definitely false, or not inferable given the first. We collect 5
annotations per pair, and consider pairs with lowest token overlap between premise and hypothesis and least difference in premise-hypothesis
lengths when stratified by entailment label. Top
1000 samples meeting these criteria form our final set. To validate crowdsourced labels, experts
are asked to annotate 100 pairs. Crowdsourced
gold labels match expert gold labels in 85% cases,
while individual crowdworker labels match expert
gold labels in 75.8%. Disagreements are manually resolved by experts and examples not featuring
quantitiative reasoning are filtered, leaving a set of
968 samples.
**3.4** **RedditNLI**
This test set is sourced from the popular social forum \reddit[7]. Since reasoning about quanti
5According to Jaccard similarity.
6We require crowdworkers to have an approval rate of
95% on at least 100 tasks and pass a qualification test.
7According to the Reddit User Agreement, users grant
Reddit the right to make their content available to other organizations or individuals.
-----
|Source|Test Set|Size|Classes|Data Source|Annotation Source|Quantitative Phenomena|
|---|---|---|---|---|---|---|
|Natural|RTE-Quant|166|2|RTE2-RTE4|Experts|Arithmetic, Ranges, Quan- tifiers|
||NewsNLI|968|2|CNN|Crowdworkers|Ordinals, Quantifiers, Arithmetic, Approximati- on, Magnitude, Ratios|
||RedditNLI|250|3|Reddit|Experts|Range, Arithmetic, Appro- ximation, Verbal|
|Synthetic|Stress Test|7500|3|AQuA-RAT|Automatic|Quantifiers|
||AwpNLI|722|2|Arithmetic Word Problems|Automatic|Arithmetic|
Table 2: An overview of test sets included in EQUATE. RedditNLI and Stress Test are framed as 3-class (entailment, neutral, contradiction) while RTE-Quant, NewsNLI and AwpNLI are 2-class (entails=yes/no). RTE 2-4 formulate entailment as a 2-way decision. We find that few news article headlines are contradictory, thus NewsNLI is
similarly framed as a 2-way decision. For algebra word problems, substituting the wrong answer in the hypothesis
necessarily creates a contradiction under the event coreference assumption (De Marneffe et al., 2008), thus it is
framed as a 2-way decision as well.
ties is important in domains like finance or economics, we scrape all headlines from the posts
on \r\economics, considering titles that contain
quantities and do not have meta-forum information. Titles appearing within three days of each
other are clustered by Jaccard similarity, and the
top 300 pairs are extracted. After filtering out
nonsensical titles, such as concatenated stock prices, we are left with 250 sentence pairs. Similar
to RTE, two expert annotators label these pairs,
achieving a Cohen’s kappa of 0.82. Disagreements
are discussed to resolve final labels.
**3.5** **Stress Test**
We include the numerical reasoning stress test
from (Naik et al., 2018) as a synthetic sanity
check. The stress test consists of 7500 entailment pairs constructed from sentences in algebra
word problems (Ling et al., 2017b). Focusing on
quantifiers, it requires models to compare entities
from hypothesis to the premise while incorporating quantifiers, but does not require them to perform the computation from the original algebra
word problem (eg: ⟨“NHAI employs 100 men to
build a highway of 2 km in 50 days working 8
hours a day”,“NHAI employs less than 700 men
to build a highway of 2 km in 50 days working 8
hours a day”⟩).
**3.6** **AwpNLI**
To evaluate arithmetic ability of NLI models, we
repurpose data from arithmetic word problems
(roy). They have the following characteristic structure. First, they establish a world and optionally update its state. Then, a question is posed
352
about the world. This structure forms the basis
of our pair creation procedure. World building
and update statements form the premise. A hypothesis template is generated by identifying modal/auxiliary verbs in the question, and subsequent
verbs, which we call secondary verbs. We identify the agent and conjugate the secondary verb
in present tense followed by the identified unit
to form the final template (for example, the algebra word problem ‘Gary had 73.0 dollars. He
spent 55.0 dollars on a pet snake. How many dollars did Gary have left?’ would generate the hypothesis template ‘Agent(Gary) Verb(Has) Answer(18.0) Unit(dollars) left’). For every template, the correct guess is used to create an entailed
hypothesis. Contradictory hypotheses are created
by randomly sampling a wrong guess (x ∈ Z[+] if
correct guess is an integer, and x ∈ R[+] if it is a
real number) [8]. We check for grammaticality, finding only 2% ungrammatical hypotheses, which
are manually corrected leaving a set of 722 pairs.
**4** **Models**
We describe the 9 NLI models[9] used in this study,
as well as our new baseline. The interested reader
is invited to refer to the corresponding publications for further details.
**4.1** **NLI Models**
1) Majority Class (MAJ): Simple baseline that
always predicts the majority class in test set.
8From a uniform distribution over an interval of 10 around
the correct guess (or 5 for numbers less than 5), to identify
plausible wrong guesses.
9Accuracy of all models on MultiNLI closely matches original publications (numbers in appendix A).
-----
INPUT
_Pc_ Set of “compatible” single-valued premise quantities
_Pr_ Set of “compatible” range-valued premise quantities
_H_ Hypothesis quantity
_O_ Operator set {+, −, ∗, /, =, ∩, ∪, \, ⊆}
_L_ Length of equation to be generated
_SLTL_ Symbol list (Type list (set of types fromPc ∪ _Pr ∪_ _H ∪ PcO, P)_ _r, H)_
_N_ Length of symbol list
_K_ Index of first range quantity in symbol list
_M_ Index of first operator in symbol list
OUTPUT
_ei_ Index of symbol assigned to i[th] position in postfix
equation
VARIABLES
_xi_ Main ILP variable for position i
_ci_ Indicator variable: is ei a single value?
_ri_ Indicator variable: is ei a range?
_oi_ Indicator variable: is ei an operator?
_di_ Stack depth of ei
_ti_ Type index for ei
Table 3: Input, output and variable definitions for the
Integer Linear Programming (ILP) framework used for
quantity composition
four stages: Quantity mentions are extracted and
parsed into semantic representations called NUM
SETS (§4.2.1, §4.2.2); compatible NUMSETS are
extracted (§4.2.3) and composed (§4.2.4) to form
_justifications; Justifications are analyzed to deter-_
mine entailment labels (§4.2.5).
**4.2.1** **Quantity Segmenter**
We follow Barwise and Cooper (1981) in defining
quantities as having a number, unit, and an optional approximator. Quantity mentions are identified
as least ancestor noun phrases from the constituency parse of the sentence containing cardinal numbers.
**4.2.2** **Quantity Parser**
The quantity parser constructs a grounded representation for each quantity mention in the premise
or hypothesis, henceforth known as a NUMSET [11].
A NUMSET is a tuple (val, unit, ent, adj, loc, verb,
freq, flux)[12] with:
1. val ∈ [R, R]: quantity value represented as a
range
2. unit ∈ _S: unit noun associated with quantity_
3. ent ∈ _S[φ]: entity noun associated with unit (e.g.,_
_donations worth 100$)_
11A NUMSET may be a composition of other NUMSETS .
12As in (Koncel-Kedziorski et al., 2015) S denotes all possible spans in the sentence, φ represents the empty span, and
_S[φ]=S ∪_ _φ_
Figure 1: Overview of Q-REAS baseline.
2) Hypothesis-Only (HYP): FastText classifier
(Mikolov et al., 2018) trained on only hypotheses
(Gururangan et al., 2018).
3) ALIGN: A bag-of-words alignment model inspired by MacCartney (2009).[10]
4) CBOW: A simple bag-of-embeddings sentence
representation model (wil).
5) BiLSTM: The simple BiLSTM model described by wil.
6) Chen (CH): Stacked BiLSTM-RNNs with
shortcut connections and character word embeddings (Chen et al., 2017b).
7) InferSent: A single-layer BiLSTM-RNN model with max-pooling (Conneau et al., 2017).
8) SSEN: Stacked BiLSTM-RNNs with shortcut
connections (Nie and Bansal, 2017).
9) ESIM: Sequential inference model proposed
by Chen et al. (2017a) which uses BiLSTMs with
an attention mechanism.
10) OpenAI GPT: Transformer-based language
model (Vaswani et al., 2017), with finetuning on
NLI (Radford et al., 2018).
11) BERT: Transformer-based language model
(Vaswani et al., 2017), with a cloze-style and nextsentence prediction objective, and finetuning on
NLI (Devlin et al., 2018).
**4.2** **Q-REAS Baseline System**
Figure 1 describes the Q-REAS baseline for
quantitative reasoning in NLI. The model manipulates quantity representations symbolically to make entailment decisions, and is intended to serve
as a strong heuristic baseline for numerical reasoning on the EQUATE benchmark. This model has
10Model accuracy on RTE-3 test is 61.12%, comparable to
the reported average model performance in the RTE competition of 62.4% .
353
-----
|Definitional Constraints|Col2|
|---|---|
|Range restriction Uniqueness Stack definition|x < K or x = M −1 for i ∈[0, L −1] if c = 1 i i i x ≥K and x < M for i ∈[0, L −1] if r = 1 i i i x ≥M for i ∈[0, L −1] if o = 1 i i c + r + o = 1 for i ∈[0, L −1] i i i d = 0 (Stack depth initialization) 0 d = d −2o + 1 for i ∈[0, L −1] (Stack depth update) i i−1 i|
|Syntactic Constraints||
|First two operands Last operator Last operand Other operators Other operands Empty stack Premise usage|c + r = 1 and c + r = 1 0 0 1 1 x ≥N −1 (Last operator should be one of {=, ⊆}) L−1 x = M −1 (Last operand should be hypothesis quantity) L−2 x ≤N −2 for i ∈[0, L −3] if o = 1 i i x < K for i ∈[0, L −3] if c = 1 i i x < M for i ∈[0, L −3] if r = 1 i i d = 0 (Non-empty stack indicates invalid postfix expression) L−1 x ̸= x for i, j ∈[0, L −1] if o ̸= 1, o ̸= 1 i j i j|
|Operand Access||
|Right operand Left operand|op2(x i) = x for i ∈[0, L −1] such that o = 1 i−1 i op1(x i) = x l for i, l ∈ [0, L −1] where o i = 1 and l is the largest index such that l ≤(i −2) and d = d l i|
Table 4: Mathematical validity constraint definitions for the ILP framework. Functions op1() and op2() return the
left and right operands for an operator respectively. Variables defined in table 3.
|Type Consistency Constraints|Col2|
|---|---|
|Type assignment Two type match One type match|t = TL[k] for i ∈[0, L −1] if c + r = 1 and type(SL i) = k i i i t i = t a = t b for i ∈ [0, L −1] such that o i = 1, x i ∈ {+, −, ∗, /, =, ∩, ∪, \, ⊆}, a = op1(x ), b = op2(x ) i i t i ∈ {t a, t b}, t a ̸= t b for i ∈ [0, L −1] such that o i = 1, x i = ∗, a = op1(x i), b = op2(x ) i t = t ̸= t for i ∈[0, L −1] such that o = 1, x = /, a = op1(x i), b = op2(x i) i a b i i|
|Operator Consistency Constraints||
|Arithmetic operators Range operators|c = c = 1 for i ∈[0, L −1] such that o = 1, x ∈{+, −, ∗, /, =}, a = op1(x i), b = a b i i op2(x ) i r = r = 1 for i ∈[0, L−1] such that o = 1, x ∈{∩, ∪, \}, a = op1(x i), b = op2(x i) a b i i r = 1 for i ∈[0, L −1] such that o = 1, x =⊆, b = op2(x i) b i i|
Table 5: Linguistic consistency constraint definitions for the ILP framework. Functions op1() and op2() return the
left and right operands for an operator respectively. Variables defined in table 3.
4. adj ∈ _S[φ]: adjective associated with unit if_
any[13],
5. loc ⊆ _S[φ]: location of unit (e.g.,’in the bag’)[14]_
6. verb ∈ _S[φ]: action verb associated with quanti-_
ty[15].
7. freq ⊆ _S[φ]: if quantity recurs[16]_ (e.g, ’per hour’),
8. flux ∈{increase to, increase from, decrease to,
decrease from}[φ]: if quantity is in a state of flux[17].
To extract values for a quantity, we extract cardinal numbers, recording contiguity. We normalize the number[18]. We also handle simple ratios
13Extracted as governing verb linked to entity by an amod
relation.
14Extracted as prepositional phrase attached to the quantity
and containing noun phrase.
15Extracted as governing verb linked to entity by dobj or
_nsubj relation._
16extracted using keywords per and every
17using gazetteer: increasing, rising, rose, decreasing, fal_ling, fell, drop_
18(remove “,”s, convert written numbers to float, decide the
354
such as quarter, half etc, and extract bounds (eg:
_fewer than 10 apples is parsed to [−∞, 10] app-_
les.)
To extract units, we examine tokens adjacent
to cardinal numbers in the quantity mention and
identify known units. If no known units are found,
we assign the token in a numerical modifier relationship with the cardinal number, else we assign the nearest noun to the cardinal number as
the unit. A quantity is determined to be approxi**mate if the word in an adverbial modifier relation**
with the cardinal number appears in a gazetteer[19].
If approximate, range is extended to (+/-)2% of the
numerical value, for example hundred fifty eight thousand
is 158000, two fifty eight is 258, 374m is 3740000 etc.). If
cardinal numbers are non-adjacent, we look for an explicitly
mentioned range such as ‘to’ and ‘between’.
19roughly, approximately, about, nearly, roundabout,
around, circa, almost, approaching, pushing, more or less,
in the neighborhood of, in the region of, on the order
of,something like, give or take (a few), near to, close to, in
the ballpark of
-----
|MAJ 57.8 0.0 50.7 0.0 58.4 0.0 33.3 0.0 50.0 0.0 HYP 49.4 -8.4 52.5 +1.8 40.8 -17.6 31.2 -2.1 50.1 +0.1 ALIGN 62.1 +4.3 56.0 +5.3 34.8 -23.6 22.6 -10.7 47.2 -2.8|+0.0 +0.0 +0.0 -8.1 -1.0 -5.2 -4.7 -6.8 -5.5|
|---|---|
|CBOW 47.0 -10.8 61.8 +11.1 42.4 -16.0 30.2 -3.1 50.7 +0.7 BiLSTM 51.2 -6.6 63.3 +12.6 50.8 -7.6 31.2 -2.1 50.7 +0.7 CH 54.2 -3.6 64.0 +13.3 55.2 -3.2 30.3 -3.0 50.7 +0.7 InferSent 66.3 +8.5 65.3 +14.6 29.6 -28.8 28.8 -4.5 50.7 +0.7 SSEN 58.4 +0.6 65.1 +14.4 49.2 -9.2 28.4 -4.9 50.7 +0.7 ESIM 54.8 -3.0 62.0 +11.3 45.6 -12.8 21.8 -11.5 50.1 +0.1 GPT 68.1 +10.3 72.2 +21.5 52.4 -6.0 36.4 +3.1 50.0 +0.0 BERT 57.2 -0.6 72.8 +22.1 49.6 -8.8 36.9 +3.6 42.2 -7.8|-5.2 -1.2 -3.6 -0.5 -0.7 -0.6 +2.2 -1.2 +0.9 -1.9 -1.9 -1.9 +1.9 -2.1 +0.3 -1.5 -5.7 -3.2 +8.6 +1.6 +5.8 +4.2 -2.1 +1.7|
|Q-REAS 56.6 -1.2 61.1 +10.4 50.8 -7.6 63.3 +30 71.5 +21.5|+0.5 +25.8 +10.6|
|---|---|
D Nat. Synth. All
RTE-Q ∆ NewsNLI ∆ RedditNLI ∆ NR ST ∆ AWPNLI ∆
M Avg. ∆ Avg. ∆ Avg. ∆
Table 6: Accuracies(%) of 9 NLI Models on five tests for quantitiative reasoning in entailment. M and D represent models and datasets respectively. ∆ captures improvement over majority-class baseline for a dataset. Column
Nat.Avg. reports the average accuracy(%) of each model across 3 evaluation sets constructed from natural sources
(RTE-Quant, NewsNLI, RedditNLI), whereas Synth.Avg. reports the average accuracy(%) on 2 synthetic evaluation sets (Stress Test, AwpNLI). Column Avg. represents the average accuracy(%) of each model across all 5
evaluation sets in EQUATE.
that justify the hypothesis NUMSET [22]. In this example, the expression < P1, P2, _, H1, => will_
_−_
be generated.
The set of possible equations is exponential in
number of NUMSETS, making exhaustive generation intractable. But a large number of equations
are invalid as they violate constraints such as unit
consistency. Thus, our framework uses integer linear programming (ILP) to constrain the equation
space. It is inspired by prior work on algebra word
problems (Koncel-Kedziorski et al., 2015), with
some key differences:
1. Arithmetic equations: We focus on arithmetic
equations instead of algebraic ones.
2. Range arithmetic: Quantitative reasoning involves ranges, which are handled by representing
then as endpoint-inclusive intervals and adding
the four operators (∪, ∩, \, ⊆)
3. Hypothesis quantity-driven: We optimize an
ILP model for each hypothesis NUMSET because
a sentence pair is marked “entailment” iff every
hypothesis quantity is justified.
Table 3 describes ILP variables. We impose the
following types of constraints:
**1. Definitional Constraints: Ensure that ILP**
variables take on valid values by constraining
initialization, range, and update.
**2. Syntactic Constraints: Assure syntactic vali-**
dity of generated postfix expressions by limiting
22Direct comparisons are incorporated by adding “=” as an
operator.
current value.
**4.2.3** **Quantity Pruner**
The pruner constructs “compatible” premisehypothesis NUMSET pairs. Consider the pair “Insurgents killed 7 U.S. soldiers, set off a car bomb
that killed four Iraqi policemen” and “7 US sol_diers were killed, and at least 10 Iraqis died”. Our_
parser extracts NUMSETS corresponding to “four
_Iraqi policemen” and “7 US soldiers” from pre-_
mise and hypothesis respectively. But these NUM
SETS should not be compared as they involve different units. The pruner discards such incompatible pairs. Heuristics to identify unit-compatible
NUMSET pairs include three cases- 1) direct string
match, 2) synonymy/hypernymy relations from
WordNet, 3) one unit is a nationality/ job[20] and the
other unit is synonymous with person (Roy, 2017).
**4.2.4** **Quantity Composition**
The composition module detects whether a hypothesis NUMSET is justified by composing “compatible” premise NUMSETS . For example, consider
the pair “I had 3 apples but gave one to my brother” and “I have two apples”. Here, the premise
NUMSETS P1 (“3 apples”) and P2 (“one apple”)
must be composed to deduce that the hypothesis
NUMSET _H1 (“2 apples”) is justified. Our fra-_
mework accomplishes this by generating postfix
arithmetic equations[21] from premise NUMSETS,
20Lists of jobs, nationalities scraped from Wikipedia.
21Note that arithmetic equations differ from algebraic
equations in that they do not contain unknown variables
355
-----
**Algorithm 1 PredictEntailmentLabel(P, H, C, E)**
**Input: Premise quantities P**, Hypothesis quantities H, Compatible pairs C, Equations E
**Output: Entailment label l ∈{ e, c, n }**
1: if C = ∅ **then return n**
2: J ←∅
3: L ← []
4: for qh _H do_
_∈_
5: _Jh_ _qp_ _qp_ _P, (qp, qh)_ _C_
_←{_ _|_ _∈_ _∈_ _}_
6: _J_ _J_ (qh, Jh)
_←_ _∪{_ _}_
7: _L ←_ _L + [false]_
8: for (qh, Jh) _J do_
_∈_
9: **if Jh = ∅** **then return n**
10: **for qp ∈** _Jh do_
11: _s_ MaxSimilarityClass(qp, qh)
_←_
12: **if s = e then**
13: **if ValueMatch(qp, qh) then**
14: _L[qh] = true_
15: **if !ValueMatch(qp, qh) then**
16: _L[qh] = false_
17: **if s = c then**
18: **if ValueMatch(qp, qh) then**
19: _L[qh] = c_
20: for qh _H do_
_∈_
21: _Eq_ _ei_ _E_ _hyp(ei) = qh_
_←{_ _∈_ _|_ _}_
22: **if Eq ̸= ∅** **then**
23: _L[qh] = true_
operator-operand ordering.
**3.** **Operand** **Access:** Simulate stack-based
evaluation correctly by choosing correct operatoroperand assignments.
**4. Type Consistency: Ensure that all operations**
are type-compatible.
**5. Operator Consistency: Force range operators**
to have range operands and mathematical operators to have single-valued operands.
Definitional, syntactic, and operand access
constraints ensure mathematical validity while type and operator consistency constraints add linguistic consistency. Constraint formulations are provided in Tables 4 and 5. We limit tree depth to 3
and retrieve a maximum of 50 solutions per hypothesis NUMSET, then solve to determine whether
the equation is mathematically correct. We discard
equations that use invalid operations (division by
0) or add unnecessary complexity (multiplication/
division by 1). The remaining equations are considered plausible justifications .
**4.2.5** **Global Reasoner**
The global reasoner predicts the final entailment
label as shown in Algorithm 1[23], on the assumption that every NUMSET in the hypothesis has to be
justified [24] for entailment.
**5** **Results and Discussion**
Table 6 presents results on EQUATE. All models,
except Q-REAS are trained on MultiNLI. QREAS utilizes WordNet and lists from Wikipedia.
We observe that neural models, particularly
OpenAI GPT excel at verbal aspects of quantitative reasoning (RTE-Quant, NewsNLI), whereas
Q-REAS excels at numerical aspects (Stress Test,
AwpNLI).
**5.1** **Neural Models on NewsNLI:**
To tease apart contributory effects of numerical
and verbal reasoning in natural data, we experiment with NewsNLI. We extract all entailed
pairs where a quantity appears in both premise
23MaxSimilarityClass() takes two quantities and returns a
probability distribution over entailment labels based on unit
match. Similarly, ValueMatch() detects whether two quantities match in value (this function can also handle ranges).
24This is a necessary but not sufficient condition for entailment. Consider the example, ⟨‘Sam believed Joan had 5 apples’, ‘Joan had 5 apples’⟩. The hypothesis quantities of 5 apples is justified but is not a sufficient condition for entailment.
356
24: if c ∈ _L then return c_
25: if count(L, true) = len(L) then return e
26: return n
and hypothesis, and perturb the quantity in the
hypothesis generating contradictory pairs. For
example, the pair ⟨‘In addition to 79 fatalities,
some 170 passengers were injured.’⟩ ‘The crash
took the lives of 79 people and injured some 170’,
‘entailment’ is changed to ⟨‘In addition to 79
fatalities, some 170 passengers were injured.’,
‘The crash took the lives of 80 people and injured
some 170’, ‘contradiction’⟩, assuming scalar
implicature and event coreference. Our perturbed
test set contains 218 pairs. On this set, GPT[25]
achieves an accuracy of 51.18%, as compared to
72.04% on the unperturbed set, suggesting the
model relies on verbal cues rather than numerical
reasoning. In comparison, Q-REAS achieves an
accuracy of 98.1% on the perturbed set, compared
to 75.36% on the unperturbed set, highlighting
25the best-performing neural model on EQUATE.
-----
reliance on quantities rather than verbal information. Closer examination reveals that OpenAI
switches to predicting the ‘neutral’ category for
perturbed samples instead of entailment, accounting for 42.7% of its errors, possibly symptomatic
of lexical bias issues (Naik et al., 2018).
**5.2** **What Quantitative Phenomena Are**
**Hard?**
We sample 100 errors made by Q-REAS on
each test in EQUATE, to identify phenomena not
addressed by simple quantity comparison. Our
analysis of causes for error suggest avenues for
future research:
1. **Multi-step** **numerical-verbal** **reasoning:**
Models do not perform well on examples
requiring interleaved verbal and quantitative
reasoning, especially multi-step deduction. Consider the pair ⟨“Two people were injured in the
attack”, “Two people perpetrated the attack”⟩.
Quantities “two people” and “two people” are
unit-compatible, but must not be compared.
Another example is the NewsNLI entailment
pair in Table 1. This pair requires us to identify
that 16 and 17 refer to Emmanuel and Zachary’s
ages (quantitative), deduce that this implies they
are teenagers (verbal) and finally count them
(quantitative) to get the hypothesis quantity “two
teens”. Numbers and language are intricately
interleaved and developing a reasoner capable
of handling such complex interplay is challenging.
2. Lexical inference: Lack of real world knowledge causes errors in identifying quantities
and valid comparisons. Errors include mapping
abbreviations to correct units (“m” to “meters”),
detecting part-whole coreference (“seats” can be
used to refer to “buses”), and resolving hypernymy/hyponymy (“young men” to “boys”).
3. Inferring underspecified quantities: Quantity
attributes can be implicitly specified, requiring
inference to generate a complete representation.
Consider “A mortar attack killed four people
and injured 80”. A system must infer that the
quantity “80” refers to people. On RTE-Quant,
20% of such cases stem from zero anaphora, a
hard problem in coreference resolution.
357
4. Arithmetic comparison limitations: These examples require composition between incompatible
quantities. For example, consider ⟨“There were 3
birds and 6 nests”, “There were 3 more nests than
birds”⟩. To correctly label this pair “3 birds” and
“6 nests” must be composed.
**6** **Conclusion**
In this work, we present EQUATE, an evaluation
framework to estimate the ability of models to reason quantitatively in textual entailment. We observe that existing neural approaches rely heavily
on the lexical matching aspect of the task to succeed rather than reasoning about quantities. We
implement a strong symbolic baseline Q-REAS
that achieves success at numerical reasoning, but
lacks sophisticated verbal reasoning capabilities.
The EQUATE resource presents an opportunity for the community to develop powerful hybrid neuro-symbolic architectures, combining the
strengths of neural models with specialized reasoners such as Q-REAS . We hope our insights
lead to the development of models that can more
precisely reason about the important, frequent, but
understudied, phenomena of quantities in natural
language.
**Acknowledgments**
This research was supported in part by grants
from the National Science Foundation Secure and Trustworthy Computing program (CNS1330596, CNS-15-13957, CNS-1801316, CNS1914486) and a DARPA Brandeis grant (FA875015-2-0277). The views and conclusions contained
herein are those of the authors and should not be
interpreted as necessarily representing the official
policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government.
The author Naik was supported by a fellowship
from the Center of Machine Learning and Health
at Carnegie Mellon University. The authors would
like to thank Graham Neubig, Mohit Bansal and
Dongyeop Kang for helpful discussion regarding
this work, and Shruti Rijhwani and Siddharth Dalmia for reviews while drafting this paper. The authors are also grateful to Lisa Carey Lohmueller
and Xinru Yan for volunteering their time for pilot
studies.
-----
**References**
Gabor Angeli and Christopher D Manning. 2014. Naturalli: Natural logic inference for common sense reasoning. In Proceedings of the 2014 conference on
_empirical methods in natural language processing_
_(EMNLP), pages 534–545._
Jorge Balazs, Edison Marrese-Taylor, Pablo Loyola,
[and Yutaka Matsuo. 2017. Refining raw sentence re-](http://www.aclweb.org/anthology/W17-5310)
[presentations for textual entailment recognition via](http://www.aclweb.org/anthology/W17-5310)
[attention. In Proceedings of the 2nd Workshop on](http://www.aclweb.org/anthology/W17-5310)
_Evaluating Vector Space Representations for NLP,_
pages 51–55, Copenhagen, Denmark. Association
for Computational Linguistics.
Jon Barwise and Robin Cooper. 1981. Generalized
quantifiers and natural language. In Philosophy,
_Language, and Artificial Intelligence, pages 241–_
301. Springer.
Luisa Bentivogli, Elena Cabrio, Ido Dagan, Danilo
Giampiccolo, Medea Lo Leggio, and Bernardo Ma[gnini. 2010. Building textual entailment specialized](http://www.lrec-conf.org/proceedings/lrec2010/pdf/478_Paper.pdf)
[data sets: a methodology for isolating linguistic phe-](http://www.lrec-conf.org/proceedings/lrec2010/pdf/478_Paper.pdf)
[nomena relevant to inference. In Proceedings of the](http://www.lrec-conf.org/proceedings/lrec2010/pdf/478_Paper.pdf)
_Seventh conference on International Language Re-_
_sources and Evaluation (LREC’10), Valletta, Mal-_
ta. European Languages Resources Association (ELRA).
Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Procee_dings of the conference on Human Language Tech-_
_nology and Empirical Methods in Natural Language_
_Processing, pages 628–635. Association for Compu-_
tational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts,
[and Christopher D. Manning. 2015. A large anno-](http://aclweb.org/anthology/D15-1075)
[tated corpus for learning natural language inference.](http://aclweb.org/anthology/D15-1075)
In Proceedings of the 2015 Conference on Empiri_cal Methods in Natural Language Processing, pages_
632–642, Lisbon, Portugal. Association for Computational Linguistics.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui
[Jiang, and Diana Inkpen. 2017a. Enhanced lstm for](http://aclweb.org/anthology/P17-1152)
[natural language inference. In Proceedings of the](http://aclweb.org/anthology/P17-1152)
_55th Annual Meeting of the Association for Com-_
_putational Linguistics (Volume 1: Long Papers), pa-_
ges 1657–1668, Vancouver, Canada. Association for
Computational Linguistics.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui
[Jiang, and Diana Inkpen. 2017b. Recurrent neural](http://www.aclweb.org/anthology/W17-5307)
[network-based sentence encoder with gated attenti-](http://www.aclweb.org/anthology/W17-5307)
[on for natural language inference. In Proceedings](http://www.aclweb.org/anthology/W17-5307)
_of the 2nd Workshop on Evaluating Vector Space_
358
_Representations for NLP, pages 36–40, Copenha-_
gen, Denmark. Association for Computational Linguistics.
Peter Clark. 2018. What knowledge is needed to solve
the rte5 textual entailment challenge? arXiv preprint
_arXiv:1806.03561._
Cleo Condoravdi, Dick Crouch, Valeria De Paiva,
Reinhard Stolle, and Daniel G Bobrow. 2003.
Entailment, intensionality and text understanding.
In Proceedings of the HLT-NAACL 2003 workshop
_on Text meaning-Volume 9, pages 38–45. Associati-_
on for Computational Linguistics.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc
Barrault, and Antoine Bordes. 2017. [Supervised](https://www.aclweb.org/anthology/D17-1070)
[learning of universal sentence representations from](https://www.aclweb.org/anthology/D17-1070)
[natural language inference data. In Proceedings of](https://www.aclweb.org/anthology/D17-1070)
_the 2017 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 670–680, Copenha-_
gen, Denmark. Association for Computational Linguistics.
Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox,
Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al.
1996. Using the framework. Technical report.
Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan
Roth. 2010. The fourth pascal recognizing textual
entailment challenge. Journal of Natural Language
_Engineering._
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The pascal recognising textual entailment
challenge. In Machine learning challenges. evalua_ting predictive uncertainty, visual object classificati-_
_on, and recognising tectual entailment, pages 177–_
190. Springer.
Marie-Catherine De Marneffe, Anna N. Rafferty, and
[Christopher D. Manning. 2008. Finding contradicti-](https://www.aclweb.org/anthology/P08-1118)
[ons in text. In Proceedings of ACL-08: HLT, pages](https://www.aclweb.org/anthology/P08-1118)
1039–1047, Columbus, Ohio. Association for Computational Linguistics.
Stanislas Dehaene. 2011. The number sense: How the
_mind creates mathematics. OUP USA._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding.
_arXiv preprint arXiv:1810.04805._
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. _CoRR,_
abs/1903.00161.
Ruth B Ekstrom, Diran Dermen, and Harry Horace
Harman. 1976. Manual for kit of factor-referenced
_cognitive tests, volume 102._ Educational Testing
Service Princeton, NJ.
-----
Michael C Frank, Daniel L Everett, Evelina Fedorenko, and Edward Gibson. 2008. Number as a cognitive technology: Evidence from pirah˜a language and
cognition. Cognition, 108(3):819–824.
Yaroslav Fyodorov. A natural logic inference system.
Citeseer.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan,
and Bill Dolan. 2007. The third pascal recognizing
textual entailment challenge. In Proceedings of the
_ACL-PASCAL workshop on textual entailment and_
_paraphrasing, pages 1–9. Association for Computa-_
tional Linguistics.
Oren Glickman, Ido Dagan, and Moshe Koppel. 2005.
Web based probabilistic textual entailment.
Max Glockner, Vered Shwartz, and Yoav Goldberg.
[2018. Breaking nli systems with sentences that re-](http://www.aclweb.org/anthology/P18-2103)
[quire simple lexical inferences. In Proceedings of](http://www.aclweb.org/anthology/P18-2103)
_the 56th Annual Meeting of the Association for Com-_
_putational Linguistics (Volume 2: Short Papers), pa-_
ges 650–655, Melbourne, Australia. Association for
Computational Linguistics.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A
Smith. 2018. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324.
Aria Haghighi, Andrew Ng, and Christopher Manning. 2005. Robust textual inference via graph matching. In Proceedings of Human Language Techno_logy Conference and Conference on Empirical Me-_
_thods in Natural Language Processing._
Sanda Harabagiu and Andrew Hickl. 2006. Methods
for using textual entailment in open-domain question answering. In Proceedings of the 21st Interna_tional Conference on Computational Linguistics and_
_the 44th annual meeting of the Association for Com-_
_putational Linguistics, pages 905–912. Association_
for Computational Linguistics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines
to read and comprehend. In Advances in Neural In_formation Processing Systems, pages 1693–1701._
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533._
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. 2017. Learning fine-grained expressions to
solve math word problems. In Proceedings of the
_2017 Conference on Empirical Methods in Natural_
_Language Processing, pages 805–814._
Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018.
Scitail: A textual entailment dataset from science
question answering.
359
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014a. Learning to automatically solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), volu-_
me 1, pages 271–281.
Nate Kushman, Luke S. Zettlemoyer, Regina Barzilay,
and Yoav Artzi. 2014b. Learning to automatically
solve algebra word problems. In ACL.
Stephen C Levinson. 2001. Pragmatics. In Inter_national Encyclopedia of Social and Behavioral_
_Sciences: Vol. 17, pages 11948–11954. Pergamon._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017a. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. arXiv preprint arXiv:1705.04146.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017b. Program induction by rationale gene-](https://doi.org/10.18653/v1/P17-1015)
[ration: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Mee-](https://doi.org/10.18653/v1/P17-1015)
_ting of the Association for Computational Lingui-_
_stics (Volume 1: Long Papers), pages 158–167, Van-_
couver, Canada. Association for Computational Linguistics.
Bill MacCartney. 2009. Natural language inference.
Stanford University.
Prodromos Malakasiotis and Ion Androutsopoulos.
2007. Learning textual entailment using svms and
string similarity measures. In Proceedings of the
_ACL-PASCAL Workshop on Textual Entailment and_
_Paraphrasing, pages 42–47. Association for Com-_
putational Linguistics.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa
Bentivogli, Raffaella Bernardi, Roberto Zamparelli,
et al. 2014. A sick cure for the evaluation of compositional distributional semantic models.
Tomas Mikolov, Edouard Grave, Piotr Bojanowski,
Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Con_ference on Language Resources and Evaluation_
_(LREC 2018)._
Arindam Mitra and Chitta Baral. 2016. Learning to
use formulas to solve simple arithmetic problems.
In Proceedings of the 54th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), volume 1, pages 2144–2153._
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018.
[Stress test evaluation for natural language inference.](http://www.aclweb.org/anthology/C18-1198)
-----
**Appendix**
**A** **Baseline performance on**
**MultiNLI-Dev Matched**
|Model|MultiNLI Dev|
|---|---|
|Hyp Only|53.18%|
|ALIGN|45.0%|
|CBOW|63.5%|
|BiLSTM|70.2%|
|Chen|73.7%|
|NB|74.2%|
|InferSent|70.3%|
|ESIM|76.2%|
|OpenAI Transformer|81.35%|
|BERT|83.8%|
Table 7: Performance of all baseline models used in the
paper on the matched subset of MultiNLI-Dev
Table 7 presents classification accuracies of all
baseline models used in this work on the matched
subset of MultiNLI-Dev. These scores are very
close to the numbers reported by the original publications, affirming the correctness of our baseline setup.
**B** **Examples of quantitative phenomena**
**present in EQUATE**
Table 8 presents some examples from EQUATE
which demonstrate interesting quantitative phenomena that must be understood to label the pair correctly.
In Proceedings of the 27th International Conference
_on Computational Linguistics, pages 2340–2353,_
Santa Fe, New Mexico, USA. Association for Computational Linguistics.
[Yixin Nie and Mohit Bansal. 2017. Shortcut-stacked](http://www.aclweb.org/anthology/W17-5308)
[sentence encoders for multi-domain inference.](http://www.aclweb.org/anthology/W17-5308) In
_Proceedings of the 2nd Workshop on Evaluating_
_Vector Space Representations for NLP, pages 41–_
45, Copenhagen, Denmark. Association for Computational Linguistics.
Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and
Jakob Uszkoreit. 2016. A decomposable attention
model for natural language inference. arXiv preprint
_arXiv:1606.01933._
Alec Radford, Karthik Narasimhan, Tim Salimans, and
Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido
Dagan, and Alberto Lavelli. 2006. Investigating a
generic paraphrase-based approach for relation extraction.
Subhro Roy. 2017. Reasoning about quantities in na_tural language. Ph.D. thesis, University of Illinois_
at Urbana-Champaign.
Mark Sammons, VG Vydiswaran, and Dan Roth. 2010.
Ask not what textual entailment can do for you... In
_Proceedings of the 48th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 1199–_
1208. Association for Computational Linguistics.
Richard E Stafford. 1972. Hereditary and environmental components of quantitative reasoning. Review of
_Educational Research, 42(2):183–201._
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang,
and Wen-tau Yih. 2016. Learning from explicit and
implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on
_Empirical Methods in Natural Language Proces-_
_sing, pages 297–306._
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. arXiv preprint arXiv:1706.03762.
F Zanzotto, Alessandro Moschitti, Marco Pennacchiotti, and M Pazienza. 2006. Learning textual entailment from examples. In Second PASCAL recogni_zing textual entailment challenge, page 50. PAS-_
CAL.
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Con_ference on Empirical Methods in Natural Language_
_Processing, pages 817–822._
360
-----
**Phenomenon** **Example**
**P: Sharper faces charges in Arizona and California**
Arithmetic
**H: Sharper has been charged in two states**
**P: Between 20 and 30 people were trapped in the casino**
Ranges
**H: Upto 30 people thought trapped in casino**
**P: Poll: Obama over 50% in Florida**
Quantifiers
**H: New poll shows Obama ahead in Florida**
**P: Second-placed Nancy celebrated their 40th anniversary with a win**
Ordinals
**H: Nancy stay second with a win**
Approximation **[P:][ Rwanda has dispatched 1917 soldiers]**
**H: Rwanda has dispatched some 1900 soldiers**
**P: Londoners had the highest incidence of E. Coli bacteria (25%)**
Ratios
**H: 1 in 4 Londoners have E. Coli bacteria**
**P: Treacherous currents took four lives on the Alabama Gulf coast**
Comparison
**H: Rip currents kill four in Alabama**
**P: If the abuser has access to a gun, it increases chances of death by 500%**
Conversion
**H: Victim five times more likely to die if abuser is armed**
**P: Eight suspects were arrested**
Numeration
**H: 8 suspects have been arrested**
**P: The boat capsized two more times**
Implicit
**H: His sailboat capsized three times**
Quantities
Table 8: Examples of quantitative phenomena present in EQUATE
361
-----
| [
"Abhilasha, Ravichander",
"Aakanksha, Naik",
"Carolyn, Rose",
"Eduard, Hovy"
] | 2019-01-01T00:00:00 | null | false | 81 | 7 | null | https://www.aclweb.org/anthology/K19-1033 | null | https://www.semanticscholar.org/paper/1eff94b44432a8e6af29288b8234494516579ad3 |
Tangent-CFT: An Embedding Model for Mathematical Formulas | When searching for mathematical content, accurate measures of formula similarity can help with tasks such as document ranking, query recommendation, and result set clustering. While there have been many attempts at embedding words and graphs, formula embedding is in its early stages. We introduce a new formula embedding model that we use with two hierarchical representations, (1) Symbol Layout Trees (SLTs) for appearance, and (2) Operator Trees (OPTs) for mathematical content. Following the approach of graph embeddings such as DeepWalk, we generate tuples representing paths between pairs of symbols depth-first, embed tuples using the fastText n-gram embedding model, and then represent an SLT or OPT by its average tuple embedding vector. We then combine SLT and OPT embeddings, leading to state-of-the-art results for the NTCIR-12 formula retrieval task. Our fine-grained holistic vector representations allow us to retrieve many more partially similar formulas than methods using structural matching in trees. Combining our embedding model with structural matching in the Approach0 formula search engine produces state-of-the-art results for both fully and partially relevant results on the NTCIR-12 benchmark. Source code for our system is publicly available. | A new formula embedding model that is used with two hierarchical representations, Symbol Layout Trees for appearance, and Operator Trees for mathematical content, which allows for many more partially similar formulas than methods using structural matching in trees. | [
"Behrooz, Mansouri",
"Shaurya, Rohatgi",
"C. Lee, Giles",
"Douglas W., Oard",
"Jian, Wu",
"Richard, Zanibbi"
] | 2019-09-26T00:00:00 | null | false | 81 | 5 | null | https://dl.acm.org/doi/10.1145/3341981.3344235 | null | https://www.semanticscholar.org/paper/90884139b5c1ea35f4a6547b838501f116526291 |
|
Tree-structured Decoding for Solving Math Word Problems | Automatically solving math word problems is an interesting research topic that needs to bridge natural language descriptions and formal math equations. Previous studies introduced end-to-end neural network methods, but these approaches did not efficiently consider an important characteristic of the equation, i.e., an abstract syntax tree. To address this problem, we propose a tree-structured decoding method that generates the abstract syntax tree of the equation in a top-down manner. In addition, our approach can automatically stop during decoding without a redundant stop token. The experimental results show that our method achieves single model state-of-the-art performance on Math23K, which is the largest dataset on this task. | A tree-structured decoding method that generates the abstract syntax tree of the equation in a top-down manner and can automatically stop during decoding without a redundant stop token is proposed. | # Tree-structured Decoding for Solving Math Word Problems
**Qianying Liu[∗][1], Wenyu Guan[∗][23], Sujian Li[2]** **and Daisuke Kawahara[1]**
1
Graduate School of Informatics, Kyoto University
2
Key Laboratory of Computational Linguistics, MOE, Peking University
3
School of Software and Microelectronics, Peking University
[email protected]; [email protected];
[email protected]; [email protected]
**Abstract**
Automatically solving math word problems
is an interesting research topic that needs to
bridge natural language descriptions and formal math equations. Previous studies introduced end-to-end neural network methods, but
these approaches did not efficiently consider
an important characteristic of the equation,
i.e., an abstract syntax tree. To address this
problem, we propose a tree-structured decoding method that generates the abstract syntax
tree of the equation in a top-down manner. In
addition, our approach can automatically stop
during decoding without a redundant stop token. The experimental results show that our
method achieves single model state-of-the-art
performance on Math23K, which is the largest
dataset on this task.
**1** **Introduction**
Math word problem (MWP) solving is the task of
obtaining a solution from math problems that are
described in natural language. One example of
MWP is illustrated in Figure 1. Early approaches
rely on either predefined rules or statistical machine learning-based methods to map the problems into several predefined templates in classification style or retrieval style. The major drawback
of these approaches is that they are inflexible for
new templates and require extra effort to design
rules, features and templates.
Modeling the tree structure of math equations
has been considered as an important factor when
building models for MWP. As shown in Figure 1,
each equation could be transformed into an abstract syntax tree (AST). Roy and Roth (2017)
built an expression tree and combined two classifiers for quantity relevance prediction and operator
classification. Koncel-Kedziorski et al. (2015) designed a template ranking function based on the
_∗_ This denotes equal contribution.
Problem The distance between the two places
A and B is 660 km, the car starting from A drives 32 km/h, and the
car starting from B drives 34 km/h.
The two cars are starting from the two
places at the same time in inverse direction. How many hours later would
the two cars meet?
Equation x = 660 / ( 32 + 34 )
Prefix x = / 660 + 32 34
Answer 10
## /
660
AST +
32 34
Figure 1: One example of MWP. Problem refers to the
natural language descriptions. Equation refers to the
formal math equation. Prefix refers to the prefix notation of the equation. Answer refers to the final quantity
solution. AST refers to the AST of the equation.
AST of the equations. However, these approaches
are based on traditional methods and require feature engineering.
Recently, the appearance of large-scale datasets
and the development of neural generative models
have opened a new research line for MWP. Wang
et al. (2017) cast this task as a sequence generation problem and used a sequence-to-sequence
(seq2seq) model to learn the mapping from natural language text to a math equation. Recent
approaches use the Reverse Polish notation, also
called suffix notation of equations in which operators follow their operands to implicitly model
the tree structure (Wang et al., 2018a; Chiang and
Chen, 2018). However, these studies lost sight of
important information of the math equation ASTs
2370
-----
(e.g., parent and siblings of each node), despite of
their promising results. Thus, their model has to
use extra effort to memorize various pieces of auxiliary information such as the sibling node of the
current step and learn the implicit tree structure in
the sequence.
Meanwhile, in similar tasks such as semantic
parsing and code generation, which also try to
convert natural language into a well-formed executable tree structured output, they first decode
the function or operator at the root node before
decoding the variables. Such top-down decoding
matches with the Polish notation, also called the
prefix order of the AST of math equations, shown
in Figure 1. Human also does reasoning and reference in a similar order, usually first determines the
function before filling in the variables, but not the
inverse. Thus we present a top-down hierarchical
sequence-to-tree (seq2tree) model which explicitly models the tree structure. This decoder considers the prefix order of the equation and feeds the
information of the parent and sibling nodes during
decoding.
Another advantage of this system is that it can
use a stack to monitor the decoding process and
automatically end. By pushing in new generated
nodes to the stack and popping out the completed
subtrees, the decoding process can naturally end
when the tree is completed and the stack is empty.
There is no need for a special redundant end-ofsequence (EOS) token which is used in ordinary
text sequence generation. Without the EOS token,
the model has a higher possibility to generate valid
equation answers.
In summary, the contributions of this work are
as follows:
1. We design a hierarchical seq2tree model that
can better capture information of the AST of the
equation.
2. We are the first to use prefix order decoding
for MWP.
3. We abandon the EOS token for end-to-end
equation generation.
4. Our model outperforms previous systems for
solving math word problems. On the large-scale
dataset Math23K, we achieve state-of-the-art single model performance.
**2** **Related Work**
Our work synthesizes two strands of research,
which are math word problems and seq2tree
encoder-decoder architectures.
**2.1** **Math Word Problems**
Early approaches hand engineered rule-based systems to solve MWPs (Mukherjee and Garain,
2008). The rules could only cover a limited domain of problems, while math word problems are
flexible in real-world settings.
There are currently three major research lines
in solving MWPs. The first research line maps
a problem text into logical forms, and then uses
the logical forms to obtain the equation (Shi et al.,
2015; Roy and Roth, 2015; Huang et al., 2017;
Roy and Roth, 2018). Shi et al. (2015) defined a
Dolphin language to connect math word problems
and logical forms. The major drawback is that
it requires extra human annotation for the logic
forms.
Another research line uses either a retrieval or
classification model to maintain a template, and
then fills in the slots with quantities. Kushman
et al. (2014) first introduced the idea of ‘equation
template’. For example, x = 6*7 and x = 10*5
belong to the same template x = n1*n2. They collected the first dataset of this task, ALG514, which
contained 514 samples. They brought out a twostep pipeline model, which first used a classifier
to select a template and then mapped the numbers
into the slots. One major drawback is that it cannot
solve problems beyond the templates in the training data. This two-step pipeline model was further
extended with tree-based features, ranking style
retrieval models and so on (Upadhyay and Chang,
2017; Roy and Roth, 2017). Huang et al. (2016)
released the first large-scale dataset Dophin18K
and trained a similar system on it.
The third research line directly generates the
equation. Hosseini et al. (2014) cast the problem into a State Transition Diagram of verbs and
trained a binary classifier that could solve problems with only add and minus operators. Wang
et al. (2017) first used a seq2seq model to directly
generate the equation template and released a Chinese high-quality large-scale dataset, Math23K.
Reinforcement learning was used to further improve the seq2seq framework. Wang et al. (2018b)
first extended the seq2seq model by decoding the
suffix order sequence of the equations. Wang
et al. (2018a) introduced equation normalization
techniques that leverage the duplicated template
problem. Chiang and Chen (2018) used the copy
2371
-----
|Equation Quantities Template Prefix template|x = 130 * ( 1 - 0.8 ) n : 130, n : 0.8 1 2 n * ( 1 - n ) 1 2 * n - 1 n 1 2|
|---|---|
Table 1: One example of prefix template
the help of an auxiliary stack.
**3.1** **Preprocessing**
**Significant Number Identification** A Significant Number Identification (SNI) unit is used for
reducing the noise in the input numbers. Significance refers to whether the number appears in
the equation. In MWPs, it is very common that
the input text contains irrelevant numbers, such as
date or descriptive text such as ‘third grade student’. We follow Wang et al. (2017) and simply
use whether the numbers appear in the equation as
gold labels and build an LSTM-based binary classification model to determine the significance of
the input numbers. The accuracy of this unit is
99% and thus it can efficiently reduce the noise.
**Prefix Order Equation Template** For the output equations, we first turn them into prefix order
equation templates before using them to train the
model. In Table 1 we show one example of prefix
templates. Given a problem-equation pair, we first
build the mapping between numbers in the problem and equation, ni denotes the ith number in the
problem after SNI. We use this mapping to convert
the equation into a template by replacing the numbers with ni tokens, and at last convert the template into prefix order.
To be noticed, equations can be mapped to more
than one AST. For example, n1 + n2 + n3 could
be mapped to an AST with either the first + or
the second + as the root node, and in that case
the prefix order notation would also be different.
We assume the first operator is the root node of
the AST here. Further details are shown in the
appendix.
**Equation Normalization** One math word problem can be solved by different but equivalent equations, which bring noise during training. For example, 10 - (8 + 5) and 10 - 8 - 5 are equivalent, but
the templates n1 − (n2 + _n3) and n1 −_ _n2 −_ _n3 are_
different. This problem is called the equation duplication problem. We follow Wang et al. (2018a)
and use several rules for equation normalization
mechanism to improve the semantic representation of quantities. However, in these work the
model needs extra effort to learn the implicit tree
structure and memorize tree information in the
sequence. Wang et al. (2019) proposed a twostep pipeline method that first generates a template
with unknown operators, and then uses a recursive
neural network to combine tree structure information and predict the operators. However, the topology of the AST is determined in the first step without tree structure information. To our best knowledge, we are the first to explicitly give the model
guidance of parent and sibling nodes.
**2.2** **Seq2Tree Architectures**
Seq2Tree-style encoder-decoder is mainly used in
two fields both of which try to bridge natural language and a tree structured output.
Semantic parsing is the task that translates natural language text to formal meaning logical forms
or structured queries. Code generation maps a
piece of program description to programming language source code. Dong and Lapata (2016)
first used recurrent neural networks (RNNs) based
seq2tree for semantic parsing and out performed
the seq2seq model. One drawback is that their
generation is at token level so it cannot guarantee the result is syntactically correct. Grammar
rules were used to solve this problem. Another
drawback is that they needed special tokens for
predicting branches, which are not necessary for
MWPs because all operators are binary operators. The similar framework is also used in code
generation (Zhang et al., 2016; Yin and Neubig,
2017). Alvarez-Melis and Jaakkola (2017) presented doubly recurrent neural networks to predict
tree topology explicitly. Rabinovich et al. (2017)
presented a abstract syntax network that combines
edge information for code generation. Convolution neural networks (CNNs) were used for code
generation decoding because the output program
is much longer than semantic parsing and MWPs,
and RNNs suffer from the long dependency problem (Sun et al., 2018).
**3** **Model**
Our model consists of two stages as shown in Figure 2: the encoder stage that encodes the input
natural language into a sequence of representation
vectors and the decoder stage that receives these
vectors and decodes the AST of the equation with
2372
-----
|ℎ "|Col2|
|---|---|
|||
|𝑒 "||
|ℎ #|Col2|
|---|---|
|||
|𝑒 #||
## * - n"
ℎ" ℎ# ℎ$ ℎ% ℎ'"
ℎ'# ℎ'(
𝑒" 𝑒# 𝑒$ 𝑒% ℎ'$ ℎ')
Alice has 10. …… ?
Figure 2: Framework of our seq2tree model. The blue blocks refer to the encoder. The yellow blocks refer to the
decoder. The green blocks refer to the auxiliary stack.
to alleviate the equation duplication problem. The 0
rules are listed below. <s>
_• If one long equation template could be con-_
verted into a shorter one, then it should be -
shortened. For example, n1 + _n2 +_ _n3 +_ _n3 −_
_n3 and n1 + n2 + n3 are equivalent. In this_
case the former one should be normalized as
the latter one.
_• The order of number tokens in the template_
should follow their occurrence order in the
problem text as much as possible. For example, n1 + _n3 +_ _n2 should be normalized as_
_n1 + n2 + n3._
0
<s>
-
(
+ 𝑛#
(
Figure 3: Framework of the tree structured decoder.
The yellow dotted line refers to sibling feeding. The
blue dotted line refers to previous token feeding. The
orange solid line refers to parent feeding. The dotted
line means it uses embedding information, while the
solid line means it uses the hidden state information.
have two children and number nodes must be leaf
nodes, we build a decoder which is specialized for
math equation AST, as shown in Figure 3. It extends a vanilla sequence-based LSTM decoder by
using tree-based information feeding as the input,
and also an auxiliary stack to help the model know
which is the next token to decode and automatically end the decoding process without a special
token.
**3.3.1** **Tree-based Information Feeding**
The input of each time step consists of three parts:
parent feeding, sibling feeding and previous token
feeding.
The parent feeding h[parent] refers to using the
LSTM hidden state of the parent node as the input
when decoding children nodes, which is shown as
**3.2** **Encoder**
Bi-directional Long Short Term Memory Network
(BiLSTM) is an efficient method to encode sequential information. Formally, given an input
math word problem sentence x = {xt}t[n]=1 [, we]
first embed each word into a vector et. Then these
embeddings are fed into a BiLSTM layer to model
the sequential information.
_ht = (LSTM_ (h[f]t 1[, e][t][);][ LSTM] [(][h]t[b] 1[, e][t][))][,]
_−_ _−_
(1)
where ht is the concatenation of the hidden states
_h[f]t_ [and][ h]t[b][, which are from both forward and back-]
ward LSTMs. These representation vectors are
then fed into the following decoder stage.
**3.3** **Tree-structured Decoder**
For decoding, we follow Dong and Lapata (2016)
and build a top-down hierarchical tree-structured
decoder. However, their model is built for semantic parsing and some components are unnecessary and redundant for equation template decoding. Benefited by the fact that operator nodes must
2373
-----
Generate 𝑛$ Ans:-+𝑛"𝑛#𝑛$
𝑛#
pop and push 𝑛$ pop and pop
𝑛" push 𝑛$ push
+ +𝑛"𝑛# +𝑛"𝑛#
- - - -+𝑛"𝑛#𝑛$
- - - - -
+ + ? + 𝑛$ + 𝑛$ + 𝑛$
𝑛" 𝑛# 𝑛" 𝑛# 𝑛" 𝑛# 𝑛" 𝑛# 𝑛" 𝑛#
Figure 4: One example of tree-structured decoding with the auxiliary stack.
the orange solid line in Figure 3. This can let the
model be informed of the parent node status. For
the root node, this part of the input is padded as
zeros.
The sibling feeding e[sibling] refers to using the
embedding of the left sibling node as the input
when decoding the right sibling node, which is
shown as the yellow dotted line in Figure 3. This
can let the model be informed whether we are decoding the left or right sibling. For the root node,
we use a special token ⟨s⟩ for sibling feeding. For
the left-most node, we use a special token ( for
sibling feeding.
The previous token feeding e[prev] refers to using the previous token in prefix order as the input
when decoding the next token, which is shown as
the blue dot line in Figure 3. This can let the model
be informed of what part is already decoded by the
tree. For the root node, we also use the special token ⟨s⟩ for previous token feeding.
At time step t, the input e[d]t [of the LSTM unit is]
the concatenation of these three components.
_e[d]t_ [= (][h][parent][;][ e][sibling][;][ e][prev][)][.] (2)
**3.3.2** **Tree-Structured LSTM**
The tree-structured decoder uses LSTM to generate the equation template in a top-down manner, as
the grey solid line in Figure 3, and use an auxiliary
stack to guide the decoding process. Given the input e[d]t [shown in the previous section, we generate]
the output token yt with one LSTM layer and one
Multi-layer Perceptron (MLP) layer.
_h[d]t_ [=][ LSTM] [(][h]t[d] 1[, e][d]t [)] (3)
_−_
_yt = argmax(MLP_ (h[d]t [))] (4)
**Algorithm 1: The algorithm of using the aux-**
iliary stack to guide tree decoding.
**Input: Encoder Output hn**
**Output: The Predicted Prefix Notation**
Equation Equa[p] = _s[p]i_ _i=1_
**1 initialize empty stack S and list {** _Equa[}][n][2]_ _[p];_
**2 while S.size!=1 or S.top is not quantity do**
**3** generate token yt;
**4** _tmp = S.top;_
**5** _S.push(yt);_
**6** **if yt is quantity then**
**7** **while tmp is a quantity do**
**8** subtree t = S.top[3];
**9** _S.pop;S.pop;S.pop;_
**10** _tmp = S.top;S.push(t);_
**11 Equa[p]** = S.pop;
As shown in Algorithm 1 and Figure 3, if the
predicted token yt is an operator, then we predict
the left child node of the operator and push this token into the stack S. If the predicted token yt is a
quantity, we check the stack to determine which is
the next token that needs to be decoded. If the top
of the stack is an operator, then we push yt into the
stack and go on to decode the right sibling node of
the current node. If the top of S is a quantity, we
follow Algorithm 1 to find the next position to decode. We push yt to the stack and pop out the top
three tokens, which should be ⟨op⟩⟨num⟩⟨num⟩.
These three tokens form a subtree t and we regard
this subtree t as one quantity unit in the following
process. Then we examine the status of the stack
again, if the top of the stack is still a number, we
push t back to the stack and continue until the top
of the stack is not a quantity. When the loop stops,
2374
-----
**4** **Experiments**
To demonstrate the effectiveness of our model, we
conduct experiments on the Math23K dataset. Our
method achieves the state-of-the-art (SOTA) single model performance and also exceeds the previous ensemble model SOTA.
**4.1** **Dataset**
**Math23K** Math23K is one large-scale Chinese
MWP dataset that contains 23,162 math problems
and math equation solutions. The questions are
elementary school level. Every question is linear
and contains only one unknown variable.
Although there are other large-scale datasets
such as Dolphin18K and AQuA, which are in English, they either contain many small typos (e.g.,
using x to represent ∗) or contain wrong answers
and templates. Other datasets such as ALG514
and MAWPS are much more smaller. Therefore,
we decide to conduct experiments on Math23K,
which is the only large-scale, clean and highquality dataset.
**4.2** **Implementation Details**
The embedding vectors are pretrained on the training set with the word2vec algorithm. The dimension of the embedding is 128. We use a two-layer
BiLSTM with the hidden size 512 for the encoder.
The decoder is a two-layer LSTM with 1024 hidden size. We use a teacher forcing ratio of 0.83
during training. We use cross entropy as the loss
function and Adam to optimize the parameters.
We also use dropout to avoid over-fitting. The
batch size is 128.
**4.3** **Results**
Table 2 shows the results of our system and other
novel systems of MWP on the Math23k test set.
The retrieval-style models compare a question
in the test set with the questions in the training set, choose the template that has the highest
similarity, and then fill in the numbers into the
template (Upadhyay and Chang, 2017; Robaidek
et al., 2018). The classification-style models train
a classifier to select an equation template, and
then map the numbers into the template (Kushman et al., 2014). For retrieval and classification models, we use the results of Robaidek
et al. (2018). The generation models use endto-end style encoder-decoder systems to generate
an equation template and then fill in the numbers.
if the top of the stack is an operator, we push back
_t and continue to decode the operator’s right child_
node, and if the stack is empty, the decoding process ends because the AST is completed. Here we
still push back the tree unit to the stack and treat
the status that only one number is in the stack as
the ending condition, which refers to line 2 in Algorithm 1. In this way the condition that the first
generated token is the answer number can be unified. With the help of this stack, we can guide the
decoding process, including which token to generate next and when to stop naturally without any
special tokens.
We show one example of the decoding process
with the an auxiliary stack in Figure 4. The upper half part shows the status of the stack, where
the solid blocks stand for the inserted tokens. The
lower half part shows the status of the AST during decoding, where the solid line stands for the
generated nodes and the dotted line stands for the
node that should be generated in the next step. The
decoder first generates −, +, n1 and n2 and forms
a complete subtree in the AST. This subtree +, n1
and n2 is then popped out of the stack and +n1n2
are pushed back as one unit. The model then continues to predict the sibling node of the subtree’s
root node, which is the dotted line circle in Figure 4. The top three tokens of the stack now form
a complete subtree again and is popped out of the
stack. The stack is now empty, we push it back and
the stack only contains one number unit, then the
decoding process ends and the equation template
is popped out.
**3.3.3** **Attention Mechanism**
An attention mechanism has shown its effectiveness in various natural language processing tasks.
We extend the decoder with an attention mechanism by adjusting Equation 4. Instead of directly
using the hidden state h[d]t [to predict the output to-]
ken yt, we consider relevant information from the
input vectors to better predict yt. Formally, given
the LSTM hidden state h[d]t [and the encoder outputs]
_{ht}t[n]=1[, we calculate the attention weights][ α]t[i]_ [and]
attention vector st as follows:
_n_ _n_
_exp(hi_ _h[d]t_ [)]
_st =_ _αt[i]_ _[·][h][i]_ [=] _n_ _·_
_i=1_ _i=1_ _j=1_ _[exp][(][h][j][ ·][ h]t[d][)][ ·][h][i][ (5)]_
X X
P
In lieu of Equation 4, we use the attention vector st, which considers relevance of the encoder
information to predict the output token yt.
2375
-----
**Model** **Acc**
Cosine (Robaidek et al., 2018) 23.8%
Retrieval
Jaccard (Robaidek et al., 2018) 47.2%
Self-Attention (Robaidek et al., 2018) 56.8%
Classification
Bi-LSTM (Robaidek et al., 2018) 57.9%
DNS (Wang et al., 2017) 58.1%
Generation BiLSTM+Suffix+EN (Wang et al., 2018a) 66.7%
Semantically-Aligned† (Chiang and Chen, 2018) 65.8%
T-RNN (Wang et al., 2019) 66.9%
**Ours** **69.0%**
DNS+Retrieval (Wang et al., 2017) 64.7%
Ensemble DNS+suffix+EN Ensemble (Wang et al., 2018a) 68.4%
T-RNN+Retrieval (Wang et al., 2019) 68.7%
Table 2: Math word problem solving accuracy on Math23K. † denotes that the result is 5-fold cross validation
performance. All other models are tested on the test set.
**Model** **Acc**
Prefix Baseline 66.8%
+ Sibling Feeding 67.8 %
+ Parent Feeding 68.1 %
Full Model **69.5%**
Table 3: Ablation study on Math23K by removing
modules.
Wang et al. (2017) proposed a DNS model, which
used seq2seq with SNI to generate an equation
template. Wang et al. (2018a) proposed a BiLSTM+Suffix+EN model, which extends the DNS
model by decoding the suffix notation and applies equation normalization. Chiang and Chen
(2018) introduced a copy mechanism to generate
equation templates in a semantically-aligned manner. T-RNN (Wang et al., 2019) extends the BiLSTM+Suffix+EN model by first generating a template with unknown operators and then filling in
the operators with a Recursive Neural Network.
The ensemble models use bagging to combine the
results of different models.
Our seq2tree model is also a generation-style
model. As shown in Table 2, we achieve state-ofthe-art single model performance on the test set,
and even better results than all the previous ensemble models, which can demonstrate the effectiveness of our proposed method.
**Model** **Invalid Templates**
EOS as terminator 1.3%
Stack as terminator **0.2%**
Table 4: Ablation study on Math23K by using different
terminating methods.
**5** **Analysis and Discussion**
**5.1** **Ablation Study**
To get better insight into our seq2tree system, we
conduct ablation study on Math23K development
set, which is shown in Table 3. The prefix baseline denotes the model that removes parent feeding
and sibling feeding, but only uses previous token
feeding for the input. Thus, this model loses parent and sibling information and falls into a linear
seq2seq model based on the prefix notation. The
prefix baseline performs competitive results compared to previous single model SOTA (66.9%),
which proves the effectiveness of top-down decoding. Parent feeding and sibling feeding separately improve the baseline model by 1.1% and
1.0%, demonstrating the importance of informing
the model of AST structure information.
In Table 4, we also report the percentage of
invalid templates by using different terminating
methods. We remove the auxiliary stack and use
a special end-of-sentence (EOS) token, which was
used in previous studies to terminate the decoding process (Wang et al., 2017, 2018a). We can
see that using the stack as a terminator can let
the model generate very low percentage of invalid
templates and outperforms the EOS method.
2376
-----
Problem A person is taking a trip from A to B. He took a train for n1 of the trip the first day. He
took a bus and travelled for n2 km the second day. He still needs to travel for n3 of the total
distance. How far is it from A to B?
Gold **suffix order: x = n1 1 n2 - n3 - / prefix order: x = / n1 - - 1 n2 n3**
Prediction BiLSTM+Suffix+EN: n1 n2 - n3 / (error) Ours: / n1 - - 1 n2 n3 (correct)
Figure 5: One example of our system compared with BiLSTM+Suffix+EN (Wang et al., 2018a).
**#Operators** **Proportion(%)** **Acc(%)**
0 0.1 100.0
1 17.3 82.7
2 52.2 74.5
3 19.1 59.9
4 6.6 42.4
5 3.4 44.1
6 0.9 55.6
7 0.3 0
8 0 0
9 0.1 100.0
Table 5: Accuracy of different template lengths on
Math23K.
**5.2** **Case Study**
Here we give an example that is improved by our
tree-structured decoding system. As shown in Figure 5, in the gold equation of this example, there
is a long distance between two pairs of parentchild nodes n1 and / and also 1 and −. The BiLSTM+Suffix+EN model failed to capture the relationship between these two pairs of parent-child
nodes and caused an error. Our model has a better ability to capture the relation between pairs of
parent-child nodes even if there is a long distance
between them in the notation.
**5.3** **Error Analysis**
In Table 5, we show the results of how the accuracy changes as the template becomes longer.
We can see that our model’s performance becomes
very low when the equation becomes longer. This
shows that our model has limitations at predicting long templates. This is because longer templates often match with more complex questions
which are more difficult to solve. Thus, this model
still has room for improvement on reasoning, inference and semantic understanding. Meanwhile,
only a few examples in Math23K match with
complex templates, so introducing data augmentation techniques or constructing a new dataset with
more complex examples may further improve the
**Domain** **Proportion(%)** **Acc(%)**
Distance & Speed 20.5 70.2
Tracing 27.0 74.1
Engineering 5.4 64.1
Interval 0.2 50.0
Circle Geometry 1.5 33.3
Plane Geometry 1.1 81.8
Profit 0.5 40.0
Solid Geometry 1.3 46.2
Interest Rate 0.5 80.0
Production 0.4 100.0
Table 6: Accuracy of different question domains on
Math23K.
model’s performance.
In Table 6, we examine the performance of the
model in different question domains. MWPs in
the same domain usually share similar logic, while
there is an obvious difference between questions
across different domains. Accurately detecting the
question domain is very laborious, so we do this
experiment by simply detecting frequent keywords
of each domain in the question. We show further details in the appendix. The results show that
the performance of the model has obvious variance among different domains and limitations in
some domains such as solid geometry. This is
because these domains require complicated external knowledge for solving these questions, such as
_Scircle = πr[2]. It is difficult for the model to au-_
tomatically summarize these kinds of information
with only supervision of the equation templates.
Adding external knowledge for this task may further improve the model.
**6** **Conclusion**
We proposed a sequence-to-tree generative model
to improve template generation for solving math
word problems. The hierarchical top-down treestructure decoder can use the information of the
abstract syntax tree of an equation during decoding. With the help of an auxiliary stack, this
2377
-----
decoding process can end without any redundant
special tokens. Our model achieves state-of-theart results on the large-scale dataset, Math23k,
demonstrating the effectiveness of our approach.
**Acknowledgement**
This work was supported by JSPS KAKENHI
Grant Number JP18H03286.
**References**
David Alvarez-Melis and Tommi S. Jaakkola. 2017.
Tree-structured decoding with doubly-recurrent
neural networks. In 5th International Conference
_on Learning Representations, ICLR 2017, Toulon,_
_France, April 24-26, 2017, Conference Track Pro-_
_ceedings._
Ting-Rui Chiang and Yun-Nung Chen. 2018.
Semantically-aligned equation generation for
solving and reasoning math word problems. arXiv
_preprint arXiv:1811.00720._
Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), vol-_
ume 1, pages 33–43.
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533._
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian
Yin. 2017. Learning fine-grained expressions to
solve math word problems. In Proceedings of the
_2017 Conference on Empirical Methods in Natural_
_Language Processing, pages 805–814._
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), vol-_
ume 1, pages 887–896.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), vol-_
ume 1, pages 271–281.
Anirban Mukherjee and Utpal Garain. 2008. A review
of methods for automatic understanding of natural
language mathematical problems. Artificial Intelli_gence Review, 29(2):93–122._
Maxim Rabinovich, Mitchell Stern, and Dan Klein.
2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the
_55th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
1139–1149.
Benjamin Robaidek, Rik Koncel-Kedziorski, and Hannaneh Hajishirzi. 2018. Data-driven methods for
solving algebra word problems. _arXiv preprint_
_arXiv:1804.10718._
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In Proceedings of the
_2015 Conference on Empirical Methods in Natural_
_Language Processing, pages 1743–1752._
Subhro Roy and Dan Roth. 2017. Unit dependency
graph and its application to arithmetic word problem
solving. In Thirty-First AAAI Conference on Artifi_cial Intelligence._
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transac_tions of the Association of Computational Linguis-_
_tics, 6:159–172._
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Process-_
_ing, pages 1132–1142._
Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li,
and Lu Zhang. 2018. A grammar-based structural
cnn decoder for code generation. _arXiv preprint_
_arXiv:1811.06837._
Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and
dataset for algebra word problems. In Proceedings
_of the 15th Conference of the European Chapter of_
_the Association for Computational Linguistics: Vol-_
_ume 1, Long Papers, pages 494–504._
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math word
problem to a expression tree. In Proceedings of the
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 1064–1069._
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Thirty-Second AAAI Con_ference on Artificial Intelligence._
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bingtian Dai, and Heng Tao Shen. 2019.
Template-based math word problem solvers with recursive neural networks.
2378
-----
by observation.
**Domain** **Keywords**
Distance & Speed `速` `度,千` `米,路` `程,相`
```
距,全程
```
Tracing `相` `遇,相` `对,相` `反,相`
```
背,相向
```
Engineering `工程,零件,工程队,公`
```
路,修路
```
Interval `利润`
Circle `半径,圆,直径,周长`
Plane Geometry `三` `角` `形,正` `方` `形,长` `方`
```
形,边长
```
Profit `间隔,隔`
Solid Geometry `体` `积,侧` `面` `积,横` `截`
```
面,表面积,圆柱,长方
体
```
Interest Rate `利息,利率`
Production `超产,减产`
Table 7: The list of keywords in the domain specific
studies.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854.
Pengcheng Yin and Graham Neubig. 2017. A syntactic
neural model for general-purpose code generation.
In Proceedings of the 55th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 440–450._
Xingxing Zhang, Liang Lu, and Mirella Lapata. 2016.
Top-down tree long short-term memory networks.
In Proceedings of NAACL-HLT, pages 310–320.
**A** **Prefix Notation Algorithm**
**Algorithm 2: The algorithm of converting or-**
dinary equations to its prefix notation.
**Input: The Ordinary Equation String**
_Equa =_ _si_ _i=1_
_{_ _}[n][1]_
**Output: The Prefix Notation Equation String**
_Equa[p]_ = _s[p]i_ _i=1_
**12 initialize empty stack {** _S[}] and list[n][2]_ _Equa[p]inv[;]_
**13 for i ←** _n1 to 1 do_
**14** **if si is quantity then**
**15** _Equa[p]inv[.append(][s][i][);]_
**16** **else if si is ) then**
**17** _S.push(si);_
**18** **else if si is operator then**
**19** **while True do**
**20** **if si is prior to S.top or S.top is )**
_or S is empty then_
**21** _S.push(si);_
**22** break;
**23** **else**
**24** _Equa[p]inv[.append(][S][.top);]_
**25** _S.pop();_
**26** **else if S.top is ( then**
**27** **while S.top is not ) do**
**28** _Equa[p]inv[.append(][S][.top);]_
**29** _S.pop();_
**30** _S.pop()_
**31 Equa[p]** = Equa[p]inv[.inverse()]
Here we show the details of the converting algorithm. The input is the ordinary equation string
and the output is the prefix notation of the equation.
**B** **Domain Keywords**
We show the table of the keywords for each domain. These keywords were manually collected
2379
-----
| [
"Qianying, Liu",
"Wenyv, Guan",
"Sujian, Li",
"Daisuke, Kawahara"
] | 2019-01-01T00:00:00 | EMNLP 2019 Main | true | 81 | 11 | null | https://www.aclweb.org/anthology/D19-1241 | null | https://www.semanticscholar.org/paper/cd36b981e39edc5517d2a508e879b4d9bdc1b1c6 |
AdaPlanner: Adaptive Planning from Feedback with Language Models | Large language models (LLMs) have recently demonstrated the potential in acting as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that are not adaptable to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degenerates with problem complexity and plan horizons increase. We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from feedback with both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively. The implementation of AdaPlanner is available at https://github.com/haotiansun14/AdaPlanner. | A closed-loop approach, AdaPlanner, is proposed, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback, and develops a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. | ## AdaPlanner: Adaptive Planning from Feedback with Language Models
**Haotian Sun[1][∗],** **Yuchen Zhuang[1][∗],** **Lingkai Kong[1],** **Bo Dai[1],** **Chao Zhang[1]**
1
Georgia Institute of Technology
```
{haotian.sun, yczhuang, lkkong, chaozhang}@gatech.edu, [email protected]
```
**Abstract**
Large language models (LLMs) have recently demonstrated the potential in acting
as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that
are not adaptable to environmental feedback. Consequently, the sequential decisionmaking performance of LLM agents degenerates with problem complexity and plan
horizons increase. We propose a closed-loop approach, AdaPlanner, which allows
the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from
feedback with both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore,
we propose a skill discovery mechanism that leverages successful plans as few-shot
exemplars, enabling the agent to plan and refine with fewer task demonstrations.
Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that
AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively. The implementation of AdaPlanner
[is available on https://github.com/haotiansun14/AdaPlanner.](https://github.com/haotiansun14/AdaPlanner)
**1** **Introduction**
Large language models (LLMs) have recently emerged as versatile autonomous agents for sequential decision-making in grounded environments. Traditional decision-making methodologies like
Reinforcement Learning (RL) require extensive task-specific training data and often lack the ability
to generalize across tasks and environments. In contrast, LLMs are pre-trained on massive and
diverse textual data, which gives them extensive world knowledge and the ability to reason over the
knowledge. This makes them highly versatile and able to handle complex, real-world scenarios that
may involve multiple steps of planning and decision-making.
Existing methods that leverage LLMs as autonomous agents for decision-making can be briefly
categorized into two groups (Table 1): open-loop systems and closed-loop systems. Open-loop
methods [21, 23, 5, 16, 12, 15, 14] rely on pre-determined plans to accomplish the desired task without
any feedback adaptation mechanism. On the other hand, closed-loop systems [22, 6, 9, 19, 10, 17, 20]
incorporate environment feedback to continuously monitor system behaviors and make refinements
and adjustments of the plans accordingly, which therefore is more flexible.
However, both existing open-loop and closed-loop LLM agents have inherent drawbacks. Open-loop
systems are computationally cheap and simple; however, they do not consider feedback from the
environment and stick to the initial plan, which lack of adaptability, and, thus, can easily generate
suboptimal plans. On the other hand, most existing closed-loop methods generate a fixed plan
and only update their executing actions upon environment feedback. This causes them to make
_∗These authors contributed equally to this work._
-----
Methods Feedback Utilization Instruction Type Prompting Decomposition Experience Refinement
_Open-Loop Methods_
CoT [21] - Prompting Language - -
Least-To-Most [23] - Prompting Language Sub-Goals -
Zero-Shot Planner [5] - Prompting Language - -
HuggingGPT [16] - Prompting Language Sub-Goals -
Chameleon [12] - Prompting Language Sub-Goals -
_Implicit Closed-Loop Methods with Fixed Plan_
ReAct [22] Taking Action Prompting Language - -
Inner Monologue [6] Taking Action Prompting Language - -
RCI [9] Taking Action Prompting Language - -
ProgPrompt [19] Taking Action Prompting Code - -
Code as Policies [10] Taking Action Prompting Code - -
Reflexion [17] Taking Action Prompting Language - Past Failure
_Explicit Closed-Loop Methods with Plan Refinement_
DEPS [20] Modifying Plan Prompting & Training Language Sub-Goals Past Failure
AdaPlanner Action & Plan Prompting Code Sub-Goals Past Failure & Success
Table 1: A comparison of methods that leverage LLMs for decision making. Each method’s features
are reported across five categories: 1) Environment Feedback Utilization: The method can use
feedback to decide the next action (Taking Action), revise the entire plan (Modifying Plan), or do
both (Action & Plan). 2) Instruction Type: The method may require prompting, training, or both. 3)
Prompting Style: The method can employ either natural language or code for its planning backend.
4) Task Decomposition: The method might decompose the task into sub-goals or not. 5) Experience
Refinement: The method can learn from past failure, past success, or both. The AdaPlanner proposed
in this paper is highlighted in gray.
sub-optimal decisions that adapt to the environment in the short term but could have detrimental
implications for future steps. DEPS [20] is the only exception, a method that modifies its entire plan
based on feedback from the environment. However, it requires training a plan selector to choose the
most successful plan, which requires a significant amount of task-specific data. As a result, applying
this method to different tasks can be challenging.
To address the limitations of existing LLM agents, we propose AdaPlanner, a closed-loop planning
method with LLM playing two roles – planner and refiner. The planner decomposes the task
into manageable sub-goals and predicts environmental feedback for each. During execution, the
refiner distinguishes and responds to two types of environment feedback – in-plan feedback is the
environmental observation that aligns with the prediction, and out-of-plan feedback is one that
deviates from the prediction. For in-plan feedback, the refiner can dynamically query the LLM to
perform reasoning and extract key information from in-plan feedback expressed in natural language.
This is achieved through a specific action called ask_LLM(), in which the LLM separately parses the
observation and obtains information pertinent to subsequent actions. For out-of-plan feedback, the
refiner proactively revises the entire plan and resumes to solve the current task from an intermediate
point. AdaPlanner’s adaptive closed-loop framework alleviates the need for prior knowledge about
the feedback structure and permits the agent to instantly adopt a refined plan rather than restarting
from scratch in a reset episode. This leads to a more efficient and adaptive decision-making process.
AdaPlanner operates solely via prompting, eliminating the need for a dedicated training phase and
reducing its computational cost. Furthermore, AdaPlanner leverages a code-based prompting for
precise planning and refinement. The use of code prompts facilitates task decomposition into subgoals and mitigates LLM hallucination during the decision-making process. AdaPlanner also features
a skill discovery process, which accumulates successful experiences to guide future planning. This
feature further enhances its long-term planning ability and sample efficiency.
We formally define the planning problem with LLM, and introduce open-loop vs. closed-loop control
system, which will motivate our method, in Section 2. Each component of the proposed AdaPlanner
is specified in Section 3, including code-based prompting in Section 3.1, closed-loop adaptation
in Section 3.2, and skill discovery in Section 3.3, and empirically justified in Section 4. The superior
performance of AdaPlanner on both ALFWorld and MiniWoB++ demonstrates our proposed adaptive
-----
closed-loop framework can effectively enhance planning performance, even when faced with a limited
number of samples.
**2** **Preliminaries**
**Problem Formulation.** We consider adopting an LLM as an autonomous agent to solve different
tasks in text-based environments. For initialization, the agent is provided with allowed actions A in
the environment, as well as a text-grounded task definition g ∈G from the task space G. Besides,
the initial state of the environment is also observed as o1 from the observation space . With
such inputs, the LLM agent needs to first generate an initial planning policy for solving the task ∈O _O_
_ρ(P0_ _g, o1) :_ ∆( ), where T is the total length of steps in the generated plan and ∆( ) is
_|_ _G × O →_ _A[T]_ _·_
probability simplex function. Also, the agent can interact with the environment for feedback: When
the agent interacts with the environment at the t-th step, the agent receives an observation ot
from the environment and generates a trajectory-like context ct = (o1, a[′]1[, o][2][, a]2[′] _[,][ · · ·][, a]t[′]_ 1 ∈O[, o][t][)][,]
_−_
where a[′]1[, a]2[′] _[,][ · · ·][, a]t[′]_ 1 [are the executed actions within the environment. As the agent may modify]
_−_
the actions according to the feedback, the executed actions a[′]1[, a][′]2[,][ · · ·][, a][′]t 1 [can be different from]
_−_
the actions a1, a2, _, at_ 1 in the initial plan. We denote ρ( _g, ct, Pt) as the high-level planning_
_· · ·_ _−_ _·|_
policy that generates an entire plan and π( _g, ct, Pt) as the action-generation policy conditioned_
_·|_
on a given plan Pt. Given the context ct and the entire plan at the last step Pt 1, the agent refines
_−_
future decisions. In the end, the LLM agent should model both the initial planning policy and the
environment feedback-conditioned policy to complete the given task successfully.
**Open-Loop System. An open-loop system is a non-feedback system (Figure 1), where the output is**
solely dependent on the input, without any consideration of the environmental feedback. Thus, in an
open-loop system, the entire initial plan over the time horizon T is predetermined and static by the
initial planning policy ρ( _g, o1), without any feedback-based refinement. Despite their simplicity,_
_·|_
open-loop systems are notably vulnerable to environmental changes, as they lack the capacity to
adapt or adjust their plans based on environmental feedback.
**Closed-Loop Systems. On the contrary, a closed-loop system (Figure 1) refers to a planning process**
that incorporates environment feedback to adjust and refine future decisions, involving both initial
planning ρ(·|g, o1) and two levels of feedback-based refinements, ρ(·|g, ct, Pt−1) and π(·|g, ct, Pt−1),
in the system.
_Implicit Closed-Loop Systems. After each step of interaction with the environment, implicit closed-_
loop systems will maintain the initial plan (i.e., Pt = P0) and only modify a single action based on
the feedback. Therefore, the feedback-based refinement is defined as π(a[′]t[|][g, c][t][, P][0][)][, where][ a][′]t
is the modified action from action space, while the remaining actions a>t for future steps remain the[∈A]
same as the initial plan. Although locally-optimal actions are adopted at each step, inaccuracies in
initial planning can result in task failure or non-completion.
_Explicit Closed-Loop Systems. Explicit closed-loop systems refine the entire plan based on environ-_
ment feedback following the policytime step t containing the modified future actions ρ(Pt|g, ct, Pt a−[′]1)t, where[to execute and] Pt ∈ ∆([ P]A[t][−][T][1][ −][ is the old plan modified][t]) is the refined plan at
_≥_
in the previous time step. Allowing for constant refinement and improvement of the plan, explicit
closed-loop systems can help prevent costly mistakes or missed opportunities that might arise from
adhering to outdated plans. Our proposed AdaPlanner is an explicit closed-loop system.
**3** **AdaPlanner**
**Model Architecture.** Our AdaPlanner model, shown in Figure 1, consists of two main components:
- an LLM-based agent that functions dually as a planner and a plan refiner, and
- a skill memory module designed to enhance sample efficiency through skill discovery.
The LLM-based agent, in its planner role, generates a comprehensive plan and performs preliminary
assessments to determine its feasibility. This initial planning is modeled as ρ(P0 _g, o1). As the_
_|_
plan unfolds, the agent also operates as a refiner, conducting feedback-based refinement in both
in-plan and out-of-plan manners. In-plan and out-of-plan refinement processes primarily differ
in how they impact future actions. In-plan refinement is a one-step action that integrates useful
information into the existing plan for better action grounding. After this in-plan phase, future
-----
(𝑎1, 𝑎2, ⋯, 𝑎𝑇) (𝑜1, 𝑎1, ⋯, 𝑜𝑡, 𝑎𝑡)
(𝑜1, 𝑎1, ⋯, 𝑜𝑡, 𝑎𝑡, 𝒐𝒕+𝟏)
|Col1|(𝑎, 𝑎, ⋯, 𝑎)|Col3|
|---|---|---|
|Planner|(𝑎1, 𝑎2, ⋯, 𝑎𝑇)|Environment|
||||
|Col1|(𝑜1, 𝑎1, ⋯, 𝑜𝑡, 𝑎𝑡)|Col3|
|---|---|---|
|Planner||Environment|
||||
**Implicit Closed-Loop**
**Open-Loop**
|𝑜1, 𝑎1, ⋯, 𝑜𝑇, 𝑎𝑇, 𝑜𝑇+1 Skill Memory extract info from 𝑜𝑡+1 In-Plan Refiner (𝑎1, 𝑎2, ⋯, 𝑎𝑡, 𝑎𝑡+1, ⋯, 𝑎𝑇), 𝑎 𝑡′ +1, ⋯, 𝑎′ ) 𝑇 Planner Out-of-Plan Refiner (𝑜1, 𝑎1, ⋯, 𝑜𝑡, 𝑎𝑡, 𝑜𝑡+1) AdaPlanner|𝑜1, 𝑎1, ⋯, 𝑜𝑇, 𝑎𝑇, 𝑜𝑇+1 𝑜𝑡+1 aligns with prediction Environment 𝑜𝑡+1 violates prediction Explicit Closed-Loop|
|---|---|
Figure 1: A comparison between open-loop, implicit closed-loop, and explicit closed-loop systems.
actions will be generated using the updated contextthe new information obtained from ct via in-plan refinement at timestep π(a[′]>t[|][g, c][>t] _[∪{][h][t][}] t[, P]. Out-of-plan refinement,[0][)][, where][ h][t]_ [represents]
on the other hand, leverages environmental feedback to directly revise the entire plan, denoted as
_ρ(Pt|g, ct, Pt−1). This mechanism allows for comprehensive adjustments to be made to the plan_
in response to unexpected environmental feedback. Skill memory serves as a repository, archiving
past successful plans and their respective interactions with the environment. If the agent encounters
a task resembling the skills stored in memory, these skills can serve as few-shot exemplars in the
LLM agent’s prompt. This feature improves not only sample efficiency but also reliability for future
planning.
**Environment Interaction.** AdaPlanner employs adaptive closed-loop planning and active environment interaction for task solving. It can anticipate environmental observations and proactively refine
the plan only when there is a discrepancy between expected and actual outcomes. This is achieved by
decomposing the planning process into N manageable sub-goals. During the planning and actiontaking process, the agent selects from a set of timestamps, _t1, . . ., tN_, to evaluate the success of
_{_ _}_
each sub-goal. If the sub-goal does not align with the planned prediction at timestep t _t1, . . ., tN_,
_∈{_ _}_
the environment actively sends the previous sub-trajectories (o1, a[′]1[,][ · · ·][, o][t][, a]t[′] _[, o][t][+1][)][ back to the]_
refiner for plan revision. This process allows the agent to check the success status only at N crucial
points, thereby reducing computational costs (number of API calls) and enhancing efficiency.
**3.1** **Plan Generation via Code-Based LLM Prompting**
AdaPlanner plans and refines by using Pythonic code prompts for LLMs. Consistent with previous
observations [3, 2], we have found that using code prompts instead of natural language prompts
for LLMs reduces ambiguity and misinterpretation, which significantly reduces LLM hallucination
during plan generation and refinement. We design code prompts during different stages of decisionmaking, including adaptive planning, feedback generation, and in-episode refinement. We provide a
detailed description of the prompts used at each stage in Appendix 8.3. To generate an initial plan for
solving a given task, we input a task description, the permissible actions in the environment, and,
when available, sample demonstrations of task resolution into LLM. These pieces of information
are all formatted into Pythonic code format for LLM prompting. To generate a plan for solving the
given task, AdaPlanner is prompted with the task goal, a list of admissible actions, and possibly a few
demonstrations. Figure 2 (a) shows an example programming-based plan generated by AdaPlanner for
solving a put task in the ALFWorld environment. The generated solution function is provided with
two input arguments: the first is the agent object, which encapsulates environmental information
to be used by the agent. The second is the variable start_from, which is a parameter indicating
the subgoal from which the agent will later resume its execution with a refined plan. By default,
the start_from is initialized as 1. The value of this variable can be further reassigned during the
refinement. When prompting LLM to generate the code-based plan, we design the prompt to teach
LLM to decompose a complex task into sub-goals. As shown in Figure 2(a), the generated code plan
```
solution(agent, start_from=1) consists of: 1) a general plan at the outset that decomposes
```
-----
|Environment (b) Out-of-plan Feedback (Extracted) cution Halt at Error in [Step 3]: I cannot clean lettuce 2 [Step 3] using the sinkbasin 1. I am at toilet 1 and holding None. The last three interactions before the error were: AdaPlanner Act: go to sinkbasin 2 Obs: On the sinkbasin 2, you see nothing. Out-of-plan … refine|(b) Out-of-plan Feedback (Extracted)|
|---|---|
||Error in [Step 3]: I cannot clean lettuce 2 using the sinkbasin 1. I am at toilet 1 and holding None. The last three interactions before the error were: Act: go to sinkbasin 2 Obs: On the sinkbasin 2, you see nothing. …|
|refine||
Figure 2: An illustrative example from ALFWorld to show the proposed adaptive closed-loop planning
**(a) Initial Plan (Extracted)** **Environment** **(b) Out-of-plan Feedback (Extracted)**
**def** **_solution(agent, start_from=1):_** **Execution** **Halt at** Error in [Step 3]: I cannot clean lettuce 2
**General** **# General plan: I need to get a list of receptacles to find the** **[Step 3]** using the sinkbasin 1. I am at toilet 1 and
**plan** **_lettuce, clean it, and put it in a diningtable._** holding None. The last three interactions
before the error were:
**# [Step 1] Get a list of receptacles where the lettuce is likely** **AdaPlanner** Act: go to sinkbasin 2
**Sub-plan** **_to appear._** Obs: On the sinkbasin 2, you see nothing.
**_if start_from <=_** **1:** **Out-of-plan** …
**recep_to_go=** **literal_eval(ask_LLM(f'Sort** **refine**
**In-plan**
**{agent.receptacles} in descending order based on the**
**refine** **likelihood of finding a lettuce.'))** **(c) Refined Plan (Extracted, revisions highlighted)**
**_assert recep_to_go, f'Error in [Step 1]: recep_to_go_** **def** **_solution(agent, start_from=3):_**
**should not be empty. {agent.report()}'** **_# General plan: I need to get a list of receptacles to find the lettuce,_**
**_take the lettuce to the sinkbasin, clean it, and put it in a diningtable._**
**# [Step 2] Go to each receptacle in the list until seeing a**
**_lettuce_** **# [Step 1] Get a list of receptacles where the lettuce is likely to appear.**
**_if start_from <=_** **2:** **…**
**_for receptacle in recep_to_go:_** **# [Step 2] Go to each receptacle in the list until seeing a lettuce**
**observation = agent.goto(receptacle)** **…**
**_if_** **'lettuce'** **in observation:** **# [Step 3] Identify the lettuce I just found and take it**
**_break_** **_if start_from <= 3:_**
**Predict** **_assert_** **'lettuce'** **in observation, f'Error in [Step 2]: There** **found_lettuce = 'lettuce' + ask_LLM(f'From the observation, get**
**via assert** **is no lettuce in/on {recep_to_go}. {agent.report()}'** **the identifier of the lettuce: {observation}. ')**
**observation = agent.take(found_lettuce, receptacle)**
**# [Step 3] Identify the lettuce I just found and clean it** **_assert agent.holding == found_lettuce, f'Error in [Step 3]: I cannot_**
**_if start_from <=_** **3:** **take {found_lettuce} from the {receptacle}. {agent.report()}'**
**In-plan** **found_lettuce =** **' lettuce' + ask_LLM(f'From the**
**refine** **observation, get the identifier of the lettuce: {observation}.')** **# [Step 4] Go to a sinkbasin to clean the lettuce.**
**observation = agent.take(found_lettuce, receptacle)** **_if start_from <= 4:_**
**observation = agent.clean(found_lettuce,** **'sinkbasin 1')** **observation = agent.goto('sinkbasin 1')**
**_assert_** **'clean'** **in observation, f'Error in [Step 3]: I cannot** **observation = agent.clean(found_lettuce, 'sinkbasin 1')**
**clean {found_lettuce} using the sinkbasin 1. {agent.report()}'** **_assert 'clean' in observation, f'Error in [Step 4]: I cannot clean_**
**{found_lettuce} using the sinkbasin 1. {agent.report()}'**
**# [Step 4] Go to a diningtable and put the lettuce on it.**
**…** **# [Step 5] Go to a diningtable and put the lettuce on it.**
**…**
through code. The task is to put some clean lettuce on the diningtable. The in-plan
_feedback in (a) is a sentence like On the countertop 2, you see a knife 1, a lettuce_
```
1, a saltshaker 2, and a soapbottle 1. This feedback is managed by the ask_LLM()
```
action. The execution of the initial plan might yield misaligned observations, triggering an outof-plan feedback and refinement process. For instance, the agent cannot clean the lettuce if
it is not currently located at a sinkbasin. The out-of-plan feedback in (b) assists AdaPlanner
in generating a revised plan (c) so that the agent will move to a sinkbasin before cleaning the
```
lettuce. AdaPlanner then determines to resume from step 3 within the same episode. The task can
```
be successfully completed using the refined plan.
the task into subgoals in the form of comments; and 2) a sequence of sub-plans, each consisting of
admissible actions corresponding to a specific subgoal. Such a mechanism allows our method to
handle complex, long-horizon tasks by hierarchically decomposing them into sequences of subgoals.
Furthermore, each subgoal ends with an assertion statement to test its fulfillment, which allows our
method to interact actively with the environment and later resume its execution with a refined plan.
**3.2** **Adaptive Closed-Loop Plan Refinement**
Once an initial plan is generated, AdaPlanner then prompts the LLM to correct any syntax errors. After
this, the code undergoes execution through the environment interface. The interface is responsible for
grounding the actions in the environment, and also for routing environmental observations back to
the code as a return value. This bi-directional flow allows AdaPlanner to adapt and refine its plan in
response to environmental observations in a closed-loop manner.
**In-Plan Feedback and Refinement via ask_LLM() Action. When AdaPlanner observes that the**
environment is aligned with the anticipated plan, it performs in-plan refinement. This allows it to
extract useful information from the observation that can be used for upcoming actions. To achieve
this, we provide the agent with an additional action called ask_LLM(), which is used to formulate
a plan alongside task-specific actions. The ask_LLM() function enables AdaPlanner to self-query
and perform reasoning based on specific information parsed from environmental observations. For
instance, in [Step 3] in Figure 2 (a), the ask_LLM() action extracts the identifier of the found
-----
object lettuce from the natural-language observation. This information can then be fed into later
actions. As an additional atomic action, this in-plan refinement is integrated into the plan at any
point where the planner deems a reasoning process is necessary. Existing code-generation-based
methods [19, 10, 3] face a challenge in this task, especially when there is no prior knowledge of the
structure and organization of these feedback sentences. In contrast, our AdaPlanner method leverages
LLM to parse critical information from diverse feedback presented in natural-language sentences to
streamline plan execution.
**Out-of-Plan Refinement with the Refine-Then-Resume Mechanism. After each sub-plan execu-**
tion, AdaPlanner actively checks an assertion condition to ensure that the current plan is proceeding
as expected. If the assertion fails, AdaPlanner performs out-of-plan refinement. For example, in
Figure 2 (a), after [Step 3], the agent is expected to hold lettuce. If this condition is not met,
AdaPlanner generates an error message that details the current progress of execution gathered by the
```
report() function. In ALFWorld tasks, this function provides a report of the agent’s location, the
```
object it is holding, and the last three interactions with the environment, as shown in Figure 2 (b).
AdaPlanner then utilizes this information to perform out-of-plan refinement.
During the out-of-plan refinement as in Figure 2 (c), AdaPlanner uses a prompt similar to the one
used during the initial planning stage, but with an additional feedback message that reflects the
current state. Detailed prompts are provided in Appendix 8.3. AdaPlanner then refines the plan based
on the newly acquired information and also determines the value of start_from by comparing
the plan before and after the refinement. The newly refined solution() is then executed from
the breakpoint start_from. This breakpoint contains all variable states that were saved prior to
refinement. Consequently, the current episode can continue from an intermediate checkpoint without
restarting from scratch. We call this mechanism refine-then-resume. It significantly speeds up task
completion and reduces the number of LLM calls required.
**3.3** **Skill Discovery**
Acquiring expert demonstrations for task solving can be costly, particularly as the number of tasks
increases. To address this issue, we have equipped AdaPlanner with a skill discovery feature.
This is a memory scheme that discovers and archives successful trajectories, thereby improving
planning performance when dealing with similar tasks. The skill discovery process consists of two
stages, which can be conducted alternately over several rounds, based on the interaction costs and
computation resources.
**Skill Acquisition. In the first stage, AdaPlanner attempts to solve unseen tasks, leveraging a limited**
number of human demonstrations of other simpler tasks, or even no demonstrations. The model
capitalizes on adaptive closed-loop planning to iteratively explore and refine solutions via a trial-anderror approach. Upon successful completion of a given task, the latest solution and the corresponding
interactions are treated as candidate discovered skills.
**Skill Filtering. In the second stage, we compare the planning performance with and without the**
integration of the discovered solution into the prompt. If the inclusion of this solution boosts the
success rate, it is archived as a discovered skill. Conversely, if it does not improve performance, it is
discarded. This filtering stage is crucial because the iterative closed-loop refinement may integrate
episode-specific information into the revised solution, potentially compromising its generalizability.
**4** **Evaluation**
We test AdaPlanner on two text-based decision-making environments: 1) ALFWorld [18] is a
text-based virtual household environment encompassing six distinct task types set. We evaluate
AdaPlanner on a total of 134 tasks across these six types. 2) MiniWoB++ [11] is a simulation environment that covers a large range of computer tasks. We select 9 MiniWoB++ tasks with environmental
feedback, and we also adopt and test the 53 tasks evaluated in RCI [9]. Both environments aim to
solve complicated challenges with long-horizon solutions and sparse rewards. We also carefully
designed ablation studies to justify the significance of each component in AdaPlanner. The Setup
details and prompts for AdaPlanner are depicted in Appendix 8.1 and 8.3. Detailed introductions to
each baseline are presented in Appendix 8.2 Note that we evaluate different baselines for these two
-----
|Method|Pick Clean Heat Cool Examine Pick two|All (134 tasks)|
|---|---|---|
_Training-Based Methods_
|BUTLER [18]|46.00 39.00 74.00 100.00 22.00 24.00|37.00|
|---|---|---|
_Implicit Closed-Loop Methods with Fixed Plan_
|ReAct [22] (GPT-3) ReAct [22] (GPT-3.5) Reflexion [17] (GPT-3 + 3.5) Reflexion [17] (GPT-3.5)|66.67 41.94 91.03 80.95 55.56 35.29 37.50 64.52 69.57 42.86 38.89 17.65 75.00 90.32 91.30 90.48 88.89 94.12 50.00 41.94 65.22 52.38 66.67 47.06|61.94 47.76 88.06 52.99|
|---|---|---|
_Explicit Closed-Loop Methods with Plan Refinement_
|AdaPlanner (GPT-3) AdaPlanner (GPT-3.5)|100.00 96.77 95.65 100.00 100.00 47.06 77.78 93.55 69.57 93.65 62.96 78.43|91.79 80.60|
|---|---|---|
Table 2: Success rate (%) of tested methods on six ALFWorld tasks. For ReAct and AdaPlanner,
GPT-3.5 refers to gpt-3.5-turbo, while GPT-3 represents text-davinci-002. For Reflexion,
GPT-3.5 indicates gpt-3.5-turbo. GPT-3+3.5 is used in the original Reflexion implementation,
which utilizes both GPT-3 (text-davinci-002) and GPT-3.5 (text-davinci-003) for action
generation and failure reflection, respectively. Our AdaPlanner method is prompted with one specific
example per task, making up six demonstrations in total. This is half the number of samples used in
React and Reflection. The best-performing results are marked in bold. The results of our method are
colored in gray.
benchmarks. These methods utilize task-specific samples for prompting or training purposes, thus
necessitating separate evaluations for each benchmark.
**Main Results. AdaPlanner consistently outperforms the existing baselines, achieving state-of-the-art**
performance, i.e., an overall success rate of 91.79% in ALFWorld tasks (Table 2) and 91.11% in
MiniWoB++ tasks with feedback (Table 3). Specifically, in ALFWorld, AdaPlanner equipped with
GPT-3 achieves a remarkable success rate exceeding 95% in the majority of individual tasks. It
also surpasses all other baselines in the Pick, Clean, and Examine tasks. Notably, even in the task
with the lowest performance (Pick two), AdaPlanner still outperforms BUTLER and ReAct. In the
MiniWoB++ environment, AdaPlanner demonstrates superiority over all other methods on tasks that
provide feedback. This superior performance suggests that AdaPlanner effectively leverages feedback
to refine its plans and enhance its performance. Furthermore, AdaPlanner maintains competitive
performance on tasks without feedback, achieving a success rate of 93.22%. Note that AdaPlanner’s
success rates of tasks without feedback are still comparable to CC-Net, the state-of-the-art model
requiring over 23,000 samples per task. This result highlights the efficacy of the programming-based
planning strategy employed by AdaPlanner. In both environments, AdaPlanner consistently delivers
superior or competitive performance when compared to not only training-based methods but also
implicit closed-loop methods under the same LLM models. These results affirm the effectiveness of
the proposed explicit closed-loop plan refinement in AdaPlanner.
Furthermore, we summarize the relationship between success rate (%)
and the number of samples in Figure 3. In ALFWorld, AdaPlanner yields the highest performance
with the fewest number of samples.
In MiniWoB++, our method outperforms most baselines. Notably, our
method achieves performance comparable to CC-Net but requires 600
times fewer samples. This study
highlights that AdaPlanner significantly reduces the need for extensive
demonstrations or expert trajectories,
thereby offering a more resourceefficient solution.
100
100
80
60
40
80
60
40
10 1 10[1] 10[3] 10[5]
|AdaPlann Reflexio|er n|Col3|Col4|
|---|---|---|---|
|ReAct||||
|||||
|||BUT|LER|
# Samples per task
10 1 10[1] 10[3] 10[5]
|AdaPlan RCI W|ner GE|CC-NE|T|
|---|---|---|---|
|||||
||WebN-T5-|3B||
|||||
# Samples per task
(b) MiniWoB++
(a) ALFWorld
Figure 3: Relationship between success rate (%) and the number of expert demonstrations in ALFWorld and MiniWoB++
environments. We adopt the same settings as in Table 2 (GPT3 version) and Table 3. The top-left corner represents the
pinnacle of sample efficiency.
-----
|Method|With feedback (9 tasks) No feedback (44 tasks)|All(53 tasks)|
|---|---|---|
_Training-Based Methods_
|CC-Net [7] WGE [11]|87.00 95.66 67.60 87.93|94.00 86.00|
|---|---|---|
_Finetuning-Based Methods_
|WebN-T5-3B [4]|38.50 54.67|52.00|
|---|---|---|
_Implicit Closed-Loop Methods with Fixed Plan_
|RCI [9]|81.56 92.68|91.00|
|---|---|---|
_Explicit Closed-Loop Methods with Plan Refinement_
|AdaPlanner|91.11 93.22|92.87|
|---|---|---|
Table 3: Success rate (%) of tested methods on two subsets of tasks in the MiniWoB++ environment.
RCI and AdaPlanner harness GPT-3.5 (text-davinci-003) as backends. Our AdaPlanner method
is provided with 38 human-written demonstrations; then, it automatically discovers 21 additional
examples via skill discovery, which makes up the final set of 59 examples for 53 tasks. This is around
_half the number of samples used in RCI and over one six hundredths of the number of samples used in_
CC-Net. The best-performing results are marked in bold. The results of our AdaPlanner are colored
in gray. Per-task success rates are provided in Appendix 8.5.
**Adaptive Closed-Loop Architecture Enhances Planning Performance. Figure 4a shows the**
performance v.s. the number of closed-loop refinements, under settings with different numbers of
demo samples. The detailed example selection for this study is provided in Appendix 8.1. We observe
a significant trend of increased success rates corresponding to each subsequent closed-loop plan
refinement. This indicates the AdaPlanner’s ability to consistently leverage real-time feedback for
performance enhancement, regardless of the number of samples used. Remarkably, AdaPlanner
maintains this trend of success rate enhancement even when the total number of demonstrations across
all six tasks is as low as two. Moreover, a comparison with Reflexion, depicted in Figure 4b, shows
AdaPlanner’s consistently superior performance across all iterations of closed-loop corrections. These
observations highlight AdaPlanner’s sample efficiency and its potential for real-world applications
where the number of available demonstrations is limited.
**Code Interface Mitigates Hallucination. The latest gpt-3.5-turbo is reported to be the most**
capable GPT-3.5 model while reducing the cost by a tenth compared to other prevailing GPT-3 [1]
and 3.5 models [13] (e.g., text-davinci-002 and text-davinci-003.) However, our findings
from Table 2 indicate that gpt-3.5-turbo underperforms in decision-making tasks relative to its
predecessors, i.e., text-davinci-002, in all LLM-agents. Upon examination of trajectories from
both models, we observed a noticeable hallucination with GPT-3.5 (gpt-3.5-turbo), as shown
in Appendix 8.4. We hypothesize that gpt-3.5-turbo might be a smaller-scale model that is
more prone to hallucination. Despite this, AdaPlanner demonstrates a remarkable level of resilience
against hallucination even with gpt-3.5-turbo (Table 2), while ReAct and Reflexion are more
sensitive to the hallucination issue. AdaPlanner’s resilience against hallucination can be attributed to
its use of code prompts, which provide a more formal and constrained generation space for LLM.
For comparison, we implement an ablation version of AdaPlanner without the code interface by
translating solution examples directly into plans and actions using natural language. Without the
code interface, AdaPlanner’s performance substantially drops in both ALFWorld and MiniWoB++
environments (Figure 4c), from 81% to 46% and from 93% to 66%, respectively. This significant
performance drop underscores the essential role of the code interface in AdaPlanner.
**Skill Discovery Improves Sample Efficiency. The skill discovery in AdaPlanner utilizes a long-term**
memory mechanism that retains successful solutions, thus boosting planning performance when
faced with similar tasks. An ablation study depicted in Figure 4d compares the performance of
AdaPlanner with and without the implementation of skill discovery. In the skill acquisition stage,
we provide a maximum of one demonstration. In ALFWorld, AdaPlanner is prompted with only
one expert demonstration of the simplest task (put). We evaluate the average success rate of the
method on the remaining five tasks, which are comparatively more challenging and require additional
steps for completion. In MiniWoB++, we apply zero-shot prompting, omitting any examples in the
-----
100
80
60
40
20
100
80
60
40
20
93%
|Without SD With SD|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|With SD 60% 45% 38%||||||
|19%||||||
ALFWorld MiniWoB++
80.0
77.5
75.0
72.5
70.0
67.5
90
80
70
60
50
40
With CI
|Col1|Col2|81%|Col4|Col5|Col6|
|---|---|---|---|---|---|
|46%||81%|66%|||
|||||Wit|hout CI|
ALFWorld MiniWoB++
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||# Sa|mples|
|||||2 4 6|
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||A|daPlan|ner (GP|T-3)|
||A Re Re|daPlan flexion flexion|ner (GP (GPT- (GPT-|T-3.5) 3+3.5) 3.5)|
AdaPlanner (GPT-3)
AdaPlanner (GPT-3.5)
Reflexion (GPT-3+3.5)
Reflexion (GPT-3.5)
# Samples
2
4
6
# Closed-loop corrections
(a)
# Closed-loop corrections
(b)
(c)
(d)
Figure 4: Performance comparison on 134 ALFWorld tasks in different cases. We adopt the same
settings as in Table 2. (a) and (b) presents the success rate (%) with different numbers of closed-loop
corrections: (a) compares AdaPlanner with different numbers of samples; (b) compares AdaPlanner
and Reflexion with two LLMs. (c) shows the success rate (%) of AdaPlanner with and without code
interface (CI). (d) shows the success rate (%) of AdaPlanner with and without skill discovery (SD).
Note that for (a), the number signifies the total number of samples used across all six tasks.
skill acquisition phase. For both environments, we operate the method using GPT-3.5 in adaptive
closed-loop mode, and one round of skill discovery is conducted. As Figure 4d illustrates, the
inclusion of skill discovery significantly enhances performance. In the ALFWorld tasks, the success
rate of AdaPlanner nearly doubles when skill discovery is employed. Similarly, in the MiniWoB++
tasks, the overall success rate increases by approximately 15% with skill discovery.
**Related Work**
Many works have studied how to leverage LLMs as autonomous agents to accomplish decisionmaking tasks within text-based environments. Earlier studies, like Chain-of-Thoughts [21] and
Zero-Shot Planner [5], utilize prompts to guide LLMs in generating complete action sequences for
elementary tasks. For more complex tasks, methods like HuggingGPT [16] and Chameleon [12] also
generate the initial plan of using different tools and then call the corresponding APIs for execution.
However, all these plans are created in an open-loop fashion without adapting to feedback from
external environments.
To address the limitations of open-loop systems, recent techniques have emerged that focus on
establishing closed-loop systems. These systems are capable of leveraging environmental feedback,
thereby facilitating more adaptive decision-making. ReAct [22] and Inner Monologue [6] allow
LLM agents to take single-step actions according to the environmental feedback. Reflexion [17], as
an extension of ReAct, tries to resolve this issue by enabling the ReAct agent to revise itself from
past trials and errors. Moreover, RCI [9] starts by formulating a comprehensive plan, modifying the
immediate action when the agent encounters a failure at the current step. While all the aforementioned
methods can adapt their decisions based on environmental feedback, they assume the LLM-generated
initial plan is correct and do not adjust it. Rather, they solely modify the immediate action being
executed and are easy to fall into local sub-optimal actions without considering the long-term plans.
To further enhance the agents’ both capabilities of planning and adapting to environmental feedback,
strict closed-loop architectures are proposed that can recursively refine the generated plans. DEPS [20]
is one of the examples that initially proposes an entire plan and then applies real-world feedback to
recursively refine it during execution. However, this method requires training a selector to generate a
plan that is highly probable to succeed, which makes it difficult to generalize the plans and actions to
other tasks. Besides, the required data for training the plan selector are often unavailable in practice
and expensive to collect. In contrast, AdaPlanner generates and refines plans via LLM prompting,
making it widely applicable to various decision-making problems.
**6** **Conclusion and Limitations**
We proposed AdaPlanner, a closed-loop approach enabling LLM agents to adaptively refine their
generated plans according to environment feedback. We defined two different refinement strategies,
in-plan and out-of-plan refinement, to fully leverage environment information. Furthermore, to
mitigate the LLMs’ hallucination issue and make them learn from past experience, we proposed
-----
code-style prompting and skill discovery mechanisms. Through comprehensive experiments, we
demonstrated that AdaPlanner outperforms the state-of-the-art baselines significantly and has better
sample efficiency. Our ablation studies also showed the effectiveness of different components in
AdaPlanner. One limitation of AdaPlanner is that it still requires few-shot expert demonstrations
for solving complex tasks. Although AdaPlanner has already achieved better sample efficiency than
existing methods, it is interesting to study how to further enhance AdaPlanner to solve complex tasks
with no demonstrations in the future.
**7** **Broader Impacts**
Our research approach focuses on treating LLMs as autonomous agents and improving their ability
to solve complex sequential decision-making tasks. However, this research line carries inherent risks,
including security threats, potential misuse, and unintended consequences such as job displacement
due to automation. To mitigate these risks, it is essential for researchers and policymakers to collaborate in creating and implementing effective regulations to guide the development and deployment of
these technologies toward positive outcomes. Additionally, we believe that the research community
should coordinate efforts to design principles and techniques that prioritize safety and human values
before LLM agents are deployed in various industries. This will help ensure that LLMs are aligned
with ethical and moral standards while promoting their positive impact on society.
**References**
[1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural
_information processing systems, 33:1877–1901, 2020._
[2] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling
computation from reasoning for numerical reasoning tasks. arXiv, page 2211.12588v3, 2022.
[3] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig. Pal: Programaided language models. arXiv, page 2211.10435v2, 2022.
[4] I. Gur, O. Nachum, Y. Miao, M. Safdari, A. Huang, A. Chowdhery, S. Narang, N. Fiedel, and
A. Faust. Understanding html with large language models. arXiv, page 2210.03945v1, 2022.
[5] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners:
Extracting actionable knowledge for embodied agents. arXiv, page 2201.07207v2, 2022.
[6] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch,
Y. Chebotar, P. Sermanet, N. Brown, T. Jackson, L. Luu, S. Levine, K. Hausman, and B. Ichter.
Inner monologue: Embodied reasoning through planning with language models. arXiv, page
2207.05608v1, 2022.
[7] P. C. Humphreys, D. Raposo, T. Pohlen, G. Thornton, R. Chhaparia, A. Muldal, J. Abramson,
P. Georgiev, A. Goldin, A. Santoro, and T. Lillicrap. A data-driven approach for learning
to control computers. arXivProceedings of the 39th International Conference on Machine
_Learning, Baltimore, Maryland, USA, PMLR 162, 2022, page 2202.08137v2, 2022._
[8] E. Jang. Can llms critique and iterate on their own outputs? evjang.com, 2023.
[9] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks. arXiv, page
2303.17491v1, 2023.
[10] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as
policies: Language model programs for embodied control. arXiv, page 2209.07753v3, 2022.
[11] E. Z. Liu, K. Guu, P. Pasupat, and P. Liang. Reinforcement learning on web interfaces using
workflow-guided exploration. In International Conference on Learning Representations, 2018.
[12] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao.
Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint
_arXiv:2304.09842, 2023._
-----
[13] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,
K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback.
_Advances in Neural Information Processing Systems, 35:27730–27744, 2022._
[14] A. Parisi, Y. Zhao, and N. Fiedel. Talm: Tool augmented language models. arXiv preprint
_arXiv:2205.12255, 2022._
[15] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and
T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint
_arXiv:2302.04761, 2023._
[16] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang. Hugginggpt: Solving ai tasks with
chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023.
[17] N. Shinn, B. Labash, and A. Gopinath. Reflexion: an autonomous agent with dynamic memory
and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
[18] M. Shridhar, X. Yuan, M.-A. Cote, Y. Bisk, A. Trischler, and M. Hausknecht. {ALFW}orld:
Aligning text and embodied environments for interactive learning. In International Conference
_on Learning Representations, 2021._
[19] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and
A. Garg. Progprompt: Generating situated robot task plans using large language models. In
_Second Workshop on Language and Reinforcement Learning, 2022._
[20] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang. Describe, explain, plan and select: Interactive
planning with large language models enables open-world multi-task agents. _arXiv, page_
2302.01560v1, 2023.
[21] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chainof-thought prompting elicits reasoning in large language models. arXiv, page 2201.11903v6,
2022.
[22] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao. React: Synergizing
reasoning and acting in language models. In The Eleventh International Conference on Learning
_Representations, 2023._
[23] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet,
Q. Le, and E. Chi. Least-to-most prompting enables complex reasoning in large language
models. arXiv, page 2205.10625v2, 2022.
-----
**8** **Supplementary Material**
**8.1** **Experimental Setup**
**ALFWorld [18] is a comprehensive suite of synthetic, text-based environments, encompassing**
six distinct task types set, Pick, Clean, Heat, Cool, Examine, and Pick two, within a virtual
household. Each task possesses a unique high-level objective (e.g., put some vase in safe, etc.)
that necessitates agent navigation and interaction with various objects or receptacles (e.g., go to
```
shelf 6, clean apple, etc.). To fulfill the stipulated task, the agent is required to implement a
```
sequence of actions aimed at accomplishing the predetermined goal. However, given that an object
may potentially reside in any one of over 50 possible locations in a task instance, the agent must
sequentially explore each of these. Consequently, the entire action trajectory could involve more than
50 individual actions, presenting a significant challenge to the agent.
**MiniWoB++ [11] is a task suite of simulation environments that covers a large range of com-**
puter tasks for net agents. The computer tasks start from simple button-clicking to more
challenging ones with longer time horizons (e.g., click-checkboxes-large), reasoning (e.g.,
```
click-checkboxes-soft), unexpected pop-ups (e.g., login-user-popup ), and stochastically
```
varying layouts (e.g., multi-orderings, multi-layouts). These challenges are suitable for
evaluating our proposed closed-loop framework. Each task interacts with a 160px × 210px
web environment, where the state space is purely the HTML code of the web. Following
RCI [9], we define the actions space as two sets of operations, i.e., clicking and typing actions. The clicks allow the agent to interact with clickable HTML elements (e.g., webpage buttons). The typings are conducted with keyboard-based actions, such as inputting characters into
the input box and stroking functional keys (e.g., ENTER, BACKSPACE). We select nine MiniWoB++ tasks where environment observations (i.e., the HTML code) change after certain actions:
```
search-engine, tic-tac-toe, terminal, login-user-popup, guess-number, email-inbox,
email-inbox-nl-turk, email-inbox-forward-nl, and email-inbox-forward-nl-turk.
```
Take the task terminal as an example. Search results appear on the webpage after inputting
the keyword and pressing the search button. Therefore, environment feedback can be interpreted
from the change in HTML code and further leveraged by closed-loop planning. We also adopt and
test the 53 tasks evaluated in RCI [9].
**Metrics. Consistent with previous works [18, 22, 8, 7, 4, 11, 9], we use success rate (%) to evaluate**
the performance of tested methods. The success rate is defined as the number of successful episodes
over the total number of episodes. Note that in ALFWorld, failure of an episode occurs when the total
number of actions attains 50, with the task still unsolved. In MiniWoB++, failures can occur in two
scenarios: either due to the execution of invalid actions or if the task remains unfinished following
the execution of the entire plan.
**8.2** **Baseline Details**
**ALFWorld. Following a set of previous works [18, 22, 8], we evaluate AdaPlanner on 134 different**
environments. By default, we include one sample as an exemplar per task to prompt AdaPlanner.
For the study presented in Figure 4a, we adopt the setting of prompted samples as in Table 4.
For the study in Figure 4d, we use one sample of the simplest task put to prompt the rest of the
five tasks, which are more complex and require more steps to solve. For baselines, we compare
AdaPlanner with BUTLER [18], ReAct [22], and Reflexion [8]. BUTLER [18] is an imitation
learning method trained with 100k samples per task. ReAct and Reflexion, as outlined in Table 1,
are prompting-based methodologies utilizing an implicit closed-loop framework. They employ a
total of 6 and 8 samples, respectively, across all six tasks. BUTLER results are sourced from [18].
We evaluate ReAct, Reflexion, and AdaPlanner empowered by both GPT-3 (text-davinci-002)
and GPT-3.5 (gpt-3.5-turbo and text-davinci-003) models. MiniWoB++. Overall, we report
the evaluation results of RCI [9] and the proposed AdaPlanner in GPT-3.5 (text-davinci-003),
along with three training or finetuning-based baselines: Computer Control Agent Architecture(CCNet) [7], Workflow-Guided Exploration(WGE) [11], and WebN-T5-3B [4]. CC-Net and WGE
employ supervised learning and reinforcement learning with over 23K and 10 demonstrations per
task, respectively. WebN-T5-3B uses 12K demonstrations to finetune a pre-trained language model.
RCI is a prompting-based approach that is categorized as the implicit closed-loop method in Table 1,
which utilizes 93 samples across the 53 tasks. For these 53 tasks, we first provide AdaPlanner with 38
-----
|# samples|Pick Clean Heat Cool Examine Pick two|
|---|---|
|2 4 6|Clean Clean Clean Clean Examine Examine Pick Clean Clean Clean Examine Pick two Pick Clean Heat Cool Examine Pick two|
|---|---|
Table 4: The specific allocation of samples for prompting each task is divided into three cases based
on the total number of samples (2, 4, and 6) used across the six types of tasks. For instance, when
a total of 2 samples are used for all tasks, a single expert trajectory sample for the Clean task is
utilized to prompt four tasks (Pick, Clean, Heat, and Cool). Similarly, a sample from the Examine
task is used to prompt two tasks (Examine and Pick two).
human-written demonstrations and perform skill discovery to obtain another 21 additional examples,
i.e., 59 examples are used for 53 tasks. Evaluations results of RCI, CC-Net, WGE, and WebN-T5-3B
are sourced from the works of [9, 7, 11, 4], respectively.
**8.3** **Prompts**
**8.3.1** **ALFWorld**
**Basic Information. The <basic_info> defines the agent and admissible actions for AdaPlanner.**
Note that the actual definitions of action functions are not specified in the prompt. Instead, only a
formalized definition with several examples is provided, such that the planner can acquire how to
compose a plan based on these actions. As can be seen in the following part, this <basic_info>
prompt is used in both <initial_planning> and <refinement> prompts.
```
<basic_info> Prompt
# You are a household agent. Here is some Python code defining a household
# Use literal_eval to convert the answer from ask() to a list.
# In the environment, you can ask questions to an assistant by ask():
# for example: You have a list of receptacles, and you want to sort them by the
likelihood of a soapbar appearing in them. You can do this by asking the
receptacles = ['countertop 1', 'garbagecan 1', 'sinkbasin 2', 'sinkbasin 1',
answer = ask(f'Sort the list of receptacles, starting from the one a soapbar is
most likely to appear: {receptacles}. You should return a Python list.')
# answer = ['sinkbasin 1', 'sinkbasin 2', 'countertop 1', 'towelholder 1',
# Agent class represents the state of the agent, including its location,
```
```
# Here are the admissible actions the agent can take:
# Go to a receptacle and update the agent's location.
# For example, 'On the countertop 1, you see a candle 1, a cloth 2, and a
# For example, 'On the sidetable 2, you see nothing.' = goto('sidetable 2')
# Take an object from a receptacle if the agent is not holding anything.
```
```
environment:
from ast import literal_eval
from large_language_model import ask_llm as ask
assistant:
'toilet 1', 'toiletpaperhanger 1', 'towelholder 1']
'toiletpaperhanger 1', 'garbagecan 1', 'toilet 1']
# what it's holding as well as the actions it can take.
class Agent:
def __init__(self, receptacles):
self.location = None
self.holding = None
self.receptacles = receptacles
soapbar 1.' = goto('countertop 1')
def goto(self, receptacle):
...
```
-----
```
# For example, 'You pick up the soapbar 1 from the towelholder 1.' =
take('soapbar 1', 'towelholder 1')
def take(self, object, receptacle):
...
# Put an object in or on a receptacle if the agent is holding it.
# For example, 'You put the soapbar 1 in/on the cabinet 1.' = put('soapbar
1', 'cabinet 1')
def put(self, object, receptacle):
...
# Open a receptacle and observe its contents.
# For example, 'You open the cabinet 1. The cabinet 1 is open. In it, you
see a cloth 1.' = open_receptacle('cabinet 1')
def open_receptacle(self, receptacle):
...
# Clean an object with a receptacle.
# For example, 'You clean the soapbar 1 using the sinkbasin 1.' =
clean('soapbar 1', 'sinkbasin 1')
def clean(self, object, receptacle):
...
# Heat an object with a receptacle.
# For example, 'You heat the tomato 1 using the microwave 1.' = heat('tomato
1', 'microwave 1')
def heat(self, object, receptacle):
...
# Cool an object with a receptacle.
# For example, 'You cool the pan 2 using the fridge 1.' = cool('pan 2',
'fridge 1')
def cool(self, object, receptacle):
...
# Turn on an object.
# For example, 'You turn on the desklamp 1.' = turn_on('desklamp 1')
def turn_on(self, object):
...
# Report agent's current state, including its location, what it's holding,
and last action and observation.
# This function should only be used in assertion.
def report(self):
...
```
**Initial Planning. The <initial_planning> prompt is employed to generate the preliminary plan.**
In this context, <basic_info> is substituted by the content of the <basic_info> prompt. The
```
<sample> is replaced with an expert trajectory, while <receptacle_list> is substituted by the list
```
of interactive receptacles provided by the task environment. Finally, <task> is substituted by the
task description, expressed in natural language.
```
<initial_planning> Prompt
# Now complete the function solution() below to solve the task by composing the
# For each step you plan to take, 1) mark with '[Step xx]', 2) give a reason why
you think it is a good step to take 3) write an assertion to check if the step
```
```
<basic_info>
agent's methods to interact with the environment.
is successful.
# Here is an example of a solution to the task:
<sample>
```
-----
```
# Here is the actual task.
# define environment and agent
receptacles = <receptacle_list>
agent = Agent(receptacles)
# <task>
# You should complete your solution function below:
def solution(agent, start_from=1):
```
**Samples. In ALFWorld, there are six types of tasks: Pick, Clean, Heat, Cool, Examine, and**
```
Pick two. For each type, we gather one expert sample of solutions that the planner can refer to.
```
These six expert samples are presented as follows:
The expert sample for the task Pick:
```
<sample_pick> Prompt
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
# General Plan: I need to get a list of receptacles where the soapbar is
likely to appear, and then go to each receptacle in the list until seeing a
soapbar. Then I can put get the identifier of the soapbar and take it.
Finally I can go to the countertop and put the soapbar.
print("[Step 1] get a list of receptacles where the soapbar is likely to
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a soapbar in each of
them. The list of receptacles is: {agent.receptacles}. You should
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 1]: recep_to_check should not be
```
```
print("[Step 2] go to each receptacle in the list until seeing a
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
observation = agent.open_receptacle(receptacle)
# check if a soapbar is in/on the receptacle.
# expectation: I should be able to find a receptacle where a soapbar is
assert 'soapbar' in observation, f'Error in [Step 2]: There is no
soapbar in/on {recep_to_check}. {agent.report()}'
print("[Step 3] identify the soapbar I juts found and take it")
# I need to get the identifier of the soapbar. I can ask the assistant
```
```
# define environment and agent
'fridge 1']
agent = Agent(receptacles)
# Your task is to: put soapbar on countertop.
# here is a solution:
def solution(agent, start_from=1):
if start_from <= 1:
appear")
# I can ask the assistant to do that.
directly return a Python list.')
recep_to_check = literal_eval(answer)
empty. {agent.report()}'
if start_from <= 2:
soapbar")
for receptacle in recep_to_check:
if 'closed' in observation:
if 'soapbar' in observation:
break
in/on it.
if start_from <= 3:
to do that.
```
-----
```
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation} The identifier of the
soap? Only Output a single number without any other words. ')
found_soapbar = f'soapbar {answer}'
observation = agent.take(found_soapbar, receptacle)
# expectation: I should be able to take the soapbar from the receptacle.
assert agent.holding == found_soapbar, f'Error in [Step 3]: I cannot
take {found_soapbar} from the {receptacle}. {agent.report()}'
if start_from <= 4:
print("[Step 4] go to a countertop and put the soapbar on it")
# There are multiple countertops, and I only need to go to one of them.
observation = agent.goto('countertop 1')
# check if the countertop is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('countertop 1')
observation = agent.put(found_soapbar, 'countertop 1')
# expectation: I should be able to put the soapbar on the countertop.
assert f'You put the {found_soapbar} in/on the countertop 1.' in
observation, f'Error in [Step 4]: I cannot put the {found_soapbar} on
the countertop 1. {agent.report()}'
```
The expert sample for the task Clean:
```
<sample_clean> Prompt
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
# Your task is to: put a clean lettuce in diningtable / clean a lettuce and put
# General plan: I need to get a list of receptacles to find the lettuce,
take the lettuce to the sinkbasin, clean it and put it in a diningtable.
print("[Step 1] get a list of receptacles where the lettuce is likely to
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a lettuce in each of
them. The list of receptacles is: {agent.receptacles}. You should
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 1]: recep_to_check should not be
```
```
print("[Step 2] go to each receptacle in the list until seeing a
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
observation = agent.open_receptacle(receptacle)
# check if a lettuce is in/on the receptacle.
# expectation: I should be able to find a receptacle where a lettuce is
```
```
# define environment and agent
'fridge 1']
agent = Agent(receptacles)
it in diningtable.
# here is a solution:
def solution(agent, start_from=1):
if start_from <= 1:
appear.")
# I can ask the assistant to do that.
directly return a Python list.')
recep_to_check = literal_eval(answer)
empty. {agent.report()}'
if start_from <= 2:
lettuce")
for receptacle in recep_to_check:
if 'closed' in observation:
if 'lettuce' in observation:
break
in/on it.
```
-----
```
assert 'lettuce' in observation, f'Error in [Step 2]: There is no
lettuce in/on {recep_to_check}. {agent.report()}'
if start_from <= 3:
print("[Step 3] identify the lettuce I juts found and take it")
# I need to get the identifier of the lettuce. I can ask the assistant
to do that.
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation} The identifier of the
lettuce? Only Output a single number without any other words. ')
found_lettuce = f'lettuce {answer}'
observation = agent.take(found_lettuce, receptacle)
# expectation: I should be able to take the lettuce from the receptacle.
assert agent.holding == found_lettuce, f'Error in [Step 3]: I cannot
take {found_lettuce} from the {receptacle}. {agent.report()}'
if start_from <= 4:
print("[Step 4] go to a sinkbasin to clean the lettuce. ")
# I should go to the sinkbasin first if I want to clean the lettuce.
observation = agent.goto('sinkbasin 1')
# check if the sinkbasin is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('sinkbasin 1')
observation = agent.clean(found_lettuce, 'sinkbasin 1')
# expectation: I should be able to clean the lettuce.
assert f'You clean the {found_lettuce} using the sinkbasin 1.' in
observation, f'Error in [Step 4]: I cannot clean the {found_lettuce}
using the sinkbasin 1. {agent.report()} I should have been at sinkbasin
1 and holding {found_lettuce}.'
if start_from <= 5:
print("[Step 5] go to a diningtable and put the lettuce on it. ")
# There are multiple diningtables, and I only need to go to one of them.
observation = agent.goto('diningtable 1')
# check if the diningtable is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('diningtable 1')
observation = agent.put(found_lettuce, 'diningtable 1')
# expectation: I should be able to put the lettuce on the diningtable.
assert f'You put the {found_lettuce} in/on the diningtable 1.' in
observation, f'Error in [Step 5]: I cannot put the {found_lettuce} on
the diningtable 1. {agent.report()}'
```
The expert sample for the task Heat:
```
<sample_heat> Prompt
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
# Your task is to: put a hot lettuce in diningtable / heat some lettuce and put
# General plan: I need to get a list of receptacles to find the lettuce,
take the lettuce to the microwave, heat it and put it in a diningtable.
print("[Step 1] get a list of receptacles where the lettuce is likely to
```
```
# define environment and agent
'fridge 1']
agent = Agent(receptacles)
it in diningtable.
# here is a solution:
def solution(agent, start_from=1):
if start_from <= 1:
appear.")
# I can ask the assistant to do that.
```
-----
```
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a lettuce in each of
them. The list of receptacles is: {agent.receptacles}. You should
directly return a Python list.')
recep_to_check = literal_eval(answer)
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 1]: recep_to_check should not be
empty. {agent.report()}'
if start_from <= 2:
print("[Step 2] go to each receptacle in the list until seeing a
lettuce")
for receptacle in recep_to_check:
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle(receptacle)
# check if a lettuce is in/on the receptacle.
if 'lettuce' in observation:
break
# expectation: I should be able to find a receptacle where a lettuce is
in/on it.
assert 'lettuce' in observation, f'Error in [Step 2]: There is no
lettuce in/on {recep_to_check}. {agent.report()}'
if start_from <= 3:
print("[Step 3] identify the lettuce I juts found and take it")
# I need to get the identifier of the lettuce. I can ask the assistant
to do that.
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation} The identifier of the
lettuce? Only Output a single number without any other words. ')
found_lettuce = f'lettuce {answer}'
observation = agent.take(found_lettuce, receptacle)
# expectation: I should be able to take the lettuce from the receptacle.
assert agent.holding == found_lettuce, f'Error in [Step 3]: I cannot
take {found_lettuce} from the {receptacle}. {agent.report()}'
if start_from <= 4:
print("[Step 4] go to a microwave to heat the lettuce")
# I should go to a microwave to heat the lettuce.
observation = agent.goto('microwave 1')
# check if the microwave is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('microwave 1')
observation = agent.heat(found_lettuce, 'microwave 1')
# expectation: I should be able to heat the lettuce.
assert f'You heat the {found_lettuce} using the microwave 1.' in
observation, f'Error in [Step 4]: I cannot heat the {found_lettuce}
using the microwave 1. {agent.report()} I should have been at microwave
1 and holding {found_lettuce}. '
```
```
if start_from <= 5:
print("[Step 5] go to a diningtable and put the lettuce on it")
# There are multiple diningtables, and I only need to go to one of them.
observation = agent.goto('diningtable 1')
# check if the diningtable is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('diningtable 1')
observation = agent.put(found_lettuce, 'diningtable 1')
```
-----
```
# expectation: I should be able to put the lettuce on the diningtable.
assert f'You put the {found_lettuce} in/on the diningtable 1.' in
observation, f'Error in [Step 5]: I cannot put the {found_lettuce} on
the diningtable 1. {agent.report()}'
```
The expert sample for the task Cool:
```
<sample_cool> Prompt
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
# Your task is to: put a cold lettuce in diningtable / cool some lettuce and put
# General plan: I need to get a list of receptacles to find the lettuce,
take the lettuce to the fridge, cool it and put it in a diningtable.
print("[Step 1] get a list of receptacles where the lettuce is likely to
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a lettuce in each of
them. The list of receptacles is: {agent.receptacles}. You should
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 1]: recep_to_check should not be
print("[Step 2] go to each receptacle in the list until seeing a
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
observation = agent.open_receptacle(receptacle)
# check if a lettuce is in/on the receptacle.
```
```
# expectation: I should be able to find a receptacle where a lettuce is
assert 'lettuce' in observation, f'Error in [Step 2]: There is no
lettuce in/on {recep_to_check}. {agent.report()}'
print("[Step 3] identify the lettuce I juts found and take it")
# I need to get the identifier of the lettuce. I can ask the assistant
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation} The identifier of the
lettuce? Only Output a single number without any other words. ')
observation = agent.take(found_lettuce, receptacle)
# expectation: I should be able to take the lettuce from the receptacle.
assert agent.holding == found_lettuce, f'Error in [Step 3]: I cannot
take {found_lettuce} from the {receptacle}. {agent.report()}'
```
```
# define environment and agent
'fridge 1']
agent = Agent(receptacles)
it in diningtable.
# here is a solution:
def solution(agent, start_from=1):
if start_from <= 1:
appear.")
# I can ask the assistant to do that.
directly return a Python list.')
recep_to_check = literal_eval(answer)
empty. {agent.report()}'
if start_from <= 2:
lettuce")
for receptacle in recep_to_check:
if 'closed' in observation:
if 'lettuce' in observation:
break
in/on it.
if start_from <= 3:
to do that.
found_lettuce = f'lettuce {answer}'
if start_from <= 4:
```
-----
```
print("[Step 4] go to a fridge to cool the lettuce")
# I should go to a fridge to cool the lettuce.
observation = agent.goto('fridge 1')
# check if the fridge is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('fridge 1')
observation = agent.cool(found_lettuce, 'fridge 1')
# expectation: I should be able to cool the lettuce.
assert f'You cool the {found_lettuce} using the fridge 1.' in
observation, f'Error in [Step 4]: I cannot cool the {found_lettuce}
using the fridge 1. {agent.report()} I should have been at fridge 1 and
holding {found_lettuce}.'
if start_from <= 5:
print("[Step 5] go to a diningtable and put the lettuce on it")
# There are multiple diningtables, and I only need to go to one of them.
observation = agent.goto('diningtable 1')
# check if the diningtable is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle('diningtable 1')
observation = agent.put(found_lettuce, 'diningtable 1')
# expectation: I should be able to put the lettuce on the diningtable.
assert f'You put the {found_lettuce} in/on the diningtable 1.' in
observation, f'Error in [Step 5]: I cannot put the {found_lettuce} on
the diningtable 1. {agent.report()}'
```
The expert sample for the task Examine:
```
<sample_examine> Prompt
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
# Your task is to: look at the bowl under the desklamp / examine the bowl with
# General plan: I need to get a list of receptacles to find the bowl and
take the bowl with me, then I get another list of receptacles to find the
print("[Step 1] get a list of receptacles where a bowl is likely to
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a bowl in each of
them. The list of receptacles is: {agent.receptacles}. You should
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 1]: recep_to_check should not be
```
```
print("[Step 2] go to each receptacle in the list until seeing a pen")
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
observation = agent.open_receptacle(receptacle)
# check if a bowl is in/on the receptacle.
```
```
# define environment and agent
'fridge 1']
agent = Agent(receptacles)
the desklamp
# here is a solution:
def solution(agent, start_from=1):
desklamp and turn it on.
if start_from <= 1:
appear.")
# I can ask the assistant to do that.
directly return a Python list.')
recep_to_check = literal_eval(answer)
empty. {agent.report()}'
if start_from <= 2:
for receptacle in recep_to_check:
if 'closed' in observation:
if 'pen' in observation:
```
-----
```
break
# expectation: I should be able to find a receptacle where a bowl is
in/on it.
assert 'pen' in observation, f'Error in [Step 2]: There is no bowl in/on
{recep_to_check}. {agent.report()}'
if start_from <= 3:
print("[Step 3] take the bowl from the receptacle")
# I need to get the identifier of the bowl so that I can take it. I can
ask the assistant to do that.
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation} The identifier of the
pen? Only Output a single number without any other words. ')
found_pen = f'pen {answer}'
observation = agent.take(found_pen, receptacle)
# expectation: I should be able to take the bowl from the receptacle.
assert agent.holding == found_pen, f'Error in [Step 3]: I cannot take
{found_pen} from the {receptacle}. {agent.report()}'
if start_from <= 4:
print("[Step 4] get a list of receptacles where a desklamp is likely to
appear.")
# I can ask the assistant to do that.
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a desklamp in each
of them. The list of receptacles is: {agent.receptacles}. You should
directly return a Python list.')
recep_to_check = literal_eval(answer)
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 4]: recep_to_check should not be
empty. {agent.report()}'
if start_from <= 5:
print("[Step 5] go to each receptacle in the list until seeing a
desklamp")
for receptacle in recep_to_check:
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle(receptacle)
# check if a desklamp is in/on the receptacle.
if 'desklamp' in observation:
break
# expectation: I should be able to find a receptacle where a desklamp is
in/on it.
assert 'desklamp' in observation, f'Error in [Step 5]: There is no
desklamp in/on {recep_to_check}. {agent.report()}'
if start_from <= 6:
print("[Step 6] turn on desklamp")
# There might be multiple desklamps in the environment, and I need to
get the identifier of the desklamp. I can ask the assistant to do that.
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation} The identifier of the
desklamp? Only Output a single number without any other words.')
found_desklamp = f'desklamp {answer}'
# I can directly turn on the desklamp that I just found.
observation = agent.turn_on(found_desklamp)
# expectation: the desklamp should be turned on now.
assert 'turn on' in observation, f'Error in [Step 6]: I cannot turn on
{found_desklamp} in/on {receptacle}. {agent.report()}'
```
-----
The expert sample for the task Pick two:
```
<sample_picktwo> Prompt
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
# Your task is to: put two cellphone in cabinet / find two cellphone and put
print("[Step 1] get a list of receptacles where a cellphone is likely to
answer = ask(f'Given a list of receptacles, please sort them in
descending order based on the likelihood of finding a cellphone in each
of them. The list of receptacles is: {agent.receptacles}. You should
# expectation: the returned recep_to_check should not be empty.
assert recep_to_check, f'Error in [Step 1]: recep_to_check should not be
print("[Step 2] go to each receptacle in the list until seeing a
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
observation = agent.open_receptacle(receptacle)
# check if a cellphone is in/on the receptacle.
```
```
# expectation: I should be able to find a receptacle where a cellphone
assert 'cellphone' in observation, f'Error in [Step 2]: There is no
cellphone in/on {recep_to_check}. {agent.report()}'
print("[Step 3] identify the first cellphone found and take it")
# I need to get the identifier of the cellphone. I can ask the assistant
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation}. The identifier of the
cellphone? Only Output a single number without any other words. ')
found_cellphone1 = f'cellphone {answer}'
observation = agent.take(found_cellphone1, receptacle)
# expectation: I should be able to take the cellphone from the
assert agent.holding == found_cellphone1, f'Error in [Step 3]: I cannot
take {found_cellphone1} from the {receptacle}. {agent.report()}'
print("[Step 4] go to a cabinet and put the first cellphone found on it.
# There are multiple countertops, and I only need to go to one of them.
# check if the cabinet is closed. If so, open it.
```
```
# define environment and agent
'fridge 1']
agent = Agent(receptacles)
them in cabinet
# here is a solution:
def solution(agent, start_from=1):
if start_from <= 1:
appear.")
# I can ask the assistant to do that.
directly return a Python list.')
recep_to_check = literal_eval(answer)
# remove the destination from the list
recep_to_check.remove('cabinet 1')
empty. {agent.report()}'
if start_from <= 2:
cellphone")
for receptacle in recep_to_check:
if 'closed' in observation:
if 'cellphone' in observation:
break
is in/on it.
if start_from <= 3:
to do that.
receptacle.
if start_from <= 4:
")
observation = agent.goto('cabinet 1')
```
-----
```
if 'closed' in observation:
observation = agent.open_receptacle('cabinet 1')
observation = agent.put(found_cellphone1, 'cabinet 1')
# expectation: I should be able to put the cellphone1 on the countertop.
assert f'You put the {found_cellphone1} in/on the cabinet 1.' in
observation, f'Error in [Step 4]: I cannot put the {found_cellphone1} on
the cabinet 1. {agent.report()}'
if start_from <= 5:
print("[Step 5] go to each of the remaining receptacle in the list until
seeing a second cellphone")
for receptacle in recep_to_check:
observation = agent.goto(receptacle)
# check if the receptacle is closed. If so, open it.
if 'closed' in observation:
observation = agent.open_receptacle(receptacle)
# check if a cellphone is in/on the receptacle.
if 'cellphone' in observation:
break
# expectation: I should be able to find a receptacle where a cellphone
is in/on it.
assert 'cellphone' in observation, f'Error in [Step 5]: There is no
second cellphone in/on {recep_to_check}. {agent.report()}'
if start_from <= 6:
print("[Step 6] identify the second cellphone I just found and take it")
# I need to get the identifier of the cellphone. I can ask the assistant
to do that.
answer = ask(f'From the observation, get the identifier of an object.
For example, On the cabinet 1, you see a cloth 2, and a toiletpaper 2.
The identifier of cloth is 2. Now, {observation}. The identifier of the
cellphone? Only Output a single number without any other words. ')
found_cellphone2 = f'cellphone {answer}'
observation = agent.take(found_cellphone2, receptacle)
# expectation: I should be able to take the cellphone from the
receptacle.
assert agent.holding == found_cellphone2, f'Error in [Step 6]: I cannot
take {found_cellphone2} from the {receptacle}. {agent.report()}'
if start_from <= 7:
print("[Step 7] go to a cabinet and put the second cellphone found on
it")
observation = agent.goto('cabinet 1')
observation = agent.put(found_cellphone2, 'cabinet 1')
# expectation: I should be able to put the cellphone2 on the countertop.
assert f'You put the {found_cellphone2} in/on the cabinet 1.' in
observation, f'Error in [Step 7]: I cannot put the {found_cellphone2} on
the cabinet 1. {agent.report()}'
```
**Code Check. After plan generation, we employ the following prompt to verify and rectify any**
syntax errors. The placeholder <solution_func> is replaced by the generated solution function.
The <code_check> prompt prompts the model to return two questions. If the response to Question
```
1 is Yes, the answer to Question 2 is adopted as the corrected solution function. Otherwise, the
```
solution function is kept unchanged.
```
<code_check> Prompt
You are given a Python code snippet to define a function called solution.
```
```
[Code]
<solution_func>
```
-----
```
Question 1: Are there any syntax errors present in the code? Answer Yes/No.
Question 2: Fix the syntax errors and output an error-free version of the code.
Only Output the revised code after [Revised code] without any other words.
```
**Out-of-Plan Refinement. In the event of an assertion error, we use <refinement> to conduct**
the out-of-plan refinement. In this prompt, <basic_info> is replaced by the content of the
```
<basic_info> prompt. The placeholder <sample> is substituted with an expert trajectory, while
<receptacle_list> is replaced by the list of interactive receptacles provided by the task environ
```
ment. <task> is replaced by the task description in natural language. Finally, <error_msg> is
replaced by the assertion error message returned by the solution function. To adhere to the context
length limit of the GPT-3 and 3.5 models, the previously generated solution function is not included
in this prompt. Instead, we incorporate comprehensive information in the assertion error message,
enabling the refiner to generate a revised plan based on these details.
```
<refinement> Prompt
<basic_info>
# Here is a example of successful solution for solving a similar task:
[Successful example]
receptacles = ['diningtable 1','drawer 2', 'drawer 1', 'sinkbasin 1', 'toilet
1', 'sidetable 2', 'sidetable 1', 'cabinet 1', 'countertop 1', 'microwave 1',
'fridge 1']
agent = Agent(receptacles)
<sample>
# Here is the actual task.
# define environment and agent
receptacles = <receptacle_list>
agent = Agent(receptacles)
# <task>
You have generated code of solution() to solve the task. However, you executed
the solution() function and get an error message:
<error_msg>
Let's think step by step. Referring to the successful case and the error
message, you should complete the solution function with the correct code.
def solution(agent, start_from=1):
```
**Determining start_from. After formulating a revised plan, we utilize the following prompt**
to ascertain from which step the new solution function should commence. In this context, the <previous_solution> is replaced by the preceding solution function, while the
```
<revised_solution> is replaced by the updated one. Subsequently, the argument start_from=1
```
is substituted with the step number that this prompt yields.
```
<start_from> Prompt
Previously, you generated some code defining a solution function as in [Previous
solution]. The previous code is executed and outputs some error. Now you just
revised the code as in [Revised solution]. Determine from which step these two
version differs. You should only output the step number without saying any other
words.
[Previous solution]
<previous_solution>
[Revised solution]
<revised_solution>
```
**8.3.2** **MiniWoB++**
**Basic Information. Similar to the ALFWorld tasks, the <basic_info> of MiniWoB++ defines the**
agent and admissible actions for AdaPlanner. Note that the actual definitions of action functions
-----
are not specified in the prompt. Instead, only a formalized definition with several examples is
provided, such that the planner can acquire how to compose a plan based on these actions. As can be
seen in the following part, this <basic_info> prompt is used in both <initial_planning> and
```
<refinement> prompts.
```
<basic_info> Prompt
# Interact with the HTML webpage to finish the computer task. Here is some
Python code defining a computer task environment:
# In the environment, you can ask questions to an assistant by ask():
from large_language_model import ask_llm as ask
# for example: You want to solve a algebra problem x + 3 = 6. You can ask the
assistant to solve it directly for you.
answer = ask('Solve an algebra problem. You should directly output the value of
the unknown. For example, solve 'y + 1 = 2' -> 1. Now solve 'x + 3 = 6' ->')
# answer = '3'
# Agent class represents the current state of the HTML webpage and the actions
it can take.
class Agent:
def __init__(self, initial_state):
self.current_state = initial_state
# Here are the admissible actions the agent can take:
# Action: type a string into the input box
# this function returns the string of the HTML code after taking the action
# e.g., new_html_state = agent.type("hello")
def type(self, characters: str) -> str:
...
# Action: press a key on the keyboard, the input can be one of the following:
# enter, space, arrow_left, arrow_right, arrow_up, arrow_down, backspace
# this function returns the string of the HTML code after taking the action
# e.g., new_html_state = agent.press_key("enter")
def press_key(self, key: str) -> str:
...
# Action: click a <select> element in a list with an XPath
# this function returns the string of the HTML code after taking the action
# e.g., new_html_state = agent.click_option("//select[@id='cars']/option[1]")
def click_option(self, xpath_of_option: str) -> str:
...
# Action: click an HTML element with its XPath
# this function returns the string of the HTML code after taking the action
# e.g., new_html_state = agent.click_xpath("//button[@id='button1']")
def click_xpath(self, xpath_of_element: str) -> str:
...
# Action: move the mouse cursor on an HTML element with an XPath
# this function returns the string of the HTML code after taking the action
# e.g., new_html_state = agent.move_mouse_on("//button[@id='button1']")
def move_mouse_on(self, xpath_of_element: str) -> str:
...
```
```
**Initial Planning. The <initial_planning> prompt is employed to generate the preliminary plan.**
In this context, <basic_info> is substituted by the content of the <basic_info> prompt. The
```
<sample> is replaced with an expert trajectory (or discarded in the skill discovery stage). The
<initial_state> is substituted by the initial HTML code of the task UI. Finally, <task> is
```
substituted by the task description, expressed in natural language.
```
<initial_planning> Prompt
```
```
<basic_info>
```
-----
```
<sample>
# Now complete the function solution() below to solve the task by composing the
agent's methods to interact with the environment.
# In the solution function, start with a commented "# General plan: ". For each
step you plan to take, mark with '[Step xx]', and write an assertion to check if
the step is successful.
# Here is the actual task.
# define environment and agent. The state is the list of HTML elements of the
webpage.
initial_state = \'''
<initial_state>
\'''
agent = Agent(initial_state)
# Task: <task>
# Here is the solution:
def solution(agent, start_from=1):
```
**Samples. In MiniWoB++, we select nine tasks where feedback is available. Here are the expert**
samples that we gathered for these tasks.
The expert sample for the task email-inbox:
```
<sample_email-inbox> Prompt
```
```
# Task: Find the email by Brittani and reply to them with the text "Aliquet.
# General plan: I should first click on the email by Brittani, then click on the
reply button, then type the text "Aliquet. Sollicitudin nam lectus." and finally
print('[Step 1] click on the email by Brittani')
agent.click_xpath("//div[@class='email-sender' and text()='Brittani']")
agent.click_xpath("//span[@class='email-reply']")
# the reply content should be displayed on page.
assert 'reply-text' in state_after_interaction, 'I cannot do [Step 1]
correctly. The reply button is not displayed on the page.'
print('[Step 2] type the text "Aliquet. Sollicitudin nam lectus."')
agent.click_xpath("//textarea[@id='reply-text']")
agent.type('Aliquet. Sollicitudin nam lectus.')
state_after_interaction = agent.click_xpath("//*[@id='send-reply']")
# Task: Find the email by Blanca and forward that email to Agathe.
# General plan: I should first click on the email by Blanca, then click on the
forward button, then type "Agathe" and finally click on the send button.
print('[Step 1] click on the email by Blanca')
agent.click_xpath("//div[@class='email-sender' and text()='Blanca']")
agent.click_xpath("//span[@class='email-forward']")
# the forward content should be displayed on page.
assert 'forward-sender' in state_after_interaction, 'I cannot do [Step
1] correctly. The forward button is not displayed on the page.'
agent.click_xpath("//input[@class='forward-sender']")
```
```
# Here are three examples of solutions.
Sollicitudin nam lectus.".
def solution(agent, start_from=1):
click on the send button.
if start_from <= 1:
state_after_interaction =
if start_from <= 2:
def solution(agent, start_from=1):
if start_from <= 1:
state_after_interaction =
if start_from <= 2:
print('[Step 2] type "Agathe"')
agent.type('Agathe')
```
-----
```
state_after_interaction = agent.click_xpath("//*[@id='send-forward']")
# Task: Find the email by Salli and click the trash icon to delete it.
def solution(agent, start_from=1):
# General plan: I should first click on the email by Salli, then click on the
trash icon.
if start_from <= 1:
print('[Step 1] click on the email by Salli')
agent.click_xpath("//div[@class='email-sender' and text()='Salli']")
agent.click_xpath("//span[@class='trash']")
```
The expert sample for the task email-inbox-forward-nl:
```
<sample_email-inbox-forward-nl> Prompt
# Here is an example of solution.
# task: Send Alice the email from Beth / navigate to the message from Beth and
send it to Alice.
def solution(agent, start_from=1):
# General plan: I should first click on the email from Beth, then click on the
"Forward" button, then type "Alice" in the "To" inputbox, finally click on the
"Send" button.
if start_from <= 1:
print('[Step 1] click on the email from Beth')
agent.click_xpath('//*[@class="email-sender" and text()="Beth"]')
state_after_interaction =
agent.click_xpath('//span[@class="email-forward"]')
# the "To" inputbox should be displayed on page.
assert 'forward-sender' in state_after_interaction, f'I cannot do [Step
1] correctly. The "To" inputbox is not displayed on the page. Current
state: {state_after_interaction}'
if start_from <= 2:
print('[Step 2] type "Alice" in the "To" inputbox')
agent.click_xpath('//input[@class="forward-sender"]')
agent.type('Alice')
state_after_interaction = agent.click_xpath('//span[@id="send-forward"]')
# the email should be sent successfully.
assert 'email-sender' in state_after_interaction, f'I cannot do [Step 2]
correctly. The email is not sent successfully. Current state:
{state_after_interaction}'
```
The expert sample for the task email-inbox-forward-nl-turk:
```
<sample_email-inbox-forward-nl-turk> Prompt
```
```
# task: Send Alice the email from Beth / navigate to the message from Beth and
send it to Alice / I want to forward the email from Beth over to Alice
# General plan: I should first click on the email from Beth, then click on the
"Forward" button, then type "Alice" in the "To" inputbox, finally click on the
print('[Step 1] click on the email from Beth')
agent.click_xpath('//*[@class="email-sender" and text()="Beth"]')
state_after_interaction =
agent.click_xpath('//span[@class="email-forward"]')
# the "To" inputbox should be displayed on page.
assert 'forward-sender' in state_after_interaction, f'I cannot do [Step
1] correctly. The "To" inputbox is not displayed on the page. Current
state: {state_after_interaction}'
```
```
print('[Step 2] type "Alice" in the "To" inputbox')
agent.click_xpath('//input[@class="forward-sender"]')
```
```
# Here is an example of solution.
def solution(agent, start_from=1):
"Send" button.
if start_from <= 1:
if start_from <= 2:
```
-----
```
agent.type('Alice')
state_after_interaction = agent.click_xpath('//span[@id="send-forward"]')
# the email should be sent successfully.
assert 'email-sender' in state_after_interaction, f'I cannot do [Step 2]
correctly. The email is not sent successfully. Current state:
{state_after_interaction}'
```
The expert sample for the task email-inbox-nl-turk:
```
<sample_email-inbox-nl-turk> Prompt
# Here are three examples of solution.
# Task: "Aliquet. Sollicitudin nam lectus." is my reply to Brittani's most
recent email / Find the email by Brittani and reply to them with the text
"Aliquet. Sollicitudin nam lectus.".
def solution(agent, start_from=1):
# General plan: I should first click on the email by Brittani, then click on the
reply button, then type the text "Aliquet. Sollicitudin nam lectus." and finally
click on the send button.
if start_from <= 1:
print('[Step 1] click on the email by Brittani')
agent.click_xpath("//div[@class='email-sender' and text()='Brittani']")
state_after_interaction =
agent.click_xpath("//span[@class='email-reply']")
# the reply content should be displayed on page.
assert 'reply-text' in state_after_interaction, 'I cannot do [Step 1]
correctly. The reply button is not displayed on the page.'
if start_from <= 2:
print('[Step 2] type the text "Aliquet. Sollicitudin nam lectus."')
agent.click_xpath("//textarea[@id='reply-text']")
agent.type('Aliquet. Sollicitudin nam lectus.')
state_after_interaction = agent.click_xpath("//*[@id='send-reply']")
# Task: Find the last email by Blanca and send it to Agathe / Find the email by
Blanca and forward that email to Agathe.
def solution(agent, start_from=1):
# General plan: I should first click on the email by Blanca, then click on the
forward button, then type "Agathe" and finally click on the send button.
if start_from <= 1:
print('[Step 1] click on the email by Blanca')
agent.click_xpath("//div[@class='email-sender' and text()='Blanca']")
state_after_interaction =
agent.click_xpath("//span[@class='email-forward']")
# the forward content should be displayed on page.
assert 'forward-sender' in state_after_interaction, 'I cannot do [Step
1] correctly. The forward button is not displayed on the page.'
if start_from <= 2:
print('[Step 2] type "Agathe"')
agent.click_xpath("//input[@class='forward-sender']")
agent.type('Agathe')
state_after_interaction = agent.click_xpath("//*[@id='send-forward']")
# Task: Delete this email from Salli / Please find Salli's email in the inbox
and delete it.
def solution(agent, start_from=1):
# General plan: I should first click on the email by Salli, then click on the
trash icon.
if start_from <= 1:
print('[Step 1] click on the email by Salli')
agent.click_xpath("//div[@class='email-sender' and text()='Salli']")
agent.click_xpath("//span[@class='trash']")
```
The expert sample for the task guess-number:
-----
```
<sample_guess-number> Prompt
# Here is an example of solution.
# Task: Guess the number between 0-9 and press Submit. Use the feedback below to
find the right number.
# Here is the solution:
def solution(agent, start_from = 1):
# General plan: Given a list of possible_numbers, I will try the number in the
middle of the list and get feedback from the html state.
# Now the given list of possible_numbers is [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].
if start_from <= 1:
print('[Step 1] maintain a list of the possible numbers left and try the
middle one')
# before making a guess, I should store the html state for future
comparison.
state_before_interaction = agent.current_state
# I will choose the number 5, which is in the middle of possible_numbers.
guess_number = 5
# click the input box, type the number I guessed and click submit.
agent.click_xpath("//input[@id='tt']")
agent.press_key("backspace")
agent.type(str(guess_number))
state_after_interaction = agent.click_xpath('//*[@id="subbtn"]')
# after input and submit the guessed number, the html_state should be
changed and contain the feedback. Otherwise this step is not successful.
assert state_after_interaction != state_before_interaction, 'I did [Step
1] but the html state did not change.'
if start_from <= 2:
print('[Step 2] get the feedback information from the new html state')
# If the guess is successful, the keyword "higher" or "lower" should not
be present in the observation. Otherwise I should use assert to jump out
to pass the feedback.
observation = ask(f'Answer a question based on the html code below:
{state_after_interaction} Question: Which one is displayed? "The number
is lower than" or "The number is higher than"? You must only output the
displayed sentence without saying any other words.')
assert "higher" not in observation, f'You tried the number
{guess_number} in [Step 1], and the correct number is greater than this
number. I need to revise solution function according to the new plan:
Now the given list of possible_numbers is [6, 7, 8, 9].'
assert "lower" not in observation, f'You tried the number {guess_number}
in [Step 1], and the correct number is smaller than this number. I need
to revise solution function according to the new plan: Now the given
list of possible_numbers is [0, 1, 2, 3, 4].'
```
The expert sample for the task login-user-popup:
```
<sample_login-user-popup> Prompt
```
```
# Task: Enter the username "kanesha" and the password "oT" into the text fields
# General plan: I should first click on the username field, then type in the
username, then click on the password field, then type in the password, then
print('[Step 1] Click on the username field')
state_after_interaction = agent.click_xpath("//input[@id=\'username\']")
# during interaction, some popups may appear. If so, I need to jump out
```
```
# Here is an example of solution.
and press login.
def solution(agent, start_from=1):
click on the login button.
if start_from <= 1:
to handle the popups.
```
-----
```
assert 'popup' not in state_after_interaction, f'After [Step 1], some
popups appeared, you need to close the popup at the beginning of [Step
1]. The current html is: {state_after_interaction} You need to add some
code at the beginning of [Step 1] to cancel the popup before any other
actions.'
if start_from <= 2:
print('[Step 2] Type in the username')
state_after_interaction = agent.type('kanesha')
# during interaction, some popups may appear. If so, I need to jump out
to handle the popups.
assert 'popup' not in state_after_interaction, f'After [Step 2], some
popups appeared, you need to close the popup at the beginning of [Step
2]. The current html is: {state_after_interaction} You need to add some
code at the beginning of [Step 2] to cancel the popup before any other
actions.'
if start_from <= 3:
print('[Step 3] Click on the password field')
state_after_interaction = agent.click_xpath("//input[@id=\'password\']")
# during interaction, some popups may appear. If so, I need to jump out
to handle the popups.
assert 'popup' not in state_after_interaction, f'After [Step 3], some
popups appeared, you need to close the popup at the beginning of [Step
3]. The current html is: {state_after_interaction} You need to add some
code at the beginning of [Step 3] to cancel the popup before any other
actions.'
if start_from <= 4:
print('[Step 4] Type in the password')
state_after_interaction = agent.type('oT')
# during interaction, some popups may appear. If so, I need to jump out
to handle the popups.
assert 'popup' not in state_after_interaction, f'After [Step 4], some
popups appeared, you need to close the popup at the beginning of [Step
4]. The current html is: {state_after_interaction} You need to add some
code at the beginning of [Step 4] to cancel the popup before any other
actions.'
if start_from <= 5:
print('[Step 5] Click on the login button')
state_after_interaction = agent.click_xpath("//button[@id='subbtn']")
# during interaction, some popups may appear. If so, I need to jump out
to handle the popups.
assert 'popup' not in state_after_interaction, f'After [Step 5], some
popups appeared, you need to close the popup at the beginning of [Step
5]. The current html is: {state_after_interaction} You need to add some
code at the beginning of [Step 5] to cancel the popup before any other
actions.'
```
The expert sample for the task search-engine:
```
<sample_search-engine> Prompt
```
```
# Task: Use the textbox to enter "Alice" and press "Search", then find and click
# General plan: I should first click on the inputbox, then type "Alice", then
click on the "Search" button, finally look through pages and click on the 7th
print('[Step 1] click on the inputbox and type "Alice"')
agent.click_xpath('//*[@id="search-text"]')
```
```
# Here is an example of solution.
the 7th search result.
def solution(agent, start_from=1):
result.
if start_from <= 1:
agent.type('Alice')
```
-----
```
state_after_interaction = agent.click_xpath('//*[@id="search"]')
# the search content should be displayed on page.
assert 'search-desc' in state_after_interaction, 'I cannot do [Step 1]
correctly. The search content is not displayed on the page.'
if start_from <= 2:
print('[Step 2] calculate the page number of the 7th result and click on
the page')
# I should count the number of results on each page, iteratively turn to
next page until seeing the 7th result.
# I can use the following code to count the number of results on each
page.
num_results_displayed_per_page =
state_after_interaction.count('search-desc')
num_page = (7 - 1) // num_results_displayed_per_page
state_after_interaction =
agent.click_xpath(f'//*[@id="pagination"]/li[{3+num_page}]/a')
# I should click on the 7th result.
num_to_click = 7 - num_results_displayed_per_page * num_page
state_after_interaction =
agent.click_xpath(f'//*[@id="page-content"]/div[{num_to_click}]/a')
```
The expert sample for the task terminal:
```
<sample_terminal> Prompt
# Here is an example of solution.
# Task: Use the terminal below to delete a file ending with the extension .gif
def solution(agent, start_from=1):
# General plan: I should first type "ls" to list all files in the terminal, then
identify the filename ending with ".gif" and type "rm [filename].gif" to delete
the identified file.
if start_from <= 1:
print('[Step 1] type "ls" to list all files in the terminal')
agent.type('ls')
state_after_interaction = agent.press_key('enter')
# the file list should be displayed on terminal.
assert 'gif' in state_after_interaction, f'I cannot do [Step 1]
correctly. The file list is not displayed on the terminal. Current
state: {state_after_interaction}'
if start_from <= 2:
print('[Step 2] identify the filename ending with ".gif" and type "rm
[filename].gif" to delete the identified file')
# I should identify the filename ending with ".gif". I can ask assistant
to do that.
filename = ask(f'You are given some html code as follows:
{state_after_interaction} What is the file ending with the extension
.gif? You must directly output the full file name, including the
extension.')
agent.type(f'rm {filename}')
state_after_interaction = agent.press_key('enter')
assert 'not found' not in state_after_interaction, f'I cannot do [Step
2] correctly. The file ending with the extension .gif is not deleted.
Current state: {state_after_interaction}'
```
The expert sample for the task tic-tac-toe:
```
<sample_tic-tac-toe> Prompt
```
```
# Here is an example of solution.
# Task: Playing as 'X', win a game of tic-tac-toe.
def solution(agent, start_from=1):
```
-----
```
# The board layout and corresponding html xpath: top-left("//*[@id='ttt-0']"),
top-center("//*[@id='ttt-1']"), top-right("//*[@id='ttt-2']"),
middle-left("//*[@id='ttt-3']"), middle-center("//*[@id='ttt-4']"),
middle-right("//*[@id='ttt-5']"), bottom-left("//*[@id='ttt-6']"),
bottom-center("//*[@id='ttt-7']"), bottom-right("//*[@id='ttt-8']"). Note that
"mark-o" indicates the 'O' placed on board, "mark-x" indicates the 'X' placed on
board.
# General plan: Currently, no grid is occupied. The plan is 1) put an 'X' in the
middle-center("//*[@id='ttt-4']"), 2) put an 'X' in the
top-left("//*[@id='ttt-0']"), 3) put an 'X' in the
bottom-right("//*[@id='ttt-8']").
place_to_put_X = ['4', '0', '8']
for idx, place_id in enumerate(place_to_put_X):
print(f'[Step {idx}] put an X in {place_id}')
# before interaction, I need to store the current state so that I can
compare it with the state after interaction.
state_before_interaction = agent.current_state
state_after_interaction = agent.click_xpath(f"//*[@id='ttt-{place_id}']")
# if the current state does not change after interaction, that means I
cannot put an 'X' in the desired location, and that location is already
occupied and the plan will not work.
assert state_before_interaction != state_after_interaction, f'''I cannot
do [Step {idx}] put an X in the "//*[@id='ttt-{place_id}']", because it
is occupied. I need to revise solution function according to the new
plan. ''' + ask(f'''Playing as 'X', win a game of tic-tac-toe. The board
layout and corresponding html xpath: top-left("//*[@id='ttt-0']"),
top-center("//*[@id='ttt-1']"), top-right("//*[@id='ttt-2']"),
middle-left("//*[@id='ttt-3']"), middle-center("//*[@id='ttt-4']"),
middle-right("//*[@id='ttt-5']"), bottom-left("//*[@id='ttt-6']"),
bottom-center("//*[@id='ttt-7']"), bottom-right("//*[@id='ttt-8']").Note
that "mark-o" indicates the 'O' placed on board, "mark-x" indicates the
'X' placed on board. The game in progress is represented in html code:
{agent.current_state} Report current board situation and generate a plan
that the 'X' player should follow to continue this game. Use the format
like "Currently, 'X' has been placed at <position>("//*[@id='ttt-x']")
and 'O' has been placed at <position>("//*[@id='ttt-x']"). Therefore,
the plan is to: 1) put an 'X' in the <position>("//*[@id='ttt-x']") 2)
put an 'X' in the ..."''')
```
**Code Check. We use the same <code_check> prompt for MiniWoB++ tasks as ALFWorld.**
**Out-of-Plan Refinement. In the event of an assertion error, we use <refinement> to conduct the out-**
of-plan refinement. In this prompt, <basic_info> is replaced by the content of the <basic_info>
prompt. The placeholder <solution_func> is replaced by the generated solution function, while
```
<task> is replaced by the task description in natural language. Finally, <feedback> is replaced by
```
the assertion error message returned by the solution function. Note that the <refinement> prompt
used here differs slightly from the one used for ALFWorld. In this context, we include the previously
generated solution function in the prompt, enabling the refiner to adjust the solution based on the
error messages accordingly.
```
<refinement> Prompt
However, you executed the solution() function and get an error message:
Let's think step by step. You must output the revised solution function based on
the error message. You must only complete the revised solution function without
```
```
<basic_info>
# Task: <task>
You have generated code of solution() to solve the task:
<solution_func>
[Error message]
<feedback>
any other words.
```
-----
```
# In the solution function, start with a commented "# General plan: ". For each
step you plan to take, mark with '[Step xx]', and write an assertion to check if
the step is successful.
def solution(agent, start_from=1):
```
**Determining start_from. We use the same <start_from> prompt for MiniWoB++ tasks as**
ALFWorld.
**8.4** **Hallucination with GPT-3.5 Model (gpt-3.5-turbo)**
We present the trajectories of three tested methods (ReAct, Reflexion, and AdaPlanner) for two tasks.
Here, each model utilizes gpt-3.5-turbo as the backend.
**8.4.1** **Case 1: examine the book with the desklamp.**
ReAct’s trajectory for this case is detailed below. Despite the presence of two expert demonstrations,
ReAct encounters significant difficulties, as it extensively hallucinates inadmissible actions or scenarios. For example, search for a book titled "The Great Gatsby" and ask household
```
members if they know where the book is are not feasible within the given environment.
```
ReAct Trajectory of Case 1 (Hallucinated actions are marked in pink.)
```
You are in the middle of a room. Looking quickly around you, you see a bed 1, a
desk 2, a desk 1, a drawer 6, a drawer 5, a drawer 4, a drawer 3, a drawer 2,
a drawer 1, a garbagecan 1, a laundryhamper 1, a safe 1, a shelf 6, a shelf 5,
a shelf 4, a shelf 3, a shelf 2, and a shelf 1.
Your task is to: examine the book with the desklamp.
> think: To solve the task, I need to find and take a book, then find and use a
desklamp. First, I need to find a book. A book is more likely to appear in
desk (1-2), shelf (1-6), drawer (1-6). I can check one by one, starting with
desk 1.
OK.
> go to desk 1
On the desk 1, you see a alarmclock 1, a bowl 2, a cd 1, a cellphone 3, a desklamp
1, a mug 1, a pen 1, and a pencil 1.
> take book from desk 1
Nothing happens.
> think: I need to be more specific about which book I am looking for. Let me try
looking for a specific title or author.
OK.
> search for a book titled "The Great Gatsby"
Nothing happens.
> think: Let me try looking for a book by a specific author. How about "To Kill a
Mockingbird" by Harper Lee?
OK.
> search for a book by Harper Lee
Nothing happens.
> think: Maybe I should try looking in a different location. Let me check shelf 1.
OK.
> go to shelf 1
On the shelf 1, you see a bowl 1.
> go to shelf 2
On the shelf 2, you see a creditcard 1, and a pencil 2.
> go to shelf 3
On the shelf 3, you see a alarmclock 1, a bowl 2, and a desklamp 1.
> take book from shelf 3
Nothing happens.
> think: Let me try looking in a different location. How about drawer 1?
OK.
> go to drawer 1
The drawer 1 is closed.
> open drawer 1
You open the drawer 1. The drawer 1 is open. In it, you see nothing.
> go to drawer 2
On the drawer 2, you see a cellphone 2.
> go to drawer 3
Nothing happens.
```
-----
```
> go to drawer 4
The drawer 4 is closed.
> open drawer 4
You open the drawer 4. The drawer 4 is open. In it, you see nothing.
> go to drawer 5
On the drawer 5, you see nothing.
> go to drawer 6
The drawer 6 is closed.
> open drawer 6
You open the drawer 6. The drawer 6 is open. In it, you see nothing.
> think: Let me try looking in a different location. How about desk 2?
OK.
> go to desk 2
On the desk 2, you see a alarmclock 3, a bowl 3, a cd 3, a mug 2, and a pencil 3.
> take book from desk 2
Nothing happens.
> think: Let me try looking in a different location. How about shelf 4?
OK.
> go to shelf 4
On the shelf 4, you see a alarmclock 2.
> think: Let me try looking in a different location. How about shelf 5?
OK.
> go to shelf 5
On the shelf 5, you see nothing.
> think: Let me try looking in a different location. How about shelf 6?
OK.
> go to shelf 6
On the shelf 6, you see nothing.
> think: It seems like the book I am looking for is not in any of the locations I
have checked so far. Let me ask the household members if they know where it is.
OK.
> ask household members if they know where the book is
Nothing happens.
> think: Maybe I should try asking a specific household member. How about asking
John?
OK.
> ask John if he knows where the book is
Nothing happens.
> think: It seems like I am unable to find the book. I may need to ask for more
information or try a different approach to the task.
OK.
> ask the household members if they have any suggestions on where to look for book
Nothing happens.
> think: It looks like I may need to give up on this task for now and come back to
it later with more information or a different approach.
OK.
> end task
Nothing happens.
> exit room
Nothing happens.
```
```
> end interaction
Nothing happens.
> leave the household
Nothing happens.
> end interaction and try again later
Nothing happens.
> end interaction and come back later with a new approach
Nothing happens.
> end interaction and move on to a different task or activity
Nothing happens.
> end interaction and leave the room/household/location
Nothing happens.
> end interaction and try again later with a different approach or more information
Nothing happens.
> end interaction and seek help from someone outside of the household
Nothing happens.
```
-----
```
> end interaction and try to find book in a different room or location in the house
Nothing happens.
```
The trajectory of Reflexion, after two rounds of iterative trials and errors, is detailed below.
Despite multiple closed-loop corrections, Reflexion fails to formulate a feasible plan. Instead,
it persists in generating unfeasible actions in its reflection part. For instance, it plans to ask
```
the household members for help, or to look for a ladder or step stool outside
of the room. These actions fall outside the scope of admissible actions in the ALFWorld en
```
vironment.
Reflexion Trajectory of Case 1 (Hallucinated actions are marked in pink.)
```
Your memory for the task below:
Trial 0:
In this environment, my plan was to find a book then find and use a desklamp.
However, I spent too much time checking every possible location without
thoroughly examining each one. I should have focused on the most likely
locations for a book (desk, shelf, drawer) and thoroughly searched each one
before moving on to the next. Additionally, I should have asked the household
members for help earlier in the process. In the next trial, I will prioritize
searching the most likely locations and ask for help if I am unable to find
the book after a thorough search.
Trial 1:
In the next trial, I will prioritize searching the most likely locations for a book
(desk, shelf, drawer) and thoroughly examine each one before moving on to the
next. If I am unable to find the book after a thorough search, I will ask the
household members for help. If the book is out of reach, I will look for a
ladder or step stool outside of the room that I can bring in to reach it.
Here is the task:
You are in the middle of a room. Looking quickly around you, you see a bed 1, a
desk 2, a desk 1, a drawer 6, a drawer 5, a drawer 4, a drawer 3, a drawer 2,
a drawer 1, a garbagecan 1, a laundryhamper 1, a safe 1, a shelf 6, a shelf 5,
a shelf 4, a shelf 3, a shelf 2, and a shelf 1.
Your task is to: examine the book with the desklamp.
> think: To solve the task, I need to find and take a book, then find and use a
desklamp. First, I need to find a book. A book is more likely to appear in
desk (1-2), shelf (1-6), drawer (1-6). I can check one by one, starting with
desk 1.
OK.
> go to desk 1
On the desk 1, you see a alarmclock 1, a bowl 2, a cd 1, a cellphone 3, a desklamp
1, a mug 1, a pen 1, and a pencil 1.
> examine books on desk 1
Nothing happens.
> examine shelf 1
Nothing happens.
> go to shelf 2
On the shelf 2, you see a creditcard 1, and a pencil 2.
> go to shelf 3
On the shelf 3, you see a alarmclock 1, a bowl 2, and a desklamp 1.
> take book from shelf 3
Nothing happens.
> examine book on shelf 3
Nothing happens.
> ask household member for help
Nothing happens.
> think: I need to keep searching for the book. It could be in desk 2, shelf 4-6,
or drawer 1-6. I will check drawer 1 next.
OK.
> go to drawer 1
The drawer 1 is closed.
> open drawer 1
You open the drawer 1. The drawer 1 is open. In it, you see nothing.
> go to drawer 2
On the drawer 2, you see a cellphone 2.
> take book from drawer 2
Nothing happens.
> examine book in drawer 2
Nothing happens.
```
-----
```
> think: The book is not in drawer 2. I will check drawer 3 next.
OK.
> go to drawer 3
Nothing happens.
> examine books in drawer 3
Nothing happens.
> think: The book is not in drawer 3. I will check drawer 4 next.
OK.
> go to drawer 4
The drawer 4 is closed.
> open drawer 4
You open the drawer 4. The drawer 4 is open. In it, you see nothing.
> think: The book is not in drawer 4. I will check drawer 5 next.
OK.
> go to drawer 5
On the drawer 5, you see nothing.
> think: The book is not in drawer 5. I will check drawer 6 next.
OK.
> go to drawer 6
The drawer 6 is closed.
> open drawer 6
You open the drawer 6. The drawer 6 is open. In it, you see nothing.
> think: The book is not in drawer 6. I will check desk 2 next.
OK.
> go to desk 2
On the desk 2, you see a alarmclock 3, a bowl 3, a cd 3, a mug 2, and a pencil 3.
> think: The book is not on desk 2. I will check shelf 4 next.
OK.
> go to shelf 4
On the shelf 4, you see a alarmclock 2.
> think: The book is not on shelf 4. I will check shelf 5 next.
OK.
> go to shelf 5
On the shelf 5, you see nothing.
> think: The book is not on shelf 5. I will check shelf 6 next.
OK.
> go to shelf 6
On the shelf 6, you see nothing.
> think: The book is not on shelf 6. I have searched all the most likely locations
for a book and have not found it. I will ask the household members for help.
OK.
> ask household member for help
```
```
Nothing happens.
> think: The household member did not respond. I will look for a ladder or step
stool outside of the room that I can bring in to reach higher shelves or areas.
OK.
> go outside of the room
Nothing happens.
> look for a ladder or step stool
Nothing happens.
> think: I was unable to find a ladder or step stool. I will continue searching the
room for the book and ask the household members for help again if necessary.
OK.
> go to desk 1
On the desk 1, you see a alarmclock 1, a bowl 2, a cd 1, a cellphone 3, a desklamp
1, a mug 1, a pen 1, and a pencil 1.
> examine books on desk 1
Nothing happens.
> think: I have searched the room thoroughly and have not found the book. I will
ask the household members for help again.
OK.
> ask household member for help
Nothing happens.
> think: The household member is not responding. I will try to think of any other
possible locations for the book and search those areas.
OK.
> think: The book could be in the laundry hamper or safe. I will check those areas
next.
OK.
```
-----
```
> go to laundry hamper 1
Nothing happens.
> examine laundry hamper 1
Nothing happens.
> go to safe 1
The safe 1 is closed.
> try to open safe 1
Nothing happens.
```
The trajectory of AdaPlanner is detailed below. For better readability, we only present the major subgoals generated by the planner. Evidently, AdaPlanner successfully completes this task without the
need for any revisions. This indicates that the code interface can effectively mitigate the hallucination
in gpt-3.5-turbo.
AdaPlanner Trajectory of Case 1
```
You are in the middle of a room. Looking quickly around you, you see a bed 1, a
desk 2, a desk 1, a drawer 6, a drawer 5, a drawer 4, a drawer 3, a drawer 2,
a drawer 1, a garbagecan 1, a laundryhamper 1, a safe 1, a shelf 6, a shelf 5,
a shelf 4, a shelf 3, a shelf 2, and a shelf 1.
Your task is to: examine the book with the desklamp.
[Step 1] get a list of receptacles where a book is likely to appear.
[Step 2] go to each receptacle in the list until seeing a book
> go to shelf 6
On the shelf 6, you see nothing.
> go to shelf 5
On the shelf 5, you see nothing.
> go to shelf 4
On the shelf 4, you see a mug 2, and a pencil 1.
> go to shelf 3
On the shelf 3, you see a creditcard 1, a desklamp 1, and a mug 1.
> go to shelf 2
On the shelf 2, you see a pen 1.
> go to shelf 1
On the shelf 1, you see a bowl 1.
> go to desk 2
On the desk 2, you see a cellphone 2, a creditcard 2, a mug 3, and a pencil 2.
> go to desk 1
On the desk 1, you see a alarmclock 1, a creditcard 1, a desklamp 1, a laptop 3,
and a mug 1.
> go to bed 1
On the bed 1, you see a book 1, a cellphone 1, a laptop 2, a laptop 1, a pillow 2,
and a pillow 1.
[Step 3] take the book from the receptacle
> take book 1 from bed 1
You pick up the book 1 from the bed 1.
[Step 4] get a list of receptacles where a desklamp is likely to appear.
[Step 5] go to each receptacle in the list until seeing a desklamp
> go to desk 1
On the desk 1, you see a alarmclock 1, a creditcard 1, a desklamp 1, a laptop 3,
and a mug 1.
[Step 6] turn on desklamp
> use desklamp 1
You turn on the desklamp 1.
```
**8.4.2** **Case 2: put some watch on safe.**
In the second trajectory, ReAct also hallucinates infeasible actions (e.g., look for a manual or
```
instructions and ask someone in the household), ultimately leading to task failure.
```
ReAct Trajectory of Case 2 (Hallucinated actions are marked in pink.)
```
You are in the middle of a room. Looking quickly around you, you see a armchair 1,
a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a drawer 5, a drawer 4, a
drawer 3, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a safe 1, a
shelf 12, a shelf 11, a shelf 10, a shelf 9, a shelf 8, a shelf 7, a shelf 6,
a shelf 5, a shelf 4, a shelf 3, a shelf 2, a shelf 1, a sidetable 1, and a
sofa 1.
```
-----
```
Your task is to: put some watch on safe.
> think: To solve the task, I need to find and take a watch, then put it on the
safe.
OK.
> think: First I need to find a watch. A watch is more likely to appear in a drawer
(1-5), a shelf (1-12), a cabinet (1-4), a dresser (1), a sidetable (1), or a
sofa (1). I can check one by one, starting with drawer 1.
OK.
> go to drawer 1
The drawer 1 is closed.
> open drawer 1
You open the drawer 1. The drawer 1 is open. In it, you see nothing.
> go to drawer 2
The drawer 2 is closed.
> open drawer 2
You open the drawer 2. The drawer 2 is open. In it, you see nothing.
> go to drawer 3
The drawer 3 is closed.
> open drawer 3
You open the drawer 3. The drawer 3 is open. In it, you see a creditcard 2.
> go to drawer 4
The drawer 4 is closed.
> open drawer 4
You open the drawer 4. The drawer 4 is open. In it, you see nothing.
> go to drawer 5
The drawer 5 is closed.
> open drawer 5
You open the drawer 5. The drawer 5 is open. In it, you see nothing.
> go to shelf 1
On the shelf 1, you see nothing.
> go to shelf 2
On the shelf 2, you see nothing.
> go to shelf 3
Nothing happens.
> go to shelf 4
On the shelf 4, you see nothing.
> go to shelf 5
On the shelf 5, you see nothing.
> go to shelf 6
On the shelf 6, you see nothing.
> go to shelf 7
On the shelf 7, you see a newspaper 3.
> go to shelf 8
On the shelf 8, you see nothing.
> go to shelf 9
On the shelf 9, you see nothing.
> go to shelf 10
On the shelf 10, you see nothing.
> go to shelf 11
On the shelf 11, you see nothing.
> go to shelf 12
On the shelf 12, you see nothing.
> go to cabinet 1
The cabinet 1 is closed.
> open cabinet 1
You open the cabinet 1. The cabinet 1 is open. In it, you see a vase 1.
> go to cabinet 2
Nothing happens.
> go to cabinet 3
The cabinet 3 is closed.
> open cabinet 3
You open the cabinet 3. The cabinet 3 is open. In it, you see a tissuebox 2.
> go to cabinet 4
Nothing happens.
> go to dresser 1
On the dresser 1, you see a laptop 1, a remotecontrol 1, a tissuebox 1, a vase 3, a
vase 2, and a watch 1.
> take watch 1 from dresser 1
You pick up the watch 1 from the dresser 1.
> think: Now I have a watch (1). Next, I need to put it on the safe.
OK.
> go to safe 1
```
-----
```
The safe 1 is closed.
> open safe 1
You open the safe 1. The safe 1 is open. In it, you see a cellphone 3, and a
cellphone 2.
> put watch 1 on safe 1
Nothing happens.
> think: I need to be more specific about where to put the watch. Maybe on top of
the cellphone 3?
OK.
> put watch 1 on cellphone 3 in safe 1
Nothing happens.
> think: Maybe I need to place the watch inside the safe instead of on top of
something inside it.
OK.
> put watch 1 in safe 1
Nothing happens.
> think: Maybe I need to enter a code to properly open the safe and put the watch
inside.
OK.
> look for a code or key to open the safe
Nothing happens.
> think: Maybe I need to ask someone in the household for the code or key to open
the safe.
OK.
> ask someone in the household for the code or key to open the safe
Nothing happens.
> think: Maybe I need to search for a manual or instructions on how to open the
safe.
OK.
> look for a manual or instructions on how to open the safe
Nothing happens.
> think: Maybe I need to try a combination of numbers to open the safe.
OK.
> try the combination 1234 on the safe
Nothing happens.
> think: Maybe I need to try a different combination of numbers to open the safe.
OK.
```
Similarly, Reflexion experiences issues with hallucination during this task, such as seek
```
additional resources or ask for help. This contributes to the eventual failure of the task.
```
Reflexion Trajectory of Case 2 (Hallucinated actions are marked in pink.)
```
Your memory for the task below:
Trial 0:
In this environment, my plan was to find a watch and put it on the safe. However, I
did not consider that the safe may require a code or key to lock it. I should
have looked for a code or lock mechanism on the safe or a key to the safe
before attempting to put the watch inside. When I could not find a way to
secure the watch inside the safe, I should have looked for another container
or safe in the room. In the next trial, I will first look for a code or lock
mechanism on the safe or a key to the safe. If I cannot find one, I will look
for another container or safe in the room.
Trial 1:
To solve the task, I need to find a watch and a way to secure it inside the safe.
First, I will look for a watch in drawers (1-5), cabinets (1-4), dressers (1),
shelves (1-12), sidetables (1), and sofas (1). Once I find a watch, I will
examine the safe for any clues or look for a key in other locations. If I
cannot find a way to secure the watch inside the safe, I will look for another
container or safe in the room. If I still cannot find a way to secure the
watch, I will seek additional resources or ask for help.
Here is the task:
You are in the middle of a room. Looking quickly around you, you see a armchair 1,
a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a drawer 5, a drawer 4, a
drawer 3, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a safe 1, a
shelf 12, a shelf 11, a shelf 10, a shelf 9, a shelf 8, a shelf 7, a shelf 6,
a shelf 5, a shelf 4, a shelf 3, a shelf 2, a shelf 1, a sidetable 1, and a
sofa 1.
Your task is to: put some watch on safe.
```
-----
```
> think: To solve the task, I need to find a watch and a way to secure it inside
the safe. First, I will look for a watch in drawers (1-5), cabinets (1-4),
dressers (1), shelves (1-12), sidetables (1), and sofas (1).
OK.
> go to drawer 1
The drawer 1 is closed.
> open drawer 1
You open the drawer 1. The drawer 1 is open. In it, you see nothing.
> go to drawer 2
The drawer 2 is closed.
> open drawer 2
You open the drawer 2. The drawer 2 is open. In it, you see nothing.
> go to drawer 3
The drawer 3 is closed.
> open drawer 3
You open the drawer 3. The drawer 3 is open. In it, you see a creditcard 2.
> go to drawer 4
The drawer 4 is closed.
> open drawer 4
You open the drawer 4. The drawer 4 is open. In it, you see nothing.
> go to drawer 5
The drawer 5 is closed.
> open drawer 5
You open the drawer 5. The drawer 5 is open. In it, you see nothing.
> go to cabinet 1
The cabinet 1 is closed.
> open cabinet 1
You open the cabinet 1. The cabinet 1 is open. In it, you see a vase 1.
> go to cabinet 2
Nothing happens.
> think: The cabinet 2 may be locked or empty. I should try another location.
OK.
> go to cabinet 3
The cabinet 3 is closed.
> open cabinet 3
You open the cabinet 3. The cabinet 3 is open. In it, you see a tissuebox 2.
> go to cabinet 4
Nothing happens.
> think: The cabinet 4 may be locked or empty. I should try another location.
OK.
> go to dresser 1
On the dresser 1, you see a laptop 1, a remotecontrol 1, a tissuebox 1, a vase 3, a
vase 2, and a watch 1.
> think: Now I find a watch (1). Next, I need to examine the safe for any clues or
look for a key in other locations.
OK.
> go to safe 1
The safe 1 is closed.
> think: Before attempting to put the watch inside the safe, I should look for a
code or lock mechanism on the safe or a key to the safe.
OK.
> examine safe 1
The safe 1 is closed.
> look for key or code on safe 1
Nothing happens.
> think: I cannot find a key or code on the safe. I should look for another
container or safe in the room.
OK.
> look for another container or safe in the room
Nothing happens.
> think: I still cannot find another container or safe in the room. I should seek
additional resources or ask for help.
OK.
> seek additional resources or ask for help
Nothing happens.
> think: I may need to think of a different solution to the task. Perhaps I can
place the watch near the safe instead of inside it.
OK.
> put watch 1 near safe 1
Nothing happens.
```
-----
```
> think: The task is incomplete. I need to put the watch on the safe. I should try
to find a way to secure the watch on top of the safe or find a container to
put the watch in that can be placed on top of the safe.
OK.
> look for a container to put the watch in that can be placed on top of the safe
Nothing happens.
> think: I should try to find a way to secure the watch on top of the safe.
OK.
> look for a way to secure the watch on top of the safe
Nothing happens.
> think: I may need to think of a different solution to the task. Perhaps I can
find a way to attach the watch to the safe using tape or string.
OK.
> look for tape or string
Nothing happens.
> think: I cannot find tape or string. I should seek additional resources or ask
for help.
OK.
> seek additional resources or ask for help
Nothing happens.
> think: The task is incomplete. I need to put the watch on the safe. I should try
to find a way to secure the watch on top of the safe or find a container to
put the watch in that can be placed on top of the safe. Since I cannot find
any resources or help, I will have to abandon the task for now and come back
to it later with new ideas or resources.
OK.
> end task
Nothing happens.
> exit room
Nothing happens.
> think: I cannot exit the room until the task is complete or I have abandoned it.
I will continue to look for a solution to the task.
OK.
> continue searching for a solution to the task
Nothing happens.
> think: I have exhausted all possible options and resources in the room. I should
take a break and come back to the task later with a fresh perspective or seek
help from someone else.
OK.
> end task
Nothing happens.
> exit room
Nothing happens.
> think: I cannot exit the room until the task is complete or I have abandoned it.
I will take a break and come back to the task later with a fresh perspective
or seek help from someone else.
OK.
> end task
Nothing happens.
> exit room
Nothing happens.
```
Initially, AdaPlanner also experiences hallucination, presuming that the watch is located in the safe.
This incorrect assumption triggers an assertion error. AdaPlanner then revises the plan to better adapt
to the actual environment. Following this refinement, the task is successfully completed.
AdaPlanner Trajectory of Case 2 (Hallucinated actions are marked in pink.)
```
You are in the middle of a room. Looking quickly around you, you see a armchair 1,
a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a drawer 5, a drawer 4, a
drawer 3, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a safe 1, a
shelf 12, a shelf 11, a shelf 10, a shelf 9, a shelf 8, a shelf 7, a shelf 6,
a shelf 5, a shelf 4, a shelf 3, a shelf 2, a shelf 1, a sidetable 1, and a
sofa 1.
Your task is to: put some watch on safe.
[Step 1] get a list of receptacles where the safe is likely to appear
```
-----
```
[Step 2] go to each receptacle in the list until seeing a safe
> go to safe 1
The safe 1 is closed.
> open safe 1
You open the safe 1. The safe 1 is open. In it, you see a cellphone 3, and a
cellphone 2.
[Step 3] identify the watch I just found and take it
> take watch 1 from safe 1
Nothing happens.
[Step 4] go to the safe and put the watch in it
> go to safe 1
Nothing happens.
> put watch 1 in/on safe 1
Nothing happens.
AssertionError: Error in [Step 4]: I cannot put the watch 1 in the safe 1. The last
three interactions before error were:
> take watch 1 from safe 1
Nothing happens.
> go to safe 1
Nothing happens.
> put watch 1 in/on safe 1
Nothing happens.
I am at safe 1 and holding None.
Refinement is done with start_from = 1.
[Step 1] get a list of receptacles where the watch is likely to appear
[Step 2] go to each receptacle in the list until seeing a watch
> go to safe 1
Nothing happens.
> go to cabinet 1
The cabinet 1 is closed.
> open cabinet 1
You open the cabinet 1. The cabinet 1 is open. In it, you see a vase 1.
> go to cabinet 2
Nothing happens.
> go to cabinet 3
The cabinet 3 is closed.
> open cabinet 3
You open the cabinet 3. The cabinet 3 is open. In it, you see a tissuebox 2.
> go to cabinet 4
Nothing happens.
> go to dresser 1
On the dresser 1, you see a laptop 1, a remotecontrol 1, a tissuebox 1, a vase 3, a
vase 2, and a watch 1.
[Step 3] identify the watch I just found and take it
> take watch 1 from dresser 1
You pick up the watch 1 from the dresser 1.
[Step 4] go to the safe and put the watch in it
> go to safe 1
The safe 1 is open. In it, you see a cellphone 3, and a cellphone 2.
> put watch 1 in/on safe 1
You put the watch 1 in/on the safe 1.
```
**8.5** **Detailed Evaluation Results on MiniWoB++**
|Task Name|AdaPlanner|CC-Net [7]|WGE [11]|WebN-T5-3B [4]|RCI [9]|
|---|---|---|---|---|---|
_Continued on next page._
|choose-list click-button click-button-sequence click-checkboxes click-checkboxes-large click-checkboxes-soft click-checkboxes-transfer click-collapsible|100 100 100 100 100 80 98 100|99 100 100 98 71 95 99 100|16 100 100 100 84 94 64 100|26 100 100 96 22 54 63 0|100 100 100 100 94 72 100 100|
|---|---|---|---|---|---|
-----
|click-collapsible-2 click-color click-dialog click-dialog-2 click-link click-menu click-option click-scroll-list click-shades click-shape click-tab click-tab-2 click-tab-2-hard click-test click-test-2 click-widget count-shape email-inbox email-inbox-forward-nl email-inbox-forward-nl-turk email-inbox-nl-turk enter-date enter-password enter-text enter-text-dynamic enter-time focus-text focus-text-2 grid-coordinate guess-number identify-shape login-user login-user-popup multi-layouts multi-orderings navigate-tree search-engine simple-algebra social-media social-media-all social-media-some terminal tic-tac-toe use-autocomplete use-spinner|84 100 100 100 98 78 100 100 100 75 100 85 78 100 100 100 50 98 100 100 90 100 98 98 96 96 100 94 100 88 96 100 98 84 100 82 100 82 82 100 90 98 48 88 90|98 100 100 100 99 94 99 60 100 95 100 98 98 100 100 100 85 100 100 100 100 100 100 100 100 97 100 100 100 100 100 100 100 100 100 99 100 75 90 75 85 0 83 100 100|99 100 100 100 100 n/a 100 n/a 99 64 100 98 n/a 100 100 93 76 99 n/a n/a 93 96 100 100 100 90 100 100 100 0 100 100 n/a 100 100 99 99 n/a 100 1 42 n/a 47 98 4|0 27 100 24 100 37 87 0 0 53 74 18 12 100 100 100 41 38 60 33 23 0 97 89 98 0 100 100 49 0 88 82 72 83 88 91 34 n/a 21 0 2 n/a 48 22 7|62 100 100 100 100 100 100 100 100 98 100 74 76 100 100 98 40 98 100 94 98 96 100 100 100 100 100 100 100 20 76 100 68 72 100 86 100 100 98 100 90 100 56 58 88|
|---|---|---|---|---|---|
Table 5: Per-task success rate (%) of AdaPlanner, CC-Net [7], WGE [11], WebN-T5-3B [4], and
RCI [9]. "n/a" signifies that the corresponding success rate is not reported in the original paper of the
method. The nine tasks with feedback are marked in gray.
-----
| [
"Haotian, Sun",
"Yuchen, Zhuang",
"Lingkai, Kong",
"Bo, Dai",
"Chao, Zhang"
] | 2023-05-26T00:00:00 | NeurIPS 2023 Poster | true | 80 | 1 | null | http://arxiv.org/abs/2305.16653 | https://arxiv.org/abs/2305.16653 | https://www.semanticscholar.org/paper/8e37dc1215681aa153a51c07078ba8befd6a6e01 |
HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving | Large computer-understandable proofs consist of millions of intermediate logical steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has generally not been used to filter or generate these steps. In this paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this dataset publicly available under the BSD license. We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression, convolutional neural networks and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving. | A new dataset based on Higher-Order Logic (HOL) proofs is introduced, for the purpose of developing new machine learning-based theorem-proving strategies and the results of these models show the promise of applying machine learning to HOL theorem proving. | ## HOLSTEP: A MACHINE LEARNING DATASET FOR HIGHER-ORDER LOGIC THEOREM PROVING
**Cezary Kaliszyk**
University of Innsbruck
[email protected]
**François Chollet, Christian Szegedy**
Google Research
{fchollet,szegedy}@google.com
ABSTRACT
Large computer-understandable proofs consist of millions of intermediate logical
steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has
generally not been used to filter or generate these steps. In this paper, we introduce
a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this
dataset publicly available under the BSD license. We propose various machine
learning tasks that can be performed on this dataset, and discuss their significance
for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression, convolutional neural
networks and recurrent neural networks). The results of our baseline models show
the promise of applying machine learning to HOL theorem proving.
1 INTRODUCTION
As the usability of interactive theorem proving (ITP) systems (Harrison et al., 2014) grows, its use
becomes a more common way of establishing the correctness of software as well as mathematical
proofs. Today, ITPs are used for software certification projects ranging from compilers (Leroy,
2009) and operating system components (Chen et al., 2016; Klein et al., 2014), to establishing the
absolute correctness of large proofs in mathematics such as the Kepler conjecture (Hales et al., 2015)
and the Feit-Thomson Theorem (Gonthier et al., 2013).
For results of such significance to be possible, the theorem libraries of these ITPs must contain
all necessary basic mathematical properties, accompanied with formal proofs. This means that the
size of many ITP libraries can be measured in dozens of thousands of theorems (Grabowski et al.,
2010; Blanchette et al., 2015) and billions of individual proof steps. While the general direction of
the proofs is specified by humans (by providing the goal to prove, specifying intermediate steps, or
applying certain automated tactics), the majority of such proof steps are actually found by automated
reasoning-based proof search (Kaliszyk & Urban, 2015b), with very little application of machine
learning techniques so far.
At the same time, fast progress has been unfolding in machine learning applied to tasks that involve
logical inference, such as natural language question answering (Sukhbaatar et al., 2015), knowledge
base completion (Socher et al., 2013a), automated translation (Wu et al., 2016), and premise selection in the context of theorem proving (Alemi et al., 2016). Deep learning in particular has proven
to be a powerful tool for embedding semantic meaning and logical relationships into geometric
spaces, specifically via models such as convolutional neural networks, recurrent neural networks,
and tree-recursive neural networks. These advances strongly suggest that deep learning may have
become mature enough to yield significant advances in automated theorem proving. Remarkably,
it has recently become possible to build a system, AlphaGo (Silver et al., 2016), blending classical AI techniques such as Monte-Carlo tree search and modern deep learning techniques, capable
of playing the game of Go at super-human levels. We should note that theorem proving and Go
playing are conceptually related, since both consist in searching for specific nodes in trees of states
with extremely large arity and relatively large depth, which involves node evaluation decision (how
valuable is this state?) and policy decisions (which node should be expanded next?). The success of
AlphaGo can thus serve as encouragement on the road to building deep learning-augmented theorem
-----
provers that would blend classical techniques developed over the past few decades with the latest
machine learning advances.
Fast progress in specific machine learning verticals has occasionally been achieved thanks to the
release of specialized datasets (often with associated competitions, e.g. the ImageNet dataset for
large-scale image classification (Deng et al., 2009)) serving as an experimental testbed and public
benchmark of current progress, thus focusing the efforts of the research community. We hope that
releasing a theorem proving dataset suited for specific machine learning tasks can serve the same
purpose in the vertical of applying machine learning to theorem proving.
1.1 CONTRIBUTION AND OVERVIEW
First, we develop a dataset for machine learning based on the proof steps used in a large interactive
proof section 2. We focus on the HOL Light (Harrison, 2009) ITP, its multivariate analysis library
(Harrison, 2013), as well as the formal proof of the Kepler conjecture (Hales et al., 2010). These formalizations constitute a diverse proof dataset containing basic mathematics, analysis, trigonometry,
as well as reasoning about data structures such as graphs. Furthermore these formal proof developments have been used as benchmarks for automated reasoning techniques (Kaliszyk & Urban,
2014).
The dataset consists of 2,013,046 training examples and 196,030 testing examples that originate
from 11,400 proofs. Precisely half of the examples are statements that were useful in the currently
proven conjectures and half are steps that have been derived either manually or as part of the automated proof search but were not necessary in the final proofs. The dataset contains only proofs of
non-trivial theorems, that also do not focus on computation but rather on actual theorem proving.
For each proof, the conjecture that is being proven as well as its dependencies (axioms) and may
be exploited in machine learning tasks. Furthermore, for each statement both its human-readable
(pretty-printed) statement and a tokenization designed to make machine learning tasks more manageable are included.
Next, in section 3 we discuss the proof step classification tasks that can be attempted using the
dataset, and we discuss the usefulness of these tasks in interactive and automated theorem proving.
These tasks include unconditioned classification (without access to conjectures and dependencies)
and conjecture-conditioned classification (with access to the conjecture) of proof steps as being
useful or not in a proof. We outline the use of such classification capabilities for search space
pruning and internal guidance, as well as for generation of intermediate steps or possible new lemma
statements.
Finally, in section 4 we propose three baseline models for the proof step classification tasks, and we
experimentally evaluate the models on the data in section 5. The models considered include both
a relatively simple regression model, as well as deep learning models based on convolutional and
recurrent neural networks.
1.2 RELATED WORK
The use of machine learning in interactive and automated theorem proving has so far focused on
three tasks: premise selection, strategy selection, and internal guidance. We shortly explain these.
Given a large library of proven facts and a user given conjecture, the multi-label classification problem of selecting the facts that are most likely to lead to a successful proof of the conjecture has
been usually called relevance filtering or premise selection (Alama et al., 2014). This is crucial for
the efficiency of modern automation techniques for ITPs (Blanchette et al., 2016), which today can
usually solve 40–50% of the conjectures in theorem proving libraries. Similarly most competitive
ATPs today (Sutcliffe, 2016) implement the SInE classifier (Hoder & Voronkov, 2011).
A second theorem proving task where machine learning has been of importance is strategy selection.
With the development of automated theorem provers came many parameters that control their execution. In fact, modern ATPs, such as E (Schulz, 2013) and Vampire (Kovács & Voronkov, 2013),
include complete strategy description languages that allow a user to specify the orderings, weighting
functions, literal selection strategies, etc. Rather than optimizing the search strategy globally, one
-----
can choose the strategy based on the currently considered problem. For this some frameworks use
machine learning (Bridge et al., 2014; Kühlwein & Urban, 2015).
Finally, an automated theorem prover may use machine learning for choosing the actual inference
steps. It has been shown to significantly reduce the proof search in first-order tableaux by the
selection of extension steps to use (Urban et al., 2011), and has been also successfully applied
in monomorphic higher-order logic proving (Färber & Brown, 2016). Data/proof mining has also
been applied on the level of interactive theorem proving tactics (Duncan, 2007) to extract and reuse
repeating patterns.
2 DATASET EXTRACTION
We focus on the HOL Light theorem prover for two reasons. First, it follows the LCF approach[1]).
This means that complicated inferences are reduced to the most primitive ones and the data extraction related modifications can be restricted the primitive inferences and it is relatively easy to extract
proof steps at an arbitrary selected level of granularity. Second, HOL Light implements higher-order
logic (Church, 1940) as its foundation, which on the one hand is powerful enough to encode most
of today’s formal proofs, and on the other hand allows for an easy integration of many powerful
automation mechanisms (Baader & Nipkow, 1998; Paulson, 1999).
When selecting the theorems to record, we choose an intermediate approach between HOL Light
ProofRecording (Obua & Skalberg, 2006) and the HOL/Import one (Kaliszyk & Krauss, 2013). The
theorems that are derived by most common proof functions are extracted by patching these functions
like in the former approach, and the remaining theorems are extracted from the underlying OCaml
programming language interpreter. In certain cases decision procedures derive theorems to be reused
in subsequent invocations. We detect such values by looking at theorems used across proof blocks
and avoid extracting such reused unrelated subproofs.
All kernel-level inferences are recorded together with their respective arguments in a trace file. The
trace is processed offline to extract the dependencies of the facts, detect used proof boundaries,
mark the used and unused steps, and mark the training and testing examples. Only proofs that have
sufficiently many used and unused steps are considered useful for the dataset. The annotated proof
trace is processed again by a HOL kernel saving the actual training and testing examples originating
from non-trivial reasoning steps. Training and testing examples are grouped by proof: for each proof
the conjecture (statement that is finally proved), the dependencies of the theorem are constant, and a
list of used and not used intermediate statements is provided. This means that the conjectures used
in the training and testing sets are normally disjoint.
For each statement, whether it is the conjecture, a proof dependency, or an intermediate statement,
both a fully parenthesised HOL Light human-like printout is provided, as well as a predefined tokenization. The standard HOL Light printer uses parentheses and operator priorities to make its
notations somewhat similar to textbook-style mathematics, while at the same time preserving the
complete unambiguity of the order of applications (this is particularly visible for associative operators). The tokenization that we propose attempts to reduce the number of parentheses. To do this we
compute the maximum number of arguments that each symbol needs to be applied to, and only mark
partial application. This means that fully applied functions (more than 90% of the applications) do
not require neither application operators nor parentheses. Top-level universal quantifications are
eliminated, bound variables are represented by their de Bruijn indices (the distance from the corresponding abstraction in the parse tree of the term) and free variables are renamed canonically. Since
the Hindley-Milner type inference Hindley (1969) mechanisms will be sufficient to reconstruct the
most-general types of the expressions well enough for automated-reasoning techniques Kaliszyk
et al. (2015) we erase all type information. Table 1 presents some dataset statistics. The dataset, the
description of the used format, the scripts used to generate it and baseline models code are available:
[http://cl-informatik.uibk.ac.at/cek/holstep/](http://cl-informatik.uibk.ac.at/cek/holstep/)
1LCF approach is a software architecture for implementing theorem provers which uses a strongly typed
programming language with abstract datatypes (such as OCaml in the case of HOL Light) to separate the small
trusted core, called the kernel, which verifies the primitive inferences from user code which allows the user to
arbitrarily extend the system in a safe manner. For more details see (Gordon et al., 1979).
-----
Train Test Positive Negative
Examples 2013046 196030 1104538 1104538
Avg. length 503.18 440.20 535.52 459.66
Avg. tokens 87.01 80.62 95.48 77.40
Conjectures 9999 1411 - -
Avg. dependencies 29.58 22.82 - -
Table 1: HolStep dataset statistics
3 MACHINE LEARNING TASKS
3.1 TASKS DESCRIPTION
This dataset makes possible several tasks well-suited for machine learning most of which are highly
relevant for theorem proving:
_• Predicting whether a statement is useful in the proof of a given conjecture;_
_• Predicting the dependencies of a proof statement (premise selection);_
_• Predicting whether a statement is an important one (human named);_
_• Predicting which conjecture a particular intermediate statement originates from;_
_• Predicting the name given to a statement;_
_• Generating intermediate statements useful in the proof of a given conjecture;_
_• Generating the conjecture the current proof will lead to._
In what follows we focus on the first task: classifying proof step statements as being useful or not in
the context of a given proof. This task may be further specialized into two different tasks:
_• Unconditioned classification of proof steps: determining how likely a given proof is to be_
useful for the proof it occurred in, based solely on the content of statement (i.e. by only
providing the model with the step statement itself, absent any context).
_• Conditioned classification of proof steps: determining how likely a given proof is to be_
useful for the proof it occurred in, with “conditioning” on the conjecture statement that the
proof was aiming to attain, i.e. by providing the model with both the step statement and the
conjecture statement).
In the dataset, for every proof we provide the same number of useful and non-useful steps. As such,
the proof step classification problem is a balanced two-class classification problem, where a random
baseline would yield an accuracy of 0.5.
3.2 RELEVANCE TO INTERACTIVE AND AUTOMATED THEOREM PROVING
In the interaction with an interactive theorem prover, the tasks that require most human time are: the
search for good intermediate steps; the search for automation techniques able to justify the individual
steps, and searching theorem proving libraries for the necessary simpler facts. These three problems
directly correspond to the machine learning tasks proposed in the previous subsection. Being able
to predict the usefulness of a statement will significantly improve many automation techniques.
The generation of good intermediate lemmas or intermediate steps can improve level of granularity
of the proof steps. Understanding the correspondence between statements and their names can
allow users to search for statements in the libraries more efficiently (Aspinall & Kaliszyk, 2016).
Premise selection and filtering are already used in many theorem proving systems, and generation
of succeeding steps corresponds to conjecturing and theory exploration.
-----
Figure 1: Unconditioned classification model architectures.
4 BASELINE MODELS
For each task (conditioned and unconditioned classification), we propose three different deep learning architectures, meant to provide a baseline for the classification performance that can be achieved
on this dataset. Our models cover a range of architecture features (from convolutional networks
to recurrent networks), aiming at probing what characteristics of the data are the most helpful for
usefulness classification.
Our models are implemented in TensorFlow (Abadi et al., 2015) using the Keras framework (Chollet,
2015). Each model was trained on a single Nvidia K80 GPU. Training only takes a few hours per
model, which makes running these experiments accessible to most people (they could even be run
on a laptop CPU). We are releasing all of our benchmark code as open-source software [2] so as to
allow others to reproduce our results and improve upon our models.
4.1 UNCONDITIONED CLASSIFICATION MODELS
Our three models for this task are as follow:
_• Logistic regression on top of learned token embeddings. This minimal model aims to_
determine to which extent simple differences between token distribution between useful
and non-useful statements can be used to distinguish them. It provides an absolute floor on
the performance achievable on this task.
_• 2-layer 1D convolutional neural network (CNN) with global maxpooling for sequence re-_
duction. This model aims to determine the importance of local patterns of tokens.
_• 2-layer 1D CNN with LSTM (Hochreiter & Schmidhuber, 1997) sequence reduction. This_
model aims to determine the importance of order in the features sequences.
See figure 1 for a layer-by-layer description of these models.
4.2 CONDITIONED CLASSIFICATION MODELS
For this task, we use versions of the above models that have two siamese branches (identical branches
with shared weights), with one branch processing the proof step statement being considered, and the
[2https://github.com/tensorflow/deepmath/tree/master/holstep_baselines](https://github.com/tensorflow/deepmath/tree/master/holstep_baselines)
-----
Figure 2: Conditioned classification model architectures.
other branch processing the conjecture. Each branch outputs an embedding; these two embeddings
(step embedding and conjecture embedding) are then concatenated and the classified by a fullyconnected network. See figure 2 for a layer-by-layer description of these models.
4.3 INPUT STATEMENTS ENCODING
It should be noted that all of our models start with an Embedding layer, mapping tokens or characters
in the statements to dense vectors in a low-dimensional space. We consider two possible encodings
for presenting the input statements (proof steps and conjectures) to the Embedding layers of our
models:
_• Character-level encoding of the human-readable versions of the statements, where each_
character (out of a set of 86 unique characters) in the pretty-printed statements is mapped
to a 256-dimensional dense vector. This encoding yields longer statements (training statements are 308 character long on average).
_• Token-level encoding of the versions of the statements rendered with our proposed high-_
level tokenization scheme. This encoding yields shorter statements (training statements are
60 token long on average), while considerably increasing the size of set of unique tokens
(1993 total tokens in the training set).
-----
Table 2: HolStep proof step classification accuracy without conditioning
**Logistic**
**1D CNN** **1D CNN-LSTM**
**regression**
**Accuracy with char input** 0.71 0.82 **0.83**
**Accuracy with token input** 0.71 **0.83** 0.77
Table 3: HolStep proof step classification accuracy with conditioning
**Logistic** **Siamese** **Siamese**
**regression** **1D CNN** **1D CNN-LSTM**
**Accuracy with char input** 0.71 0.81 **0.83**
**Accuracy with token input** 0.71 0.82 0.77
5 RESULTS
Experimental results are presented in tables 2 and 3, as well as figs. 3 to 6.
5.1 INFLUENCE OF MODEL ARCHITECTURE
Our unconditioned logistic regression model yields an accuracy of 71%, both with character encoding and token encoding (tables 2 and 3). This demonstrates that differences in token or character
distributions between useful and non-useful steps alone, absent any context, is sufficient for discriminating between useful and non-useful statements to a reasonable extent. This also demonstrates that
the token encoding is not fundamentally more informative than raw character-level statements.
Additionally, our unconditioned 1D CNN model yields an accuracy of 82% to 83%, both with character encoding and token encoding (tables 2 and 3). This demonstrates that patterns of characters or
patterns of tokens are considerably more informative than single tokens for the purpose of usefulness
classification.
Finally, our unconditioned convolutional-recurrent model does not improve upon the results of the
1D CNN, which indicates that our models are not able to meaningfully leverage order in the feature
sequences into which the statements are encoded.
5.2 INFLUENCE OF INPUT ENCODING
For the logistic regression model and the 2-layer 1D CNN model, the choice of input encoding seems
to have little impact. For the convolutional-recurrent model, the use of the high-level tokenization
seems to cause a large decrease in model performance (figs. 4 and 6). This may be due to the fact
that token encoding yields shorter sequences, making the use of a LSTM less relevant.
5.3 INFLUENCE OF CONDITIONING ON THE CONJECTURE
None of our conditioned models appear to be able to improve upon the unconditioned models, which
indicates that our architectures are not able to leverage the information provided by the conjecture.
The presence of the conditioning does however impact the training profile of our models, in particular by making the 1D CNN model converge faster and overfit significantly quicker (figs. 5 and 6).
6 CONCLUSIONS
Our baseline deep learning models, albeit fairly weak, are still able to predict statement usefulness
with a remarkably high accuracy. Such methods already help first-order automated provers (Kaliszyk
& Urban, 2015a) and as the branching factor is higher in HOL the predictions are valuable for a
number of practical proving applications. This includes making tableaux-based (Paulson, 1999)
and superposition-based (Hurd, 2003) internal ITP proof search significantly more efficient in turn
-----
Figure 3: Training profile of the three unconditioned baseline models with character input.
Figure 5: Training profile of the three conditioned baseline models with character input.
Figure 4: Training profile of the three unconditioned baseline models with token input.
Figure 6: Training profile of the three conditioned baseline models with token input.
making formalization easier. However, our models do not appear to be able to leverage order in the
input sequences, nor conditioning on the conjectures. This is due to the fact that these models are not
doing any form of logical reasoning on their input statements; rather they are doing simple pattern
matching at the level of n-grams of characters or tokens. This shows the need to focus future efforts
on different models that can do reasoning, or alternatively, on systems that blend explicit reasoning
(e.g. graph search) with deep learning-based feature learning. A potential new direction would be
to leverage the graph structure of HOL statements using e.g. Recursive Neural Tensor Networks
(Socher et al., 2013a;b) or other graph-based recursive architectures.
6.1 FUTURE WORK
The dataset focuses on one interactive theorem prover. It would be interesting if the proposed techniques generalize, primarily across ITPs that use the same foundational logic, for example using
OpenTheory (Hurd, 2011), and secondarily across fundamentally different ITPs or even ATPs. A
significant part of the unused steps originates from trying to fulfill the conditions for rewriting and
from calls to intuitionistic tableaux. The main focus is however on the human found proofs so the
trained predictions may to an extent mimic the bias on the usefulness in the human proofs. As ATPs
are at the moment very week in comparison with human intuition improving this even for the many
proofs humans do not find difficult would be an important gain.
Finally, two of the proposed task for the dataset have been premise selection and intermediate sentence generation. It would be interesting to define more ATP-based ways to evaluate the selected
premises, as well as to evaluate generated sentences (Kaliszyk et al., 2015). The set is a relatively
large one when it comes to proof step classification, however the number of available premises
makes the set a medium-sized set for premise selection in comparison with those of the Mizar Mathematical Library or the seL4 development.
-----
ACKNOWLEDGEMENTS
The first author was partly supported by the ERC starting grant 714034.
REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew
Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath
Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,
Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning
[on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensor-](http://tensorflow.org/)
flow.org.
Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise selection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning, 52(2):
191–213, 2014. doi: 10.1007/s10817-013-9286-5.
Alex A. Alemi, François Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath
– Deep sequence models for premise selection. In Daniel D. Lee, Masashi Sugiyama, Ulrike V.
Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing
_[Systems (NIPS 2016), pp. 2235–2243, 2016. URL https://arxiv.org/abs/1606.04442.](https://arxiv.org/abs/1606.04442)_
David Aspinall and Cezary Kaliszyk. What’s in a theorem name? In Jasmin Christian Blanchette
and Stephan Merz (eds.), Interactive Theorem Proving (ITP 2016), volume 9807 of LNCS, pp.
459–465. Springer, 2016. doi: 10.1007/978-3-319-43144-4.
Franz Baader and Tobias Nipkow. Term rewriting and all that. Cambridge University Press, 1998.
ISBN 978-0-521-45520-6.
Jasmin C. Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. Hammering towards
QED. _J. Formalized Reasoning, 9(1):101–148, 2016._ ISSN 1972-5787. doi: 10.6092/issn.
1972-5787/4593.
Jasmin Christian Blanchette, Maximilian P. L. Haslbeck, Daniel Matichuk, and Tobias Nipkow.
Mining the Archive of Formal Proofs. In Manfred Kerber, Jacques Carette, Cezary Kaliszyk,
Florian Rabe, and Volker Sorge (eds.), Intelligent Computer Mathematics (CICM 2015), volume
9150 of LNCS, pp. 3–17. Springer, 2015.
James P. Bridge, Sean B. Holden, and Lawrence C. Paulson. Machine learning for first-order theorem proving - learning to select a good heuristic. J. Autom. Reasoning, 53(2):141–172, 2014. doi:
10.1007/s10817-014-9301-5.
Haogang Chen, Daniel Ziegler, Tej Chajed, Adam Chlipala, M. Frans Kaashoek, and Nickolai Zeldovich. Using crash Hoare logic for certifying the FSCQ file system. In Ajay Gulati and Hakim
Weatherspoon (eds.), USENIX 2016. USENIX Association, 2016.
[François Chollet. Keras. https://github.com/fchollet/keras, 2015.](https://github.com/fchollet/keras)
Alonzo Church. A formulation of the simple theory of types. J. Symb. Log., 5(2):56–68, 1940. doi:
[10.2307/2266170. URL http://dx.doi.org/10.2307/2266170.](http://dx.doi.org/10.2307/2266170)
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical
Image Database. In CVPR09, 2009.
Hazel Duncan. The Use of Data-Mining for the Automatic Formation of Tactics. PhD thesis, University of Edinburgh, 2007.
Michael Färber and Chad E. Brown. Internal guidance for Satallax. In Nicola Olivetti and Ashish
Tiwari (eds.), International Joint Conference on Automated Reasoning (IJCAR 2016), volume
9706 of LNCS, pp. 349–361. Springer, 2016. doi: 10.1007/978-3-319-40229-1.
-----
Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot,
Stéphane Le Roux, Assia Mahboubi, Russell O’Connor, Sidi Ould Biha, Ioana Pasca, Laurence
Rideau, Alexey Solovyev, Enrico Tassi, and Laurent Théry. A machine-checked proof of the
odd order theorem. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie (eds.),
_Interactive Theorem Proving (ITP 2013), volume 7998 of LNCS, pp. 163–179. Springer, 2013._
Michael J. C. Gordon, Robin Milner, and Christopher P. Wadsworth. Edinburgh LCF, volume 78
of Lecture Notes in Computer Science. Springer, 1979. ISBN 3-540-09724-4. doi: 10.1007/
[3-540-09724-4. URL http://dx.doi.org/10.1007/3-540-09724-4.](http://dx.doi.org/10.1007/3-540-09724-4)
Adam Grabowski, Artur Kornilowicz, and Adam Naumowicz. Mizar in a nutshell. J. Formalized
_Reasoning, 3(2):153–245, 2010. doi: 10.6092/issn.1972-5787/1980._
Thomas Hales, John Harrison, Sean McLaughlin, Tobias Nipkow, Steven Obua, and Roland
Zumkeller. A revision of the proof of the Kepler Conjecture. Discrete & Computational Ge_ometry, 44(1):1–34, 2010._
Thomas C. Hales, Mark Adams, Gertrud Bauer, Dat Tat Dang, John Harrison, Truong Le Hoang,
Cezary Kaliszyk, Victor Magron, Sean McLaughlin, Thang Tat Nguyen, Truong Quang Nguyen,
Tobias Nipkow, Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, An Hoai Thi Ta,
Trung Nam Tran, Diep Thi Trieu, Josef Urban, Ky Khac Vu, and Roland Zumkeller. A formal
proof of the Kepler conjecture. CoRR, abs/1501.02155, 2015.
John Harrison. HOL Light: An overview. In Stefan Berghofer, Tobias Nipkow, Christian Urban,
and Makarius Wenzel (eds.), Theorem Proving in Higher Order Logics (TPHOLs 2009), volume
5674 of LNCS, pp. 60–66. Springer, 2009.
John Harrison. The HOL Light theory of Euclidean space. J. Autom. Reasoning, 50(2):173–190,
2013. doi: 10.1007/s10817-012-9250-9.
John Harrison, Josef Urban, and Freek Wiedijk. History of interactive theorem proving. In Jörg
Siekmann (ed.), Handbook of the History of Logic vol. 9 (Computational Logic), pp. 135–214.
Elsevier, 2014.
R. Hindley. The principal type-scheme of an object in combinatory logic. Transactions of the
_american mathematical society, 146:29–60, 1969. ISSN 0002-9947._
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):
1735–1780, 1997.
Kryštof Hoder and Andrei Voronkov. Sine qua non for large theory reasoning. In Nikolaj Bjørner and
Viorica Sofronie-Stokkermans (eds.), CADE-23, volume 6803 of LNAI, pp. 299–314. Springer,
2011.
Joe Hurd. First-order proof tactics in higher-order logic theorem provers. In Myla Archer, Ben Di
Vito, and César Muñoz (eds.), Design and Application of Strategies/Tactics in Higher Order Log_ics (STRATA 2003), number NASA/CP-2003-212448 in NASA Technical Reports, pp. 56–68,_
[September 2003. URL http://www.gilith.com/research/papers.](http://www.gilith.com/research/papers)
Joe Hurd. The OpenTheory standard theory library. In Mihaela Gheorghiu Bobaru, Klaus Havelund,
Gerard J. Holzmann, and Rajeev Joshi (eds.), NASA Formal Methods (NFM 2011), volume 6617
of LNCS, pp. 177–191. Springer, 2011.
Cezary Kaliszyk and Alexander Krauss. Scalable LCF-style proof translation. In Sandrine Blazy,
Christine Paulin-Mohring, and David Pichardie (eds.), Interactive Theorem Proving (ITP 2013),
volume 7998 of LNCS, pp. 51–66. Springer, 2013.
Cezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with Flyspeck. J. Autom.
_Reasoning, 53(2):173–213, 2014. doi: 10.1007/s10817-014-9303-3._
Cezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly efficient machine learning connection
prover. In Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov (eds.), 20th In_ternational Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR_
_2015), volume 9450 of LNCS, pp. 88–96. Springer, 2015a. doi: 10.1007/978-3-662-48899-7._
-----
Cezary Kaliszyk and Josef Urban. Learning-assisted theorem proving with millions of lemmas.
_J. Symbolic Computation, 69:109–128, 2015b. doi: 10.1016/j.jsc.2014.09.032._
Cezary Kaliszyk, Josef Urban, and Jiˇrí Vyskoˇcil. Learning to parse on aligned corpora. In Christian
Urban and Xingyuan Zhang (eds.), Proc. 6h Conference on Interactive Theorem Proving (ITP’15),
volume 9236 of LNCS, pp. 227–233. Springer-Verlag, 2015. doi: 10.1007/978-3-319-22102-1_
15.
Gerwin Klein, June Andronick, Kevin Elphinstone, Toby C. Murray, Thomas Sewell, Rafal Kolanski, and Gernot Heiser. Comprehensive formal verification of an OS microkernel. ACM Trans.
_Comput. Syst., 32(1):2, 2014._
Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. In Natasha Sharygina and Helmut Veith (eds.), Computer-Aided Verification (CAV 2013), volume 8044 of LNCS,
pp. 1–35. Springer, 2013.
Daniel Kühlwein and Josef Urban. MaLeS: A framework for automatic tuning of automated theorem
provers. J. Autom. Reasoning, 55(2):91–116, 2015. doi: 10.1007/s10817-015-9329-1.
Xavier Leroy. Formal verification of a realistic compiler. Commun. ACM, 52(7):107–115, 2009.
Steven Obua and Sebastian Skalberg. Importing HOL into Isabelle/HOL. In Ulrich Furbach and
Natarajan Shankar (eds.), International Joint Conference on Automated Reasoning (IJCAR 2006),
volume 4130 of LNCS, pp. 298–302. Springer, 2006.
Lawrence C. Paulson. A generic tableau prover and its integration with Isabelle. _J. Universal_
_Computer Science, 5(3):73–87, 1999._
Stephan Schulz. System description: E 1.8. In Kenneth L. McMillan, Aart Middeldorp, and Andrei
Voronkov (eds.), Logic for Programming, Artificial Intelligence (LPAR 2013), volume 8312 of
_LNCS, pp. 735–743. Springer, 2013._
David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den
Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot,
Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering
the game of go with deep neural networks and tree search. Nature, 529:484–503, 2016. URL
[http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html.](http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html)
Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning
with neural tensor networks for knowledge base completion. In Advances in Neural In_formation Processing Systems 26:_ _27th Annual Conference on Neural Information Pro-_
_cessing Systems 2013. Proceedings., pp. 926–934, 2013a._ [URL http://papers.nips.cc/paper/](http://papers.nips.cc/paper/5028-reasoning-with-neural-tensor-networks-for-knowledge-base-completion)
[5028-reasoning-with-neural-tensor-networks-for-knowledge-base-completion.](http://papers.nips.cc/paper/5028-reasoning-with-neural-tensor-networks-for-knowledge-base-completion)
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng,
and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment
treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language
_Processing, pp. 1631–1642, Stroudsburg, PA, October 2013b. Association for Computational Lin-_
guistics.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances
_in Neural Information Processing Systems, pp. 2431–2439, 2015._
Geoff Sutcliffe. The CADE ATP system competition - CASC. AI Magazine, 37(2):99–101, 2016.
Josef Urban, Jiˇrí Vyskoˇcil, and Petr Štˇepánek. MaLeCoP: Machine learning connection prover. In
Kai Brünnler and George Metcalfe (eds.), TABLEAUX 2011, volume 6793 of LNCS. Springer,
2011.
-----
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey,
Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa,
Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa,
Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural
machine translation system: Bridging the gap between human and machine translation. CoRR,
[abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.](http://arxiv.org/abs/1609.08144)
-----
| [
"Cezary, Kaliszyk",
"François, Chollet",
"Christian, Szegedy"
] | 2017-03-01T00:00:00 | ICLR 2017 | true | 79 | 10 | [
"HOL Light"
] | http://arxiv.org/abs/1703.00426 | null | https://www.semanticscholar.org/paper/f1318d15d7d8ab626b92d0c70dbdc5b5d37e223f |
Deductive Verification of Chain-of-Thought Reasoning | Large Language Models (LLMs) significantly benefit from Chain-of-thought (CoT) prompting in performing various reasoning tasks. While CoT allows models to produce more comprehensive reasoning processes, its emphasis on intermediate reasoning steps can inadvertently introduce hallucinations and accumulated errors, thereby limiting models’ ability to solve complex reasoning tasks. Inspired by how humans engage in careful and meticulous deductive logical reasoning processes to solve tasks, we seek to enable language models to perform explicit and rigorous deductive reasoning, and also ensure the trustworthiness of their reasoning process through self-verification. However, directly verifying the validity of an entire deductive reasoning process is challenging, even with advanced models like ChatGPT. In light of this, we propose to decompose a reasoning verification process into a series of step-by-step subprocesses, each only receiving their necessary context and premises. To facilitate this procedure, we propose Natural Program, a natural language-based deductive reasoning format. Our approach enables models to generate precise reasoning steps where subsequent steps are more rigorously grounded on prior steps. It also empowers language models to carry out reasoning self-verification in a step-by-step manner. By integrating this verification process into each deductive reasoning stage, we significantly enhance the rigor and trustfulness of generated reasoning steps. Along this process, we also improve the answer correctness on complex reasoning tasks. | This work proposes Natural Program, a natural language-based deductive reasoning format that enables models to generate precise reasoning steps where subsequent steps are more rigorously grounded on prior steps and significantly enhances the rigor and trustfulness of generated reasoning steps. | ## Deductive Verification of Chain-of-Thought Reasoning
**Zhan Ling[1][∗]** **Yunhao Fang[1][∗]** **Xuanlin Li[1]** **Zhiao Huang[1]** **Mingu Lee[2]**
**Roland Memisevic[2]** **Hao Su[1]**
1UC San Diego, 2Qualcomm AI Research†
**Abstract**
Large Language Models (LLMs) significantly benefit from Chain-of-Thought
(CoT) prompting in performing various reasoning tasks. While CoT allows models
to produce more comprehensive reasoning processes, its emphasis on intermediate
reasoning steps can inadvertently introduce hallucinations and accumulated errors,
thereby limiting models’ ability to solve complex reasoning tasks. Inspired by how
humans engage in careful and meticulous deductive logical reasoning processes
to solve tasks, we seek to enable language models to perform explicit and rigor_ous deductive reasoning, and also ensure the trustworthiness of their reasoning_
process through self-verification. However, directly verifying the validity of an
entire deductive reasoning process is challenging, even with advanced models
like ChatGPT. In light of this, we propose to decompose a reasoning verification
process into a series of step-by-step subprocesses, each only receiving their necessary context and premises. To facilitate this procedure, we propose Natural
**Program, a natural language-based deductive reasoning format. Our approach**
enables models to generate precise reasoning steps where subsequent steps are
more rigorously grounded on prior steps. It also empowers language models to
carry out reasoning self-verification in a step-by-step manner. By integrating this
verification process into each deductive reasoning stage, we significantly enhance
the rigor and trustfulness of generated reasoning steps. Along this process, we also
improve the answer correctness on complex reasoning tasks. Code will be released
[at https://github.com/lz1oceani/verify_cot.](https://github.com/lz1oceani/verify_cot)
**1** **Introduction**
The transformative power of large language models, enhanced by Chain-of-Thought (CoT)
prompting [50, 21, 59, 42], has significantly reshaped the landscape of information processing [14, 26, 49, 56, 13, 55, 23, 29], fostering enhanced abilities across a myriad of disciplines
and sectors. While CoT allows models to produce more comprehensive reasoning processes, its
emphasis on intermediate reasoning steps can inadvertently introduce hallucinations [4, 30, 16, 20]
and accumulated errors [4, 51, 1], thereby limiting models’ ability to produce cogent reasoning
processes.
In fact, the pursuit of reliable reasoning is not a contemporary novelty; indeed, it is an intellectual
endeavor that traces its roots back to the time of Aristotle’s ancient Greece. Motivated by the desire
to establish a rigorous reasoning process, in his “Organon,” Aristotle introduced principles of logic,
in particular, syllogism, a form of logical argument that applies deductive reasoning to arrive at
a conclusion based on two or more propositions assumed to be true. In disciplines that rigorous
reasoning is critical, such as judical reasoning and mathematical problem solving, documents must be
written in a formal language with a logical structure to ensure the validity of the reasoning process.
_∗Equal contribution_
_†Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc_
All datasets and models were solely downloaded and evaluated by the University of California San Diego.
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
-----
|Col1|Question|Col3|
|---|---|---|
|There are 53 maple trees currently in the park. Park workers will plant maple trees today. When the workers are finished there will be 64 maple trees in the park. How many maple trees did the workers plant today?|||
|Col1|Verification|Col3|
|---|---|---|
|Here is some information: "There are 53 maple trees currently in the park. After the workers plant trees, there will be 64 maple trees in the park." Based on the given information, here is a reasoning process: "Calculate the number of maple trees the workers will plant. Number of maple trees the workers will plant: 64 - 53 = 11" Double-check the reasoning process, let's analyze its correctness, and end with "yes" or "no".|||
```
There are 53 maple trees currently in After the workers plant trees, there will be 64 maple trees in the park."
the park. Park workers will plant maple
trees today. When the workers are Based on the given information, here is a reasoning process: "Calculate
finished there will be 64 maple trees the number of maple trees the workers will plant. Number of maple trees
in the park. How many maple trees did the workers will plant: 64 - 53 = 11"
the workers plant today? Double-check the reasoning process, let's analyze its correctness, and
end with "yes" or "no".
```
Question-Related Premises Reasoning Reasoning
```
# 1. There are 53 maple trees currently # 5. (by #2 #1) Step 1: # 6. (by #4 #5) Final Step: The original
```
`in the park.# 2. there will be 64 maple trees in the park.# 3. # 4. After the workers plant trees, The workers plant some maple trees.How many maple trees did the` `Calculate the number of maple trees the workers will plant. Number of maple trees the workers will plant: 64 - 53 = 11` … `question is #4. How many maple trees did the workers plant today? We do not miss information on the rewritten labels. So the answer to this question is the workers planted 11 maple trees today.`
```
workers plant today?
```
Figure 1: An overview of our proposed deductive reasoning and verification process. In response
|Col1|Col2|Col3|
|---|---|---|
|Reasoning #8. (by #4 #7) Final Step: The original question is #4. How much will she spend on place settings? We do not miss information on the rewritten labels. So the answer to this question is Avery will spend $540.00 on place settings.|Reasoning||
|Col1|Col2|Col3|
|---|---|---|
|Reasoning #10. (by #6 #9) Final Step: The original question is #6. How much will Avery spend on place settings? We do not miss information on the rewritten labels. So the answer to this question is Avery will spend $540.00 on place settings.|Reasoning||
|Col1|Col2|
|---|---|
|Question Avery needs to buy a 3 piece place setting (dinner & salad plate and a bowl) for her holiday dinner. She’s having 12 people over for dinner. If the dinner plates cost $6.00 each and bowls each cost $5.00 and the salad plates cost $4.00, how much will she spend on place settings?|Question|
language models that can propose potential solutions through reasoning in logical structures. Si
to an input question, LLMs generate deductive reasoning chains using the Natural Program format
(bottom 3 boxes), a natural language-based deductive reasoning approach. The Natural Program
format allows individual reasoning steps (an example in purple) and their corresponding minimal
set of premises (an example in orange) to be easily extracted. This streamlined extraction process
facilitates the step-by-step decomposition and verification of deductive reasoning (top-right box).
Question Question-Related Premises Reasoning Reasoning
`# 1. setting (dinner & salad plate and a bowl) for her holiday dinner.# 2. # 3. cost $5.00, and salad plates cost $4.00.# 4. Avery needs to buy a 3 piece place She’s having 12 people over for dinner.Dinner plates cost $6.00 each, bowls each How much will she spend on place settings?` `# 5. (Calculate the total number of place settings needed.Total number of place settings: 12 * 3 = 36by #2) Step 1:` … `#8. (by #4 #7) Final Step: original question is #4. How much will she spend on place settings? We do not miss information on the rewritten labels. So the answer to this question is Avery will spend $540.00 on place settings.` `The`
Question-Related Premises Reasoning Reasoning
`Avery needs to buy a 3 piece place setting (dinner & salad plate and a bowl) for her holiday dinner. She’s having 12 people over for dinner. If the dinner plates cost $6.00` `# 1. setting for her holiday dinner.# 2. # 3. # 4. # 5. # 6. settings?Avery needs to buy a 3 piece place Avery is having 12 people over for dinner.The cost of each dinner plate is $6.00.The cost of each bowl is $5.00.The cost of each salad plate is $4.00.How much will Avery spend on place` `# 7. (Calculate the total number of place settings Avery needs to buy. Total number of place settings: 12 * 3 = 36by #2) Step 1:` … `#10. (by #6 #9) Final Step:original question is #6. How much will Avery spend on place settings? We do not miss information on the rewritten labels. So the answer to this question is Avery will spend $540.00 on place settings.` `The`
```
each and bowls each cost
$5.00 and the salad
plates cost $4.00, how
```
`much will she spend on place settings?` … Reasoning Error Grounding Error
Question-Related Premises Reasoning Reasoning
`# 1. setting for 12 people.# 2. plate, a salad plate, and a bowl.# 3. # 4. # 5. # 6. of the place settings.Avery needs to buy a 3 piece place The place setting consists of a dinner The cost of a dinner plate is $6.00.The cost of a bowl is $5.00.The cost of a salad plate is $4.00.Avery needs to calculate the total cost` `# 7. (Calculate the total number of place settings Avery needs to buy. Total number of place settings needed: 12by #1) Step 1:` … `#10. (by #6 #9) Final Step:original question is #6. How much will Avery spend on place settings? We do not miss information on the rewritten labels. So the answer to this question is Avery will spend $180.00 on place settings.` `The`
Figure 2: Through our Natural Program-based deductive reasoning verification approach, we identify
and eliminate reasoning chains that contain errors in reasoning and grounding (we define grounding
error as utilizing information that is not present in cited premises). By alleviating such errors,
we significantly enhance the rigor, trustworthiness, and interpretability of the generated reasoning
outputs.
We yearn for this sequence of reliable knowledge when answering questions. Our goal is to develop
multaneously, we aim to establish a verifier capable of accurately assessing the validity of these
reasoning processes. Despite recent significant explorations in the field, such as [48]’s emphasis
on self-consistency and [27, 5]’s innovative use of codes to represent the reasoning process, these
approaches still exhibit considerable limitations. For example, consistency and reliability are not
inherently correlated; as for program codes, they are not powerful enough to represent many kinds
of reasoning process, e.g., in the presence of quantifiers (“for all”, “if there exists”) or nuances of
natural language (moral reasoning, “likely”, ...).
We propose leveraging the power of natural language to achieve the deductive reasoning emphasized in
ancient Greek logic, introducing a “natural program”. This involves retaining natural language for its
inherent power and avoiding the need for extensive retraining with large data sets. A natural program
-----
represents a rigorous reasoning sequence, akin to a computer program. We expect implementations
of the idea to have two properties: 1) that natural programs are generated with minimal effort from an
existing language model capable of CoT reasoning, preferably through in-context learning; 2) that
the natural program can be easily verified for reliability in the reasoning process.
Through a step-by-step investigation, we discovered that large language models have the potential
to meet our expectation. Naïve CoT prompts like "Let us think step by step." has many flaws, and
entrusting the entire verification process to a large model like ChatGPT can still lead to significant
error rates. However, we found that, if the reasoning process is very short, and only based on
necessary premises and contexts, the verification of existing large language models is already quite
reliable. Therefore, our approach is to design prompts that induce CoT processes comprised of
rigorous premises/conditions and conclusions with statement labels, and verification can be done by
gradually isolating very few statements within the long thought chain. Experimentally, we found that
most reasoning that passed the verification was rigorous, and many that did not pass had elements of
imprecision in the reasoning process, even if they occasionally arrived at correct answers.
It is worth emphasizing that, we are not looking for a method to just maximize the correctness rate
of final answers; instead, we aspire to generate a cogent reasoning process, which is more aligned
with the spirit of judical reasoning. When combined with sampling-based methods, our method can
identify low-probability but rigorous reasoning processes. When repeated sampling fails to yield a
rigorous reasoning process, we can output "unknown" to prevent hallucinations that mislead users.
We demonstrate the efficacy of our natural program-based verification approach across a range of
arithmetic and common sense datasets on publicly-available models like OpenAI’s GPT-3.5-turbo.
Our key contributions are as follows:
1. We propose a novel framework for rigorous deductive reasoning by introducing a “Natural
**Program” format (Fig. 1), which is suitable for verification and can be generated by just in-context**
learning;
2. We show that reliable self-verification of long deductive reasoning processes written in our Natural
Program format can be achieved through step-by-step subprocesses that only cover necessary context
and premises;
3. Experimentally, we demonstrate the superiority of our framework in improving the rigor, trustworthiness, and interpretability of LLM-generated reasoning steps and answers (Fig. 2).
**2** **Related work**
**Reasoning with large language models. Recent large language models (LLMs) [3, 8, 57, 47, 38, 18,**
9, 37] have shown incredible ability in solving complex reasoning tasks. Instead of letting LLMs
directly generate final answers as output, prior work have shown that by encouraging step-by-step
reasoning through proper prompting, such as Chain-of-Thought (CoT) prompting [50] and many
others [21, 59, 58, 44, 48, 60, 25, 54], LLMs exhibit significantly better performance across diverse
reasoning tasks. To further improve the step-by-step reasoning process, some recent studies have
investigated leveraging external solvers such as program interpreters [39, 5, 27], training and calling
external reasoning modules [11], or performing explicit search to generate deductive steps [2, 46].
Parallel to these works, we do not rely on external modules and algorithms, and we directly leverage
the in-context learning ability of LLMs to generate more precise and rigorous deductive reasonings.
**Large language models as verifiers. Using language models to evaluate model generations has been**
a long standing idea [22, 36, 40, 4]. As LLMs exhibit impressive capabilities across diverse tasks, it
becomes a natural idea to use LLMs as evaluation and verification tools. For example, [10, 11, 33]
finetune LLMs to verify solutions and intermediate steps. LLMs aligned with RLHF [32, 31, 48]
have also been employed to compare different model generations. In addition, recent works like
[43, 52, 28, 6] leverage prompt designs to allow LLMs to self-verify, self-refine, and self-debug
without the need for finetuning. However, these works do not focus on the rigor and trustworthiness
of the deductive reasoning processes at every reasoning step. In this work, we propose a natural
language-based deductive reasoning format that allows LLMs to self-verify every intermediate step
of a deductive reasoning process, thereby improving the rigor and trustfulness of reasoning.
-----
**Question: Jerry is rolling a six-sided die. How much more likely is it (expressed as a percentage)**
that he rolls a number greater than $3 than that he rolls two even numbers in a row?
**Ground Truth: There are 3 numbers greater than 3 on the dice, so the chances of rolling one**
of them are 3 / 6 = 50%. The chance of rolling one even number is 50%, so the chance of rolling
two in a row is 50% * 50% = 25%. The difference between these two probabilities is 50% - 25%
_= 25%. #### 25_
**Output: There are two possible outcomes for rolling a number greater than 3 out of 6 total**
outcomes, so the probability is 2 / 6 or 1 / 3. For rolling two even numbers in a row, there are
three possible outcomes (2-4, 4-6, 6-2) out of 36 total outcomes, so the probability is 3 / 36 or
_1 / 12. To find the difference in probability, we subtract the probability of rolling two even_
numbers in a row from the probability of rolling a number greater than 3: 1 / 3 - 1 / 12 = 1 / 4.
This means that it is 25% more likely to roll a number greater than 3 than to roll two even
numbers in a row.
Table 1: An example question from GSM8K with a generated CoT reasoning path with GPT3.5 (turbo), where the output provides a wrong reasoning chain with the correct answer.
Additionally, while some recent works [12, 53, 15, 34] have proposed methods to verify individual
steps in a reasoning process, our approach distinguishes from these works in the following perspectives: (1) Our approach leverages in-context learning to achieve reasoning verification, without the
need for language model finetuning. (2) Our Natural Program-based LLM verification approach
not only identifies invalid reasoning steps, but also provides explicit explanations for why they are
invalid, detailing the specific reasoning errors involved. (3) Our Natural Program-based reasoning
and verification approach is compatible with in-context abstract reasoning tasks where reasoning
steps do not possess proof-like entailment structures. For example, our approach is compatible with
the Last Letters task, where the LLM is instructed to output the concatenation of the last letters
of all words in a sequence as the final answer. (4) Our Natural Program approach allows the use
of commonsense knowledge not explicitly listed in premises. For example, consider this problem:
“Marin eats 4 apples a day. How many apples does he eat in November?” Even though “November
has 30 days” is not explicitly listed in the premises, Natural Program permits the use of such common
knowledge within a reasoning step. Our in-context verification process is also capable of handling
these implicit premises (e.g., if LLM outputs “November has 29 days” in a reasoning step, it will be
marked as invalid).
**3** **Motivation and Problem Formulation**
A reasoning-based question-answering (QA) task can be defined as a tuple (Q, C, O, A) [35], where Q
is the target question; C is the context of a question, such as the necessary background for answering a
question; O = (o1, o2, _, ck) are optional answer choices if Q is a K-way multiple choice problem;_
_· · ·_
and A is the ground-truth answer. Given Q and C as inputs, large language models (LLMs) [3, 8, 47]
generate a sequence of tokens T = (t1, t2, _, tn) to answer the question. Recent works like Chain-_
_· · ·_
of-Thought (CoT) [50, 21] leverage prompt engineering in the context C to encourage models to
generate the intermediate reasoning process in T, which benefits LLM performance across diverse
reasoning tasks. In this case, T consists of a set of m intermediate reasoning steps, which we denote
as S = (s1, s2, · · ·, sm) . Each step si can be represented by a subsequence of the generated tokens
(tli _, tri_ ) _T_ . In much prior work, a generated solution is “correct” if and only if the predicted final
_⊆_
answer in sm matches the ground truth A, which we call answer correct(ness).
We observe that for all cases where LLMs produce erroneous final answers, there exists at least one
mistake among the intermediate reasoning steps S. Moreover, even when the final answer is correct,
there might still exist some mistakes among S. This phenomenon, as illustrated in Tab. 1, occurs for
all LLMs we tested, including state-of-the-art models such as ChatGPT and GPT-4 [32]. Since later
reasoning steps are conditioned on prior reasoning steps, these mistakes often initiate a snowball
effect, causing subsequent mistakes to compound. This significantly diminishes the likelihood of
correct problem-solving and impedes the progress towards achieving human-level complex reasoning.
Therefore, in this work, we place significant emphasis on ensuring the validity of every reasoning
step, not just the correctness of the final answer. In particular, we focus on the validity of deductive
_reasoning, an essential component of a logical reasoning process. In deductive reasoning, we are_
-----
Prompting Reasoning Correctness GSM8K AQuA MATH AddSub Date Last Letters
Correct 0.98 0.96 1.00 0.98 0.98 1.00
Zero-shot Incorrect 0.04 0.06 0.04 0.02 0.04 0.04
(Average) 0.51 0.51 0.52 0.50 0.51 0.52
Correct 0.98 0.96 1.00 0.92 1.00 0.96
Two-shot Incorrect 0.02 0.04 0.00 0.06 0.26 0.06
(Average) 0.50 0.50 0.50 0.49 0.63 0.51
Table 2: Zero-shot and two-shot reasoning chain verification accuracy for GPT-3.5-turbo (ChatGPT),
where an entire reasoning chain is verified at once. The two shot prompt we used is presented in
Appendix D.1. To generate verification inputs, for each dataset, we perform Chain-of-Thought (CoT)
prompting and randomly sample 50 reasoning chains that are valid and 50 reasoning chains that
exhibit mistakes. We observe that when given an entire reasoning process, where the deductive graphs
for all reasoning steps are entangled together, it is challenging even for strong language models like
ChatGPT to verify its validity.
given a (premise, conclusion) pair, and we are interested in determining whether the conclusion
follows from the premises. In the context of reasoning-based QA tasks, for each reasoning step si,
we define its deductive validity V (si) as a binary variable. A reasoning step is deductively valid
(V (si) = 1) if and only if si can be logically deduced from its corresponding premises pi, which
consist of the context C, the question Q, and all the previous reasoning steps sj(j < i). Then, we can
also define the deductive validity for the entire reasoning chain S as V (S) = ∧i[M]=1[V][ (][s][i][)][. Compared]
to evaluating answer correctness, which can be accomplished by simple functions such as exact string
match, evaluating deductive validity is a lot more challenging. Thanks to the recent progress on
LLMs, which demonstrate impressive in-context learning capabilities across diverse scenarios, we
propose to use LLMs to examine reasoning chains and predict the deductive reasoning validity.
**4** **Deductively Verifiable Chain-of-Thought Reasoning**
In this section, we introduce our specific approaches to performing deductive verification of reasoning
chains. Specifically, we first introduce our motivation and method for decomposing a deductive verification process into a series of step-by-step processes, each only receiving contexts and premises that
are necessary. Then, we propose Natural Program, a natural language-based deductive reasoning
format, to facilitate local step-by-step verification. Finally, we show that by integrating deductive verification with unanimity-plurality voting, we can improve the trustworthiness of reasoning processes
along with final answers. An overview of our approach is illustrated in Fig. 1 and Fig. 2.
**4.1** **Decomposition of Deductive Verification Process**
Given a reasoning chain S = (s1, s2, _, sn), a straightforward idea to verify its deductive validity is_
_· · ·_
to ask LLMs to examine the entire reasoning chain at once. To assess the effectiveness of this approach,
we conduct a preliminary experiment: for a dataset problem and its reasoning chain S generated
by ChatGPT, we prompt ChatGPT with “Do you think the above reasoning process is
```
correct? Let’s think step by step” such that its outputs whether there exists any mistake
```
among any reasoning step in S. However, as demonstrated in Tab. 2, the verification accuracy is 50%
for most datasets, and ChatGPT struggles at finding out mistaken reasonings. Notably, it persistently
outputs “Correct” for most reasoning chain queries, regardless of their actual validity.
We conjecture that such phenomenon is caused by the abundance of irrelevant premises for each
reasoning step. Recall that the premises pi for a reasoning step si consist of the the question
_Q, the question context C, along with the prior reasoning steps s_ _j =_ _sj : j < i_ . For Q
_≤_ _{_ _}_
and C, we can further extract and decompose Q ∪ _C into a set of “question-related premises”_
_QC = {qc1, qc2, · · ·, qcm}, where qci is a premise or condition inferred from Q ∪_ _C. Then, it is_
often the case that most elements of pi = QC _s_ _j are irrelevant to the validity of si, leading_
_∪_ _≤_
to erroneous verifications from language models. A very recent work [41] also observes a similar
phenomenon where LLMs are easily distracted by irrelevant context.
Hence, we propose a decomposition of the reasoning chain verification process into a series of stepby-step processes, where each step only considers the premises that are necessary. The overall validity
-----
of the reasoning chain, denoted as V (S) = ∧i[M]=1[V][ (][s][i][)][, can be naturally decomposed into individual]
step validity V (si). However, achieving such decomposition is highly challenging without imposing
constraints on the format of reasoning chains. Additionally, for each si _S, we aim to ensure that_
itpotential ambiguities during verification. This motivates us to introduce a natural-language-based explicitly lists the minimal subset of premises ¯pi ⊆ _pi required for deductive reasoning to avoid ∈_
deductive reasoning format in Section 4.2.
**4.2** **Natural Program Deductive Reasoning Format**
As previously mentioned in Sec. 4.1, we desire LLMs to output deductive reasoning processes that
can be easily verified by themselves, specifically by listing out the minimal set of necessary premises
_pi at each reasoning step si. To accomplish its goal, we propose to leverage the power of natural_
language, which is capable of rigorously representing a large variety of reasoning processes and can
be generated with minimal effort. In particular, we introduce Natural Program, a novel deductive
reasoning format for LLMs. More formally, Natural Program consists of the following components:
- An instruction for models to extract question-related premises QC. We use the following instruction: “First, let’s write down all the statements and relationships
```
in the question with labels".
```
- A numbered-list of question-related premises, each prefixed with “#{premise_number}”.
- An instruction for models to generate the reasoning chain S based on the question-related
premises QC. We use the following instruction: “Next, let’s answer the question
```
step by step with reference to the question and reasoning process”.
```
- A list of prefixed reasoning steps Si. The prefix has the following format:
```
#{number} (by {list_of_premises_used}). Here “number” equals |QC| + i, and
```
“list_of_premises_used” consists of numbers from the smallest subset of premises among
_QC_ _s_ _j that are used for the deductive reasoning of si. In addition, for the last reasoning_
_∪_ _≤_
step sm, we ensure that it (1) includes a special tag Final Step; (2) refers to the premise
number of the target question to be answered; (3) explicitly gives the final answer to a
question.
To encourage language models to reason in the Natural Program format, we have designed oneshot prompts for different datasets, which are shown Appendix D.2. Given that LLM’s reasoning outputs follow the Natural Program format, we can then verify the deductive validity
of a single reasoning step si through an instruction that consists of (1) the full descriptions of
premises used for the reasoning of si; (2) the full description of si; (3) an instruction for validity verification, such as “Double-check the reasoning process, let’s analyze its
```
correctness, and end with "yes" or "no".” Note that throughout this verification pro
```
cess, we only retain the minimal necessary premise and context for si, thereby avoiding irrelevant
context distraction and significantly improving the effectiveness of validation. Additionally, we
employ a one-shot prompt for this verification process, which we find very helpful for improving the
verification accuracy. The prompt is shown in Appendix D.3.
Figure 1 provides an overview of the complete Natural Program-based deductive reasoning and
verification process. By using the Natural Program approach, we demonstrate that LLMs are
capable of performing explicit, rigorous, and coherent deductive reasoning. Furthermore, Natural
Program enables LLMs to self-verify their reasoning processes more effectively, enhancing the
reliability and trustworthiness of the generated responses.
**4.3** **Integrating Deductive Verification with Unanimity-Plurality Voting**
Given that we can effectively verify a deductive reasoning process, we can naturally integrate
verification with LLM’s sequence generation strategies to enhance the trustworthiness of both the
intermediate reasoning steps and the final answers. In this work, we propose Unanimity-Plurality
Voting, a 2-phase sequence generation strategy described as follows. Firstly, similar to prior work
like [48], we sample k reasoning chain candidates along with their final answers. In the unanimity
phase, we perform deductive validation on each reasoning chain. Recall that a chain S is valid (i.e.,
_V (S) = 1) if and only if all of its intermediate reasoning steps are valid (i.e.,_ _i, V (si) = 1). For_
_∀_
_each intermediate reasoning step si, we perform majority voting over k[′]_ sampled single-step validity
-----
predictions to determine its final validity V (si). We then only retain the verified chain candidates
_{S : V (S) = 1}. In the plurality voting stage, we conduct a majority-based voting among the verified_
chain candidates to determine the final answer. This voting process ensures that the final answer is
selected based on a consensus among the trustworthy reasoning chains.
**5** **Experiments**
In this section, we perform evaluations to demonstrate the effectiveness of our Natural Program-based
deductive reasoning verification approach over diverse reasoning datasets. Firstly, we show that
our deductive verification process leads to substantial improvements in the rigor and reliability of
reasoning chains. Subsequently, we will examine the impact of deductive verification on the accuracy
of final answers. Our findings reveal that by adopting our Natural Program reasoning format without
verification, we improve answer correctness on challenging benchmarks. Further applying deductive
verification leads to slight reductions in final answer accuracy. One reason for this phenomenon is
that the verification process effectively identifies and eliminates flawed reasoning chains that still
produce correct answers.
**5.1** **Experimental Setup**
**Benchmarks. We evaluate the deductive verification accuracy and the answer correctness of reasoning**
chains over a diverse set of reasoning tasks: arithmetic reasoning, symbol manipulation, and date
understanding. For arithmetic reasoning, we utilize the following benchmarks: 1) AddSub [19];
2) GSM8K [10]; 3) MATH [17]; 4) AQuA [24]. Among these benchmarks, the AddSub and GSM8K
datasets involve middle school-level multi-step calculations to arrive at a single number as the final
answer. The MATH dataset presents more challenging problems that require expressing the answer
as a mathematical expression in LaTeX format. These problems involve concepts from linear algebra,
algebra, geometry, calculus, statistics, and number theory. AQuA also features similarly challenging
problems, except that questions are in a multiple-choice format. For symbol manipulation, we use
Last Letter Concatenation [50], where the model is tasked with concatenate the last letters of all the
words provided in the question. For date understanding, we use the one from BIG-bench [45]
**Deductive verfication evaluation setup. For each of the above benchmarks, we select 100 reasoning**
chains, where 50 of them are deductively valid and 50 of them exhibit reasoning mistakes. The
ground-truth deductive validity of each reasoning chain is determined by human annotators.
**Answer extraction. To extract answers from reasoning solutions, we first perform text splitting based**
on answer prefix patterns such as “answer is” or “option is”. Then, using problem type-specific regular
expressions, we extract the final answer. To extract the validity results from deductive verification
processes, we only keep the last sentence of model response. We then extract the validity answer with
regular expressions to obtain attitude words, e.g., “yes” or “no”, to determine the validity answer.
Sometimes, language models may not provide a direct answer and instead output phrases like “not
applicable” at the end of the response. In such cases, we consider the answer from the model as "yes".
Please refer to Appendix C for more details.
**Model and Hyperparameters. We conduct our main experiments with GPT-3.5-turbo (Chat-**
GPT) [32]. We also present results for the LLama model-family [47]) in Appendix A, where we
find the deductive verification accuracy to be worse than larger models even after finetuning. For
ChatGPT, we use a generation temperature of T = 0.7. For Unanimity-Plurality Voting, we set
_k = 10 and k[′]_ = 3 by default. We use 1-shot prompting for both reasoning chain generation and
deductive verification (except reasoning chain generation for the date understanding task where we
use 2-shot). See Appendix D.2 and Appendix D.3 for more details.
**5.2** **Comparison of Deductive Verification Accuracy**
We compare the verification accuracy of reasoning chains using two methods: (1) verifying the entire
reasoning chain at once (as described in Section 4.1) without utilizing the Natural Program, and
[1 Most results for Faithful CoT are from their official repository https://github.com/veronica320/](https://github.com/veronica320/Faithful-COT)
```
Faithful-COT, except MATH and AddSub due to the unavailability. For these two datasets, we use our
```
implementation and the same prompt for the math word problems in their paper. The prompt for Last Letters is
not available, so we leave it blank.
-----
|Verification Method|Reasoning Correctness GSM8k AQuA MATH AddSub Date Last Letters|Overall|
|---|---|---|
|CoT Two-shot|Correct 98% 96% 100% 92% 100% 96% Incorrect 2% 4% 0% 6% 26% 6% (Average) 50% 50% 50% 49% 63% 51%|97% 7% 52%|
|---|---|---|
|Natural Program One-shot|Correct 84% 72% 70% 95% 90% 96% Incorrect 84% 62% 76% 40% 56% 6% (Average) 84% 67% 73% 68% 73% 51%|85% 54% 69%|
|---|---|---|
Table 3: Comparison of deductive verification accuracy of reasoning chains for GPT-3.5-turbo (ChatGPT). We compare two approaches: (1) verifying entire reasoning chains generated by Chain-ofThought prompting; (2) verifying reasoning chains generated in the Natural Program format with
step-by-step decomposition. In the latter case, when we verify each reasoning step si, we only keep
the necessary subset of premises ¯pi _pi. To calculate verification accuracy, for each dataset, we_
randomly sample 50 reasoning chains that are deductively valid and 50 reasoning steps exhibiting ⊆
incorrect reasonings.
Arithmetic Commonsense
Methods GSM8K AQuA MATH[∗] AddSub Date Last Letters
CoT + Voting **87.62% 70.18%** 35.93% 92.36% 69.97% 81.60%
Faithful CoT + Voting 75.80% 61.80% 31.78%[1] 88.35%[1] **73.50%** -
Ours (Natural Program (NP), No Verification) 87.05% 70.34% 36.75% 93.67% 72.49% **92.98%**
Ours (NP + Deductive Verification + UPV) 86.01% 69.49% 36.48% 93.54% 71.45% 92.60%
Table 4: Final answer accuracy comparison on GPT-3.5-turbo (ChatGPT). All approaches generate
_k = 10 reasoning chains for each problem before performing majority voting or reasoning chain_
filtering with our deductive verification approach.
(2) our Natural Program-based verification approach with step-by-step decomposition. The results,
presented in Table 3, indicate that our approach achieves significantly higher reasoning verification
accuracy across most datasets. It effectively identifies erroneous reasoning in faulty chains while
maintaining a low rate of false positives for valid chains. However, we observe that our approach’s
effectiveness is limited on the “Last Letters” task. We hypothesize that this is due to the task’s
nature, where each subsequent reasoning step is conditioned on all previous steps, presenting greater
challenges for reasoning verification due to the increased dependency among premises.
**5.3** **Impact of Natural Program and Deductive Verification on Final Answer Correctness**
We then investigate the impact of our Natural Program reasoning format and our deductive verification
process on final answer correctness. We conduct two experiments: (1) for each problem, we instruct
language models to generate k = 10 reasoning chain candidates in the Natural Program (NP) format
and perform simple majority voting on final answers, without using deductive verification to filter out
reasoning chain candidates; (2) applying our deductive verification approach to filter out reasoning
chain candidates, and apply Unanimity-Plurality Voting (UPV) along the process to determine the
final answer. As a reference, we also report the performance of Chain-of-Thought (CoT) [50] and
Faithful CoT [27]. For these baselines, we perform simple answer-based majority voting with k = 10
for fair comparison.
Results are presented in Tab. 4. While our major goal is to improve the trustworthiness and reliability
of deductive reasoning, we find that prompting language models to reason in our Natural Program
format achieves on-par or better final answer accuracy than baselines over many reasoning tasks.
Upon further applying our deductive verification approach to filter out invalid reasoning chains, we
observe a slight decrease in final answer accuracy. One major contributing factor to this decrease
is the filtering out of reasoning chain candidates that provide correct answers but exhibit incorrect
reasoning. We illustrate an example in Table 5, where ChatGPT generates the correct final answer
but assigns incorrect premise numbers to support the first reasoning step. We note that in many such
cases, our approach effectively identifies these reasoning errors, thereby enhancing the rigor and
-----
**Question: In March it rained 0.81 inches. It rained 0.35 inches less in April than in March.**
How much did it rain in April?
**Ground Truth: It rained 0.81 - 0.35 = 0.46 inches.**
**GPT-3.5-turbo (ChatGPT):**
**Question-related premises:**
#1. It rained 0.81 inches in March.
#2. It rained 0.35 inches less in April than in March.
#3. How much did it rain in April?
**The first reasoning step:**
#4. (by #2) Step 1: Calculate how much it rained in April by subtracting the difference of 0.35 inches from
the amount in March.
Amount of rain in April: 0.81 - 0.35 = 0.46 inches
Table 5: For this reasoning chain candidate of this GSM8k problem, ChatGPT generates the correct
final answer but provides incorrect premise numbers for grounding the first reasoning step. In “Step
1”, the correct premise numbers should be #1 and #2. Our deductive reasoning verification approach
effectively identifies these reasoning errors, enhancing the rigor and trustworthiness of the reasoning
process. It is worth noting that removing a correct answer from the candidate reasoning chains has a
slightly negative impact on the overall final answer correctness.
|Premise Context # Shots|Reasoning Correctness GSM8K AQuA MATH AddSub Date Last Letters|Average|
|---|---|---|
|Full Premises 1|Correct 64% 54% 58% 95% 26% 96% Wrong 56% 68% 56% 24% 76% 5% (Average) 60% 61% 57% 60% 51% 51%|66% 48% 57%|
|---|---|---|
|Minimal Premises 0|Correct 84% 78% 90% 96% 90% 12% Wrong 26% 12% 28% 20% 20% 80% (Average) 55% 45% 59% 58% 55% 46%|75% 31% 53%|
|---|---|---|
|Minimal Premises 1|Correct 84% 72% 70% 95% 90% 96% Wrong 84% 62% 76% 40% 56% 6% (Average) 84% 67% 73% 68% 73% 51%|85% 54% 69%|
|---|---|---|
Table 6: Ablation study on the impact of (1) premise context and (2) zero-shot vs. few-shot scenarios
on deductive verification accuracy using our Natural Program-based approach with step-by-step
reasoning chain decomposition. To verify each reasoning step si, we either the full premises
_pi = QC_ _S_ _j, or use the minimal subset of premises ¯pi_ _pi necessary as outlined in Sec. 4.1_
The one-shot prompt we used is shown in Appendix D.3. For each dataset, we randomly sample 50 ∪ _≤_ _⊆_
reasoning chains that are deductively valid and 50 reasoning steps exhibiting incorrect reasonings.
reliability of the language models’ reasoning processes, albeit with a slight negative impact on the
overall final answer correctness. Further discussions are presented in Appendix B.
**5.4** **Ablation Study**
In addition, we perform several ablation studies to gain further insights into the designs of our
deductive verification approach. In Tab. 6, we compare two different approaches to verify a single
reasoning step si _S following our Natural Program format. The first approach utilizes all premises_
_pi = QC_ _S_ _j for verification regardless of their relevance to ∈_ _si, potentially introducing irrelevant_
_∪_ _≤_
contexts. The second approach follows our design in Sec. 4.1 and only includes the necessary context
and premises ¯pi _pi. We observe that removing irrelevant premises significantly improves the_
reasoning chain verification accuracy on many datasets, highlighting the importance of this technique. ⊆
We also ablate on our Unanimity-Plurality Voting strategy by investigating the impact of different k[′].
Recall that k[′] determines the number of votes to produce validity predictions of single-step reasoning.
Results are shown in Tab. 7. We observe that increasing k[′] generally enhances reasoning validation
accuracy, though we note that this is at the expense of more compute.
**6** **Limitations**
While we have demonstrated the effectiveness of Natural Program-based deductive reasoning verification to enhance the trustworthiness and interpretability of reasoning steps and final answers, it is
-----
Answer Correctness k[′] = 1 k[′] = 3 k[′] = 5 k[′] = 10
Correct 86% 90% 90% 92%
Wrong 38% 38% 38% 40%
Table 7: Ablation of different values of k[′] on the verification accuracy of reasoning chains using our
Unanimity-Plurality Voting strategy. Experiments are performed on AddSub using GPT-3.5-turbo
(ChatGPT).
**Question: Melanie had 10 quarters and 17 pennies in her bank. Her dad gave her 27 pennies**
and her mother gave her 19 pennies. How many pennies does Melanie have now?
**Ground Truth: Melanie have 17 + 27 + 19 = 63 pennies.**
**ChatGPT’s reasoning step:**
#5. (by #1) Step 1: Calculate the number of pennies Melanie had initially.
Number of pennies in 10 quarters: 10 * 25 = 250
Number of pennies initially: 250 + 17 = 267
Table 8: An example question with ambiguous wordings. The term "pennies" in this question can
be interpreted as either a type of coin or a unit of currency. In this particular question, "pennies" is
treated as a type of coin. However, the initial reasoning step by ChatGPT mistakenly treats "pennies"
as a unit of currency, resulting in the conversion of all Melanie’s money into "pennies" (highlighted
in red). Consequently, all subsequent reasoning steps follow this flawed logic, leading to an incorrect
reasoning trace. Our deductive verification is not yet able to detect such errors.
important to acknowledge that our approach has limitations. In this section, we analyze a common
source of failure cases to gain deeper insights into the behaviors of our approach. The failure case, as
shown in Tab. 8, involves the ambiguous interpretation of the term “pennies,” which can be understood
as either a type of coin or a unit of currency depending on the context. The ground truth answer
interprets “pennies” as coins, while ChatGPT interprets it as a unit of currency. In this case, our
deductive verification process is incapable of finding such misinterpretations. Contextual ambiguities
like this are common in real-world scenarios, highlighting the current limitation of our approach.
**7** **Conclusion**
In this paper, we aim to enable Large Language Models (LLMs) to perform explicit and rigorous
deductive reasoning while ensuring the trustworthiness of their reasoning processes through selfverification. To this end, we have proposed a novel framework based on “Natural Program”, a natural
language-based deductive reasoning format that facilitates reasoning verification and can be easily
generated through in-context learning. Within this framework, we decompose the verification process
of complex reasoning chains into step-by-step subprocesses that focus solely on necessary context and
premises, allowing us to significantly enhance the accuracy of verification. Additionally, we introduce
a Unanimity-Plurality Voting strategy to further improve verification accuracy. Experimentally,
we demonstrate the superiority of our framework in improving the rigor, trustworthiness, and
interpretability of reasoning steps and answers.
**Broader Impact. While our deductive verification approach can mitigate hallucinations and reasoning**
errors of Large Language Models (LLMs), it does not completely eliminate these phenomena.
LLMs can still produce harmful and biased content, make incorrect claims, and produce wrongful
advice. This issue becomes particularly significant when LLMs engage in complex reasoning chains,
increasing the risk of misleading users. Consequently, it is still crucial for users to exercise great
caution when interacting with, deploying, or developing LLM-based applications.
**Acknowledgements**
We would like to express our sincere gratitude to Tongzhou Mu and Caiwei Xiao from UC San
Diego, Kairong Luo from Tsinghua University, and Pulkit Madan, Reza Pourreza, Sunny Panchal,
and Apratim Bhattacharyya from Qualcomm for their valuable discussions and feedback.
-----
**References**
[1] Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Chi Kit Cheung. Why exposure bias
matters: An imitation learning perspective of error accumulation in language generation. arXiv preprint
_arXiv:2204.01171, 2022._
[2] Kaj Bostrom, Zayne Sprague, Swarat Chaudhuri, and Greg Durrett. Natural language deduction through
search over statement compositions. arXiv preprint arXiv:2201.06028, 2022.
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
_Advances in neural information processing systems, 33:1877–1901, 2020._
[4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early
experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[5] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588,
2022.
[6] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to
self-debug. arXiv preprint arXiv:2304.05128, 2023.
[7] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan
Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source
chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.
[8] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language
modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[9] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv
_preprint arXiv:2210.11416, 2022._
[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word
problems. arXiv preprint arXiv:2110.14168, 2021.
[11] Antonia Creswell and Murray Shanahan. Faithful reasoning using large language models. arXiv preprint
_arXiv:2208.14271, 2022._
[12] Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language
models for interpretable logical reasoning. In The Eleventh International Conference on Learning Repre_sentations, 2023._
[13] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan
Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language
model. arXiv preprint arXiv:2303.03378, 2023.
[14] Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier
Bousquet, and Denny Zhou. Compositional semantic parsing with large language models. arXiv preprint
_arXiv:2209.15003, 2022._
[15] Olga Golovneva, Moya Peng Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam FazelZarandi, and Asli Celikyilmaz. Roscoe: A suite of metrics for scoring step-by-step reasoning. In The
_Eleventh International Conference on Learning Representations, 2022._
[16] Nuno M Guerreiro, Duarte Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo,
and André FT Martins. Hallucinations in large multilingual translation models. _arXiv preprint_
_arXiv:2303.16104, 2023._
[17] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874, 2021._
[18] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford,
Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal
large language models. arXiv preprint arXiv:2203.15556, 2022.
-----
[19] Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve
arithmetic word problems with verb categorization. In EMNLP, pages 523–533, 2014.
[20] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea
Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing
_Surveys, 55(12):1–38, 2023._
[21] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language
models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
[22] Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve
algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational
_Linguistics (Volume 1: Long Papers), pages 271–281, Baltimore, Maryland, June 2014. Association for_
Computational Linguistics.
[23] Andrew Lampinen, Ishita Dasgupta, Stephanie Chan, Kory Mathewson, Mh Tessler, Antonia Creswell,
James McClelland, Jane Wang, and Felix Hill. Can language models learn from explanations in context?
In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 537–563, Abu Dhabi,
United Arab Emirates, December 2022. Association for Computational Linguistics.
[24] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation:
Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146, 2017.
[25] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train,
prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM
_Computing Surveys, 55(9):1–35, 2023._
[26] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question
answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.
[27] Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and
Chris Callison-Burch. Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379, 2023.
[28] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback.
_arXiv preprint arXiv:2303.17651, 2023._
[29] Ana Marasovi´c, Iz Beltagy, Doug Downey, and Matthew E. Peters. Few-shot self-rationalization with
natural language prompts, 2022.
[30] Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. On faithfulness and factuality in
abstractive summarization. arXiv preprint arXiv:2005.00661, 2020.
[31] OpenAI. Gpt-4 technical report, 2023.
[32] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with
human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[33] Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi
Faltings. Refiner: Reasoning feedback on intermediate representations. arXiv preprint arXiv:2304.01904,
2023.
[34] Archiki Prasad, Swarnadeep Saha, Xiang Zhou, and Mohit Bansal. Receval: Evaluating reasoning chains
via correctness and informativeness. 2023.
[35] Danilo Ribeiro, Shen Wang, Xiaofei Ma, Henry Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica
Ramos, William Wang, Zhiheng Huang, et al. Street: A multi-task structured reasoning and explanation
benchmark. arXiv preprint arXiv:2302.06729, 2023.
[36] Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413,
2016.
[37] Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot
task generalization. arXiv preprint arXiv:2110.08207, 2021.
-----
[38] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman
Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter
open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
[39] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv
_preprint arXiv:2302.04761, 2023._
[40] Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. Generate & rank:
A multi-task framework for math word problems. arXiv preprint arXiv:2109.03034, 2021.
[41] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli,
and Denny Zhou. Large language models can be easily distracted by irrelevant context. arXiv preprint
_arXiv:2302.00093, 2023._
[42] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won
Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought
reasoners. arXiv preprint arXiv:2210.03057, 2022.
[43] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory
and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
[44] Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, and Lijuan
Wang. Prompting gpt-3 to be reliable. arXiv preprint arXiv:2210.09150, 2022.
[45] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game:
Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
[46] Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. Proofwriter: Generating implications, proofs, and
abductive statements over natural language. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli,
editors, Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event,
_August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 3621–3634. Association for_
Computational Linguistics, 2021.
[47] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation
language models. arXiv preprint arXiv:2302.13971, 2023.
[48] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves
chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[49] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv
_preprint arXiv:2206.07682, 2022._
[50] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain
of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[51] Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. Neural text
generation with unlikelihood training. arXiv preprint arXiv:1908.04319, 2019.
[52] Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners
with self-verification. arXiv preprint arXiv:2212.09561, 2022.
[53] Kaiyu Yang, Jia Deng, and Danqi Chen. Generating natural language proofs with verifier-guided search.
In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
[54] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React:
Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
[55] Eric Zelikman, Jesse Mu, Noah D Goodman, and Yuhuai Tony Wu. Star: Self-taught reasoner bootstrapping
reasoning with reasoning. 2022.
[56] Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit,
Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. Socratic models: Composing
zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
-----
[57] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher
Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models.
_arXiv preprint arXiv:2205.01068, 2022._
[58] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large
language models. arXiv preprint arXiv:2210.03493, 2022.
[59] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large
language models. arXiv preprint arXiv:2205.10625, 2022.
[60] Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi.
Teaching algorithmic reasoning via in-context learning. arXiv preprint arXiv:2211.09066, 2022.
-----
**A** **Deductive Verification with Vicuna Models**
We further explore the efficacy of deductive verification for open-source models. We select two
popular models: Vicuna-7B and Vicuna-13B [7]. These models are fine-tuned versions of LLaMA-7B
and LLaMA-13B [47] using the ShareGPT data[3]. We use the same Natural Program-based one-shot
verification method we used in the main paper. Results are shown in the first and the third rows
of Table 9. We observe for the original Vicuna models without finetuning, Vicuna-7B exhibits
poor performance in deductive verification and fails to find out reasoning mistakes, while the larger
Vicuna-13B exhibits better verification accuracy.
|Models|Reasoning Correctness GSM8K AQuA MATH AddSub Date Last Letters|Overall|
|---|---|---|
|Vicuna-7B|Correct 80% 86% 96% 98% 96% 80% Wrong 14% 22% 16% 6% 20% 34% (Average) 47% 54% 56% 52% 58% 57%|89% 19% 54%|
|---|---|---|
|Vicuna-7B (fine-tuned)|Correct 68% 48% 46% 76% 46% 32% Wrong 72% 86% 54% 60% 72% 68% (Average) 70% 67% 50% 68% 61% 50%|53% 69% 61%|
|---|---|---|
|Vicuna-13B|Correct 86% 82% 92% 96% 72% 74% Wrong 32% 36% 20% 20% 34% 30% (Average) 59% 59% 56% 58% 53% 52%|84% 29% 57%|
|---|---|---|
|ChatGPT (GPT-3.5-Turbo)|Correct 84% 72% 70% 95% 90% 96% Wrong 84% 62% 76% 40% 56% 6% (Average) 84% 67% 73% 68% 73% 51%|85% 54% 69%|
|---|---|---|
Correct 74% 50% 56% 86% 72% 12% 58%
Vicuna-13B
Wrong 72% 76% 72% 68% 62% 96% 74%
(fine-tuned)
(Average) **73%** 63% **64%** **77%** **67%** **54%** **66%**
Table 9: One-shot Deductive Verification Accuracy of Vicuna-7B and Vicuna-13B. The models are
evaluated with or without finetuning on our deductive verification dataset. For each dataset, we
randomly sample 50 reasoning chains that are deductively valid and 50 reasoning steps exhibiting
incorrect reasonings.
We therefore conduct an additional experiment to investigate if the verification accuracy of Vicuna
models can be improved by fine-tuning. To this end, we generate a deductive verification dataset,
which consists of 2000 reasoning steps evenly distributed between correct and incorrect categories.
We automatically generate this dataset using GPT-3.5-turbo since it exhibits a very high accuracy of
single-step verification. We first use GPT-3.5-turbo to generate solutions for problems in GSM8K’s
training set. We then execute step-by-step deductive verification on these solutions using GPT-3.5turbo. For solutions that result in correct final answers, we retain the reasoning steps that pass
deductive verification. For solutions that yield incorrect final answers, we retain the reasoning
steps that cannot pass deductive verification. After constructing our dataset, we then fine-tune the
Vicuna models using the verifications of the 2000 reasoning steps. Models were fine-tuned with 4
A100-80GB over 3 epochs. Training parameters are shown in Table 10.
As shown in Tab. 9, we observe that fine-tuning with our dataset can enhance the deductive verification
accuracy of Vicuna models not only on the dataset where the training dataset is constructed (GSM8K),
but also on many other datasets. However, the accuracy is still worse than non-finetuned GPT-3.5,
which suggests that model capacity has a significant impact on deductive verification capabilities.
**B** **More Discussion on Improvements of Deductive Verification Accuracy**
**Versus Improvements on Final Answer Correctness**
In the main paper, we demonstrated that our verification approach significantly improves the verification accuracy of reasoning chains (Tab. 3, 6, but barely improves the final answer accuracy (Tab. 4).
We further analyze this phenomenon below:
3https://github.com/domeccleston/sharegpt
-----
|Hyperparameters|Value|
|---|---|
|Optimizer Learning rate Weight decay Num epochs Batch size Learning rate schedule|AdamW 1 × 10−5 0.00 3 64 Linear|
|---|---|
Table 10: Hyperparameters for finetuning Vicuna models with our deductive verification dataset.
Consider the GSM8K dataset as an example (recall that the final answer for a problem is obtained
through majority voting). Among all problems, 91.6% of problems have |(number of votes received
by the correct answer) − (largest number of votes received by a single wrong answer)| > 2, and their
final answers are unlikely to be changed through our deductive verification approach. For the rest of
the cases (8.4%), where deductive verification is more likely to impact their final answers, we found
that:
- Among all reasoning chains that arrive at correct answers (these correct-answer chains
account for 49.4% of all reasoning chain candidates), 46.2% of reasoning chains are filtered
out by our verification process.
- Among the reasoning chains that arrive at correct answer but are filtered out by our verification process, 76.3% indeed exhibit incorrect reasoning.
- Among the reasoning chains that arrive at correct answer and are not filtered out by our
verification process, 78.0% indeed have correct reasonings.
- Among the reasoning chains that do not arrive at correct answer and exhibit incorrect
reasonings (these account for 50.6% of all reasoning chain candidates), 40.6% are filtered
out by our verification process.
The above statistics shows that a significant portion of reasoning chains that arrive at correct answers
but exhibit incorrect reasoning are successfully eliminated. Therefore, the reliability and trustfulness
of reasoning chains that arrive at the correct answers are significantly improved. Combined with the
fact that a significant proportion of reasoning chains that exhibit incorrect answers are eliminated, and
that our approach’s verification accuracy significantly improves over naive verification approaches,
our primary goal to improve LLM reasoning reliability is accomplished.
Nevertheless, the removals of many reasoning chains yielding correct answers (specifically, a significant 46.2% × 49.4% of all chains) has a notable impact. This even exceeds the removals of reasoning
chains with incorrect reasonings and answers (40.6% × 50.6% of all chains). As a result, there are
fewer votes for the correct answer when generating final answers through majority voting, which
limits the final answer accuracy. In the future, we believe that when a greater proportion of incorrect
reasoning chains with incorrect answers are filtered out, we can improve the final answer accuracy.
**C** **More Details on Answer Extraction**
In this section, we describe our process to extract the final answer from language models’ responses.
The process begins by selecting the last three non-empty lines. Then, these lines are processed
through the following pipeline:
1. Firstly, we use a list of regular expressions to identify "No-Answer" patterns within the text,
such as "we cannot answer (this|the) question". This process helps us ascertain whether the
model can provide a conclusive answer. If any such patterns appear in the text, we mark
"No answer!" as the final answer. However, if we don’t detect these patterns, we proceed to
the next steps for extracting the final answer.
2. Secondly, if any "Answer-Split" patterns are found in the text, we divide the text into several
blocks using the identified pattern. The last block of text is then utilized for extracting the
answer.
-----
3. Lastly, we use regular expressions, as outlined in Tab. 11, to scan the remaining text for
possible final answers. If multiple matches are found for the pattern, we select the first
match as the final answer. If no pattern matches are found in the remaining text, we default
the final response to "No answer!".
**“No-Answer” Patterns: "we cannot provide an answer to this question with (this|the) given informa-**
tion", "we cannot answer (this|the) question", "we cannot determine", "we can’t determine", "we do
not have enough information to answer (this|the) question", "we do not have enough information to
provide a definitive answer to (this|the) question", "the answer(.*?)is unknown", "answer is not listed
among the answer choices".
**“Answer-Split” Patterns: "answer is", "final answer:", "answer to the question is", "answer to this**
question is", "concatenated letters are", "concatenate the letters -", "The answer of ".
|Answer Type|Regular Expression|
|---|---|
|Number Fractional number Date Yes or No|(-? d[ d, . ]*) \ \ \ (-? ( d+ / d+ ) / d+-? d+ / d+) \ \ \ \ \ \ \ | \ \ \ ( d d / d d / d d d d) \ \ \ \ \ \ \ \ \ \ (?:YesNoyesnoNOYES) | | | | ||
|---|---|
Table 11: Regular Expression for extracting the final answers of different kinds of questions.
**D** **Prompts**
**D.1** **Prompt for Direct Reasoning Chain Verification Without Natural Program Format**
For the results in Tab. 2 of the main paper, We use “Do you think the above reasoning process is
correct? Let’s think step by step.” as the zero-shot prompt to verify an entire reasoning chain at once.
We also design a two-shot prompt for reasoning chain verification as shown in Tab. 12, which covers
one correct reasoning chain and one incorrect reasoning chain.
**D.2** **Prompts for Reasoning Chain Generation in the Natural Program Format**
To instruct models to generate reasoning chains in the Natural Program format that facilitates stepby-step deductive verification, we have designed four distinct prompts to address different types of
problems. These include:
1. Math word problems, as illustrated in Tab. 13, covering GSM8K, MATH, and AddSub
datasets.
2. Math word problems with multiple-choice options, illustrated in Tab. 14, covering the AQuA
dataset.
3. Date-related problems, illustrated in Tab. 15, covering the Date dataset.
4. Last Letters problems, illustrated in Tab. 16, covering the Last Letters dataset.
**D.3** **Prompt for Deductive Verification Following Natural Program Format and Step-by-Step**
**Decomposition**
We have designed a general one-shot prompt for the deductive verification of a single reasoning step
on different datasets, as shown in Tab. 17. This prompt serves to instruct language models to generate
the deductive validity of each reasoning step as illustrated in Sec. 4.2 and the top-right box of Fig. 1
of the main paper.
**E** **More Deductive Verification Examples**
In this section, we present more deductive verification examples using our Natural Program-based
approach on single reasoning steps.
-----
In Tab. 18, we demonstrate that the language model (ChatGPT) not only successfully identifies
ungrounded information, but also identifies logical errors within the given solutions.
In Tab. 19, we illustrate a case where the language model fails to detect ungrounded premise numbers,
mistakenly assuming that these numbers can be derived from grounded ones.
Lastly, in Tab. 20, we illustrate a case where the language model is sometimes unable to correctly
identify grounded numbers.
-----
**Here is a math question and its solution:**
"Question:
Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops
did Jason give to Denny?
Answer:
Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 + 12 =34.
The answer is 34."
You are a math teacher. Do you think the reasoning process is correct?
Let’s check the "Answer" step by step, and then end with "yes" or "no".
**Answer:**
The problem statement is clear, but the reasoning process has a major error in it.
Let’s go through it step by step:
1. "Jason started with 20 lollipops." This is correct according to the problem statement.
2. "Then he had 12 after giving some to Denny." This is also correct as per the problem.
3. "So he gave Denny 20 + 12 =34. The answer is 34." This is incorrect. The correct reasoning process
should be subtracting the number of lollipops he ended up with from the number he started with.
That would be 20 - 12 = 8 lollipops.
So, Jason gave Denny 8 lollipops, not 34. No, the reasoning process is incorrect.
**Here is a math question and its solution:**
"Question:
Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
Answer:
Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left.
23 - 15 is 8. The answer is 8."
You are a math teacher. Do you think the reasoning process is correct?
Let’s check the "Answer" step by step, and then end with "yes" or "no".
**Answer:**
Let’s go through this reasoning process step by step.
1. "Olivia had 23 dollars." This is the given starting amount of money that Olivia has. This step is correct.
2. "5 bagels for 3 dollars each will be 5 x 3 = 15 dollars." This is correct as well. Olivia spends 3 dollars
for each bagel, and she buys 5 of them. So the total cost of the bagels is 3 dollars/bagel x 5 bagels = 15 dollars.
3. "So she has 23 - 15 dollars left." This statement is also correct. After purchasing the bagels, Olivia will
have her initial amount of money minus the cost of the bagels.
4. "23 - 15 is 8. The answer is 8." This step is correct as well. 23 dollars (Olivia’s initial amount) - 15 dollars
(the cost of the bagels) = 8 dollars. This means that Olivia has 8 dollars left after buying the bagels.
So overall, Olivia has 8 dollars left after buying five bagels for 3 dollars each. Yes, the reasoning process
is correct.
Table 12: Two-shot prompt for direct reasoning chain verification without Natural Program format.
-----
**Question:**
Alice buys 5 apples. Each apple costs her $2. She gets them for 50% off. After she bought
she decides she doesn’t like 2 of them and sells them to her friends for 1. How much money
was she out?
**Answer:**
First, let’s write down all the statements and relationships in the question with labels.
#1. Alice gets 5 apples.
#2. Each apple costs $2, but Alice gets apples for 50% off.
#3. Alice sells 2 of 5 apples for 1.
#4. How much money was Alice out?
Next, let’s answer the question step by step with reference to the question and reasoning process:
#5. (by #2) Step 1: Calculate the price per apple after a 50% discount.
Original price per apple: $2
Discount: 50%
Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1
#6. (by #1 #5) Step 2: Calculate Alice’s cost of the 5 apples after a 50% discount.
Price per apple after discount: $1
Total apples: 5
Alice’s total cost of 5 apples: $1 * 5 = $5
#7. (by #3) Step 3: Calculate the money Alice gets back from selling 2 apples.
Money that Alice gets back from selling 2 apples: $1
#8. (by #6 #7) Step 4: Calculate the total amount of money Alice is out.
Alice’s total cost of 5 apples: $5
Money that Alice gets back from selling 2 apples: $1
Money that Alice is out: $5 - $1 = $4
#9. (by #4 #8) The original question is #4.
How much money was Alice out? We do not miss information on the rewritten labels. So the
answer to this question is Alice is out $4.
Please answer the following question by following the above template.
Table 13: One-shot Natural Program prompt for reasoning chain generation on math word problems.
-----
**Question:**
Alice buys 5 apples. Each apple costs her $2.
She gets them for 50% off. After she bought She decides she doesn’t like 2 of them and sells them
to her friends for 1.
How much money was she out?
Answer Choices:
A) 4
B) 2
C) None of the above
**Answer:**
First, let’s write down all the statements and relationships in the question with labels.
#1. Alice gets 5 apples.
#2. Each apple costs $2, but Alice gets apples for 50% off.
#3. Alice sells 2 of 5 apples for 1.
#4. How much money was Alice out?
Next, let’s answer the question step by step with reference to the question and reasoning process:
#5. (by #2) Step 1: Calculate the price per apple after a 50% discount.
Original price per apple: $2
Discount: 50%
Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1
#6. (by #1 #5) Step 2: Calculate Alice’s cost of the 5 apples after a 50% discount.
Price per apple after discount: $1
Total apples: 5
Alice’s total cost of 5 apples: $1 * 5 = $5
#7. (by #3) Step 3: Calculate the money Alice gets back from selling 2 apples.
Money that Alice gets back from selling 2 apples: $1
#8. (by #6 #7) Step 4: Calculate the total amount of money Alice is out.
Alice’s total cost of 5 apples: $5
Money that Alice gets back from selling 2 apples: $1
Money that Alice is out: $5 - $1 = $4
#9. (by #4 #8) The original question is #4. How much money was Alice out? We do not miss
information on the rewritten labels. So the answer to this question is Alice is out $4. Among
all the answer choices, the best option is A) 4.
Please answer the following question by following the above template.
Table 14: One-shot Natural Program prompt for reasoning chain generation on math word problems
with multiple choice.
-----
Let’s work on date computation.
**Question:**
Yesterday is 02/22/2012. What is the date one week from today? Please answer in format
MM/DD/YYYY.
**Answer:**
First, let’s write down all the statements and relationships in the question with labels.
#1. Yesterday is 02/22/2012.
#2. What is the date one week from today? Please answer in format MM/DD/YYYY.
Next, let’s answer the question step by step with reference to the question and reasoning process:
#4. (by #1) Step 1: Calculate the date today.
The date of Yesterday is 02/22/2012.
The date of today is 02/23/2012.
#5. (by #2 #4) Step 2: Calculate the date one week from today and write the answer in
MM/DD/YYYY.
Because 2012 is a leap year and February in 2012 has 29 days, the date one week from today is
03/01/2012.
Therefore, date one week from today is 03/01/2012.
**Question:**
Today is 3/2 and Bob is watching NBA 2007. What is the date one week before today?
Please answer in format MM/DD/YYYY.
**Answer:**
First, let’s write down all the statements and relationships in the question with labels.
#1. Today is 3/2.
#2. Bob is watching NBA 2007 today.
#3. What is the date one week before today? Please answer in format MM/DD/YYYY.
Next, let’s answer the question step by step with reference to the question and reasoning process:
#4. (by #1 #2) Step 1: Calculate the date today.
Today is 3/2 and Bob is watching NBA 2007 today. Thus, Today is 03/02/2007.
#5. (by #3 #4) Step 2: Calculate the date one week before today and write the answer
in MM/DD/YYYY.
Because 2007 is not a leap year and February in 2007 has 28 days, the date one week from today
is 02/23/2007.
Therefore, the date one week before today is 02/23/2007.
Please answer the following question by following the above template.
Table 15: Two-shot Natural Program prompt for reasoning chain generation on the Date dataset.
-----
Let’s work on date computation.
**Question:**
Take the last letters of each words in "Apple Is" and concatenate them.
**Answer:**
First, let’s write down all the statements and relationships in the question with labels.
#1. We need to take the last letter of the word "Apple".
#2. We need to take the last letter of the word "Is".
#3. We need to concatenate the last letters of the words sequentially
Next, let’s answer the question step by step with the complete reference to the question and reasoning process:
#4. (by #1, #3) Step 1: Taking the last letter of the word "Apple" and concatenate the latter.
The last letter of the word "Apple" is "e".
The concatenated last letters are "e".
#5. (by #2, #3, #4) Step 2: Taking the last letter of the word "Is" and concatenate the latter.
The last letter of the word "Is" is "s".
The concatenated last letters are "es".
#6. (by #3, #5) Step 3: Provide the final answer of the concated letter.
Therefore, the answer is "es".
Please answer the following question by following the above template.
Table 16: One-shot Natural Program prompt for reasoning chain generation on the Last Letters
dataset.
-----
**Here is some information:**
“1. Alice gets 2 apples.
2. Alice gets twice as many oranges as bananas.
3. Original price per apple: $2
Discount: 50%
Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1"
Based on the given information, here is a reasoning process:
"Calculate Alice’s cost of the 5 apples after a 50% discount.
Price per apple after discount: $1
Apples be eaten: 3
Oranges be eaten: 6
Total apples: 5
Total oranges: x
Total bananas: 2x
Alice’s total number of fruits: 5 + x + 2x = 5 + 4x
Alice’s total cost of 5 apples: $1 * (5 - 3) = $2”
Double-check the reasoning process, let’s analyze its correctness, and end with "yes" or "no".
**Answer:**
Let’s think step by step.
Grounding check
Purpose: Find references for numbers in the reasoning process sequentially
Rule: Any ungrounded number makes the check fails
"x" is a variable, we do not care about variables.
"y" is a variable, we do not care about variables.
"5" is a quantity from the given information that represents the number of apples.
"50%" is a quantity from the given information that represents the discount.
"$1" is a quantity from the given information that represents the price per apple after the discount.
"3" is a quantity not from the given information, it is ungrounded.
"6" is a quantity not from the given information, it is ungrounded.
"$2" is a quantity from the given information that represents the original price per apple.
Result: Based on the check, there are ungrounded numbers, so the grounding check fails.
Reasoning check:
Purpose: Check whether the reasoning logic correctly answers the question
Rule: Any logic error makes the check fails
To calculate the total cost of apples after a 50% discount, we should multiply the number of apples
by the discounted price. But to answer the question, we do not need to calculate the total number
of fruit Alice gets.
Result: Based on the check, there are some logic mistakes, so the reasoning check fails.
Calculation check:
Purpose: Check whether the calculation process is consistent
Rule: Any inconsistent calculation makes the check fails
calculation1:
equation: $1 * (5 - 3), answer: $2
(5 - 3) = 2
$1 * 2 = $2 is consistent with the answer, so the calculation is correct.
calculation2:
equation: 5 + x + 2x, answer: 5 + 4x
x + 2x = 3x
5 + 3x is inconsistent with the answer, so the calculation is incorrect.
Result: Based on the check, the calculation process is inconsistent, so the calculation check fails.
Check results: Ground check fails, Reasoning check fails, Calculation check fails.
Rule: Any failed check makes the reasoning incorrect.
So the answer is "no".
Table 17: One-shot prompt for deductive verification of a single reasoning step, following our Natural
Program format and step-by-step reasoning chain decomposition.
-----
**Input:**
Here is some information:
"For every 25 telephone calls he makes, he gets one person to come into the dealership."
Based on the given information, here is a reasoning process:
"Calculate the number of people that would come into the dealership for 15 car sales.
Number of people needed to sell 15 cars: 15 * 2 = 30"
Double-check the reasoning process, let’s analyze its correctness, and end with "yes" or "no".
Answer:
Let’s think step by step without any assumptions.
**Output:**
Grounding check:
Purpose: Find references for numbers in the reasoning process sequentially
Rule: Any ungrounded number makes the check fails
"15" is a quantity from the question that represents the number of car sales.
"2" is a quantity not from the given information, it is ungrounded.
"25" is a quantity from the given information that represents the number of telephone calls
per one person to come into the dealership.
Result: Based on the check, there are ungrounded numbers, so the grounding check fails.
Reasoning check:
Purpose: Check whether the reasoning logic correctly answers the question
Rule: Any logic error makes the check fails
It is not clear how the number of people coming into the dealership relates to the number of car sales.
The given information only tells us the ratio of telephone calls to people coming into the dealership,
but it does not provide any information about the number of telephone calls made for each car sale.
Therefore, the reasoning process is not correct.
Result: Based on the check, there is a logic mistake, so the reasoning check fails.
Calculation check:
Purpose: Check whether the calculation process is consistent
Rule: Any inconsistent calculation makes the check fails
calculation1:
equation: 15 * 2, answer: 30
15 * 2 = 30 is consistent with the answer, so the calculation is correct.
Result: Based on the check, the calculation process is consistent, but the reasoning is incorrect,
so the calculation check fails.
Check results: Grounding check fails, Reasoning check fails, Calculation check fails.
Rule: Any failed check makes the reasoning incorrect.
So the answer is "no".
Table 18: Successful case: our deductive verification approach successfully discovers ungrounded
information and reasoning mistakes.
-----
**Input:**
Here is some information:
"How many hours would it take for 5 people to paint a whole house?"
Based on the given information, here is a reasoning process:
"Calculate the total number of hours it would take for 5 people to paint a whole house.
Total hours for one person to paint a whole house: 240 hours
Total number of people: 5
Total hours for 5 people to paint a whole house: 240 hours / 5 = 48 hours"
Double-check the reasoning process, let’s analyze its correctness, and end with "yes" or "no".
Answer:
Let’s think step by step without any assumptions.
**Output:**
Grounding check
Purpose: Find references for numbers in the reasoning process sequentially
Rule: Any ungrounded number makes the check fails
"240" is a quantity not from the given information, it is ungrounded.
"5" is a quantity from the given information that represents the number of people.
"48" is a quantity calculated from the ungrounded number and grounded number in the reasoning process.
Result: Based on the check, there is one ungrounded number, but it is calculated from another grounded
number, so the grounding check passes.
Reasoning check:
Purpose: Check whether the reasoning logic correctly answers the question
Rule: Any logic error makes the check fails
The reasoning process correctly calculates the total number of hours for 5 people to paint a whole house by
dividing the total hours for one person to paint a whole house by 5. Therefore, the reasoning check passes.
Calculation check:
Purpose: Check whether the calculation process is consistent
Rule: Any inconsistent calculation makes the check fails
calculation1:
equation: 240 / 5, answer: 48
240 / 5 = 48 is consistent with the answer, so the calculation is correct.
Result: Based on the check, the calculation process is consistent, so the calculation check passes.
Check results: Ground check passes, Reasoning check passes, Calculation check passes.
Rule: All checks pass make the reasoning correct.
So the answer is "yes".
Table 19: Failure case: our deductive verification process fails to find out ungrounded information in
the reasoning step. The number 240 in the reasoning step is ungrounded, but the model states that it
can be calculated from grounded numbers.
-----
**Input:**
Here is some information:
"Calculate the number of doughnuts Derrick makes in a day.
Number of doughnuts in a dozen: 12
Number of doughnuts in 10 dozens: 10 * 12 = 120
Calculate the revenue from selling one doughnut.
Price per doughnut: $2"
Based on the given information, here is a reasoning process:
"Calculate the revenue from selling all the doughnuts in a day.
Number of doughnuts in a day: 120
Revenue from selling all the doughnuts in a day: $2 * 120 = $240"
Double-check the reasoning process, let’s analyze its correctness, and end with "yes" or "no".
Answer:
Let’s think step by step without any assumptions.
**Output:**
Grounding check
Purpose: Find references for numbers in the reasoning process sequentially
Rule: Any ungrounded number makes the check fails
"12" is a quantity from the given information that represents the number of doughnuts in a dozen.
"10" is a quantity not from the given information, it is ungrounded.
"$2" is a quantity from the given information that represents the price per doughnut.
"120" is a quantity not from the given information, it is ungrounded.
Result: Based on the check, there are ungrounded numbers, so the grounding check fails.
Reasoning check:
Purpose: Check whether the reasoning logic correctly answers the question
Rule: Any logic error makes the check fails
The reasoning process is correct. We have the number of doughnuts Derrick makes in a day,
and we have the price per doughnut.
Multiplying these two yields the revenue from selling all the doughnuts in a day.
Result: Based on the check, the reasoning logic is correct.
Calculation check:
Purpose: Check whether the calculation process is consistent
Rule: Any inconsistent calculation makes the check fails
calculation1:
equation: 10 * 12, answer: 120
10 * 12 = 120 is consistent with the answer, so the calculation is correct.
calculation2:
equation: $2 * 120, answer: $240
$2 * 120 = $240 is consistent with the answer, so the calculation is correct.
Result: Based on the check, the calculation process is consistent, so the calculation check passes.
Check results: Ground check fails, Reasoning check passes, Calculation check passes.
Rule: Any failed check makes the reasoning incorrect.
So the answer is "no".
Table 20: Failure case: our deductive verification process sometimes treats grounded information as
if they were ungrounded. The number 120 is provided in the given information, but the model states
that it is ungrounded.
-----
| [
"Zhan, Ling",
"Yunhao, Fang",
"Xuanlin, Li",
"Zhiao, Huang",
"Mingu, Lee",
"Roland, Memisevic",
"Hao, Su"
] | 2023-10-03T00:00:00 | NeurIPS 2023 | true | 77 | 6 | null | http://arxiv.org/abs/2306.03872 | https://arxiv.org/abs/2306.03872 | https://www.semanticscholar.org/paper/8236010c2ecc94d826be6010ff187fdc000e7df6 |
Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models | While language models (LMs) have shown potential across a range of decision-making tasks, their reliance on simple acting processes limits their broad deployment as autonomous agents. In this paper, we introduce Language Agent Tree Search (LATS) -- the first general framework that synergizes the capabilities of LMs in reasoning, acting, and planning. By leveraging the in-context learning ability of LMs, we integrate Monte Carlo Tree Search into LATS to enable LMs as agents, along with LM-powered value functions and self-reflections for proficient exploration and enhanced decision-making. A key feature of our approach is the incorporation of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that surpasses the constraints of existing techniques. Our experimental evaluation across diverse domains, including programming, interactive question-answering (QA), web navigation, and math, validates the effectiveness and generality of LATS in decision-making while maintaining competitive or improved reasoning performance. Notably, LATS achieves state-of-the-art pass@1 accuracy (92.7%) for programming on HumanEval with GPT-4 and demonstrates gradient-free performance (average score of 75.9) comparable to gradient-based fine-tuning for web navigation on WebShop with GPT-3.5. Code can be found at https://github.com/lapisrocks/LanguageAgentTreeSearch | Experimental evaluation across diverse domains, including programming, interactive question-answering (QA), web navigation, and math, validates the effectiveness and generality of LATS in decision-making while maintaining competitive or improved reasoning performance. | ## Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models
**Andy Zhou** [1 2] **Kai Yan** [1] **Michal Shlapentokh-Rothman** [1] **Haohan Wang** [1] **Yu-Xiong Wang** [1]
**Abstract**
While language models (LMs) have shown potential across a range of decision-making tasks,
their reliance on simple acting processes limits
their broad deployment as autonomous agents. In
this paper, we introduce Language Agent Tree
Search (LATS) – the first general framework that
_synergizes the capabilities of LMs in reasoning,_
acting, and planning. By leveraging the in-context
learning ability of LMs, we integrate Monte Carlo
Tree Search into LATS to enable LMs as agents,
along with LM-powered value functions and
self-reflections for proficient exploration and enhanced decision-making. A key feature of our approach is the incorporation of an environment for
external feedback, which offers a more deliberate
and adaptive problem-solving mechanism that surpasses the constraints of existing techniques. Our
experimental evaluation across diverse domains,
including programming, interactive questionanswering (QA), web navigation, and math, validates the effectiveness and generality of LATS
in decision-making while maintaining competitive or improved reasoning performance. Notably, LATS achieves state-of-the-art pass@1 accuracy (92.7%) for programming on HumanEval
with GPT-4 and demonstrates gradient-free performance (average score of 75.9) comparable to
gradient-based fine-tuning for web navigation on
WebShop with GPT-3.5. Code can be found
at [https://github.com/lapisrocks/](https://github.com/lapisrocks/LanguageAgentTreeSearch)
[LanguageAgentTreeSearch.](https://github.com/lapisrocks/LanguageAgentTreeSearch)
**1. Introduction**
_Figure 1. Overview of LATS. Serving as a unified framework,_
LATS leverages an external environment and an MCTS-based
search algorithm to improve reasoning and decision-making.
and Jennings, 1995) have been of longstanding interest in
the field of artificial intelligence. While this has traditionally been studied in reinforcement learning, the recent rise
of language models (LMs) (Brown et al., 2020; Chowdhery et al., 2023; Touvron et al., 2023; OpenAI, 2023) with
strong reasoning and general adaptability offers an alternative paradigm. Not only have LMs excelled in standard
natural language processing (NLP) tasks such as summarization (Nallapati et al., 2016) and language inference (Bowman et al., 2015), but they have also been adapted to an
increasingly diverse set of tasks that often require advanced
common-sense reasoning or quantitative skills (Cobbe et al.,
2021; Saparov and He, 2023). In addition, LMs are capable
of performing in complex environments that involve knowledge and reasoning, such as web navigation (Yao et al.,
2022; Deng et al., 2023), tool-use (Schick et al., 2023), and
open-ended games (Fan et al., 2022).
Reasoning and acting abilities have been further improved
by prompting techniques that augment LMs with feedback
or observations from an external environment, as exemplified by ReAct (Yao et al., 2023b) and other work (Gao et al.,
2023; Shinn et al., 2023). This eliminates the need to rely entirely on the base abilities of LMs, enhancing them through
external tools or semantic feedback. Despite such strengths,
these methods are reflexive and fall short of humans’ deliberate and thoughtful decision-making characteristics to solve
problems (Sloman, 1996; Evans, 2010). In particular, they
fail to consider multiple reasoning paths or to plan ahead.
Recent search-guided LM work (Xie et al., 2023; Yao et al.,
2023a; Hao et al., 2023) addresses this issue by searching
over multiple reasoning chains. While enabling planning,
General autonomous agents capable of reasoning and
decision-making in a variety of environments (Wooldridge
1University of Illinois Urbana-Champaign. 2Lapis Labs. Correspondence to: Andy Zhou <[email protected]>.
_Proceedings of the 41_ _[st]_ _International Conference on Machine_
_Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by_
the author(s).
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
**2. Related Work**
**LMs for reasoning. For LMs, reasoning involves decom-**
posing complex inputs into sequential intermediate steps
towards a final answer (Cobbe et al., 2021), demonstrated
with chain-of-thought (CoT) prompting (Wei et al., 2022)
and its variants (Wei et al., 2022; Kojima et al., 2022; Wang
et al., 2022). However, these methods, which create chains
autoregressively in a single step, often suffer from error
propagation as the number of steps increases (Guo et al.,
2018; Chen et al., 2023b), due to compound errors. Various
advancements aim to mitigate this issue; some approaches,
such as self-consistency (Wang et al., 2022), employ majority voting over sampled chains, while others focus on
multi-step decomposition, such as least-to-most prompting (Zhou et al., 2022). Recently, CoT has been improved
with search algorithms (Yao et al., 2023a; Hao et al., 2023;
Besta et al., 2023) that can sample trajectories more effectively. Tree-of-thought (ToT) prompting (Yao et al., 2023a)
uses DFS or BFS-based (depth/breadth-first) search guided
by an LM-generated heuristic, while reasoning via planning
(RAP) (Hao et al., 2023) uses MCTS with rollouts simulated by LMs. However, they rely solely on LM internal
knowledge and cannot adapt to useful external feedback.
**LMs for acting. The strong reasoning and common-sense**
abilities of LMs have been further adapted for decisionmaking or acting tasks as a policy model in interactive
environments. In robotics, LMs have been employed as
high-level controllers of control policies (Ahn et al., 2022;
Huang et al., 2022; Driess et al., 2023). Similar work (Baker
et al., 2022; Wang et al., 2023) has also adapted LM agents
to complex multimodal games such as Minecraft (Guss et al.,
2019; Fan et al., 2022). LMs are particularly useful in textbased environments (Liu et al., 2018; Shridhar et al., 2020;
Liu et al., 2024), where acting-based prompting techniques
such as ReAct (Yao et al., 2023b) have seen success. Similar to CoT, ReAct is limited by its simplicity and cannot
effectively adapt to environment conditions. Many extensions have been proposed to address this issue, including
self-refine (Madaan et al., 2023) and Reflexion (Shinn et al.,
2023), which use self-improvement to enhance reasoning
and decision-making, and AdaPlanner (Sun et al., 2023),
which incorporates both positive and negative feedback.
However, these methods focus on refining an individual trajectory and do not consider alternative choices at each step.
In addition, recent work (Huang et al., 2024) has suggested
that LMs cannot self-correct their internal reasoning, making it critical to use external feedback. Alternatively, to pure
decision-making environments, the reasoning and practical
abilities of LMs have been enhanced by providing access
to external tools, such as APIs, search engines, calculators,
and other models (Schick et al., 2023; Shen et al., 2023;
Sur´ıs et al., 2023). We summarize prior work in Tab. 1.
**Tree-based search. Tree-based search, where multiple**
such methods operate in isolation, lacking the incorporation
of external feedback that can improve reasoning.
To overcome these challenges, we propose Language Agent
Tree Search (LATS) – a unified framework for decisionmaking and reasoning with language models. As illustrated
in Fig. 1, LATS synergizes LM reasoning, acting, and plan_ning strategies by expanding ReAct (Yao et al., 2023b) into a_
search over a combinatorial space of possible reasoning and
acting steps. This effort is nontrivial – adapting search algorithms to language agents and shifting from non-interactive
tasks to interactive ones requires a substantial novel design
on nodes, prompts, and search algorithms. In particular,
nodes and prompts must effectively store and retrieve external feedback, with the search algorithm incorporating
this information into useful heuristics for value assignment.
Indeed, our empirical evaluation, as demonstrated on HotPotQA (Yang et al., 2018) in Sec. 5.1, reveals that a simple
combination of existing methods is inadequate, even failing
to surpass internal reasoning performance, despite having
access to the ground truth answer from the environment.
Our key insight underpinning LATS is adapting Monte Carlo
Tree Search (MCTS), inspired by its success in model-based
reinforcement learning (Silver et al., 2017) and the obser_vation that many LM tasks allow reverting to earlier steps,_
to language agents, repurposing pretrained LMs as agents
with LM-powered value functions and self-reflections for
cleverer exploration. Leveraging the general capabilities
and in-context learning abilities of modern LMs, we use
language as an interface between each component, allowing
LATS to adapt planning to environmental conditions with_out additional training. To the best of our knowledge, LATS_
is the first framework that incorporates reasoning, acting,
and planning to enhance LM performance. Notably, LATS
doubles the performance of ReAct (Yao et al., 2023b) on
HotPotQA (Yang et al., 2018) and raises the average score
by 22.1 on WebShop (Yao et al., 2022) with GPT-3.5. When
used with GPT-4, LATS achieves a 92.7 Pass@1 rate on
HumanEval (Chen et al., 2021), setting the state of the art.
Our contributions are the following: 1) We introduce LATS,
a framework based on Monte Carlo Tree Search to construct
the best trajectory from sampled actions, enabling more flexible and adaptive problem-solving compared with reflexive
prompting methods. 2) We propose a novel value function
that guides the search process and incorporates successful
heuristics such as self-refinement and self-consistency. 3)
By integrating external feedback and self-reflection, LATS
enhances model sensibility and enables agents to learn from
experience, surpassing reasoning-based search methods.
Through experiments across diverse domains, including programming, interactive question-answering (QA), web navigation, and math, we demonstrate the versatility of LATS
for enhancing autonomous reasoning and decision-making.
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
**Approach** **Reasoning** **Acting** **Planning** **Self-** **External**
**Reflection** **Memory**
CoT (Wei et al., 2022) ✓ _×_ _×_ _×_ _×_
ReAct (Yao et al., 2023b) ✓ ✓ _×_ _×_ _×_
ToT (Yao et al., 2023a) ✓ _×_ ✓ ✓ ✓
RAP (Hao et al., 2023) ✓ _×_ ✓ _×_ ✓
Self-Refine (Madaan et al., 2023) ✓ _×_ _×_ ✓ _×_
Beam Search (Xie et al., 2023) ✓ _×_ _×_ ✓ _×_
Reflexion (Shinn et al., 2023) ✓ ✓ _×_ ✓ ✓
**LATS (Ours)** ✓ ✓ ✓ ✓ ✓
_Table 1. Summary of related work on reasoning, acting, and planning. LATS is the first work incorporating designs from all three domains,_
allowing broad applicability in all corresponding tasks. We refer to reasoning as LM internal reasoning, acting as external decision-making,
planning as the use of a search algorithm, self-reflection as the use of LM-generated feedback, and external memory as storing past text
context for future updates of the solution.
branches of outcomes are explored during search, is widely
used in many planning algorithms (Swiechowski et al., 2021;
LaValle, 1998) and reinforcement learning (RL) (Hafner
et al., 2019; Du et al., 2023; Wu et al., 2023) algorithms for
its good exploration-exploitation trade-off. Note that though
tree-based search necessitates an environment model that
can expand from an arbitrary state (Vodopivec et al., 2017),
often requiring extra training in RL (Hafner et al., 2023),
such a problem does not exist for most LM tasks. This is
because we can conveniently revert to any state by setting
the input to be the context and the corresponding previous
output from the LM for many tasks. Thus, we operate on the
tree-based framework and use MCTS (Swiechowski et al.,
2021) to fully unlock the potential of LMs. In addition, we
avoid the cost of training a value function over language
descriptions by leveraging the in-context learning (Brown
et al., 2020) abilities of LMs. Concurrent work (Liu et al.,
2023) also explores combining search algorithms with LM
agents but uses an off-the-shelf search algorithm, which
may not be optimal for LMs. Finally, following Yao et al.
(2023a) and Hao et al. (2023), we note that we use planning
and search algorithms interchangeably in this paper.
**3. Preliminaries**
**3.1. Problem Setting and Prompting**
We first define our problem and outline a few established
methods that leverage language models for reasoning or
decision-making. In LM reasoning or decision making,
we are given an input x in natural language and a pretrained language model pθ(x) parameterized by θ; our goal
is to generate a final output y _pθ(x) that corresponds_
_∼_
to the answer (reasoning) or completes the task (decisionmaking). Both x and y are language sequences, which are
comprised of a list of tokens (the basic elements of natural
language, often words), denoted as x = (x[1], . . ., x[lx])
and y = (y[1], . . ., y[ly]) where lx and ly are the length.
The LM decodes text autoregressively, i.e., without other
inputs, the probability for an LM to generate a sequence y
is given by pθ(x) = _i=1_ _[p][θ][(][x][[][i][]][|][x][[1][ . . . i][ −]_ [1])][. Usually,]
to improve reasoning, prompts are provided along with the
input x, which are specific instructions or few-shot input-[Q][l][x]
output examples. We denote the generic process where an
input promptIO(x) is transformed into an output y by LM:
_y_ _pθ(promptIO(x))._
_∼_
**Chain-of-thought (CoT) prompting (Wei et al., 2022)**
caters to scenarios where the direct mapping from x to y is
intricate, e.g., when x is from a mathematical query or challenging question. It hinges on creating thoughts z1, . . ., zl
that act as stepping stones between x and y; each thought zi
is a language sequence. To employ CoT prompting, thoughts
are extracted sequentially as zi ∼ _p[CoT]θ_ (x, z1···i−1), with
the final output being y ∼ _p[CoT]θ_ (x, z1···l).
**Tree-of-thought (ToT) prompting (Yao et al., 2023a) ex-**
tends CoT prompting by exploring multiple reasoning paths
over thoughts. It frames problems as a search over a tree,
where each node s = [x, z1·i] represents a partial solution
state comprising the original input x and the thought sequence z1 _i. Thoughts zi are generated by proposal or_
_···_
sampling with CoT zi ∼ _p[CoT]θ_ (x, z1···i−1). Search algorithms like depth-first (DFS) or breadth-first (BFS) search
are used to systematically explore the tree, guided by heuristics based on LM evaluations V (s) of each state.
**ReAct (Yao et al., 2023b) extends language models to**
tasks where the mapping from x to y is enhanced by or
requires interactions with an external environment, such
as a game or API. This technique constructs an action
space _A[ˆ] = A ∪_ _Z that adds permissible actions a ∈_ _A_
to the reasoning traces z ∈ _Z from CoT. Observations o_
from the environment are used to improve both reasoning
and acting. To solve problems with ReAct, after each observation, actions are generated from pθ sequentially as
_ai ∼_ _p[ReAct]θ_ (x, o1···i−1, a1···i−1), with the final output be
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
ing y ∼ _p[ReAct]θ_ (x, o1···l, a1···l). In this paper, consistent
with other LM agent methods such as ReAct and Reflexion
(Shinn et al., 2023), we focus on decision-making tasks
_where reverting between iterations is feasible._
While the previously described prompting techniques improve LM performance on reasoning tasks, they falter on
difficult tasks that involve multifaceted decision-making
due to several shortcomings: 1) Flexibility: Base prompting
designs (CoT or ReAct) autoregressively sample from the
LM, neglecting potential alternative continuations from specific states. 2) Sensibility: Reasoning-based methods (CoT,
RAP (Hao et al., 2023), or ToT) rely solely on the internal representations of the LM and cannot consider external
observations. This dependency risks fact hallucination and
error propagation while setting a performance ceiling. 3)
_Adaptability: Current planning strategies (RAP or ToT) use_
simple search algorithms such as BFS or cannot leverage
environmental feedback to improve planning. Additionally,
the agent is static and cannot reuse previous experience or
learn from trial and error. While RAP also adopts MCTS, it
is constrained to tasks where the LM can become a world
model and accurately predict states. These shortcomings
limit the ability of LMs to be deployed as general problemsolving agents and form the motivation for LATS.
**3.2. Monte Carlo Tree Search (MCTS)**
Monte Carlo Tree Search (MCTS) is a heuristic search algorithm that is proved successful on many decision-making
environments, such as Atari (Ye et al., 2021) and Go (Silver
et al., 2016). MCTS builds a decision tree where every node
in the tree is a state and edge is an action. MCTS runs for k
episodes; for each episode, it starts from the root (i.e., initial
state) and iteratively conducts two steps to expand the tree:
1) Expansion, where multiple children states s are explored
from the current parent state p by sampling n actions, and 2)
_Selection, where the children with the highest UCT (Upper_
_Confidence bounds applied to Trees) (Kocsis and Szepesvari´_,
2006) value is selected for expansion by the next iteration.
The UCT of a child state s is calculated as follows:
any step by simply copy-pasting historical text input. Such
a special property is the key motivation of our work.
**4. Unifying Reasoning, Acting, and Planning**
**4.1. LM Agent**
Depending on the base prompting framework design, LATS
supports sequential reasoning or decision-making tasks. At
time step t, an agent receives an observation ot _O from_
_∈_
the environment and takes an action at _A following some_
_∈_
policy π(at|x, o1···t−1, a1···t−1). We initialize the agent
with pθ to leverage the useful language representations of
an LM as a base decision-maker. We follow the ReAct instantiation, in which the action space _A[ˆ] = A ∪_ _Z consists_
of both the space of permissible actions A and the language
space of reasoning traces Z. Actions directly affect the environment and result in observation, while thoughts are used
to formalize decisions by organizing information, planning
future actions, or injecting internal knowledge. The exact
instantiation of the action space depends on the particular
environment – for decision-making tasks actions might consist of commands on a website, while for reasoning tasks
the action space might be limited to a few external tools or
APIs. In environments without feedback, such as reasoning
tasks, we use CoT as the base prompting framework.
Instead of greedily decoding one trajectory or solution, we
sample n actions from pθ using the current state. This is
based on the intuition that for complex decision-making
tasks, there is likely to be a range of potential trajectories or
reasoning paths that are correct (Evans, 2010). Sampling a
diverse set of candidates at each step mitigates the stochastic
nature of LM text generation and enables greater exploration
in both the decision-making and reasoning space. We wrap
_pθ within our proposed search algorithm to deliberately_
construct the best trajectory from sampled actions.
**4.2. LATS**
The main component of LATS is a search algorithm that
controls the problem-solving process with planning. To find
the most promising trajectory and systemically balance exploration with exploitation, we adopt a variant of MCTS that
frames decision-making as a tree search, in which each node
_s = [x, a1···i, o1···i] represents a state comprising the origi-_
nal input x, action sequence a1·i, and observation sequence
_o1·i, where i is a token in the text sequence._
Our main technical contribution is adapting MCTS to lan_guage agents. LATS repurposes pθ as an agent, state evalua-_
tor, and feedback generator, leveraging the useful language
representations of modern LMs to facilitate planning. While
standard MCTS and RAP (Hao et al., 2023) rely on internal
dynamics models to facilitate simulation, LATS uses environment interaction and does not require a world model. As
ln N (p)
(1)
_N_ (s) _[,]_
_UCT_ (s) = V (s) + w
where N (s) is the number of visits to a node s, V (s) is the
value function (expected return) from the subtree of s, w is
the exploration weight, and p is the parent node of s. When
the end of an episode is reached, a backpropagation is carried out: the return r is used for updating every V (s) along
the path with the formula V (s) = _[V][old][(][s][)(]N[N]([(]s[s])[)][−][1)+][r]_, where
_Vold(s) is the old value function. Normally, the major short-_
coming of MCTS is that it requires an environment model to
undo previous steps and form a searching tree, which could
be a strong assumption. However, this limitation does not
exist for many LM tasks, as we can conveniently reset to
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
– selection, expansion, evaluation, simulation, backpropa
_Figure 2. Overview of the six operations in LATS. A node is selected, expanded, evaluated, then simulated until a terminal node is reached,_
and then the resulting value is backpropagated. If the trajectory fails, a reflection is generated and used as additional context for future
trials. These operations are performed in succession until the budget is reached or the task is successful.
depicted in Fig. 2, LATS consists of a series of operations as it is difficult for LMs to improve their responses with
_gation, and reflection – performed in succession until the_
task is successfully completed or a computational limit is
reached after sampling k trajectories. The full pseudocode
of LATS can be found in Sec. A in the Appendix.
**Selection. In the first operation, the algorithm identifies**
a segment of the current tree most suitable for subsequent
expansion. Starting from the root node, denoted as the initial
state s0, a child node is selected at each tree level until a leaf
node is reached. To balance exploration and exploitation,
we use the UCT algorithm as shown in Eq. 1.
**Expansion. After selecting a node, the second operation**
expands the tree by sampling n actions from pθ, as described
in the prior section. The environment receives each action
and returns corresponding feedback as an observation. This
results in n new child nodes added to the tree. This tree is
stored in an external long-term memory structure.
**Evaluation. The third operation assigns a scalar value to**
each new child node for selection and backpropagation.
This value effectively quantifies the agent’s progress in task
completion, serving as a heuristic to steer the search algorithm towards the most promising regions of the tree. As
LATS does not involve training, we propose a novel value
function for this setting based on two components: (1) a
_self-generated LM score and (2) a self-consistency score._
Inspired by ToT, we repurpose pθ into a value function by
prompting it to reason about a given state. To obtain a scalar
value, we instruct pθ to end its reasoning trace with a score
indicating the correctness of the trajectory. Our key distinction from ToT is that we obtain this value after obtaining
the environmental feedback, improving value assignment.
This also enables scaling to more challenging environments,
as it is difficult for LMs to improve their responses without external feedback (Huang et al., 2024). Additionally,
to further improve value assignment, we introduce an additional heuristic based on self-consistency (Wang et al.,
2022), in which actions sampled multiple times at the same
state tend to be more accurate. This results in the overall
value function:
_V (s) = λ ∗_ LM(s) + (1 − _λ) ∗_ SC(s), (2)
where λ is a hyperparameter. Notably, our method offers
enhanced flexibility over programmed heuristics (Campbell
et al., 2002) and greater efficiency than learned heuristics
(Silver et al., 2017).
**Simulation. The fourth operation expands the currently se-**
lected node until a terminal state is reached. At each depth
level, we sample and evaluate nodes with the same operations but prioritize nodes of the highest value. Reaching a
terminal state provides objective feedback on the correctness of a trajectory. If the task is completed successfully,
then LATS terminates the search. If the solution is partially
successful or unsuccessful, then we perform two additional
operations as described below. The success of a trajectory is
determined by the design of the specific environment, such
as finalizing a purchase in web navigation environments.
**Backpropagation. This operation updates the values of the**
tree based on the outcome of a trajectory. For each node
_s0, s1, . . ., sl in the trajectory from root (initial state s0)_
of the searching tree to leaf (terminal state sl), its value is
updated to reflect the outcome of the simulation by N (si) =
_N_ (si−1)+1 and V (si) = _[V][ (][s][i][−][1]N[)][N](s[(]i[s])[i][−][1][)+][r]_, where r is the
reward. These updated values are used in the UCT formula
(Eq. 1) to guide the selection of the next node.
**Reflection. In addition to the environmental feedback, we**
leverage self-reflection to further refine the decision-making
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
|Prompt Method|HotpotQA (EM) ↑|
|---|---|
|Base LM CoT (Wei et al., 2022) CoT - SC (Wang et al., 2022) ToT (Yao et al., 2023a) RAP (Hao et al., 2023) RAP (n = 10) LATS (CoT)|0.32 0.34 0.38 0.55 0.60 0.60 0.62|
|---|---|
_Table 2. GPT-3.5 reasoning-based prompting results on HotpotQA._
LATS achieves the highest exact match (EM) for reasoning. We
sample n = 5 nodes during expansion and k = 50 trajectories.
process (Shinn et al., 2023; Madaan et al., 2023). Upon
encountering an unsuccessful terminal node, pθ is prompted
with the trajectory and final reward to provide a verbal selfreflection that summarizes the errors in the reasoning or
acting process and proposes superior alternatives. We store
both failed trajectories and corresponding reflections in the
memory. In subsequent iterations, these are integrated as
additional context to the agent and value function, refining
both through in-context learning. This imparts a semantic
gradient signal more useful than a scalar value, enabling
the agent to learn from trial and error without the cost of
expensive optimization such as reinforcement learning.
**Discussion. Conceptually, LATS has several notable advan-**
tages as a general framework for reasoning and decisionmaking with LM agents. (1) Generality: LATS supports
both reasoning and decision-making tasks by defining a
shared space of thoughts and actions. (2) Deliberation:
Leveraging MCTS and LM value function in LATS ensures a principled search that selects options with high value
while exploring promising alternatives. (3) Adaptability:
Incorporating external feedback through observations and
self-reflection in LATS enables greater adaptation during
problem-solving. (4) Flexibility: LATS can accommodate
different scenarios, environments, and resource stipulations
by modifying state design and tree dimensions. (5) Modu_larity: The base LM agent, reflection generator, and value_
function can be independently altered and adapted to individual LM properties.
**5. Experiments**
To demonstrate the general applicability of LATS, we evaluate our method on a variety of domains that require reasoning and acting: programming (Chen et al., 2021; Austin
et al., 2022), HotPotQA (Yang et al., 2018), WebShop (Yao
et al., 2022), and Game of 24 (Yao et al., 2023a).
|Prompt Method|HotpotQA (EM) ↑|
|---|---|
|ReAct (Yao et al., 2023b) ReAct (best of k) Reflexion (Shinn et al., 2023) ToT (ReAct) RAP (ReAct) LATS (ReAct) LATS (n = 3) LATS (n = 10) LATS (CoT + ReAct)|0.32 0.38 0.51 0.39 0.54 0.63 0.58 0.65 0.71|
|---|---|
_Table 3. GPT-3.5 acting-based prompting results on HotpotQA._
LATS achieves the highest exact match (EM) for acting. We
sample n = 5 nodes and use k = 50 trajectories. We also evaluate
sampling ReAct k times and using both CoT and ReAct base
prompting designs for LATS, which achieves the best performance.
Note that LATS outperforms ToT and RAP with ReAct prompting,
which are the simple adaptations of search algorithms to decisionmaking.
**5.1. HotPotQA**
For a task that can be approached with both reasoning-based
and acting-based strategies, we consider HotPotQA (Yang
et al., 2018), a multi-hop question-answering benchmark
that requires retrieval over two or more Wikipedia passages.
For the action space, in addition to LM thoughts, we follow
the setup from Yao et al. (2023b), which provides the agent
with API calls to search and retrieve information. The output
of these API calls and self-generated reflections form the
observation space. Note that consistent with previous work
(Yao et al., 2023b; Shinn et al., 2023), we use an oracle
setup for HotPotQA, in which the environment provides
feedback about the answer’s correctness upon receiving an
answer. This enables a fair comparison between our method
and baselines in scenarios where the quality of feedback is
high, allowing us to focus our evaluation on how well the
agent incorporates external feedback. We use a subset of
100 questions and three few-shot examples for each method.
For ToT, we use DFS as the base search algorithm. For all
methods that involve sampling, including LATS, we sample
_k = 50 trajectories. More details are in Appendix Sec. D._
We evaluate internal reasoning strategies by removing actions and observations from the context, corresponding to
CoT (Wei et al., 2022) and its variants, CoT-SC (Wang et al.,
2022), ToT (Yao et al., 2023a), and RAP (Hao et al., 2023).
These methods rely solely on the agent’s existing knowledge
to answer the question. We further consider acting-based
methods ReAct, Reflexion, and LATS, which augment the
agent with the interactive API environment and primarily
evaluate its information retrieval abilities. We also design
a simple integration of search algorithms with LM agents,
extending ToT and RAP with ReAct prompting to handle
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
|Prompt Method Model|Pass@1 ↑|
|---|---|
|Prompt Method Model|Pass@1 ↑|
|---|---|
|CoT (Wei et al., 2022) GPT-3.5 ReAct (Yao et al., 2023b) GPT-3.5 Reflexion (Shinn et al., 2023) GPT-3.5 ToT (Yao et al., 2023a) GPT-3.5 RAP (Hao et al., 2023) GPT-3.5 LATS (ReAct) GPT-3.5|46.9 56.9 68.1 54.4 63.1 83.8|
|Base LM GPT-4 Reflexion GPT-4 LATS (ReAct) GPT-4|80.1 91.0 92.7|
_Table 4. GPT-3.5 and GPT-4 Pass@1 accuracy on HumanEval._
Prompting with LATS achieves the best performance. We sample
5 solutions during expansion for 8 iterations.
external observations. In addition, while LATS is designed
for scenarios where external feedback can enhance reasoning, we also implement a reasoning-only version with CoT
as the base prompting framework. Moreover, we combine
internal and external reasoning in LATS by first prompting
with a CoT-based prompt and then switching to a ReActbased prompt upon failure. This is closer to how humans
might approach this task by using tools to retrieve additional
information only when the answer is not already known.
**Results. We observe in Tab. 2 and Tab. 3 that both in-**
ternal reasoning and external retrieval strategies perform
well on HotPotQA. Due to their large-scale training corpus,
modern LMs already encode factual knowledge and can
often directly answer the question correctly. While CoT can
slightly enhance performance on questions requiring reasoning, larger gains are observed with search methods ToT
and RAP (Tab. 2, Row 4, 5), which can sample and explore
more outputs. We observe similar results for acting-based
methods. LATS surpasses ReAct, even when sampling the
same number of trajectories, by expanding more nodes with
principled search. This is demonstrated when modifying
_n, the number of nodes expanded during each iteration. In-_
creasing n can consistently improve performance, although
at greater computational and inference costs. LATS also
outperforms RAP on internal reasoning, but has higher performance on the decision-making setting of HotPotQA than
the reasoning setting. Contrary to LATS, the ReAct versions
of ToT and RAP (Tab. 3, Row 4, 5) perform even worse than
_the reasoning-only setting of HotPotQA, which indicates_
that the acting-based setting is more challenging and adap_tation of search algorithms to decision-making scenarios_
_is non-trivial. Combining internal and external reasoning_
in LATS results in the highest performance, indicating the
importance of external feedback in augmenting reasoning
even in tasks where the base LM can already perform.
|Prompt Method|Pass@1 ↑|
|---|---|
|CoT (Wei et al., 2022) ReAct (Wei et al., 2022) Reflexion (Shinn et al., 2023) ToT (Yao et al., 2023a) RAP (Hao et al., 2023) LATS (ReAct)|54.9 67.0 70.0 65.8 71.4 81.1|
|---|---|
_Table 5. GPT-3.5 Pass@1 accuracy on MBPP. Prompting with_
LATS achieves the highest performance. We sample 5 solutions
during expansion for 8 iterations.
**5.2. Programming**
To demonstrate the importance of external observations
for complex reasoning tasks, we evaluate the baselines
and LATS on programming with HumanEval (Chen et al.,
2021)[1] and MBPP (Austin et al., 2022). Both datasets measure the correctness of synthesized programs in Python from
natural language docstrings. We use individual solutions
as the action space and test suite and compiler feedback as
the external observation. We follow Chen et al. (2023a) and
use an LM to generate a synthetic test suite of syntactically
valid “assert” statements for each question. For each step,
the solution is evaluated on this test suite, and the results,
including successful and failed tests and compiler output,
are added to the context as an observation.
For this task, the reasoning and acting baselines share an
action space, but acting methods are able to incorporate
observations as additional context. For LATS, since each
action corresponds to a complete solution, we skip the simulation step of LATS and directly use the percentage of
passed tests as the backpropagated reward. We use k = 8
iterations, set the number of generated tests at 4, and sample n = 5 solutions during expansion. After the search is
completed, we select the solution with the highest value and
evaluate it on the real test suite for the pass@1 accuracy
evaluation. More details can be found in Appendix Sec. D.
**Results. Tab. 4 and Tab. 5 show that both search and seman-**
tic feedback are crucial for better performance. Despite not
using observations, ToT and RAP are competitive with Reflexion. LATS has the highest performance on both datasets.
RAP uses a search algorithm similar to LATS, which reveals
the importance of external feedback for difficult reasoning
tasks such as programming. With GPT-4, using LATS sets
the state of the art for HumanEval, validating that LATS can
be used with more advanced LMs for higher performance.
1Some baselines use 161 questions from HumanEval. We
use all 164 questions for LATS and find minimal performance
differences, so we report baselines for both settings.
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
|Method|Score SR ↑ ↑|
|---|---|
|ReAct (Yao et al., 2023b) ReAct (best of k) Reflexion (Shinn et al., 2023) LATS (ReAct)|53.8 28.0 59.1 32.0 64.2 35.0 75.9 38.0|
|---|---|
|IL (Yao et al., 2022) IL+RL (Yao et al., 2022) Fine-tuning (Furuta et al., 2024)|59.9 29.1 62.4 28.7 67.5 45.0|
|---|---|
|Expert|82.1 59.6|
|---|---|
_Table 6. Score and success rate (SR) on WebShop. Results are_
organized into prompting, RL-based training, and human performance. For the same number of iterations, LATS improves both
score and SR and surpasses RL-based training.
**5.3. WebShop**
For a complex decision-making environment with practical applications, we consider WebShop (Yao et al., 2022),
an online shopping environment composed of a website
with 1.18M real-world products and 12k human instructions.
Agents must navigate a website through a variety of commands to purchase an item matching a user specification.
We use the preconstructed action space of search and click
commands and browser feedback and reflections for the
observation. The performance is gauged using two metrics:
an average score, reflecting the percentage of user-specified
attributes met by the selected product, and a success rate,
indicating the frequency with which the chosen product fulfills all given conditions. We compare against acting-based
prompting methods and RL-based approaches. We evaluate
on 50 instructions, expand n = 5 children for LATS, and set
_k = 30 for LATS, ReAct (best of k), and Reflexion. More_
details and prompts are in Appendix Sec. D and Sec. G.
**Results. We find in Tab. 6 that GPT-3.5 with ReAct is**
competitive to imitation learning (IL) and can exceed reinforcement learning techniques with stronger prompting
strategies. Sampling k = 30 trajectories with ReAct and
Reflexion results in a similar performance, suggesting the semantic feedback is not as helpful in complex environments
like WebShop. Similar to Shinn et al. (2023), we find that
generated reflections are often generic and do not provide
useful feedback, resulting in a tendency for the agent to
become stuck in local minima. However, using LATS indeed results in a noticeable improvement, indicating a more
effective exploration for the same number of iterations.
**5.4. Ablation Study and Additional Analysis**
We further test the reasoning ability of LATS on Game of 24,
and also conduct additional experiments on HotPotQA to
demonstrate the effect of each component of LATS (results
|Prompt Method|Game of 24 (Success Rate) ↑|
|---|---|
|CoT (Wei et al., 2022) Reflexion (Shinn et al., 2023) ToT (Yao et al., 2023a) RAP (Hao et al., 2023) LATS (CoT)|0.08 0.12 0.20 0.40 0.44|
|---|---|
_Table 7. Results on Game of 24 with GPT-3.5. We sample n = 5_
nodes and k = 30 trajectories.
|Prompt Method|HotPotQA (EM) ↑|
|---|---|
|ToT (ReAct) RAP (ReAct) LATS (No LM Heuristic) LATS (DFS) LATS (No Reflection) LATS (ReAct)|0.39 0.54 0.37 0.42 0.58 0.63|
|---|---|
_Table 8. Ablation results on LATS and baseline variants in Hot-_
PotQA. We use ReAct as the base prompt and sample n = 5
children and k = 50 trajectories. LATS requires every component
and operation for optimal performance.
shown in Tab. 8). More ablations for token consumption on
HotPotQA are in Tab. 9 in Appendix Sec. C.
**Reasoning on Game of 24. To show how LATS can be**
applied to purely internal reasoning tasks, we additionally
evaluate on Game of 24 (Yao et al., 2023a), a mathematical
reasoning task where the agent must construct 24 out of a
set of numbers and basic operations. We use CoT as the
base prompting design and employ the same operations as
in other settings. We find in Tab. 7 that LATS outperforms
previous methods proposed specifically for reasoning. This
is due to our proposed value function, which incorporates
self-consistency as an additional heuristic.
**Self-reflection. LATS uses self-reflection to provide addi-**
tional semantic signals for the agent. In Tab. 8 (Row 5, 6),
we observe a 0.05 performance drop when self-reflection
is removed from LATS, validating its usefulness. This is a
smaller gain than the 0.19 gain that Reflexion has over ReAct as shown in Tab. 3, suggesting overlap between the questions where an answer can be improved by self-reflection
and search. This variant outperforms RAP (ReAct), reflecting our improvements to MCTS.
**Search algorithm. MCTS is a more principled search algo-**
rithm than variants like A* (Zhuang et al., 2023) or DFS and
is the basis for observed performance gains. We observe
the effects of using DFS, and incorporate the LM-based
heuristic used in ToT in which branches with low values are
pruned. This removes the selection and backpropagation
operations, and we observe a 0.21 drop in performance in
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
|Method|Performance ↑|Sample complexity ↓|Token Consumption ↓|
|---|---|---|---|
|ReAct (Best k = 250) CoT-SC (n = 1, k = 250) LATS (n = 1, k = 50) ToT (ReAct, n = 5, k = 50) RAP (ReAct, n = 5, k = 50) LATS (n = 5, k = 50)|0.42 0.40 0.48 0.49 0.54 0.63|O(k) O(k) O(k) O(kn) O(kn) O(kn)|- - - 210, 215 176, 500 173, 290|
_Table 9. Performance, sample complexity of different methods, average number of nodes expanded, and token consumption upon success_
by methods with tree-based search. n is the number of children nodes expanded at every step and k is the number of trajectories. LATS
has the same sample complexity as other methods with tree-based search and expands less nodes upon success, which indicates lower
token cost.
RAP and 12.12 fewer nodes than ToT. These findings underscore our improvements to MCTS and adaptation to LM
agents, resulting in a more principled and efficient search
mechanism.
**6. Conclusion**
This work introduces Language Agent Tree Search (LATS),
the first framework to unify reasoning, acting, and planning for enhanced LM problem-solving. LATS addresses
key limitations of prior prompting techniques by deliberately constructing trajectories with search algorithms, incorporating external feedback, and enabling agents to learn
from experience. Our evaluation demonstrates the ability
of LATS to harness LM capabilities for various decisionmaking tasks while maintaining its reasoning ability without
_additional training. The proposed synergies between search,_
interaction, and reflection offer a versatile approach to autonomous decision-making, highlighting the potential of
LMs as generalist agents.
**Limitations and future directions. LATS has two main**
limitations that should be considered before its application.
First, it has a higher computational cost compared to simpler
prompting methods like ReAct or Reflexion, which may
limit its practicality in certain situations. Second, LATS
assumes the ability to revert to earlier states in decisionmaking environments, which may not be universally applicable in all possible environments. Despite these limitations, it is worth noting that LATS still achieves better
performance and efficiency compared to similar methods,
and the number of nodes expanded at each step provides a
trade-off between performance and efficiency. Additionally,
we expect inference-time compute costs to decrease over
time, thereby increasing the usefulness of LATS and other
“System-2” LM approaches. Finally, the reversion property is feasible in many real-world applications, opening up
new opportunities in the LM decision-making community.
Future directions include scaling LATS to more complex
environments or multi-agent frameworks and improving efficiency to reduce costs. A more detailed discussion about
the limitations of LATS can be found in Appendix Sec. B.
|Method|k HotPotQA # of Nodes ↑ ↓|
|---|---|
|ToT RAP LATS|10 0.34 33.97 10 0.44 31.53 10 0.44 28.42|
|---|---|
|ToT RAP LATS|30 0.39 47.54 30 0.50 37.71 30 0.52 34.12|
|---|---|
|ToT RAP LATS|50 0.49 84.05 50 0.54 70.60 50 0.61 66.65|
|---|---|
_Table 10. Comparison of the cost of different methods on Hot-_
PotQA. LATS achieves the highest accuracy and the lowest average number of nodes/states required for success at various k
trajectories sampled.
Tab. 8 (Row 4) when sampling the same number of nodes
but outperforms ToT (ReAct). Despite also benefiting from
ground-truth feedback, LATS uses it better than ToT and
RAP and can outperform these methods. We also find in
Tab. 8 (Row 3) that LM scoring, the main component of our
value function, is crucial for leveraging external feedback
and strong performance.
**Sample complexity and token consumption. One pos-**
sible concern of LATS is that the tree-structured search
might consume much more tokens than existing methods.
To further study the computational cost of LATS compared
to prior methods, we examine the sample complexity (i.e.,
asymptotic token cost) of all methods considered in this
paper and count the average number of nodes expanded
by our method and other tree-structured methods (ToT and
RAP) upon successful search on HotPotQA. We present the
results in Tab. 9 and Tab. 10, which show that our method
has the same sample complexity as other tree-based search
methods and requires fewer overall tokens and states. The
token cost gap will be even larger when taking failed trajectories into account, since our method has a higher success
rate and reaches the computational budget limit less often.
This is also true when sampling a smaller number of trajectories; on average, LATS requires 3.55 fewer nodes than
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
**Impact Statement**
LATS is a framework that enhances LM performance
through interactions with an environment. This improvement in autonomous decision-making may facilitate harmful uses of LMs. On the other hand, LATS enhances interpretability and the potential for greater alignment, as it
involves high-level linguistic reasoning and actions through
several rounds of decision-making and reflection rather than
relying on autoregressive generation. Finally, enhancing the
capabilities of LM agents may raise security risks, such as
executing malware. We encourage further research to fully
understand and mitigate the risks of LMs.
**Acknowledgements**
We thank Daniel Campos for useful feedback on earlier versions of this paper. This work was supported in part by NSF
Grant 2106825, NIFA Award 2020-67021-32799, the Jump
ARCHES endowment through the Health Care Engineering
Systems Center at Illinois and the OSF Foundation, and the
IBM-Illinois Discovery Accelerator Institute. This work
used NVIDIA GPUs at NCSA Delta through allocations
CIS220014, CIS230012, and CIS230218 from the ACCESS
program.
**References**
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan
Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex
Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian
Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano,
Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian,
Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee,
Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter
Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers,
Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei
Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and
Andy Zeng. Do as I can, not as I say: Grounding language
in robotic affordances. In CoRL, 2022.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten
Bosma, Henryk Michalewski, David Dohan, Ellen Jiang,
Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models. In
_NeurIPS, 2022._
Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga,
Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul
Sampedro, and Jeff Clune. Video pretraining (VPT):
Learning to act by watching unlabeled online videos. In
_NeurIPS, 2022._
Maciej Besta, Nils Blach, Ales Kubicek, Robert Ger
stenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz
Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. Graph of thoughts:
Solving elaborate problems with large language models.
_arXiv:2308.09687, 2023._
Samuel R Bowman, Gabor Angeli, Christopher Potts, and
Christopher D Manning. A large annotated corpus for
learning natural language inference. In EMNLP, 2015.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell,
Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger,
Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse,
Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray,
Benjamin Chess, Jack Clark, Christopher Berner, Sam
McCandlish, Alec Radford, Ilya Sutskever, and Dario
Amodei. Language models are few-shot learners. In
_NeurIPS, 2020._
Murray Campbell, A Joseph Hoane Jr, and Feng-hsiung
Hsu. Deep blue. Artificial intelligence, 2002.
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi
Lin, Jian-Guang Lou, and Weizhu Chen. CodeT: Code
generation with generated tests. In ICLR, 2023a.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura
Burda, Nicholas Joseph, Greg Brockman, Alex Ray,
Raul Puri, Gretchen Krueger, Michael Petrov, Heidy
Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,
Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power,
Lukasz Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth
Barnes, Ariel Herbert-Voss, William H. Guss, Alex
Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra,
Evan Morikawa, Alec Radford, Matthew M. Knight,
Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W.
Cohen. Program of thoughts prompting: disentangling
computation from reasoning for numerical reasoning
tasks. TMLR, 2023b. ISSN 2835-8856.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha
Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker
10
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran,
Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope,
James Bradbury, Jacob Austin, Michael Isard, Guy GurAri, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier
Garcia, Vedant Misra, Kevin Robinson, Liam Fedus,
Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek
Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick,
Andrew M. Dai, Thanumalayan Sankaranarayana Pillai,
Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon
Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou,
Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat,
Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM:
Scaling language modeling with pathways. JMLR, 24
(240):1–113, 2023.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark
Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,
Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to
solve math word problems. arXiv:2110.14168, 2021.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel
Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2Web:
Towards a generalist agent for the web. In NeurIPS
_Datasets and Benchmarks Track, 2023._
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch,
Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid,
Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong
Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. PaLM-E: An embodied multimodal language model. In ICML, 2023.
Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir
Nachum, Joshua B. Tenenbaum, Dale Schuurmans, and
Pieter Abbeel. Learning universal policies via text-guided
video generation. In NeurIPS, 2023.
Jonathan St BT Evans. Intuition and reasoning: A dualprocess perspective. Psychological Inquiry, pages 313 –
326, 2010.
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar,
Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang,
Yuke Zhu, and Anima Anandkumar. MineDojo: Building
open-ended embodied agents with internet-scale knowledge. In NeurIPS Datasets and Benchmarks Track, 2022.
Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, and Izzeddin Gur. Multimodal
web navigation with instruction-finetuned foundation
models. In ICLR, 2024.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei
Liu, Yiming Yang, Jamie Callan, and Graham Neubig.
PAL: Program-aided language models. In ICML, 2023.
Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and
Jun Wang. Long text generation via adversarial training
with leaked information. In AAAI, 2018.
William H. Guss, Brandon Houghton, Nicholay Topin,
Phillip Wang, Cayden Codel, Manuela Veloso, and Ruslan Salakhutdinov. MineRL: A large-scale dataset of
Minecraft demonstrations. In IJCAI, 2019.
Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In ICML,
2019.
Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world
models. arXiv:2301.04104, 2023.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen
Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning
with language model is planning with world model. In
_EMNLP, 2023._
Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven
Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou.
Large language models cannot self-correct reasoning yet.
In ICLR, 2024.
Wenlong Huang, F. Xia, Ted Xiao, Harris Chan, Jacky
Liang, Peter R. Florence, Andy Zeng, Jonathan Tompson,
Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah
Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol
Hausman, and Brian Ichter. Inner monologue: Embodied
reasoning through planning with language models. In
_CoRL, 2022._
Levente Kocsis and Csaba Szepesvari. Bandit based monte-´
carlo planning. In ECML, 2006.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka
Matsuo, and Yusuke Iwasawa. Large language models
are zero-shot reasoners. In NeurIPS, 2022.
Steven M. LaValle. Rapidly-exploring random trees : A
new tool for path planning. The Annual Research Report,
1998.
Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin
Shi, and Percy Liang. Reinforcement learning on web
interfaces using workflow-guided exploration. In ICLR,
2018.
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu
Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men,
11
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng,
Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun
Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong,
and Jie Tang. AgentBench: Evaluating LLMs as agents.
In ICLR, 2024.
Zhihan Liu, Hao Hu, Shenao Zhang, Hongyi Guo, Shuqi
Ke, Boyi Liu, and Zhaoran Wang. Reason for future, act for now: A principled framework for autonomous LLM agents with provable sample efficiency.
_arXiv:2309.17382, 2023._
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha
Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank
Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter
Clark. Self-refine: Iterative refinement with self-feedback.
In NeurIPS, 2023.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar
Gulcehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In
_Special Interest Group on Natural Language Learning,_
2016.
OpenAI. GPT-4 technical report. arXiv:2303.08774, 2023.
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan
Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill
Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou,
Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun.
ToolLLM: Facilitating large language models to master
16000+ real-world APIs. In ICLR, 2024.
Abulhair Saparov and He He. Language models are greedy
reasoners: A systematic formal analysis of chain-ofthought. In ICLR, 2023.
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language
models can teach themselves to use tools. In NeurIPS,
2023.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li,
Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving
AI tasks with ChatGPT and its friends in Hugging Face.
In NeurIPS, 2023.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin
Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning.
In NeurIPS, 2023.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cotˆ e,´
Yonatan Bisk, Adam Trischler, and Matthew Hausknecht.
ALFWorld: Aligning text and embodied environments
for interactive learning. In ICLR, 2020.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez,
L. Sifre, George van den Driessche, Julian Schrittwieser,
Ioannis Antonoglou, Vedavyas Panneershelvam, Marc
Lanctot, Sander Dieleman, Dominik Grewe, John Nham,
Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap,
Madeleine Leach, Koray Kavukcuoglu, Thore Graepel,
and Demis Hassabis. Mastering the game of Go with deep
neural networks and tree search. Nature, 529:484–489,
2016.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez,
L. Sifre, George van den Driessche, Julian Schrittwieser,
Ioannis Antonoglou, Vedavyas Panneershelvam, Marc
Lanctot, Sander Dieleman, Dominik Grewe, John Nham,
Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap,
Madeleine Leach, Koray Kavukcuoglu, Thore Graepel,
and Demis Hassabis. Mastering chess and Shogi by selfplay with a general reinforcement learning algorithm.
_arXiv:1712.01815, 2017._
Steven A. Sloman. The empirical case for two systems of
reasoning. Psychological Bulletin, 119:3–22, 1996.
Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, and
Chao Zhang. AdaPlanner: Adaptive planning from feedback with language models. In NeurIPS, 2023.
D´ıdac Sur´ıs, Sachit Menon, and Carl Vondrick. ViperGPT:
Visual inference via Python execution for reasoning. In
_ICCV, 2023._
Maciej Swiechowski, Konrad Godlewski, Bartosz Sawicki,
and Jacek Ma’ndziuk. Monte Carlo tree search: A review of recent modifications and applications. Artificial
_Intelligence Review, 56:2497–2562, 2021._
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton´
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana
Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet,
Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin
Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta,
Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael
Smith, R. Subramanian, Xia Tan, Binh Tang, Ross
Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez,
Robert Stojnic, Sergey Edunov, and Thomas Scialom.
12
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
Llama 2: Open foundation and fine-tuned chat models.
_arXiv:2307.09288, 2023._
Tom Vodopivec, Spyridon Samothrakis, and Branko Ster.
On Monte Carlo tree search and reinforcement learning.
_Journal of Artificial Intelligence Research, 60:881–936,_
2017.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar,
Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with
large language models. arXiv:2305.16291, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, and Denny Zhou. Self-consistency improves
chain of thought reasoning in language models. In ICLR,
2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of
thought prompting elicits reasoning in large language
models. In NeurIPS, 2022.
Michael Wooldridge and Nicholas R Jennings. Intelligent
agents: Theory and practice. The Knowledge Engineering
_Review, 10:115 – 152, 1995._
Philipp Wu, Alejandro Escontrela, Danijar Hafner, Pieter
Abbeel, and Ken Goldberg. Daydreamer: World models
for physical robot learning. In CoRL, 2023.
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, MinYen Kan, Junxian He, and Qizhe Xie. Decomposition
enhances reasoning via self-evaluation guided decoding.
_arXiv:2305.00633, 2023._
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio,
William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP,
2018.
Shunyu Yao, Howard Chen, John Yang, and Karthik R
Narasimhan. WebShop: Towards scalable real-world web
interaction with grounded language agents. In NeurIPS,
2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan.
Tree of thoughts: deliberate problem solving with large
language models. In NeurIPS, 2023a.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran,
Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing
reasoning and acting in language models. In ICLR, 2023b.
Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel,
and Yang Gao. Mastering Atari games with limited data.
In NeurIPS, 2021.
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan¨
Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting
enables complex reasoning in large language models. In
_ICLR, 2022._
Yuchen Zhuang, Xiang Chen, Tong Yu, Saayan Mitra, Victor
Bursztyn, Ryan A. Rossi, Somdeb Sarkhel, and Chao
Zhang. ToolChain*: Efficient action space navigation in
large language models with A* search. In ICLR, 2023.
13
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
**Appendix of LATS**
The appendix is organized as follows. First in Sec. A, we
show the pseudocode of our proposed algorithm, LATS. In
Sec. B, we provide further discussion of the limitations of
our method. In Sec. C, we present additional experimental
results. In Sec. D, we specify the environment details in our
experiments. Finally, we list our prompts used for the three
environments in Sec. E (HotPotQA), Sec. F (Programming),
and Sec. G (WebShop), respectively.
**A. LATS Pseudocode**
Alg. 1 shows the pseudocode of our algorithm LATS. Nodes
are stored explicitly in the memory. Unless otherwise specified, in all experiments, we set the number of sampled nodes
to n = 5 and the exploration weight to w = 1. We use
a self-consistency weight of λ = 0.5 for HotPotQA and
Game of 24, and λ = 0.8 for Programming and WebShop.
**B. More Discussion on Limitations**
As stated in Sec. 6, LATS has two main limitations:
**Computational cost. Although LATS can improve rea-**
soning and decision-making, this arrives at a higher computational cost relative to simpler prompting methods like
ReAct or Reflexion. However, the following facts serve as
mitigations to this issue:
- Asymptotically, our method has the same sample complexity as ToT (Yao et al., 2023a) and RAP (Hao et al.,
2023), but achieves better performance, expands fewer
nodes, and uses fewer tokens on average upon success.
This suggests that our method is not only stronger
in problem-solving but also has higher efficiency. A
full analysis of the cost can be found in Tab. 9 in Appendix C.
- The number of nodes n expanded at every step provides
a natural trade-off between performance and efficiency.
In fact, setting n = 1 makes the method as efficient
as ReAct (Yao et al., 2023b) with multiple trials or
CoT-SC (Wang et al., 2022).
In general, we recommend using LATS for difficult tasks
like programming or for situations where performance is
prioritized over efficiency in practice. We hope that continued advancements in LMs will reduce costs and increase
the applicability of LATS.
Additionally, there exists a minor cost from querying the environment, which we find to be trivial for the environments
we study. Most LM-based environments involve API-based
tools, which are inexpensive and fast to use. It is also worth
noting that this is cheaper than the inference cost associated with using LMs as world models, as in previous search
approaches (Hao et al., 2023; Liu et al., 2023).
**Assumption of environment reversion in decision-**
**making.** Since our method is based on Monte Carlo
Tree Search and is model-free, one limitation of LATS on
decision-making tasks is that it requires the agent to be
able to revert to earlier states in the environments. However, this reversion property is feasible in many real-world
environments and applications (despite being not universally applicable in all possible environments), including
programming (HumanEval (Chen et al., 2021)), web search
(WebShop (Yao et al., 2022)), text-based manipulation tasks
(Alfworld (Shridhar et al., 2020)), and LMs with tool use
(ToolBench (Qin et al., 2024)). Therefore, we believe that
leveraging the reversion property is not a shortcoming but
rather a feature that has not been explicitly given notice
_by the LM decision-making community – it opens up new_
opportunities in the emerging LM agent community.
Additionally, the benchmarks we use in this paper are relatively simple and focused on decision-making compared
to the complexity of real-world interactive environments.
Moreover, some environments might not easily support rollbacks to previous states. However, the design of LATS is
flexible and can be adjusted to various resource constraints.
Using planning-based prompting methods like LATS in
environments like Minecraft (Fan et al., 2022) and more reasoning benchmarks would be interesting avenues for future
work.
**C. Additional Ablations**
In this section, we ablate various designs of LATS. Experiments are conducted on HotPotQA with a maximum
of k = 50 trajectories and sampling size of n = 5 and
HumanEval with a maximum of k = 8 trajectories and sampling size of n = 5. The result for HotPotQA is shown in
Tab. 8 and HumanEval in Fig. 3.
**Exploration weight. We find that there is lower perfor-**
mance on HotPotQA when the exploration weight w in the
selection formula is decreased to 0.5, suggesting that this
reduces the effectiveness of the search. Increasing w to 2.0
does not lead to a performance improvement, but we tend
to observe faster convergence. The optimal setting depends
on the particular environment and complexity of the state
space.
**Depth. In our main experiments we use a maximum depth**
of d = 7 on HotPotQA for all methods, following previous
work (Yao et al., 2023b). We ablate the effect on LATS after
reducing it to d = 4. This results in only a slight drop in
performance. We find that most questions can be answered
within four steps, and using a greater number of steps tends
14
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
**Algorithm 1 LATS(s, pθ, pV, pref, d, k, n, w, a, b)**
**Require: Initial state s, action generator pθ, value function pV, reflection generator pref, number of generated actions n,**
depth limit L, number of roll-outs K, context c, exploration weight w, and value function weight λ
Initialize action space A, observation space O
Initialize the state-action value function pV : S × A 7→ R and visit counter N : S 7→ N to one
**for k ←** 0, . . ., K − 1 do
**for t ←** 0, . . ., L − 1 do
**if st not terminal then** _▷_ Expansion & Simulation
**for i ←** 1, . . ., n do
Sample a[(]t[i][)] _pθ(st)_
_∼_
Get o[(]t[i][)] from environment, s[(]t+1[i][)] _[←]_ [(][c]t[(][i][)][, o]t[(][i][)][, a]t[(][i][)][)][,][ c]t[(]+1[i][)] _[←]_ [(][o]t[(][i][)][, a]t[(][i][)][)]
Evaluate Vt[(][i][)] _∼_ _λ ∗_ _pV (s[(]t[i][)][) + (1][ −]_ _[λ][)][ ∗]_ [SC][(][s]t[(][i][)][)] _▷_ Evaluation
_V (st)_ _Vt[(][i][)]_
_←_
Add s[(]t[i][)] to children
**end for**
**end if**
**if st is terminal then** _▷_ Reflection
Get r from environment
**if r not success then**
reflection ← _pref(ct)_
_c ←_ reflection
**end if**
**end if**
_at ←_ arg maxa∈e(st) _V (st) + w_ _Nln( Nst(+1st))_ _▷_ Selection
Get correspondingN (st+1) _N_ (st+1 ot) + 1 from memory,h q st+1 ← (ict, ot, at), ct+1 ← (ot, at)
_←_
**if at is an output action then break**
**end for**
_T ←_ the actual number of steps
**for t ←** _T −_ 1, . . ., 0 do _▷_ Backpropagation
_V (st)_ _N_ (st)
_←_ _[V][ (][s][t][)(][N]_ [(][s][t][)][−][1)+][r]
**end for**
**end for**
to force the agent into local minima and rarely improves
success.
**LM value function. The LM value function scores states**
based on expected future reward. Without this heuristic,
the only signal to guide search would be from environment
rewards for completed trajectories, which are scarce and
often binary. When we remove the evaluation operation, we
observe a dramatic 0.26 drop in performance.
**Performance over time. To see the effects of increasing**
the number of trajectories sampled, we change k to different
values. We conduct this experiment on HumanEval, which
has a more noticeable difference due to sampling less trajectories. The results are shown in Fig. 3, in which LATS
scales better with more iterations than Reflexion.
**D. Environment Details**
**D.1. HotPotQA**
HotPotQA (Yang et al., 2018) is a question-answering
dataset that requires reasoning over multiple supporting
documents to answer questions. It contains 113k Wikipedia
based question-answer pairs crafted by crowdworkers to
be diverse, multi-hop, and explainable. Questions cover a
range of types like entities, locations, dates, and comparison
of shared properties between two entities. Crowdworkers
also provide supporting facts from the documents that justify
the answer. We use the HotPotQA benchmark setting with
all the Wikipedia paragraphs to test retrieval. We use a randomly selected subset of 100 questions for our experiments
and a maximum depth limit of 6. Fig. 4 illustrates how
ReAct and LATS work on an example task of HotPotQA,
and gives a qualitative example on how LATS outperforms
ReAct on the task. For value function hyperparameters, we
use λ = 0.5 for the LM score and self-consistency score.
**Action Space. We adopt the Wikipedia web API proposed**
in Yao et al. (2023b), with three types of actions to support
interactive information retrieval:
(1) search[entity], which returns the first 5 sentences
from the corresponding entity wiki page if it exists,
or else suggests top-5 similar entities from the Wikipedia
search engine,
(2) lookup[string], which returns the next sentence in
15
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
if any sample passes all tests. We use all 164 problems for
our experiments and a maximum depth limit of 8. For the
three questions without sample test cases, we write our own.
For value function hyperparameters, we use λ = 0.8 for the
LM score and self-consistency score. For GPT-3.5 we use
six internal tests, while for GPT-4 we use four internal tests.
The Mostly Basic Programming Problems (MBPP) (Austin
et al., 2022) benchmark contains 974 short Python functions
designed to evaluate program synthesis techniques. The
dataset was constructed by crowdsourcing from workers
with basic Python knowledge. Each data point consists of
a natural language description of a programming task, a
reference solution implementation, and three test cases for
functional correctness. The natural language prompts are
typically short, one-sentence descriptions. Solutions cover
common programming constructs including mathematical
operations, list processing, string manipulation, and usage
of the Python standard library. On average, solutions are 6.8
lines of code. The dataset is also supplemented with an additional set of 426 problems that were manually verified for
unambiguous specifications, standard function signatures,
and accurate test cases. We use a randomly selected subset
of 397 problems for our experiments. For value function
hyperparameters, we use λ = 0.8 for the LM score and
self-consistency score.
**D.3. WebShop**
WebShop (Yao et al., 2022) is an interactive web-based
environment designed to evaluate agents on grounded
language understanding and decision-making. It simulates
an e-commerce shopping task by providing agents with
over 1 million real-world products scraped from Amazon,
spanning 5 categories and 113 subcategories. These
products contain rich linguistic information, with an
average text length of 262 words and a vocabulary size
of 224k. In addition, there are over 800k unique product
options available for customization. The environment
renders webpages in two modes: HTML mode provides
pixel-level observations with interactive elements, while
simple mode converts the raw HTML into a structured text
observation more amenable for training agents. The action
space consists of query searches and button clicks, which
transition between 4-page types: search, results, item, and
item detail. Instructions are crowdsourced natural language
specifying product attributes and options, with a total of 12k
collected. Automatic rewards are computed by comparing
the product purchased by the agent against the attributes
and options specified in the instruction, using both lexical
matching and semantic similarity metrics.
There are two evaluation metrics used in WebShop: (1) Task
**Score defined as (100 × avg. reward), which captures the**
|Prompt Method|HotpotQA (EM) ↑|
|---|---|
|LATS (w = 0.5) LATS (w = 2.0) LATS (d = 4) LATS (CoT) LATS (No LM Heuristic) LATS (w = 1.0, d = 7)|0.55 0.63 0.58 0.62 0.37 0.63|
|---|---|
_Table 11. Ablation results on LATS and baseline variants in Hot-_
PotQA measured by Exact Match (EM). We test different depth d,
exploration factor w, and versions of LATS using CoT and without
the LM value function. We sample n = 5 and k = 50 trajectories.
_Figure 3. Performance over successive iterations on HumanEval_
with GPT-3.5.
the page containing string,
(3) finish[answer], which finishes the current task with
answer.
These API calls and free-form thoughts form the action
space for this environment.
**D.2. Programming**
The HumanEval dataset (Chen et al., 2021) is a collection
of 164 handwritten programming problems introduced to
evaluate the functional correctness of models for synthesizing programs from natural language descriptions. Each
problem includes a function signature, docstring description, reference implementation, and multiple unit tests, with
an average of 7.7 tests per problem. The programming
tasks assess comprehension of natural language, reasoning,
algorithms, and basic mathematics, at a difficulty level comparable to simple software interview questions. Pass rates
are evaluated with the pass@k metric, where k samples are
generated per problem and a problem is considered solved
16
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
_Figure 4. Example trajectories on HotPotQA for ReAct (left) and LATS (right). LATS can sample more actions and avoid failure from_
previous mistakes by evaluating states with an LM to guide the search toward promising areas of the tree.
Type Argument State → Next State
search [Query] Search → Results
choose Back to search _∗→_ Search
choose Prev/Next page Results → Results
choose [Product title] Results → Item
choose [Option] Item → Item
choose Desc/Overview Item → Item-Detail
choose Previous Item-Detail → Item
choose Buy Item → Episode End
_Table 12. Action space of WebShop._
average reward obtained across episodes; and (2) Success
**Rate (SR) defined as the portion of instructions where r = 1.**
The reward is calculated based on the number of attributes
satisfied by the selected item. We use 50 environments for
our experiments and a maximum depth limit of 15. For
value function hyperparameters, we use λ = 0.8 for the LM
score and self-consistency score.
**D.4. Game of 24**
Game of 24 is a mathematical reasoning challenge where
the goal is to use basic arithmetic operations to construct
24 out of 4 numbers. We follow the setup from Yao et al.
(2023a), where we measure success if the agent produces a
|Prompt Method|Game of 24 (Success Rate) ↑|
|---|---|
|LATS (CoT, λ = 1) LATS (CoT)|0.40 0.44|
|---|---|
_Table 13. Ablations on λ in Game of 24 with GPT-3.5. λ = 0.5_
used in the main paper outperforms λ = 1, equivalent to removing
self-consistency, which indicates that the self-consistency score
improves the performance of LATS.
correct equation that equals 24 and uses each input number
only once. We report success rate over 50 games. We use
_k = 30 iterations and a maximum depth limit of 5. For value_
function hyperparameters, we use λ = 0.5 for the LM score
and self-consistency score. Tab. 13 shows the performance
comparison between λ = 0.5 and λ = 1, which validates
our design of self-consistency term.
**E. HotPotQA Prompts**
**E.1. Base Acting Prompt**
Solve a question answering task with interleaving Thought,
Action, Observation steps. Thought can reason about the
current situation, and Action can be three types:
(1) Search[entity], which searches the exact entity on
Wikipedia and returns the first paragraph if it exists. If
not, it will return some similar entities to search.
17
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
(2) Lookup[keyword], which returns the next sentence containing keyword in the current passage.
(3) Finish[answer], which returns the answer and finishes
the task.
After each observation, provide the next Thought and next
Action.
Here are some examples:
Question: Which magazine was started first Arthur’s Magazine or First for Women?
Thought 1: I need to search Arthur’s Magazine and First for
Women, and find which was started first.
Action 1: Search[Arthur’s Magazine]
Observation 1: Arthur’s Magazine was an American literary
periodical published in Philadelphia in the 19th century.
Edited by Timothy Shay Arthur, it featured work by Edgar
A. Poe, J.H. Ingraham, Sarah Josepha Hale, Thomas G.
Spear, and others.[1][2] In May 1846 it was merged into
Godey’s Lady’s Book.[3]
Thought 2: Arthur’s Magazine was started in 1844. I need
to search First for Women next.
Action 2: Search[First for Women]
Observation 2: First for Women is a woman’s magazine
published by Bauer Media Group in the USA.[1] The magazine was started in 1989. It is based in Englewood Cliffs,
New Jersey.[2] In 2011 the circulation of the magazine was
1,310,696 copies.[3]
Thought 3: First for Women was started in 1989. 1844
(Arthur’s Magazine) < 1989 (First for Women), so Arthur’s
Magazine was started first.
Action 3: Finish[Arthur’s Magazine]
(examples)
You have attempted to answer the following question before
and failed. The following reflection(s) give a plan to avoid
failing to answer the question in the same way you did
previously. Use them to improve your strategy of correctly
answering the given question.
(trajectories)
(input)
**E.2. Base Reasoning Prompt**
Solve a question answering task by having a Thought, then
Finish with your answer. Thought can reason about the
current situation. Finish[answer] returns the answer and
finishes the task. You will be given context that you should
use to help you answer the question. Start your response
with either Action or an indexed Thought
Here are some examples:
Question: What is the elevation range for the area that the
eastern sector of the Colorado orogeny extends into?
Let’s think step by step.
Thought 1: The eastern sector of Colorado orogeny extends
into the High Plains.
Thought 2: High Plains rise in elevation from around 1,800
to 7,000 ft
Thought 3: The answer is 1,800 to 7,000 ft.
Action: Finish[1,800 to 7,000 ft]
(examples)
Previous trial: (trajectories)
(input)
**E.3. Value Function Prompt**
Analyze the trajectories of a solution to a question answering
task. The trajectories are labeled by environmental Observations about the situation, Thoughts that can reason about
the current situation, and Actions that can be three types:
(1) Search[entity], which searches the exact entity on
Wikipedia and returns the first paragraph if it exists. If
not, it will return some similar entities to search.
(2) Lookup[keyword], which returns the next sentence containing keyword in the current passage.
(3) Finish[answer], which returns the answer and finishes
the task.
Given a question and a trajectory, evaluate its correctness
and provide your reasoning and analysis in detail. Focus
on the latest thought, action, and observation. Incomplete
trajectories can be correct if the thoughts and actions so
far are correct, even if the answer is not found yet. Do not
generate additional thoughts or actions. Then at the last line
conclude “Thus the correctness score is s”, where s is an
integer from 1 to 10.
Question: Which magazine was started first Arthur’s Magazine or First for Women?
Thought 1: I need to search Arthur’s Magazine and First for
Women, and find which was started first.
Action 1: Search[Arthur’s Magazine]
Observation 1: Arthur’s Magazine was an American literary
periodical published in Philadelphia in the 19th century.
Edited by Timothy Shay Arthur, it featured work by Edgar
A. Poe, J.H. Ingraham, Sarah Josepha Hale, Thomas G.
18
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
Spear, and others.[1][2] In May 1846 it was merged into
Godey’s Lady’s Book.[3]
This trajectory is correct as it is reasonable to search for
the first magazine provided in the question. It is also better
to have simple searches corresponding to a single entity,
making this the best action.
Thus the correctness score is 10
(other examples)
(failed trajectories)
(context)
**E.4. Reflection Prompt**
Analyze the trajectories of a solution to a questionanswering task. The trajectories are labeled by environmental Observations about the situation, Thoughts that can
reason about the current situation, and Actions that can be
three types:
(1) Search[entity], which searches the exact entity on
Wikipedia and returns the first paragraph if it exists. If
not, it will return some similar entities to search.
(2) Lookup[keyword], which returns the next sentence containing keyword in the current passage.
(3) Finish[answer], which returns the answer and finishes
the task.
Given a question and a trajectory, evaluate its correctness
and provide your reasoning and analysis in detail. Focus
on the latest thought, action, and observation. Incomplete
trajectories can be correct if the thoughts and actions so
far are correct, even if the answer is not found yet. Do not
generate additional thoughts or actions. Then at the last line
conclude “Thus the correctness score is s”, where s is an
integer from 1 to 10.
Question: Which magazine was started first Arthur’s Magazine or First for Women?
Thought 1: I need to search Arthur’s Magazine and First for
Women, and find which was started first.
Action 1: Search[Arthur’s Magazine]
Observation 1: Arthur’s Magazine was an American literary
periodical published in Philadelphia in the 19th century.
Edited by Timothy Shay Arthur, it featured work by Edgar
A. Poe, J.H. Ingraham, Sarah Josepha Hale, Thomas G.
Spear, and others.[1][2] In May 1846 it was merged into
Godey’s Lady’s Book.[3]
This trajectory is correct as it is reasonable to search for
the first magazine provided in the question. It is also better
to have simple searches corresponding to a single entity,
making this the best action.
Thus the correctness score is 10
(other examples)
(failed trajectories)
(context)
**F. Programming Prompts**
**F.1. HumanEval function implementation example**
Sample function signature:
def minSubArraySum ( nums ) :
Given an a r r a y of i n t e g e r s nums,
f i n d the minimum sum of any
non−empty sub −a r r a y of nums .
Example
minSubArraySum ([ −1, −2, −3]) == −6
Sample function body implementation:
min sum = f l o a t ( ’ inf ’ )
f o r i in range ( len ( nums ) ) :
current sum = 0
f o r j in range ( i, len ( nums ) ) :
current sum += nums [ j ]
i f current sum < min sum :
min sum = current sum
r e t u r n min sum
**F.2. Base Acting/Reasoning Prompt**
You are an AI Python assistant. You will be given your
previous implementation of a function, a series of unit tests
results, and your self-reflection on your previous implementation. Write your full implementation (restate the function
signature).
Example 1:
[previous impl]:
def add ( a : int, b : i n t ) −> i n t :
‘ ‘ Given i n t e g e r s a and b,
r e t u r n the t o t a l value of a and b . ’ ’
r e t u r n a −b
[unit test results from previous impl]:
Tested passed:
Tests failed:
assert add(1, 2) == 3 # output: -1
19
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
assert add(1, 2) == 4 # output: -1
[reflection on previous impl]:
The implementation failed the test cases where the input
integers are 1 and 2. The issue arises because the code does
not add the two integers together, but instead subtracts the
second integer from the first. To fix this issue, we should
change the operator from ‘-’ to ‘+’ in the return statement.
This will ensure that the function returns the correct output
for the given input.
[improved impl]:
def add ( a : int, b : i n t ) −> i n t :
‘ ‘
Given i n t e g e r s a and b,
r e t u r n the t o t a l value of a and b .
’ ’
r e t u r n a + b
**F.3. Reflection Prompt**
You are a Python programming assistant. You will be given
a function implementation and a series of unit test results.
Your goal is to write a few sentences to explain why your
implementation is wrong, as indicated by the tests. You
will need this as guidance when you try again later. Only
provide the few sentence description in your answer, not the
implementation. You will be given a few examples by the
user.
Example 1:
[previous impl]:
def add ( a : int, b : i n t ) −> i n t :
‘ ‘
Given i n t e g e r s a and b,
r e t u r n the t o t a l value of a and b .
’ ’
r e t u r n a −b
[unit test results from previous impl]:
Tested passed:
Tests failed:
assert add(1, 2) == 3 # output: -1
assert add(1, 2) == 4 # output: -1
[reflection on previous impl]:
The implementation failed the test cases where the input
integers are 1 and 2. The issue arises because the code does
not add the two integers together, but instead subtracts the
second integer from the first. To fix this issue, we should
change the operator from ‘-’ to ‘+’ in the return statement.
20
This will ensure that the function returns the correct output
for the given input.
**F.4. Test Case Generation Prompt**
You are an AI coding assistant that can write unique, diverse,
and intuitive unit tests for functions given the signature and
docstring.
(input)
**G. WebShop Prompts**
**G.1. Acting Prompt**
Webshop
Instruction:
i would like a 3 ounce bottle of bright citrus deodorant for
sensitive skin, and price lower than 50.00 dollars
[Search]
Action: search[3 ounce bright citrus deodorant sensitive
skin]
Observation:
[Back to Search]
Page 1 (Total results: 50)
[Next ] [B078GWRC1J]
Bright Citrus Deodorant by Earth Mama — Natural and Safe
for Sensitive Skin, Pregnancy and Breastfeeding, Contains
Organic Calendula 3-Ounce
$10.99
[B078GTKVXY]
Ginger Fresh Deodorant by Earth Mama — Natural and Safe
for Sensitive Skin, Pregnancy and Breastfeeding, Contains
Organic Calendula 3-Ounce
$10.99
[B08KBVJ4XN]
Barrel and Oak - Aluminum-Free Deodorant, Deodorant
for Men, Essential Oil-Based Scent, 24-Hour Odor Protection, Cedar & Patchouli Blend, Gentle on Sensitive Skin
(Mountain Sage, 2.7 oz, 2-Pack)
$15.95
Action: think[B078GWRC1J and B078GTKVXY are
bright citrus deodorant less then 50 dollars. I can check
B078GWRC1J first.]
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
Observation: OK.
Action: click[B078GWRC1J]
Observation:
[Back to Search]
[Prev]
scent [assorted scents][bright citrus][calming lavender][ginger fresh][simply non-scents] size [travel set (4pack)][3 ounce (pack of 1)][3-ounce (2-pack)] Bright Citrus
Deodorant by Earth Mama — Natural and Safe for Sensitive Skin, Pregnancy and Breastfeeding, Contains Organic
Calendula 3-Ounce
Price: $10.99
Rating: N.A.
[Description]
[Features]
[Reviews]
[Buy Now]
Action: think[For 3 ounce bottle of bright citrus deodorant
for sensitive skin, the item has options ‘bright citrus’ and ‘3
ounce (pack of 1)’ and seems good to buy.]
Observation: OK.
Action: click[bright citrus]
Observation: You have clicked bright citrus.
Action: click[3 ounce (pack of 1)]
Observation: You have clicked 3 ounce (pack of 1).
Action: click[Buy Now]
**G.2. Value Function Prompt**
Given an item to purchase and a trajectory that aims to buy
an item that exactly matches the specification, which corresponds to the ideal score of 1.0, analyze the following
trajectory, then at the last line conclude, “Thus the correctness score is s”, where s is an integer from 1 to 10.
Here are some examples
Webshop
Instruction:
i am looking for dairy free and apple variety pack of chips,
and price lower than 30.00 dollars
[Search]
Action: search[dairy free and apple variety pack of chips]
Observation:
[Back to Search]
Page 1 (Total results: 50)
[Next]
[B07HRFJWP8]
Enjoy Life Foods Soft Baked Ovals, Breakfast Bars, Nut
Free Bars, Soy Free, Dairy Free, Non GMO, Gluten Free,
Vegan, Variety Pack, 4 Boxes (20 Bars Total)
$100.0
[B01KMHY5PG]
Enjoy Life Soft Baked Chewy Bars, Variety Pack, Nut Free
Bars, Soy Free, Dairy Free, Gluten Free, 6 Boxes (30 Total
Bars)
$21.49
[B008D2X8C4]
Enjoy Life Lentil Chips Variety Pack, Dairy Free Chips, Soy
Free, Nut Free, Non GMO, Vegan, Gluten Free, 24 Bags
(0.8 oz)
$100.0
Action: think[The first two items are not what I want. The
third item is the dairy free and apple variety pack of chips I
want.]
Observation: OK.
Action: click[B008D2X8C4]
Observation:
[Back to Search]
[Prev]
flavor name [dill and sour cream][garlic & parmesan][light
sea salt][margherita pizza][thai chili lime][variety pack] size
[0.8 ounce (pack of 24)][4 ounce (pack of 12)]
Price: $100.0
Rating: N.A.
[Description]
[Features]
21
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
[Reviews]
[Buy Now]
Action: think[The item has options ‘variety pack’ and ‘0.8
ounce (pack of 24)’ and seems good to buy.]
Observation: OK.
Action: click[variety pack]
Observation: You have clicked variety pack.
Action: click[0.8 ounce (pack of 24)]
Observation: You have clicked 0.8 ounce (pack of 24).
Action: click[Buy Now]
Reflection: In this attempt, I was unsuccessful. I accidentally bought a product that was $100, which is more than my
budget of $30. Either way, the initial search results were not
good. Next time, I will do search[“variety pack of chip”;]
and then check if the results meet the dairy free and the $30
budget constraints. I will continue to refine my searches so
that I can find more products.
Thus the correctness score is 5
(input)
**G.3. Reflection Prompt**
You are an advanced reasoning agent that can improve based
on self-reflection. You will be given a previous reasoning
trial in which you were given access to a shopping website
and a specific type of item to buy. You were given access
to relevant context and an item to purchase. You were unsuccessful in buying the correct item either because you did
not find an item meeting all of the required specifications
or because you did not select the correct item. The ideal
score is 1.0, and anything less is incorrect. In a few sentences, Diagnose a possible reason for failure and devise a
new, concise, high-level plan that aims to mitigate the same
failure. Use complete sentences. Here are some examples:
Previous Trial Instruction: i am looking for dairy free and
apple variety pack of chips, and price lower than 30.00
dollars [Search]
Action: search[dairy free and apple variety pack of chips]
Observation: [Back to Search] Page 1 (Total results: 50)
[Next >] [B07HRFJWP8] Enjoy Life Foods Soft Baked
Ovals, Breakfast Bars, Nut Free Bars, Soy Free, Dairy Free,
Non GMO, Gluten Free, Vegan, Variety Pack, 4 Boxes
(20 Bars Total) $100.0 [B01KMHY5PG] Enjoy Life Soft
Baked Chewy Bars, Variety Pack, Nut Free Bars, Soy Free,
Dairy Free, Gluten Free, 6 Boxes (30 Total Bars) $21.49
[B008D2X8C4] Enjoy Life Lentil Chips Variety Pack, Dairy
Free Chips, Soy Free, Nut Free, Non GMO, Vegan, Gluten
Free, 24 Bags (0.8 oz) $100.0
Action: think[The first two items are not what I want. The
third item is the dairy free and apple variety pack of chips I
want.] Observation: OK.
Action: click[B008D2X8C4] Observation: [Back to Search]
[< Prev] flavor name [dill and sour cream][garlic & parmesan][light sea salt][margherita pizza][thai chili lime][variety
pack] size [0.8 ounce (pack of 24)][4 ounce (pack of 12)]
Price: $100.0 Rating: N.A. [Description] [Features] [Reviews] [Buy Now]
Action: think[The item has options ‘variety pack’ and ‘0.8
ounce (pack of 24)’ and seems good to buy.] Observation:
OK.
Action: click[variety pack] Observation: You have clicked
variety pack.
Action: click[0.8 ounce (pack of 24)] Observation: You
have clicked 0.8 ounce (pack of 24).
Action: click[Buy Now]
STATUS: FAIL
Reflection: In this attempt, I was unsuccessful. I accidentally bought a product that was $100, which is more than my
budget of $30. Either way, the initial search results were not
good. Next time, I will do search[“variety pack of chips”]
and then check if the results meet the dairy free and the $30
budget constraints. I will continue to refine my searches so
that I can find more products.
Previous Trial Instruction: i need gluten free vegetarian
smoked peppered bacon - 4 ounce (pack of 2), and price
lower than 40.00 dollars [Search]
Action: search[gluten free vegetarian smoked peppered bacon 4 ounce pack of 2] Observation: [Back to Search] Page
1 (Total results: 50) [Next >] [B06Y96MXJV] Smoked
Bacon Sea Salt 3-Pack: Smoked Bacon Chipotle, Smoked
Bacon and Onion, Smoked Peppered Bacon - All-Natural
Sea Salts Slowly Smoked Over Alderwood - No Gluten, No
MSG, Non-GMO (12 total oz.) $29.99 [B06Y96N1KG]
Spicy Hot Pepper Sea Salt 3-Pack: Ghost Pepper, Jalapeno,
Habanero - All Natural, Delicious Blends of Peppers & Sea
Salt - Gluten-Free, Kosher, No MSG, Non-GMO (12 total
oz.) $29.99 [B07GJTKYJQ] Louisville Vegan Jerky - 5
Flavor Variety Pack, Non-GMO Soy Protein, Gluten-Free
(Black Pepper, Buffalo Dill, Pepperoni, Maple Bacon, and
Carolina BBQ, 3 Ounces) $42.99
22
-----
**Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models**
Action: think[The first two items are not what I want. The
third item is not what I want, but it is the closest one.]
Observation: OK.
Action: click[B07GJTKYJQ] Observation: [Back to
Search] [< Prev] Louisville Vegan Jerky - 5 Flavor Variety
Pack, Non-GMO Soy Protein, Gluten-Free (Black Pepper,
Buffalo Dill, Pepperoni, Maple Bacon, and Carolina BBQ,
3 Ounces) Price: $42.99 Rating: N.A. [Description] [Features] [Reviews] [Buy Now]
Action: think[The price is too high and it is not what I want.]
Observation: OK.
Action: click[< Prev] Observation: [Back to Search] Page
1 (Total results: 50) [Next >] [B06Y96MXJV] Smoked
Bacon Sea Salt 3-Pack: Smoked Bacon Chipotle, Smoked
Bacon and Onion, Smoked Peppered Bacon - All-Natural
Sea Salts Slowly Smoked Over Alderwood - No Gluten,
No MSG, Non-GMO (12 total oz.) 29.99 [B06Y96N1KG]
Spicy Hot Pepper Sea Salt 3-Pack: Ghost Pepper, Jalapeno,
Habanero - All Natural, Delicious Blends of Peppers &
Sea Salt - Gluten-Free, Kosher, No MSG, Non-GMO (12
total oz.) 29.99 [B07GJTKYJQ] Louisville Vegan Jerky - 5
Flavor Variety Pack, Non-GMO Soy Protein, Gluten-Free
(Black Pepper, Buffalo Dill, Pepperoni, Maple Bacon, and
Carolina BBQ, 3 Ounces) 42.99
Action: think[The first two items are not what I want.]
Observation: OK.
Action: click[Next >] Observation: Invalid action!
Action: think[I need to refine my search.] Observation: OK.
Action: click[< Back to Search] Observation: Invalid action!
Action: click[Back to Search]
STATUS: FAIL
Reflection: In this attempt, I was unsuccessful. The initial
search results were not the products that I was looking for.
Next time, I will do search[“vegetarian bacon”] for a broader
search. I will check that the new results will fulfill the gluten
free and 4 ounce pack of 2 constraints. I will continue to
refine my searches so that I can find more products.
Previous trial: trajectory Reflection:”’
23
-----
| [
"Andy, Zhou",
"Kai, Yan",
"Michal, Shlapentokh-Rothman",
"Haohan, Wang",
"Yu-Xiong, Wang"
] | 2024-06-05T00:00:00 | ICML 2024 Poster | true | 77 | 1 | null | http://arxiv.org/abs/2310.04406 | https://arxiv.org/abs/2310.04406 | https://www.semanticscholar.org/paper/700bd9681f1b9e9e2212e10415d27b11c7e6836b |
Learn to Solve Algebra Word Problems Using Quadratic Programming | This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014). | This paper presents a new algorithm to automatically solve algebra word problems via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. | # Learn to Solve Algebra Word Problems Using Quadratic Programming
**Lipu Zhou, Shuaixiang Dai, and Liwei Chen**
Baidu Inc., Beijing, China
[email protected], {daishuaixiang, chenliwei}@baidu.com
This paper presents a new algorithm to
automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the
word problem into a set of equation system templates extracted from the training
data. To obtain a robust decision surface,
we train a log-linear model to make the
margin between the correct assignments
and the false ones as large as possible.
This results in a quadratic programming
(QP) problem which can be efficiently
solved. Experimental results show that our
algorithm achieves 79.7% accuracy, about
10% higher than the state-of-the-art baseline (Kushman et al., 2014).
**1** **Introduction**
An algebra word problem describes a mathematical problem which can be typically modeled by
an equation system, as demonstrated in Figure 1.
Seeking to automatically solve word problems is
a classical AI problem (Bobrow, 1964). The word
problem solver is traditionally created by the rulebased approach (Lev et al., 2004; Mukherjee and
Garain, 2008; Matsuzaki et al., 2013). Recently,
using machine learning techniques to construct the
solver has become a new trend (Kushman et al.,
2014; Hosseini et al., 2014; Amnueypornsakul
and Bhat, 2014; Roy et al., 2015). This is based
on the fact that word problems derived from the
same mathematical problem share some common
semantic and syntactic features due to the same
underlying logic. Our method follows this trend.[1]
To solve a word problem, our algorithm analyzes all the possible ways to assign the numbers
1Our code is available at http://pan.baidu.com/
s/1dD336Sx
817
|Col1|Our method|Kushman’s method|
|---|---|---|
|Word problem|An amusement park sells 2 kinds of tickets. Tickets for children cost $ 1.50. Adult tickets cost $ 4. On a certain day, 278 people entered the park. On that same day the admission fees collected totaled $ 792. How many children were admitted on that day? How many adults were admitted?|2|
|Template|u u n 0 1 2 1 n u n u n 0 2 1 3 2 4|u1 u1 n 0 1 2 1 n u2 n u2 n 0 2 1 3 2 4|
|Assignment|2,1.5, 4, 278, 792 n,n,n,n 1 2 3 4|2,1.5, 4, 278, 792,nouns n,n,n,n,u 1,u 1,u 2,u2 1 2 3 4 1 2 1 2|
|Possible Assignment|5 4 3 2 120|174 5 4 3 2 10022520|
_u2_ _n1_ 0 _u11_
_u1_ _n3_ _u2_ _n4_ 0 _n2_
_u12_ _n1_ 0
_u12_ _n3_ _u22_ _n4_ 0
4 3 2 120
5 4 3 2 10022520
Figure 1: Comparison between our algorithm and
(Kushman et al., 2014). Nouns are boldfaced.
in the word problem to a set of equation system
templates. Kushman et al. (2014) also consider
filling the equation system templates to generate
the candidate equations. But Kushman’s template
contains number slots (e.g. n1, n2, n3, n4 in Figure 1) and unknown slots (e.g. _u[1]1[,][ u]1[2][,][ u]2[1][,][ u]2[2]_
in Figure 1). They separately consider assigning
nouns into the unknown slots and numbers into
the number slots, as demonstrated in Figure 1. As
filling the unknown slots is closely related to the
number slots assignment, we only consider assigning the number slots, and design effective features
to describe the relationship between numbers and
unknowns. This scheme significantly reduces the
hypothesis space, as illustrated in Figure 1, which
benefits the learning and inference processes.
We use a log-linear model to describe the template selection and number assignment. To learn
the model parameters of such problem, maximizing the log-likelihood objective is generally
adopted (Kwiatkowski et al., 2010; Kushman et
al., 2014). The key difficulty of this method is
that calculating the gradient of the objective function needs to sum over exponentially many samples. Thus, it is essential to approximate the gradient. For instance, Kushman et al. (2014) use
-----
**3** **Learning and Inference**
**3.1** **Learning**
Using (1), we obtain the difference between the
log-probability of a correct derivation yijk[c]
_[∈Y][i]_
and a false one yijl[f]
_[∈Y][i][ as:]_
ln P _yijk[c]_ _[|][x][i][;][ θ]_ _−_ ln P _yijl[f]_ _[|][x][i][;][ θ]_
=θ _φ _ _xi, yijk[c]_ _φ_ _xi, yijl[f]_ (2)
_·_ _−_
Note that the subtraction in (2) cancels the denominator of (1) which contains extensive computation. To decrease the generalization error of the
learned model, we would like the minimal gap between the correct derivations and the false ones as
large as possible. In practice, we may not find a
decision hyperplane to perfectly separate the correct and the false derivations. Generally, this can
be solved by introducing a slack variable ξijkl
_≥_
0 (Bishop, 2006) for each constraint derived from
(2). Define ϕ _xi, yijk[c]_ _[, y]ijl[f]_ = φ _xi, yijk[c]_
_−_
_φ_ _xi, yijl[f]_ . For _xi_, the resulting optimiza_∀_ _∈X_
tion problem is:
beam search to approximately calculate the gradient. This method can not exploit all the training
samples. Thus the resulting model may be suboptimal. Motivated by the work (Taskar et al.,
2005; Li, 2014), we adopt the max-margin objective. This results in a QP problem and opens the
way toward an efficient learning algorithm (Koller
and Friedman, 2009).
We evaluate our algorithm on the benchmark
dataset provided by (Kushman et al., 2014). The
experimental results show that our algorithm significantly outperforms the state-of-the-art baseline (Kushman et al., 2014).
**2** **Problem Formulation**
Our word problem solver is constructed by training a log-linear model to find the correct mapping
from a word problem to an equation.
**Notations: Let X denote the set of training word**
problems, and T denote the set of equation system templates abstracted from X as (Kushman et
al., 2014). _xi is the i-th word problem in X_ .
Assume Tj is the j-th equation system template
in, and _Tj_ = _n[1]Tj_ _[, n]T[2]_ _j_ _[,][ · · ·][, n]T[m]j_ is the
_T_ _N_
set of number slots ofn _Tj, where m representso_
the size of NTj . Denote the numbers in xi by
_Nxi =_ _n[1]xi[, n]x[2]i[,][ · · ·][, n]x[l]_ _i_, where l represents
the size of _xi. Assuming l_ _m, we further de-_
N _≥_
fine πijk a sequence of m numbers chosen from
_Nxi without repetition. Given πijk, we can map_
_Tj to an equation system eijk by filling the num-_
ber slots NTj of Tj sequently with the numbers in
_πijk. Solving eijk, we can obtain the correspond-_
ing solution sijk. To simplify the notation, we define yijk = (Tj, πijk, eijk, sijk) the k-th derivation
give xi and Tj, and let Yi denote the set of all possible yijk given xi and T . Therefore, to correctly
solve xi is to find the correct yijk ∈Yi.
**Probabilistic Model: As (Kushman et al., 2014),**
we use the log-linear model to define the probability of yijk ∈Yi given xi:
_e[θ][·][φ][(][x][i][,y][ijk][)]_
_p(yijk_ _xi; θ) =_ (1)
_|_
_e[θ][·][φ][(][x][i][,y][′][ijk][)]_
_y[′]ijk_ _i_
_∈Y_
P
where θ is the parameter vector of the model, and
_φ (xi, yijk) denotes the feature function. We adopt_
the max-margin objective (Vapnik, 2013) to directly learn the decision boundary for the correct
derivations and the false ones.
818
arg min [1]
2 _[∥][θ][∥][2][ +][ C]_
_ξijkl_ (3)
_i,j,k,l_
X
_s.t. θ_ _ϕ_ _xi, yijk[c]_ _[, y]ijl[f]_ 1 _ξijkl, ξijkl_ 0
_·_ _≥_ _−_ _≥_
The parameter C is used to balance the slack variable penalty and the margin. This is a QP problem
and has been well studied (Platt, 1999; Fan et al.,
2008).
According to the Karush-Kuhn-Tucker (KKT)
condition, only a part of the constraints is active
for the solution of (3) (Bishop, 2006). This leads
to an efficient learning algorithm called constraint
generation (Koller and Friedman, 2009; Felzenszwalb et al., 2010). Specifically, an initial model
is trained by a randomly selected subset of the
constraints. Next this model is used to check the
constraints and at most N false deviations that are
erroneously classified by this model are collected
for each word problem. These constraints are then
added to train a new model. This process repeats
until converges. Our experimental results show
that this process converges fast.
**3.2** **Inference**
When we obtain the model parameter θ, the inference can be performed by finding the maximum
-----
**Single slot features**
Relation between numbers and the question
sentence.
Position of a number w.r.t a comparative word.
Context of a number.
Is one or two?
Is a multiplier?
Is between 0 and 1?
**Slot pair features**
Relation between two numbers.
Context similarity between two numbers.
Does there exist coreference relationship?
Are two numbers both multipliers?
Are two numbers in the same sentence or continuous sentences?
Information of raw path and dependency path
between two numbers
One number is larger than another.
**Solution features**
Is integer solution?
Is positive solution?
Is between 0 and 1?
Table 1: Features used in our algorithm.
value of (1). This can be simplified by computing
arg max _θ_ _φ(xi, yijk)_ (4)
_yijk_ _i_ _·_
_∈Y_
As we only consider assigning the number slots of
the templates in T, generally, the size of the possible assignments per word problem is bearable, as
shown in the Table 2. Thus we simply evaluate all
the yijk _i. The one with the largest score is_
_∈Y_
considered as the solution of xi.
**4** **Features**
A feature vector φ (xi, yijk) is calculated for each
word problem xi and derivation yijk pair. As
Kushman (2014), a feature is associated with a signature related to the template of yijk. We extract
three kinds of features, i.e., single slot features,
slot pair features and solution features. Unless
otherwise stated, single slot and slot pair features
are associated with the slot and slot pair signature
of the equation system template, respectively, and
solution features are generated for the signature of
the equation system template. Table 1 lists the features used in our algorithm. The detailed description is as follows.
819
**4.1** **Single Slot Features**
To reduce the search space, we only consider the
assignment of the number slots of the template. It
seems that our algorithm will lose the information
about the unknown. But such information can be
recovered by the features that include the information of the question sentence. Specifically, we
associate a number with all the nouns in the same
sentence sorted by the length of the dependence
path between them. For instance, [$, tickets, children] is the sorted noun list for 1.5 in Figure 1.
Assume the n-th noun of the nouns associated to a
given number is the first noun that appears in the
question sentence. We quantify the relationship
between a number and a queried entity by the reciprocal of n. For instance, in Figure 1, “children”
appears in the question sentence, and it is the third
noun associated to 1.5. So the value of this feature
is 1/3. A larger value of this feature means a number more likely relates to the queried entity. The
maximum value of this feature is 1. Thus we introduce a feature to indicate whether this special case
occurs. We also use a feature to indicate whether
a number appears in the question sentence.
The comparative meaning is sensitive to both
the comparative words and the position of a number relative to them. For example, “one number
is 3 less than twice another” is different to “one
number is 3 more than twice another”, but equal to
“twice a number is 3 more than another”. To account for this, we use the comparative words coupled with the position of a number relative to them
as features.
On the other hand, we use the lemma, part of
speech (POS) tag and the dependence type related
to the word within a widow [-5, +5] around a number as features. Besides, if the POS tag or the
named entity tag of a number is not labeled as a
general number, we also import these tags together
with the first noun and the dependence type related
to the number as features.
Additionally, the numbers 1 and 2 are usually
used to indicate the number of variables, such as
“the sum of two numbers”. To capture such usage,
we use a feature to denote whether a number is
one or two as (Kushman et al., 2014). Since such
usage appears in various kinds of word problems,
this feature does not contain the slot signature. We
also generate features to indicate whether a number belongs to (0, 1), and whether it is a multiplier,
such as twice, triple.
-----
**5** **Experiments**
**Dataset: The dataset used in our experiment is**
provided by (Kushman et al., 2014). Equivalent equation systme templates are automatically
merged. The word problems are parsed by (Manning et al., 2014). The version of the parser is the
same as (Kushman et al., 2014). The performance
of our algorithm is evaluated by comparing each
number of the correct answer with the calculated
one, regardless of the ordering. We report the average accuracy of 5-fold cross-validation.
**Learning: We use liblinear (Fan et al., 2008) to**
solve the QP problem. The parameter C in (3)
is set to 0.01 in all the following experiments. We
randomly select 300 false derivations of each word
problem to form the initial training set. We add at
most 300 false derivations for each word problem
during the constraint generation step, and use 5fold cross-validation to avoid overfitting. We stop
iterating when the cross-validation error becomes
worse or the training error converges or none new
constraints are generated.
**Supervision Level: We consider the learning with**
two different levels of supervision. In the first
case, the learning is conducted by providing the
equation and the correct answer of every training
sample. In the second case, the correct answer is
available for every training sample but without the
equation. Instead, all the templates are given, but
the correspondence between the template and the
training sample is not available. During learning,
the algorithm should evaluate every derivation of
each template to find the true one.
**Results: Table 2 lists the learning statistics for our**
algorithm and (Kushman et al., 2014). We can observe that the number of possible alignments per
word problem of our algorithm is much smaller
than (Kushman et al., 2014). However, the number of all the false alignments is still 80K. Using the constraint generation algorithm (Koller and
Friedman, 2009), only 9K false alignments are
used in the quadratic programming. We trained
our model on a Intel i5-3210M CUP and 4G RAM
laptop. Kushman’s algorithm (2014) needs much
more memory than our algorithm and can not run
on a general laptop. Therefore, we tested their algorithm on a workstation with Intel E5-2620 CPU
and 128G memory. As shown in Table 2, their algorithm takes more time than our algorithm.
Table 3 lists the accuracy of our algorithm and
Kushman’s algorithm (2014). It is clear that our
**4.2** **Slot Pair Features**
Assume n1 and n2 are two numbers in a word
problem. Suppose NP1 and NP2 are the lists of
nouns associated to n1 and n2 (described in section 4.1), respectively. We evaluate the relationship r (n1, n2) between n1 and n2 by:
max
_noun[i]1[∈][NP][1][,]_
_noun[j]2[∈][NP][2]_
_s.t. noun[i]1[=][noun][j]2_
_ord_ _noun[i]1_ + ord _noun[j]2_
where ord(·) denotes the index of a noun
in NPi (i = 1, 2), starting from 1. A larger
_r (n1, n2) means n1 and n2 are more related. The_
maximum value of r (n1, n2) is 1, which occurs
when the first nouns of NP1 and NP2 are equal.
We use a feature to indicate whether r (n1, n2) is
1. This feature helps to import some basic rules
of the arithmetic operation, e.g., the units of summands should be the same.
If two slots are symmetric in a template (e.g.,
_n2 and n3 in Figure 1), the contexts around both_
numbers are generally similar. Assume CT1 and
_CT2 are two sets of certain tags within a window_
around n1 and n2, respectively. Then we calculate
the contextual similarity between n1 and n2 by:
_sim (ST1, ST2) =_
_[|][ST]ST[1]1[ ∩]_ _[ST]ST[2]2[|]_
_|_ _∪_ _|_
In this paper, the tags include the lemma, POS tag
and dependence type, and the window size is 5.
Besides, we exploit features to denote whether
there exists coreference relationship between any
elements of the sentences where n1 and n2 locate,
and whether two numbers are both multipliers. Finally, according to (Kushman et al., 2014), we
generate features related to the raw path and dependence path between two numbers, and use the
numeric relation between them as a feature to import some basic arithmetic rules, such as the positive summands are smaller than their sum. We also
include features to indicate whether two numbers
are in the same sentence or continuous sentences.
**4.3** **Solution Features**
Many word problems are math problems about the
real life. This background leads the solutions of
many word problems have some special numerical
properties, such as the positive and integer properties used by (Kushman et al., 2014). To capture
such fact, we introduce a set of features to describe
the solution properties.
820
-----
|Problem|Example|
|---|---|
|Lexicalized features can not gener- alize well for unseen words.|A woman is paid 20 dollars for each day she works and forfeits a 5 dollars for each day she is idle. At the end of 25 days she nets 450 dollars. How many days did she work?|
|Can not deal with compli- cated noun phrases.|The probability that San Francisco plays in the next super bowl is nine times the probability that they do not play in the next super bowl. The probability that San Francisco plays in the next super bowl plus the probabil- ity that they do not play is 1. What is the probability that San Francisco plays in the next super bowl?|
Table 5: The problems of our algorithm.
**6** **Conclusion and Future work**
In this paper, we present a new algorithm to learn
to solve algebra word problems. To reduce the
possible derivations, we only consider filling the
number slots of the equation system templates,
and design effective features to describe the relationship between numbers and unknowns. Additionally, we use the max-margin objective to train
the log-linear model. This results in a QP problem that can be efficiently solved via the constraint
generation algorithm. Experimental results show
that our algorithm significantly outperforms the
state-of-the-art baseline (Kushman et al., 2014).
Our future work will focus on studying the performance of applying nonlinear kernel function to
the QP problem (3), and using the word embedding vector (Bengio et al., 2003; Mikolov et al.,
2013) to replace current lexicalized features. Besides, we would like to compare our algorithm
with the algorithms designed for specific word
problems, such as (Hosseini et al., 2014).
**7** **Acknowledgments**
This work is supported by the National Basic
Research Program of China (973 program No.
2014CB340505). We would like to thank Hua
Wu and the anonymous reviewers for their helpful
comments that improved the work considerably.
|Mean negative samples|80K|
|---|---|
|Mean negative samples used in learning|9K|
|Mean time for feature extraction|22m|
|Mean training time|7.3m|
|Mean feature extraction and training time of (Kushman et al., 2014)|83m|
|# Alignments per problem of (Kushman et al., 2014)|4M|
|# Alignments per problem of our algo- rithm|1.9K|
Mean negative samples 80K
Mean negative samples used in learning 9K
Mean time for feature extraction 22m
Mean training time 7.3m
Mean feature extraction and training
83m
time of (Kushman et al., 2014)
# Alignments per problem of (Kushman
4M
et al., 2014)
# Alignments per problem of our algo1.9K
rithm
Table 2: Learning statistics.
|Algorithm|Accuracy|
|---|---|
|Our algorithm fully supervised|79.7%|
|Our algorithm weakly supervised|72.3%|
|Kushman’s algorithm (2014) fully supervised|68.7%|
Table 3: Algorithm comparison.
|Feature Ablation|Accuracy|
|---|---|
|Without single slot features|70.4%|
|Without slot pair features|69.3%|
|Without solution features|71.8%|
Table 4: Ablation study for fully supervised data.
algorithm obtains better result. The result of the
weakly supervised data is worse than the fully supervised one. But this result is still higher than
Kushman’s fully supervised result.
Table 4 gives the results of our algorithm with
different feature ablations. We can find that all the
features are helpful to get the correct solution and
none of them dramatically surpasses the others.
**Discuss: Although our algorithm gives a better re-**
sult than (Kushman et al., 2014), there still exist
two main problems that need to be further investigated, as demonstrated in Table 5. The first problem is caused by our feature for semantic representation. Our current lexicalized feature can not
generalize well for the unseen words. For example, it is hard for our algorithm to relate the word
“forfeits” to “minus”, if it does not appear in the
training corpus. The second problem is caused by
the fact that our algorithm only considers the single noun as the entity of a word problem. Thus
when the entity is a complicated noun phrase, our
algorithm may fail.
821
-----
**References**
Bussaba Amnueypornsakul and Suma Bhat. 2014.
Machine-guided solution to mathematical word
problems.
Yoshua Bengio, Rjean Ducharme, Pascal Vincent, and
Christian Janvin. 2003. A neural probabilistic language model. _Journal of Machine Learning Re-_
_search, 3(6):1137–1155._
Christopher M Bishop. 2006. Pattern recognition and
_machine learning. springer._
Daniel G Bobrow. 1964. Natural language input for a
computer problem solving system.
Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A
library for large linear classification. The Journal of
_Machine Learning Research, 9:1871–1874._
Pedro F Felzenszwalb, Ross B Girshick, David
McAllester, and Deva Ramanan. 2010. Object
detection with discriminatively trained part-based
models. Pattern Analysis and Machine Intelligence,
_IEEE Transactions on, 32(9):1627–1645._
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533._
Daphne Koller and Nir Friedman. 2009. Probabilistic
_graphical models: principles and techniques. MIT_
press.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. ACL (1), pages 271–
281.
Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic ccg grammars from logical form with higherorder unification. In Proceedings of the 2010 con_ference on empirical methods in natural language_
_processing, pages 1223–1233. Association for Com-_
putational Linguistics.
Iddo Lev, Bill MacCartney, Christopher D Manning,
and Roger Levy. 2004. Solving logic puzzles: From
robust processing to precise semantics. In Proceed_ings of the 2nd Workshop on Text Meaning and In-_
_terpretation, pages 9–16. Association for Computa-_
tional Linguistics.
Hang Li. 2014. Learning to rank for information retrieval and natural language processing. Synthesis
_Lectures on Human Language Technologies, 7(3):1–_
121.
822
Christopher D. Manning, Mihai Surdeanu, John Bauer,
Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd
_Annual Meeting of the Association for Computa-_
_tional Linguistics: System Demonstrations, pages_
55–60.
Takuya Matsuzaki, Hidenao Iwane, Hirokazu Anai,
and Noriko Arai. 2013. The complexity of math
problems–linguistic, or computational. In Proceed_ings of the Sixth International Joint Conference on_
_Natural Language Processing, pages 73–81._
Tomas Mikolov, Wen Tau Yih, and Geoffrey Zweig.
2013. Linguistic regularities in continuous spaceword representations. In HLT-NAACL.
Anirban Mukherjee and Utpal Garain. 2008. A review
of methods for automatic understanding of natural
language mathematical problems. Artificial Intelli_gence Review, 29(2):93–122._
John C. Platt. 1999. Fast training of support vector
machines using sequential minimal optimization. In
_B. Scho04lkopf, C. Burges and A. Smola (Eds.), Ad-_
_vances in kernel methods - Support vector learning._
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transac_tions of the Association for Computational Linguis-_
_tics, 3:1–13._
Ben Taskar, Vassil Chatalbashev, Daphne Koller, and
Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceed_ings of the 22nd international conference on Ma-_
_chine learning, pages 896–903. ACM._
Vladimir Vapnik. 2013. The nature of statistical learn_ing theory. Springer Science & Business Media._
-----
| [
"Lipu, Zhou",
"Shuaixiang, Dai",
"Liwei, Chen"
] | 2015-01-01T00:00:00 | null | false | 75 | 9 | null | http://aclweb.org/anthology/D15-1096 | null | https://www.semanticscholar.org/paper/414483f5d80802284885003d6b0bfc8a10f61d42 |
Neural Math Word Problem Solver with Reinforcement Learning | Sequence-to-sequence model has been applied to solve math word problems. The model takes math problem descriptions as input and generates equations as output. The advantage of sequence-to-sequence model requires no feature engineering and can generate equations that do not exist in training data. However, our experimental analysis reveals that this model suffers from two shortcomings: (1) generate spurious numbers; (2) generate numbers at wrong positions. In this paper, we propose incorporating copy and alignment mechanism to the sequence-to-sequence model (namely CASS) to address these shortcomings. To train our model, we apply reinforcement learning to directly optimize the solution accuracy. It overcomes the “train-test discrepancy” issue of maximum likelihood estimation, which uses the surrogate objective of maximizing equation likelihood during training while the evaluation metric is solution accuracy (non-differentiable) at test time. Furthermore, to explore the effectiveness of our neural model, we use our model output as a feature and incorporate it into the feature-based model. Experimental results show that (1) The copy and alignment mechanism is effective to address the two issues; (2) Reinforcement learning leads to better performance than maximum likelihood on this task; (3) Our neural model is complementary to the feature-based model and their combination significantly outperforms the state-of-the-art results. | Experimental results show that the copy and alignment mechanism is effective to address the two issues and Reinforcement learning leads to better performance than maximum likelihood on this task; and the neural model is complementary to the feature-based model and their combination significantly outperforms the state-of-the-art results. | null | [
"Danqing, Huang",
"Jing, Liu",
"Chin-Yew, Lin",
"Jian, Yin"
] | 2018-08-01T00:00:00 | null | false | 75 | 14 | null | https://www.semanticscholar.org/paper/Neural-Math-Word-Problem-Solver-with-Reinforcement-Huang-Liu/caeb950e503872a903e18a3b259424e3cc3c6006 | null | https://www.semanticscholar.org/paper/caeb950e503872a903e18a3b259424e3cc3c6006 |
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Large language models have made significant progress in various language tasks, yet they still struggle with complex mathematics. In this paper, we propose ToRA a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the utilization of external tools (e.g., computation libraries and symbolic solvers), thereby amalgamating the analytical prowess of language and the computational efficiency of tools. To train ToRA, we curate interactive tool-use trajectories on mathematical datasets, apply imitation learning on the annotations, and propose output space shaping to further refine models' reasoning behavior. As a result, ToRA models significantly outperform open-source models on 10 mathematical reasoning datasets across all scales with 13%-19% absolute improvements on average. Notably, ToRA-7B reaches 44.6% on the competition-level dataset MATH, surpassing the best open-source model WizardMath-70B by 22% absolute. ToRA-34B is also the first open-source model that achieves an accuracy exceeding 50% on MATH, which significantly outperforms GPT-4's CoT result, and is competitive with GPT-4 solving problems with programs. Additionally, we conduct a comprehensive analysis of the benefits and remaining challenges of tool interaction for mathematical reasoning, providing valuable insights for future research. | This paper proposes ToRA a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the utilization of external tools, thereby amalgamating the analytical prowess of language and the computational efficiency of tools. | ## TORA: A TOOL-INTEGRATED REASONING AGENT
#### FOR MATHEMATICAL PROBLEM SOLVING
**Zhibin Gou[1][,][2][∗], Zhihong Shao[1][,][2][∗], Yeyun Gong[2][†], Yelong Shen[2]**
**Yujiu Yang[1][†], Minlie Huang[1][†], Nan Duan[2], Weizhu Chen[2]**
1Tsinghua University 2Microsoft
{gzb22,szh19}@mails.tsinghua.edu.cn
{yegong,yeshe,nanduan,wzchen}@microsoft.com
ABSTRACT
Large language models have made significant progress in various language tasks,
yet they still struggle with complex mathematics. In this paper, we propose TORA,
a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the
utilization of external tools (e.g., computation libraries and symbolic solvers),
thereby amalgamating the analytical prowess of language and the computational
efficiency of tools. To train TORA, we curate interactive tool-use trajectories on
mathematical datasets, apply imitation learning on the annotations, and propose
output space shaping to further refine models’ reasoning behavior. As a result,
TORA models significantly outperform open-source models on 10 mathematical
reasoning datasets across all scales with 13%-19% absolute improvements on average. Notably, TORA-7B reaches 44.6% on the competition-level dataset MATH,
surpassing the best open-source model WizardMath-70B by 22% absolute. TORACODE-34B is also the first open-source model that achieves an accuracy exceeding
50% on MATH, which significantly outperforms GPT-4’s CoT result, and is competitive with GPT-4 solving problems with programs. Additionally, we conduct a
comprehensive analysis of the benefits and remaining challenges of tool interaction
for mathematical reasoning, providing valuable insights for future research[1].
|Col1|Col2|Col3|Col4|Col5|48.1|Col7|Col8|Col9|Col10|49.7|Col12|Col13|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
||44.6||||||||||||
||||||||||||||
||||||||||||||
|||||||||||22.7|||
||||||||||||||
|7.2|10.7||9.2||14.0||14.414.9||||||
||||||||||||||
|4.1|||6.3||||||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||81.684.3|||
||||72.6 54.9|||||75.8 63.9 57.855.2||||||||
|||||||||||||||||
|41.3|||||51.1|||||||||||
|||||||||||||||||
|13.3|||||24.3|||||||||||
|||||||||||||||||
|||||||||||||||||
GPT-4-Code ChatGPT-Code Base WizardMath
GPT-4 ChatGPT SFT ToRA
MATH GSM8k
60 100
50 48.1 49.7 81.6 84.3
44.6 80 72.6 75.8
40 63.9
60 54.9 57.8 55.2
51.1
30
41.3
22.7 40
20
Accuracy (%) 14.0 14.4 14.9 Accuracy (%) 24.3
10 7.2 10.7 6.3 9.2 20 13.3
4.1
0 0
7B 13B 70B 7B 13B 70B
Figure 1: Comparing TORA with baselines on LLaMA-2 base models from 7B to 70B. TORA
models exhibit remarkable improvements over previous state-of-the-art approaches across all scales.
In particular, TORA-70B notably outperforms GPT-4’s CoT result on MATH and attains comparable
results to GPT-4 solving problems with code.
_∗Equal contribution. See Contributions section for details. Work done during an internship at Microsoft._
_†Corresponding authors._
[1Code and models will be available at https://github.com/microsoft/ToRA.](https://github.com/microsoft/ToRA)
-----
Problem: Suppose that the sum of the squares of two complex Let’s use `Sympy` to calculate and print all
numbers 𝑥 and 𝑦 is 7 and the sum of their cubes is 10. List all possible values for 𝑥+ 𝑦,
possible values for 𝑥+ 𝑦, separated by commas. 𝑟"
from sympy import symbols, Eq, solve, simplify
We are given that 𝑥[!] + import math
𝑦[!] = 7 and 𝑥["] + 𝑦["] = def possible_values():
10. We can factor 𝑥["] + def solution(): x, y = symbols('x y’)
𝑦["] to get 𝑥+ 𝑦 [(]𝑥[!] − x = symbols('x’), y = symbols('y') eq1 = Eq(x**2 + y**2, 7)
𝑥𝑦+ 𝑦[!][)]. x**2 + y**2 = 7x**3 + y**3 = 10 eq2 = Eq(x**3 + y**3, 10) 𝑎"
solutions = solve((eq1,eq2), (x, y))
Thus, we can write sums = [simplify(sol[0] + sol[1]) for sol in
10 = 𝑥+𝑦 7 . It x = (-1 + math.sqrt(17)) / 2 solutions]
follows that 𝑥+ 𝑦 must y = (-1 - math.sqrt(17)) / 2 return sums
divide 10. print(possible_values())
# Get all possible values for x+y
The only possibilities result = [x + y, -x - y] >>> [-5, -5, 1, 1, 4, 4]
are that 𝑥+ 𝑦 is >>> SyntaxError: cannot assign to Removing duplicates, the possible values for 𝑜"
1,2, 5 or 10. ✘ expression. ✘ 𝑥+ 𝑦 are \boxed{-5,1, 4}. ✓ 𝑟!
Rationale Program Output Rationale **(c) Tool-integrated**
**Reasoning**
(Format used by ToRA)
**(a) Rationale-based** **(b) Program-based**
Figure 2: Examples of three reasoning formats for mathematical reasoning: (a) Rationale-based
methods (e.g., CoT prompting) generate step-by-step natural language rationales, (b) Program-based
methods (e.g., PAL prompting) solve tasks with program synthesis, and (c) our proposed Toolintegrated Reasoning format interleaves rationales with program-based tool use. For brevity, we
present a simple example of single-round tool interaction, where the model creates rationale r1 for
analysis, writes program a1 to call an external solver, obtains the execution output o1, and then
generates rationale r2 to finalize the answer.
1 INTRODUCTION
Large language models (LLMs), such as GPT-4 (OpenAI, 2023) and PaLM-2 (Anil et al., 2023), have
demonstrated remarkable progress in a wide range of language tasks, particularly in the longstanding
challenge of mathematical reasoning (Feigenbaum et al., 1963; Hosseini et al., 2014). However,
open-source models, such as LLaMA-2 (Touvron et al., 2023a;b) and Falcon (Penedo et al., 2023),
still struggle with advanced mathematical reasoning tasks.
Existing works improve mathematical performance of language models either with step-by-step
natural language reasoning (Wei et al., 2022) as illustrated in Fig 2 (a), or by synthesizing and
executing programs to obtain the answers (Gao et al., 2022; Chen et al., 2022), as depicted in Fig 2
(b). Both approaches exhibit complementary advantages. Natural language is suitable for semantic
analysis, planning, and abstract reasoning (e.g., commonsense reasoning), but struggles with precise
computation, symbolic manipulation, and algorithmic processing. Conversely, programs excel in
rigorous operations, and can outsource intricate calculations to specialized tools like equation solvers.
To leverage the benefits of both natural language reasoning and program-based tool use, we train
open-source models such as LLaMA-2 to reason in a way where natural language reasoning is
interleaved with program-based tool use synergistically (as depicted in Fig 2 (c)), thereby largely
reducing the gap with closed-source models like GPT-4 in mathematical reasoning. Specifically, we
first design the interleaving format of reasoning, curate corresponding interactive tool-use trajectories
for mathematical problems from the popular GSM8k (Cobbe et al., 2021) and MATH (Hendrycks
et al., 2021) dataset, and then apply imitation learning on the high-quality annotations, leading to a
better performance than any existing open-source model. Furthermore, since the curated data is far
from exhausting all valid trajectories for a problem, relying solely on imitation learning restricts a
model’s output space, hindering the flexibility in exploring plausible trajectories during testing. To
improve the diversity of plausible reasoning steps and mitigate improper tool-use behavior, we apply
_output space shaping which additionally trains the models on both self-sampled valid trajectories_
and invalid ones that have been corrected by a teacher model (e.g., a 34B model can serve as the
teacher for a 7B model). Output space shaping significantly boosts reasoning performance, allowing
-----
① **Imitation Learning**
Tool-integrated Reasoning ToRA-Corpus
Fine-tune
Valid Trajectories
Problem Output Rationale …
LLM M
② **Output Space Shaping** Valid Trajectories
✓ ✓ Fine-tune
Problem Output Rationale ✘ ✓
M Output Rationale ToRA
✘ M’ ✓
Output Sampling Teacher Correction
Figure 3: Training TORA contains two steps. ① **Imitation Learning: Prompt LLMs like GPT-4 to**
generate Tool-integrated Reasoning trajectories (TORA-CORPUS) and use this corpus to fine-tune a
model M; ② **Output Space Shaping: Sample diverse tool-use trajectories with M, keep the valid**
ones, correct the invalid ones with a teacher model M[′], and retrain M on the union of sampled valid
trajectories, corrected ones, and the initial TORA-CORPUS to obtain TORA.
open-source models to attain an accuracy exceeding 50% on the competition-level MATH dataset for
the first time.
We evaluate the resulting suite of Tool-integrated Reasoning Agents (TORA) ranging from 7B to
70B on 10 diverse mathematical reasoning datasets. As shown in Fig 1, TORA series significantly
outperform open-source models across all scales. Notably, on the competition-level MATH dataset,
TORA-7B outperforms the previous SoTA WizardMath-70B (Luo et al., 2023) by 22% absolute.
TORA-CODE-34B beats GPT-4’s CoT result (Bubeck et al., 2023) by 8.3% absolute (50.8% vs.
42.5%), and is competitive with GPT-4 solving problems with code (GPT-4-Code, 51.8%). In addition,
we analyze the benefits and remaining challenges of tool interaction for mathematical reasoning,
providing valuable insights for future work.
2 TORA: TOOL-INTEGRATED AGENTS FOR MATHEMATICAL REASONING
2.1 OVERVIEW
TORA series solve challenging mathematical problems by leveraging both natural language reasoning
and program-based tool use. As shown in Fig 2 (c), given a mathematical problem q, TORA reasons
with natural language, producing r1. When reaching a point where program-based tool use is more
appropriate for the subsequent task, e.g., equation solving, TORA generates a program a1 for tool use
following natural language guidance r1. The execution output o1 will be fed to TORA for subsequent
processing including tool use adjustments, sub-tasks solving, or answer finalization. We repeat the
process until the model places its answer within “\boxed{}”. The resulting trajectory is denoted as
_τ = r1a1o1...rn_ 1an 1on 1rn, where rn contains the answer.
_−_ _−_ _−_
Fig 3 presents the training pipeline of TORA. We first collect interactive tool-use trajectories on
popular mathematical datasets. We then apply imitation learning on the resulting annotations, as well
as output space shaping to further refine models’ reasoning behavior.
2.2 COLLECTING INTERACTIVE TOOL-USE TRAJECTORIES
Existing mathematical reasoning datasets primarily contain annotations in either natural language or
code, posing a challenge for training tool-integrated agents due to the absence of interactive tool-use
annotations. To address this, we utilize GPT-4 to synthesize high-quality trajectories on the GSM8k
and MATH training sets. We select GSM8k and MATH as they exhibit diverse reasoning patterns,
spanning multiple domains and difficulty levels.
-----
**Algorithm 1 Inference of Tool-Integrated Reasoning**
**Require: problem q, model G, prompt ℘, external tools E, stop condition Stop(·), maximum iteration rounds n**
1:2: τ for0 ← i ←"" 1 to n do _▷_ Trajectory Initialization
4:3: **ifri Stop ∼** P(Gr(i·|) then℘ _⊕_ _q ⊕_ _τi−1)_ _▷_ Rationale Generation (Eq. 1)▷ Stopping Criteria
5: **return τi** 1 _ri_
6: **end if** _−_ _⊕_
7:8:9: _aoτiii ←E ∼_ PτiG((a1·|i)℘ _⊕riq ⊕aτi_ _i−1o ⊕i_ _ri)_ _▷_ Program Generation (Eq. 2)▷ Trajectory Update (Eq. 3)▷ Tool Execution
10: end for ← _−_ _⊕_ _⊕_ _⊕_
11: return τn
**Prompt Curation** We compose instructions along with diverse few-shot examples, utilizing an interleaved format as depicted in Fig 2 (c). These examples showcase interactive tool usage trajectories,
incorporating descriptive variable names and combined program outputs. Please refer to Appendix C
for the assembled prompts.
**Inference Procedure** We follow Algorithm 1 and feed GPT-4 (G) with the composed prompt ℘ to
generate a tool-use trajectory τ for each question q from the training set. The trajectory is initialized
as an empty string τ0, for each interaction round i, we first generate a rationale:
_ri_ P ( _℘_ _q_ _τi_ 1) (1)
_∼_ _G_ _·|_ _⊕_ _⊕_ _−_
where ⊕ means concatenation. If ri includes an answer within “\boxed{}” (i.e., the stopping
condition Stop(ri)), we cease generation, otherwise the model continues to write a program for tool
use:
_ai_ P ( _℘_ _q_ _τi_ 1 _ri)_ (2)
_∼_ _G_ _·|_ _⊕_ _⊕_ _−_ _⊕_
In line with Gou et al. (2023), if the model triggers the code execution stop words like “‘‘‘output”,
we supply it with the corresponding execution message and outputfacilitating the generation of subsequent steps. Then, we update the trajectory by concatenating it oi by calling tools with oi ←E(ai),
with the newly generated rationale ri, program ai, and output oi:
_τi ←_ _τi−1 ⊕_ _ri ⊕_ _ai ⊕_ _oi_ (3)
We repeat the above interaction process until we reach the maximum rounds n.
**Trajectory Sampling** We set n = 3 and perform inference using GPT-4 with greedy decoding,
retaining trajectories that yield correct answers. For questions where GPT-4 fails with greedy
decoding, we apply nucleus sampling with a sample size of 10 and keep up to 4 valid trajectories per
question. Ultimately, we successfully annotate trajectories for 98.2% of GSM8k questions and 83.1%
of MATH questions. After filtering out invalid trajectories with tool-use errors or wrong answers,
we obtain 16k annotations which constitute our dataset TORA-CORPUS. Table 1 compares TORACORPUS with recently proposed mathematical reasoning datasets, while Table 5 in the Appendix
displays MATH annotation accuracy details.
2.3 TRAINING
**Imitation Learning** We apply imitation learning on TORA-CORPUS by minimizing negative
log-likelihood loss on the trajectory τ conditioned on the problem q:
_n−1_
_−_ log Pθ(ri+1ai+1|q, r1...oi) (4)
_i=1_
X
= arg min
_M_ _θ_
_q,τ_
where M is the resulting model. After imitation learning, we can simply apply the same procedure
in Algorithm 1 by setting prompt to empty ℘ = "" for inference. Imitation learning leads to
state-of-the-art mathematical reasoning performance despite the small scale of TORA-CORPUS.
-----
Table 1: Compared with mathematical reasoning datasets, TORA-CORPUS uniquely combines
natural language rationales with program-based tool usage. Note that TORA-CORPUS only employ
questions from the original training set of MATH and GSM8k.
**Methods** **#Annotation** **Tool** **Interleaving LLM Used** **Source**
RFT (Yuan et al., 2023) _>100k_ ✗ ✗ LLaMA-2 GSM8k
Open-Platypus Lee et al. (2023) 25k ✗ ✗ GPT-4 11 datasets with MATH
WizardMath (Luo et al., 2023) _>96k_ ✗ ✗ ChatGPT MATH & GSM8k
Lila (Mishra et al., 2022) 134k ✓(PoT) ✗ - 20 datasets with MATH & GSM8k
MathInstruct (Yue et al., 2023) 260k ✓(PoT) ✗ GPT-4 14 datasets with MATH & GSM8k
TORA-CORPUS (ours) 16k ✓ ✓ GPT-4 MATH & GSM8k
**Output Space Shaping** For each question, TORA-CORPUS mostly demonstrates only one valid
interactive tool-use trajectory, which may restrict a model’s output space, rendering it inflexible in
exploring plausible trajectories during testing. We therefore propose output space shaping in order to
encourage the diversity of plausible reasoning steps and reduce improper tool-use behavior.
To explore diverse valid trajectories, we apply nucleus sampling to imitation learning models M to
sample 64 trajectories per training question q, following the inference procedure in Section 2.2. We
retain valid trajectories with correct answers and no tool-use errors. As many samples are duplicates,
to further improve diversity and in an attempt to correct models’ improper behavior, we seek to
leverage invalid trajectories as well. We observe that trajectories with wrong answers are mostly
incorrect halfway (Li et al., 2023), and the preceding reasoning is still plausible; in other words, we
can obtain valid trajectories by correcting the subsequent portions. Specifically, a wrong trajectory
_τ_, when written in text, can be represented as a sequence of lines separated by line breaks, i.e.,
_τ = l1...lm, where m is the total number of lines in_ _τ_ . We enumerate possible preceding portions of
wrong trajectories, i.e., _τ_ [: j] = l1...lj, and leverage a teacher model to complete the subsequent
e _M[′]_
steps with greedy decoding: τ P _′_ ( _q_ _τ_ [: j]) where we abuse the notation P _′_ ( ) to denote
e _←_ _M_ _·|_ _⊕_ e _M_ _·_
the interactive tool use process following Section 2.2. Finally, corrected trajectories as well as valid
e
trajectory samples will be used for model training, thereby shaping the output space.
e
In our experiments, we always use CodeLLaMA-34B trained on TORA-CORPUS as the teacher
model, and apply sampling with the CodeLLaMA series (ranging from 7B to 34B, with imitation
learning). We obtain a total of 233k distinct valid trajectory samples and 69k corrected ones. From
this combined dataset, we randomly select up to 4 trajectories per GSM8k and MATH problem,
merge them with TORA-CORPUS, and then train all TORA models on the resulting 69k annotations.
3 EXPERIMENTS
3.1 IMPLEMENTATION DETAILS
We fine-tuned LLaMA-2 (Touvron et al., 2023b) and CodeLLaMA (Rozière et al., 2023) series
(ranging from 7B to 70B) using TORA-CORPUS with output space shaping, yielding the TORA and
TORA-CODE series respectively. We used a learning rate of 2e-5 by default except that we used 1e-5
for the 34B and 70B models. We set the global batch size to 128 and used a linear scheduler with a
3% warm-up period for 3 epochs. We trained all models with DeepSpeed ZeRO Stage3 (Rajbhandari
et al., 2021) and Flash-Attention 2 (Dao, 2023). We used greedy decoding for all results, with the
maximum sequence length set to 2,048 and the maximum number of tool executions set to 3.
3.2 EVALUATION SETUP
**Datasets We evaluated models on GSM8k (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021),**
along with 8 out-of-distribution datasets, namely GSM-Hard (Gao et al., 2022), SVAMP (Patel et al.,
2021), ASDIV (Miao et al., 2020), TabMWP (Lu et al., 2023), SingleEQ, SingleOP, AddSub, and
MultiArith (Koncel-Kedziorski et al., 2016), as illustrated in Table 4 in Appendix. The 10 assorted
datasets collectively encompass mathematical problems spanning basic arithmetic to competition
level, covering middle and high school curricula and various mathematical domains. The problem
formats comprise tabular-based, free-form, and multiple-choice questions, ensuring a thorough
assessment of the model’s mathematical reasoning aptitude.
-----
Table 2: Results on 10 mathematical reasoning tasks. MAWPS results are averaged over four tasks:
Singleeq, Singleop, Addsub, and MultArith. Vanilla models are tested with CoT. The best results in
each section are in blue, the second-best results are underlined, while the results of our best model
are bolded. _[∗]_ ZS: Zero-shot inference without demonstrations.
**AVG**
|Model Size Tools ZS∗|GSM8k MATH GSM-Hard SVAMP TabMWP ASDiv MAWPS|
|---|---|
|Used for training?|✓ ✓ ✗ ✗ ✗ ✗ ✗|
|Col1|Proprietary Models|Col3|
|---|---|---|
|GPT-4 - ✗ ✗ GPT-4 (PAL) - ✓ ✗|92.0 42.5 64.7 93.1 67.1 91.3 97.6 94.2 51.8 77.6 94.8 95.9 92.6 97.7|78.3 86.4|
|ChatGPT - ✗ ✗ ChatGPT (PAL) - ✓ ✗ Claude-2 - ✗ ✗ PaLM-2 540B ✗ ✗|80.8 35.5 55.9 83.0 69.1 87.3 94.6 78.6 38.7 67.6 77.8 79.9 81.0 89.4 85.2 32.5 - - - - - 80.7 34.3 - - - - -|72.3 73.3 - -|
**Used for training?** ✓ ✓ ✗ ✗ ✗ ✗ ✗
Open-Source Models
|LLaMA-2 7B ✗ ✗ LLaMA-2 SFT 7B ✗ ✓ LLaMA-2 RFT 7B ✗ ✓ Platypus-2 7B ✗ ✗ WizardMath 7B ✗ ✓ CodeLLaMA (PAL) 7B ✓ ✗ Toolformer† 7B ✓ ✓ TORA 7B ✓ ✓ TORA-CODE 7B ✓ ✓|13.3 4.1 7.8 38.0 31.1 50.7 60.9 41.3 7.2 16.1 31.9 27.8 47.4 60.0 51.2 - - - - - - 14.4 5.4 8.6 36.7 26.5 47.9 58.4 54.9 10.7 20.6 57.3 38.1 59.1 73.7 34.0 16.6 33.6 59.0 47.3 61.4 79.6 - - - 29.4 - 40.4 44.0 68.8 40.1 54.6 68.2 42.4 73.9 88.8 72.6 44.6 56.0 70.4 51.6 78.7 91.3|29.4 33.1 - 28.3 44.9 47.4 - 62.4 66.5 (+19)|
|---|---|---|
|LLaMA-2 13B ✗ ✗ LLaMA-2 SFT 13B ✗ ✓ LLaMA-2 RFT 13B ✗ ✓ Platypus-2 13B ✗ ✗ WizardMath 13B ✗ ✓ CodeLLaMA (PAL) 13B ✓ ✗ TORA 13B ✓ ✓ TORA-CODE 13B ✓ ✓|24.3 6.3 13.6 43.1 39.5 56.3 70.4 51.1 9.2 22.3 46.3 35.8 58.6 75.0 55.3 - - - - - - 23.7 7.1 14.3 50.7 45.3 55.1 69.6 63.9 14.0 28.4 64.3 46.7 65.8 79.7 39.9 19.9 39.0 62.4 59.5 65.3 86.0 72.7 43.0 57.3 72.9 47.2 77.2 91.3 75.8 48.1 60.5 75.7 65.4 81.4 92.5|36.2 42.6 - 38.0 51.8 53.1 65.9 71.3 (+18)|
|LLaMA-1 RFT 34B ✗ ✓ CodeLLaMA (PAL) 34B ✓ ✗ TORA-CODE 34B ✓ ✓|57.9 - - - - - - 53.3 23.9 49.4 71.0 63.1 72.4 91.5 80.7 50.8 63.7 80.5 70.5 84.2 93.3|- 60.7 74.8 (+14)|
|LLaMA-2 70B ✗ ✗ LLaMA-2 SFT 70B ✗ ✓ LLaMA-2 RFT 70B ✗ ✓ Platypus-2 70B ✗ ✗ WizardMath 70B ✗ ✓ LLaMA-2 (PAL) 70B ✓ ✗ TORA 70B ✓ ✓|57.8 14.4 36.0 73.6 57.5 76.0 92.4 69.3 14.9 39.0 64.0 53.0 71.3 84.8 64.8 - - - - - - 45.9 15.0 24.6 74.3 47.3 72.7 91.1 81.6 22.7 50.3 80.0 49.8 76.2 86.2 55.2 18.3 50.0 74.6 59.5 71.9 92.8 84.3 49.7 67.2 82.7 74.0 86.8 93.8|58.2 56.6 - 53.0 63.8 60.3 76.9 (+13)|
**Metrics We report accuracies of predicted answers. For numerical values, we perform rounding,**
while for expressions, we employ sympy [2] for parsing. Since the SingleEQ, SingleOP, AddSub, and
MultiArith datasets focus on different aspects of basic arithmetic, we report their average results
under the collective term MAWPS (Koncel-Kedziorski et al., 2016) for all methods.
3.3 BASELINES
**Proprietary Models We present results from an array of SoTA LLMs, such as OpenAI’s GPT-4,**
ChatGPT (gpt-3.5-turbo), Google’s PaLM-2, and Anthropic’s Claude-2. By default, we report
CoT prompting results, and include PAL (Gao et al., 2022) prompting results for selected models.
**Open-Source Models Base models comprise LLaMA-2 and CodeLLaMA with CoT and PAL**
prompting. Supervised Fine-Tuning (SFT) employs CoT rationales from the original GSM8k and
MATH dataset (15k samples) for fine-tuning. Rejection sampling Fine-Tuning (RFT) leverages
multiple models to generate diverse reasoning paths for fine-tuning (Yuan et al., 2023). WizardMath
augments data using ChatGPT, and conducts SFT and RLHF. Platypus-2, the top model on the LLM
Leaderboard [3], is fine-tuned with Open-Platypus reasoning datasets (Lee et al., 2023). We also
compare TORA with Toolformer (Schick et al., 2023) which is a model trained to utilize calculators.
[2https://www.sympy.org](https://www.sympy.org)
[3https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-----
Table 3: Results on MATH subtopics.
|Model Size Tool|Intermediate Number Counting & Precalculus Geometry Prealgebra Algebra Algebra Theory Probability|Overall|
|---|---|---|
Proprietary Models
|ChatGPT (PAL) - ✓ GPT-4 (PAL) - ✓|18.5 19.2 23.2 48.5 43.0 62.7 45.4 32.8 29.3 38.0 58.7 61.0 73.9 59.1|38.7 51.8|
|---|---|---|
Open-Source Models
|WizarMath 7B ✗ TORA-CODE 7B ✓ w/o Shaping 7B ✓ w/o Rationale 7B ✓|6.2 6.0 6.5 7.6 9.5 18.1 16.3 35.1 (+28.9) 31.0 (+25.0) 24.0 (+17.5) 50.7 (+43.1) 30.6 (+21.1) 55.0 (+36.9) 61.7 (+45.4) 29.7 (-5.4) 25.1 (-5.9) 17.7 (-6.3) 46.9 (-3.8) 32.3 (+1.7) 51.9 (-3.1) 55.7 (-6.0) 25.5 (-9.6) 14.7 (-16.3) 15.4 (-8.6) 45.9 (-4.8) 29.7 (-0.9) 51.0 (-4.0) 52.4 (-9.3)|11.2 44.6 (+33.4) 40.2 (-4.4) 36.8 (-7.8)|
|---|---|---|
|WizarMath 13B ✗ TORA-CODE 13B ✓ w/o Shaping 13B ✓ w/o Rationale 13B ✓|6.4 6.6 11.5 9.6 11.0 28.5 21.1 35.7 (+29.3) 31.1 (+24.5) 25.7 (+14.2) 55.6 (+46.0) 39.5 (+28.5) 58.7 (+30.2) 66.7 (+45.6) 32.8 (-2.9) 26.0 (-5.1) 24.0 (-1.7) 52.6 (-3.0) 38.4 (-1.1) 55.6 (-3.1) 61.2 (-5.5) 27.1 (-8.6) 15.8 (-15.3) 16.3 (-9.4) 50.4 (-5.2) 36.9 (-2.6) 55.3 (-3.4) 56.5 (-10.2)|15.0 48.1 (+33.1) 44.6 (-3.5) 40.2 (-7.9)|
|TORA-CODE 34B ✓ w/o Shaping 34B ✓ w/o Rationale 34B ✓|38.9 34.6 27.3 57.8 41.4 63.7 67.7 34.0 (-4.9) 29.9 (-4.7) 24.6 (-2.7) 55.6 (-2.2) 41.6 (+0.2) 63.8 (+0.1) 61.4 (-6.3) 28.3 (-10.6) 15.8 (-18.8) 18.0 (-9.3) 52.4 (-5.4) 40.7 (-0.7) 58.6 (-5.1) 57.5 (-10.2)|50.8 47.4 (-3.4) 41.9 (-8.9)|
|WizarMath 70B ✗ TORA 70B ✓ w/o Shaping 70B ✓ w/o Rationale 70B ✓|9.1 13.4 16.9 16.5 19.2 42.7 35.0 37.1 (+28) 30.4 (+17) 30.1 (+13.2) 54.6 (+38.1) 40.3 (+21.1) 64.9 (+22.2) 66.6 (+31.6) 33.8(-3.3) 28.9(-1.5) 27.1(-3) 53.0(-1.6) 38.0(-2.3) 62.2(-2.7) 64.2(-2.4) 26.7(-10.4) 14.7(-15.7) 20.3(-9.8) 48.9(-5.7) 39.2(-1.1) 59.8(-5.1) 57.6(-9)|24.1 49.7 (+25.6) 47.3(-2.4) 41.5(-8.2)|
3.4 MAIN RESULTS
Table 2 presents the results of TORA on 10 mathematical datasets, highlighting the following
salient observations: (1) Using interleaved formatting and output space shaping, TORA consistently
surpasses prior state-of-the-art open-source models across all scales, achieving 13% to 19% absolute
improvements across 10 tasks. (2) TORA-70B substantially outperforms ChatGPT with both CoT
and PAL prompting on GSM8k (84.3% vs. 80.4%) and MATH (49.7% vs. 38.7%), while TORACODE-34B is competitive with GPT-4 solving competition-level MATH dataset with code (50.8%
vs. 51.8%). (3) The accuracy of TORA-CODE is about 5% higher than TORA of the same size,
demonstrating that continued training on code data significantly benefits program-based tool use.
**(4) While rationale-based fine-tuning negatively affects out-of-distribution generalization, TORA**
displays superior generalization. For instance, WizardMath-70B underperforms the base model on
TabMWP (49.8% vs. 57.5%), while TORA-70B effectively generalizes to this tabular reasoning task
(74.0%). (5) TORA attains fast zero-shot inference speed, averaging 1.02 tool interaction rounds per
problem, while effectively addressing problems that require interactive tool utilization.
3.5 ABLATION STUDY
Rationale-only Program-only Tool-integrated Reasoning
60
50
40
30
20
10
0
LLaMA-2-7B LLaMA-2-13B LLaMA-2-70B GPT-4
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|61.6|Col20|Col21|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||51|||||.8|||
|||||||||37.5 39|||||47.3 42.5 .2||||||||
||||||||||||||||||||||
||||||||||||||||||||||
||||33.6 31|||||.3|||||||||||||
||||||||||||||||||||||
|27|||.8||||||||||||||||||
||||||||||||||||||||||
||||||9.2|||||14.9|||||||||||
||||||||||||||||||||||
||7.2||||||||||||||||||||
Figure 4: Comparison of three formats: (1) Rationale-only: step-by-step natural language reasoning
like CoT; (2) Program-only: solving problems with programs like PAL; (3) Tool-integrated Reasoning
used by TORA: interweaving rationale and program execution to solve problems. We evaluated
GPT-4 with few-shot prompting. We trained LLaMA-2 models to reason in the three types of formats,
respectively. For a fair comparison, we do not apply output space shaping for all LLaMA-2 models.
-----
3.5.1 COMPARISONS OF FORMATTING
To evaluate the efficacy of the reasoning format adopted by TORA which interleaves rationales with
programs, we compared it with Rationale-only and Program-only formats using GPT-4 and LLaMA-2
trained with the same size of data from MATH. As shown in Fig 4, the TORA method consistently
surpasses Rationale-only and Program-only approaches. Remarkably, Using LLaMA-2, the TORA
method achieves substantial improvements of 29.0% and 6.7% over Rationale-only and Program-only,
respectively. With the closed-source GPT-4, the improvements are 19.1% and 9.8%, respectively.
This emphasizes the effectiveness of integrating natural language rationales with programs.
3.5.2 EFFECTS OF OUTPUT SPACE SHAPING
Shaping
ToRA Correction ToRA Correction ToRA
GSM8k MATH
80 50 48.1
46.7
75.8
75 73.5 74.9 45 44.6 44.6 44.6
72.6
71.1
40.2
Accuracy (%)70 68.1 Accuracy (%)40
65 35
7B 13B 7B 13B
Figure 5: Ablation on output space shaping strategies using CodeLLaMA: (1) TORA[−][Shaping]Correction [is]
trained on TORA-CORPUS without shaping. (2) TORA Correction employs only the sampling strategy
_−_
for shaping, trained with up to 4 additional valid trajectory samples per problem. (3) TORA utilizes
We assess the effectiveness of the output space shaping strategies presented in Section 2.3, specifically
sampling and correction. As shown in Fig 5: (1) Output space shaping yields a considerable average
improvement of 3.4% and 4.0% absolute for GSM8k and MATH, respectively, with greater benefits
for smaller models; (2) Applying the sampling strategy results in a 2.7% absolute improvement
on average, while additionally incorporating correction offers a modest yet significant average
|Col1|Col2|MATH|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
|||44.6 44.6 44.6||||48.1 46.7|||
||||||||||
||||||||||
|40.2|||||||||
||||||||||
|us C ry o ra sp d y on|7 ing orrecti sam 4 add tegie ace s MAT resu offe|B Code em on ples ition s pres hapi H, re lts in rs a|LLa ploy per p al tra ente ng yi spec a 2. mod|MA: (1) T s only the roblem. ( jectories d in Sectio elds a con tively, wit 7% absol est yet si|13 ORA samp 3) T per p n 2.3 sider h gre ute i gnifi|B −Sha −Co ling ORA roble, spe able ater mpro cant|ping rrection strate utili m. cifica avera bene vem avera|i g ze ll g fit en g|
improvement of 0.8% to 1.2% absolute; (3) Output space shaping benefits even the largest model
TORA-70B, with a notable improvement from 47.3% to 49.7% on MATH. These findings highlight
We investigate the benefits, detailed patterns, and remaining challenges of tool interaction for
mathematical reasoning on the challenging MATH dataset. Performance breakdowns on all subtopics
**Benefits from Tool-Integration for MATH Sub-topics As shown in Table 3, TORA outperforms**
|Col1|Col2|GSM8k|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
|||||||75.8 74.9|||
||||||||||
|||73.5 72.6 71.1|||||||
||||||||||
||||||||||
||||||||||
|68.1|||||||||
||||||||||
|e d ap he se in ve a er ve - fe A v m T|7 5: A on T ing, sam ss the g and men ller age, men 70B, ctive NAL estiga atica H ar|B blatio ORA train pling effe corr t of 3 mode whil t of with ness YSIS te th l reas e rep|n on -COR ed wi and ctive ectio .4% ls; ( e ad 0.8% a no of ou e be onin orted|output sp PUS with th up to 4 correctio ness of the n. As sho and 4.0% 2) Applyi ditionally to 1.2% table impr r shaping nefits, de g on the c in Table|13 ace out s addi n, als outp wn i abso ng th inco absol ovem strat taile halle 3.|B shapi hapin tiona o trai ut sp n Fig lute f e sa rpor ute; ent egies d pat nging|ng s g. (2 l vali ned ace s 5: (1 or G mplin ating (3) O from acro terns MA|tra ) d wi ha ) S g c ut 4 ss, T|
WizardMath by around 45% in Algebra and Number Theory, which is attributed to stimulating and
shaping tool-use behavior. Problems from the two sub-topics typically need intricate computation
and data manipulation. Algebra mainly focuses on solving equations and application problems, while
many Number Theory problems can be tackled using brute-force approaches through code.
**Patterns of Library Usage for Problem Solving Fig 6 presents the most frequently used libraries**
for different sub-topics and the corresponding accuracies of their solutions. Tool-use behavior on
different mathematical areas demonstrates distinct patterns. sympy and its internal solvers are
primarily employed for algebra-related topics. Precalculus exhibits extensive matrix operations via
matrices, resulting in a high accuracy. Number Theory depends on algorithms like gcd
and lcm. Geometry mainly uses the rational library for fraction-based computations, while the
application of other tools is limited, signifying the potential for improvement.
**Detailed Impact of Rationale on Different Topics Table 3 shows that using an interleaved format,**
in contrast to merely writing the program, leads to significant improvements across all subtopics,
especially in Precalculus, Algebra, and Geometry, where notable increases range from 8.6% to 18.8%.
-----
|Col1|Col2|Col3|Col4|sy so|mpy lvers|Col7|Col8|Col9|matri binom|ces ial|
|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||
|||||ra c|tional alculus||||algori|thm|
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|Col1|sym solv|Col3|py ers|Col5|Col6|matrice binomia|Col8|s l|Col10|Col11|Col12|Col13|Col14|Col15|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||
||||||||||||||||
||ratio calc||nal ulus|||algorith||m|||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
Library Usage Frequency for Each Topic Library Usage Accuracy for Each Topic
100 100
sympy matrices sympy matrices
80 solvers binomial 80 solvers binomial
rational algorithm rational algorithm
60 calculus 60 calculus
40 40
Frequency (%) Accuracy (%)
20 20
0 0
Int. Alg.PreCalcGeometryNum. Th.C&P PreAlgAlgebraOverall Int. Alg.PreCalcGeometryNum. Th.C&P PreAlgAlgebraOverall
Figure 6: Library usage frequency and accuracy on each sub-topic of MATH.
Appendix D.1 provides representative examples demonstrating how the rationale aids in planning,
multi-round self-correction, and finalizing answers.
**Remaining Challenges in Mathematical Reasoning for TORA Although TORA has made notable**
progress in various mathematical domains, substantial improvements are still needed in topics like
Geometry, Precalculus, and Intermediate Algebra. In Geometry, as illustrated by failure cases in
Listing 6 in Appendix, a deeper understanding of geometric space is essential, encompassing visual
modalities and interactions with images for auxiliary information, while incorporating computational
tools like SymPy offers limited benefits. For Intermediate Algebra and Precalculus problems, as
shown in Listing 5, direct brute-force solutions are often infeasible, resulting in timeout exceptions.
Addressing these challenges requires complex symbolic reasoning over algebraic expressions and the
given conditions, along with sophisticated problem-solving and proof techniques involving forward
and backward reasoning, as well as result verification.
4 RELATED WORKS
**Mathematical Reasoning Recent research has greatly improved reasoning in LLMs with step-by-**
step natural language reasoning (Wei et al., 2022; Zhou et al., 2023; Zhu et al., 2023; Huang et al.,
2022; Liang et al., 2023). However, natural language reasoning struggles with complex computations
and symbolic manipulations. To overcome the limitations, recent research has exploited tools like
calculators (Cobbe et al., 2021; Shao et al., 2022), code interpreters (Mishra et al., 2022), and symbolic
solvers (Zhang et al., 2023). Program-based methods (Gao et al., 2022; Chen et al., 2022; Shao
et al., 2023a) transform reasoning tasks into program synthesis tasks, thus offering complementary
advantages over natural language reasoning, but they face challenges in nuanced reasoning, planning,
and error handling (Gou et al., 2023), where natural language reasoning should be more suitable.
**Tool-Augmented Language Models Augmenting LLMs with tools can largely alleviate LLMs’**
limitations and improve reasoning and generation performance (Parisi et al., 2022; Mialon et al.,
2023; Yao et al., 2023). Recent work demonstrates the benefits of integrating retrievers (Borgeaud
et al., 2022; Shao et al., 2023b), search engines (Nakano et al., 2021), and multi-tool approaches
(Schick et al., 2023; Paranjape et al., 2023; Gou et al., 2023) to improve generation.
**Knowledge Distillation Knowledge distillation (KD) transfers knowledge from teacher models to**
student models (Bucilua et al., 2006; Hinton et al., 2015). Using LLM-generated trajectories forˇ
fine-tuning is a form of KD (Fu et al., 2023; Taori et al., 2023; Peng et al., 2023; Ho et al., 2023).
Our proposed TORA shows that learning interactive tool-use trajectories is a promising direction to
adapt language models to reasoning tasks.
5 CONCLUSION
This paper presents TORA, a series of novel Tool-integrated Reasoning Agents that synergistically
combines natural language rationale with program-based tool-use for mathematical problem solving.
Our approach demonstrates the potential of integrating external tools in the reasoning process,
-----
enabling language models to effectively tackle complex quantitative tasks. TORA achieves state-ofthe-art performance on 10 diverse mathematical reasoning tasks, substantially outperforming existing
rationale-based and program-based approaches. Furthermore, our systematic analysis of the benefits
and remaining challenges of tool interaction provides valuable insights for future research, paving the
way for the development of more advanced and versatile reasoning agents.
6 AUTHOR CONTRIBUTIONS
Zhibin Gou proposed the interleaved tool-use format of TORA and curated TORA-CORPUS dataset,
implemented the training and evaluation pipeline, conducted experiments and analysis on all datasets,
implemented baselines, and was a main contributor to the paper writing. Zhihong Shao proposed the
project, conducted preliminary experiments, proposed and implemented the training and evaluation
pipelines, proposed and trained all TORA models with output space shaping as well as TORA variants
in the ablation study, designed and oversaw experimental analysis, and contributed to many parts of
the paper writing. Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
Chen provided research mentorship, oversaw project coordination, and advised and contributed to
many parts of the writing.
REFERENCES
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403, 2023._
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican,
George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al.
Improving language models by retrieving from trillions of tokens. In International conference on
_machine learning, pp. 2206–2240. PMLR, 2022._
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio
Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4.
_[CoRR, abs/2303.12712, 2023. doi: 10.48550/arXiv.2303.12712. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2303.12712)_
[48550/arXiv.2303.12712.](https://doi.org/10.48550/arXiv.2303.12712)
Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. Inˇ _Proceedings_
_of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp._
535–541, 2006.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
[Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/](https://arxiv.org/abs/2110.14168)
[abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.
Edward A Feigenbaum, Julian Feldman, et al. Computers and thought, volume 7. New York
McGraw-Hill, 1963.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towards multi-step reasoning. In Andreas Krause, Emma Brunskill, Kyunghyun
Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Confer_ence on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume_
202 of Proceedings of Machine Learning Research, pp. 10421–10430. PMLR, 2023. URL
[https://proceedings.mlr.press/v202/fu23d.html.](https://proceedings.mlr.press/v202/fu23d.html)
-----
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen.
Critic: Large language models can self-correct with tool-interactive critiquing. arXiv preprint
_arXiv:2305.11738, 2023._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv
_preprint arXiv:1503.02531, 2015._
Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning teachers. In
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pp. 14852–14882, Toronto, Canada, July 2023. Association for Computational_
[Linguistics. doi: 10.18653/v1/2023.acl-long.830. URL https://aclanthology.org/](https://aclanthology.org/2023.acl-long.830)
[2023.acl-long.830.](https://aclanthology.org/2023.acl-long.830)
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to
solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference
_on Empirical Methods in Natural Language Processing (EMNLP), pp. 523–533, 2014._
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han.
Large language models can self-improve. CoRR, abs/2210.11610, 2022. doi: 10.48550/arXiv.2210.
[11610. URL https://doi.org/10.48550/arXiv.2210.11610.](https://doi.org/10.48550/arXiv.2210.11610)
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. MAWPS:
A math word problem repository. In Proceedings of the 2016 Conference of the North American
_Chapter of the Association for Computational Linguistics: Human Language Technologies, pp._
1152–1157, San Diego, California, June 2016. Association for Computational Linguistics. doi:
[10.18653/v1/N16-1136. URL https://aclanthology.org/N16-1136.](https://aclanthology.org/N16-1136)
Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. Platypus: Quick, cheap, and powerful refinement of
llms. arXiv preprint arXiv:2308.07317, 2023.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making
language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333,_
2023.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu,
and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent
debate. arXiv preprint arXiv:2305.19118, 2023.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter
Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured
mathematical reasoning. In The Eleventh International Conference on Learning Representations,
[2023. URL https://openreview.net/forum?id=DHyHRBwJUTN.](https://openreview.net/forum?id=DHyHRBwJUTN)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical
reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583,
2023.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta
Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented
language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing
English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association
_for Computational Linguistics, pp. 975–984, Online, July 2020. Association for Computational_
[Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/](https://aclanthology.org/2020.acl-main.92)
[2020.acl-main.92.](https://aclanthology.org/2020.acl-main.92)
-----
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay
Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. Lila: A unified
benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), 2022._
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher
Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted
question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
OpenAI. Gpt-4 technical report, 2023.
Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and
Marco Tulio Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models.
_arXiv preprint arXiv:2303.09014, 2023._
Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. arXiv preprint
_arXiv:2205.12255, 2022._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple
math word problems? In Proceedings of the 2021 Conference of the North American Chapter of
_the Association for Computational Linguistics: Human Language Technologies, pp. 2080–2094,_
Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.
[168. URL https://aclanthology.org/2021.naacl-main.168.](https://aclanthology.org/2021.naacl-main.168)
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli,
Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb
dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv
_[preprint arXiv:2306.01116, 2023. URL https://arxiv.org/abs/2306.01116.](https://arxiv.org/abs/2306.01116)_
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with
gpt-4. arXiv preprint arXiv:2304.03277, 2023.
Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. Zero-infinity:
Breaking the gpu memory wall for extreme scale deep learning. In Proceedings of the International
_Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–14, 2021._
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code.
_arXiv preprint arXiv:2308.12950, 2023._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761, 2023.
Zhihong Shao, Fei Huang, and Minlie Huang. Chaining simultaneous thoughts for numerical
reasoning. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Findings of the As_sociation for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates,_
_December 7-11, 2022, pp. 2533–2547. Association for Computational Linguistics, 2022. doi:_
10.18653/v1/2022.findings-emnlp.187. [URL https://doi.org/10.18653/v1/2022.](https://doi.org/10.18653/v1/2022.findings-emnlp.187)
[findings-emnlp.187.](https://doi.org/10.18653/v1/2022.findings-emnlp.187)
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. Synthetic
prompting: Generating chain-of-thought demonstrations for large language models. In Andreas
Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett
(eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu,
_Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 30706–30775._
[PMLR, 2023a. URL https://proceedings.mlr.press/v202/shao23a.html.](https://proceedings.mlr.press/v202/shao23a.html)
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. Enhancing
retrieval-augmented large language models with iterative retrieval-generation synergy. CoRR,
[abs/2305.15294, 2023b. doi: 10.48550/arXiv.2305.15294. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2305.15294)
[48550/arXiv.2305.15294.](https://doi.org/10.48550/arXiv.2305.15294)
-----
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
[https://github.com/tatsu-lab/stanford_alpaca, 2023.](https://github.com/tatsu-lab/stanford_alpaca)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian
Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin
Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar
Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana
Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor
Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan
Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang,
Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models.
_[CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/](https://doi.org/10.48550/arXiv.2307.09288)_
[10.48550/arXiv.2307.09288.](https://doi.org/10.48550/arXiv.2307.09288)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in
_[Neural Information Processing Systems, 2022. URL https://openreview.net/forum?](https://openreview.net/forum?id=_VjQlMeSB_J)_
[id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan
Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International
_[Conference on Learning Representations, 2023. URL https://openreview.net/forum?](https://openreview.net/forum?id=WE_vluYUL-X)_
[id=WE_vluYUL-X.](https://openreview.net/forum?id=WE_vluYUL-X)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling
relationship on learning mathematical reasoning with large language models. arXiv preprint
_arXiv:2308.01825, 2023._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653, 2023._
Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen.
Evaluating and improving tool-augmented computation-intensive math reasoning. arXiv preprint
_arXiv:2306.02408, 2023._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. Least-to-most prompting enables
complex reasoning in large language models. In The Eleventh International Conference on
_Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023._
[URL https://openreview.net/pdf?id=WZH7099tgfM.](https://openreview.net/pdf?id=WZH7099tgfM)
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and
Yujiu Yang. Solving math word problems via cooperative reasoning induced language models. In
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pp. 4471–4485, Toronto, Canada, July 2023. Association for Computational_
[Linguistics. doi: 10.18653/v1/2023.acl-long.245. URL https://aclanthology.org/](https://aclanthology.org/2023.acl-long.245)
[2023.acl-long.245.](https://aclanthology.org/2023.acl-long.245)
-----
A EVALUATION DATASETS
Table 4: Statistics and examples of the 10 evaluation datasets. In the main result table, we present
the average accuracy of SingleEq, SingleOp, AddSub, and MultiArith under the collective name
MAWPS.
Dataset OOD? #Samples Example Problem
The ice cream parlor was offering a deal, buy 2 scoops
of ice cream, get 1 scoop free. Each scoop cost $1.50. If
Erin had $6.00, how many scoops of ice cream should she
buy?
For a constant c, in cylindrical coordinates (r, θ, z), find
the shape described by the equation
_z = c._
(A) Line (B) Circle (C) Plane (D) Sphere (E) Cylinder (F)
Cone. Enter the letter of the correct option.
Jean has 30 lollipops. Jean eats 8714250 of the lollipops. With the remaining lollipops, Jean wants to package 8714250 lollipops in one bag. How many bags can
Jean fill?
During summer break 819058 kids from Lawrence county
go to camp and the other 668278 kids stay home. How
many more kids spent their summer break at the camp
compared to those who stayed home?
Mrs. Hilt saw an iPod for sale. The price tag said the
iPod cost $128, but a sign announced that it was on sale
for "35% off." How much would the iPod cost after the
discount?
GSM8k (Cobbe
IND 1319
et al., 2021)
MATH (Hendrycks
IND 5000
et al., 2021)
GSM-Hard (Gao
OOD 1319
et al., 2022)
SVAMP (Patel
OOD 1000
et al., 2021)
ASDiv (Miao et al.,
OOD 2215
2020)
TabMWP (Lu et al., OOD 1000
2023)
Stem Leaf
2 3, 6, 7, 8, 8
3 0, 7, 9
4 1, 5
5
6 2, 3, 3, 4, 8, 8
7 3, 4, 4, 7, 9
8 5, 5
Read the table regarding “eight lifting results
(lbs)”. Mr. Morrison, a
P.E. teacher, wrote down
how much weight each
of his students could lift.
How many people lifted
at least 28 pounds?
SingleEq (Koncel- Alyssa’s dog had puppies. She gave 7 to her friends.She
Kedziorski et al., OOD 508 now has 5 puppies left. How many puppies did she have
2016) to start with?
SingleOp (Koncel- Rachel removes 47 bottle caps from a jar. There were
Kedziorski et al., OOD 562 originally 87 bottle caps in the jar. How many bottle caps
2016) are left in the jar?
AddSub (KoncelKedziorski et al.,
2016)
Sam went to 14 football games this year. He went to 29
OOD 395
games last year. How many football games did Sam go to
in all?
MultArith (Koncel- Paige had 43 math problems and 12 science problems for
Kedziorski et al., OOD 600 homework. If she finished 44 of the problems at school,
2016) how many problems did she have to do for homework?
We present statistics and examples of the ten evaluation datasets in Table 4.
-----
B ADDITIONAL EXPERIMENTS
B.1 ACCURACIES OF CLOSED-SOURCE MODELS ON MATH
Table 5: Accuracies of ChatGPT and GPT-4 on the MATH dataset, with breakdown w.r.t. different
mathematical subjects. We apply PAL prompting and the Tool-integrated Reasoning method used by
TORA to the two closed-source models.
|Model Tool|Intermediate Number Counting & Precalculus Geometry Prealgebra Algebra Algebra Theory Probability|Overall|
|---|---|---|
||Test Set||
|ChatGPT (PAL) ✓ GPT-4 (PAL) ✓ GPT-4 (Tool-integrated Reasoning) ✓|18.5 19.2 23.2 48.5 43.0 62.7 45.4 32.8 29.3 38.0 58.7 61.0 73.9 59.1 40.0 37.2 44.1 68.9 67.3 82.2 75.8|38.7 51.8 61.6|
||Training Set||
|GPT-4 (Tool-integrated Reasoning) ✓ w/ best@10 ✓|51.0 51.5 42.5 77.4 72.2 89.8 85.1 72.9 70.0 58.9 91.6 81.7 95.5 96.3|64.3 83.1|
Table 5 presents the detailed accuracies of GPT-4 on the MATH dataset. The Tool-integrated
Reasoning method used by TORA significantly outperforms PAL prompting when directly applied to
the closed-source GPT-4, further demonstrating the benefits of synergizing natural language reasoning
and program-based tool use.
As described in Section 2.2, we annotated interactive tool-use trajectories for the training questions
from MATH with GPT-4. GPT-4 achieves a success rate below 65% using greedy decoding. As
MATH is originally annotated with natural language rationales, to improve the annotation success
rate, we tried to provide GPT-4 with the rationales as hints. However, when using this method, GPT-4
tends to replicate the hints and ignore tool-use outputs especially when the outputs are inconsistent
with the hints, thus failing to produce high-quality trajectories. Hence, we deferred the utilization of
the already-annotated natural language rationales for future investigations. Instead, we employed
nucleus sampling to recall valid trajectories for questions that remained unsolved through greedy
decoding. This approach significantly boosted annotation accuracy to 83.1%.
B.2 EFFECTS OF # VALID TRAJECTORIES FOR OUTPUT SPACE SHAPING
|Col1|Col2|GSM8k|Col4|Col5|
|---|---|---|---|---|
||13B|75|75 .4|.8|
|73|7B 74 .5|.8|||
||||72|.6|
||69|70 .4|.9||
|68|.1||||
||||||
|Col1|Col2|MATH|Col4|Col5|
|---|---|---|---|---|
||13B||48|.1|
||7B|45|.8||
|44|45 .6|.1 43|44 .4|.6|
||41|.8|||
|40|.2||||
||||||
GSM8k MATH
78 50
13B 75.8 13B 48.1
76 7B 74.8 75.4 48 7B
74 73.5 46 45.1 45.8
72.6 44.6 44.6
72 70.9 44 43.4
41.8
Accuracy (%) 70 69.4 Accuracy (%) 42
68.1 40.2
68 40
66 38
0 1 2 4 0 1 2 4
#Trajectories for Shaping #Trajectories for Shaping
Figure 7: Effects of using difference numbers of additional valid trajectories per question for output
space shaping.
As shown in Fig 7, it is beneficial to increase the number of additional valid trajectories for output
space shaping.
-----
C PROMPTS
We present instructions and example few-shot prompts of Tool-integrated Reasoning for querying
GPT-4.
Listing 1: Prompts of Tool-integrated Reasoning for GPT-4.
Integrate step-by-step reasoning and Python code to solve math problems
using the following guidelines:
- Analyze the question and write functions to solve the problem; the
function should not take any arguments.
- Present the final result in LaTeX using a ‘\boxed{}‘ without any units.
- Utilize the ‘pi‘ symbol and ‘Rational‘‘ from Sympy for $\pi$ and
fractions, and simplify all fractions and square roots without
converting them to decimal values.
Here are some examples you may refer to:
--
Question: Convert the point $(0, -3 \sqrt{3}, 3)$ in rectangular
coordinates to spherical coordinates. Enter your answer in the form $
(\rho,\theta,\phi),$ where $\rho > 0,$ $0 \le \theta < 2 \pi,$ and $0
\le \phi \le \pi.$
Solution:
To convert the point $(x, y, z)$ in rectangular coordinates to spherical
coordinates $(\rho, \theta, \phi)$, we use the following formulas:
1. $\rho = \sqrt{x^2 + y^2 + z^2}$
2. $\theta = \arctan{\frac{y}{x}}$
3. $\phi = \arccos{\frac{z}{\rho}}$
Let’s write a function:
‘‘‘python
from sympy import sqrt, atan2, acos, pi
def rectangular_to_spherical():
x, y, z = 0, -3*sqrt(3), 3
rho = sqrt(x**2 + y**2 + z**2)
theta = atan2(y, x)
phi = acos(z/rho)
return rho, theta, phi
spherical_coordinates = rectangular_to_spherical()
print(spherical_coordinates)
‘‘‘
‘‘‘output
(6, -pi/2, pi/3)
‘‘‘
The required form of the answer is $(\rho, \theta, \phi)$, where $\rho >
0$, $0 \le \theta < 2\pi$, and $0 \le \phi \le \pi$. Since our
initial output has $\theta = -\pi/2$, which doesn’t meet the
condition for $\theta$, we add $2\pi$ to adjust its value.
Hence, the spherical coordinates for the point $(0, -3\sqrt{3}, 3)$ are $
\boxed{(6, \frac{3\pi}{2}, \frac{\pi}{3})}$.
--
Question: Compute $1011_2 + 101_2 - 1100_2 + 1101_2$. Express your answer
in base 2.
-----
Solution:
Let’s write a single python function for binary computation:
‘‘‘python
def binary_sum_diff():
# Convert binary to decimal
num1 = int("1011", 2)
num2 = int("101", 2)
num3 = int("1100", 2)
num4 = int("1101", 2)
# Perform the operations
result = num1 + num2 - num3 + num4
# Convert the result back to binary
result_binary = format(result, "b")
return result_binary
result = binary_sum_diff()
print(result)
‘‘‘
‘‘‘output
10001
‘‘‘
The answer expressed in base 2 is $\boxed{10001_2}$.
--
Question: Find the matrix that corresponds to projecting onto the vector
$\begin{pmatrix} 2 \\ -3 \end{pmatrix}.$
Solution:
‘‘‘python
from sympy import Matrix, simplify
def projection_matrix():
v = Matrix([2, -3])
P = (v * v.T) / (v.T * v)[0]
# Simplify the elements
P = P.applyfunc(simplify)
return P
result = projection_matrix()
print(result)
‘‘‘
‘‘‘output
Matrix([[4/13, -6/13], [-6/13, 9/13]])
‘‘‘
The matrix is $\boxed{\begin{pmatrix} \frac{4}{13} & -\frac{6}{13} \ -\
frac{6}{13} & \frac{9}{13} \end{pmatrix}}.$
--
Question: Solve the inequality
\[\frac{x^2 - 25}{x + 5} < 0.\]
Solution:
Let’s use ‘sympy‘ to solve the inequality:
‘‘‘python
from sympy import symbols, simplify
-----
from sympy.solvers.inequalities import solve_univariate_inequality
from sympy.core.relational import LessThan
def solve_inequality():
x = symbols(’x’)
expression = (x**2 - 25) / (x + 5)
inequality = LessThan(expression, 0)
solution = solve_univariate_inequality(inequality, x, relational=False
)
simplified_solution = simplify(solution)
return simplified_solution
result = solve_inequality()
print(result)
‘‘‘
‘‘‘output
Union(Interval.open(-oo, -5), Interval.Lopen(-5, 5))
‘‘‘
The answer is $\boxed{(-\infty,-5)\cup(-5,5)}$
--
D EXAMPLES
D.1 SUCCESS CASES
Listing 2: Success case for TORA: Rationale for planning.
Question: What real values of $x$ are not in the domain of $f(x)=\frac
{1}{|x^2-6x+8|+|x^2+x-6|}$?
Solution:
The domain of the function is all real numbers, so we need to find the
values of x for which the denominator of the function is zero.
The denominator of the function is the sum of the absolute values of the
two terms inside the absolute value.
Each of these terms is a quadratic in x, and the roots of a quadratic
equation are real unless the equation is identically zero.
So, we need to find the values of x for which the two quadratic equations
are identically zero.
‘‘‘python
from sympy import symbols, Abs, solveset, S
from sympy.core.relational import Ne
def non_domain_values():
x = symbols(’x’)
# These are the values of x for which the function is not defined
roots1 = solveset(Ne(x**2 - 6*x + 8, 0), x, domain=S.Reals)
roots2 = solveset(Ne(x**2 + x - 6, 0), x, domain=S.Reals)
# The roots of a quadratic equation are always in the domain of the
function, so we need to find the values of x that are not in the
union of the roots
non_domain_values = S.Reals - (roots1.union(roots2))
return non_domain_values
result = non_domain_values()
print(result)
‘‘‘
‘‘‘output
{2}
-----
‘‘‘
The real value of $x$ that is not in the domain of the function is $\
boxed{2}$.
Listing 3: Success case for TORA: Self-Correcting Errors with tool feedback.
Question: The difference between two perfect squares is 133. What is the
smallest possible sum of the two perfect squares?
Solution:
We can solve this problem by noting that any perfect square can be
written as $(a + b)^2$ or $(a - b)^2$ for some integers $a$ and $b$.
So we can rewrite the question as the following problem: Find the
minimum possible value for $(a + b)^2 + (a - b)^2$ given that $(a + b)
^2 - (a - b)^2 = 133$.
‘‘‘python
from sympy import symbols, Eq, solve
def min_sum_perfect_squares():
"""
Find the smallest possible sum of two perfect squares
given that the difference between them is 133.
Returns:
Integer: The minimum possible sum of the two perfect squares.
"""
a, b = symbols(’a b’)
# Equation for the difference of the squares
diff_eq = Eq((a + b)**2 - (a - b)**2, 133)
# Solve the equation for b
b_solutions = solve(diff_eq, b)
# Calculate the sum of the squares for each solution of b
sum_squares = [(a + b_sol)**2 + (a - b_sol)**2 for b_sol in
b_solutions]
# Find the minimum possible sum of the squares
min_sum_squares = min(sum_squares)
return min_sum_squares
result = min_sum_perfect_squares()
print(result)
‘‘‘
‘‘‘output
(a - 133/(4*a))**2 + (a + 133/(4*a))**2
‘‘‘
The problem seems to be more complex than anticipated and the solution
was incorrect.
Since the difference between two squares can be factored, we can write:
$x^2 - y^2 = (x + y)(x - y)$
$133$ has two pairs of factors: 1 and 133, and 7 and 19.
So, either $x+y = 133$ and $x-y = 1$, or $x+y = 19$ and $x-y = 7$.
- For (133, 1), $x = \frac{133 + 1}{2} = 67$ and $y = \frac{133 - 1}{2} =
66$.
- For (19, 7), $x = \frac{19 + 7}{2} = 13$ and $y = \frac{19 - 7}{2} = 6$.
-----
(13, 6) result in the smallest sum of the two perfect squares.
So, let’s modify the previous solution:
‘‘‘python
def min_sum_perfect_squares():
min_squares = [13**2, 6**2]
min_sum_squares = sum(min_squares)
return min_sum_squares
‘‘‘
‘‘‘output
205
‘‘‘
The smallest possible sum of the two perfect squares is $\boxed{205}$.
Listing 4: Success case for TORA: Finalizing answer.
Question:
Compute \[\sum_{k = 1}^\infty \frac{k + 2}{k! + (k + 1)! + (k + 2)!}.\]
Solution:
‘‘‘python
from sympy import symbols, Sum, factorial, oo, simplify
def infinite_sum():
expr = (k + 2) / (factorial(k) + factorial(k + 1) + factorial(k + 2))
sum_expr = Sum(expr, (k, 1, oo))
result = sum_expr.doit()
return result
result = infinite_sum()
print(result)
‘‘‘
‘‘‘output
Sum((k + 2)/(factorial(k) + factorial(k + 1) + factorial(k + 2)), (k, 1,
oo))
‘‘‘
The sum does not converge to a simple expression, so the answer is $\
boxed{\sum_{k = 1}^\infty \frac{k + 2}{k! + (k + 1)! + (k + 2)!}}$.
D.2 FAILURE CASES
Listing 5: Failure case for TORA: Failed fallback from program timeout to rationale-based
**solution.**
Question: For some positive integer $n,$ $0 < n < 180,$
\[\csc (2^3)^\circ + \csc (2^4)^\circ + \csc (2^5)^\circ + \dots + \csc
(2^{2019})^\circ = \sec n^\circ.\]Find $n.$
Solution:
‘‘‘python
from sympy import symbols, solve, csc, sec, pi
def find_n():
n = symbols(’n’)
lhs = sum(csc(2**i) for i in range(3, 2020))
rhs = sec(n)
equation = lhs - rhs
-----
solution = solve(equation, n)
# Filter out solutions that are not in the interval (0, 180)
solution = [sol.evalf() for sol in solution if 0 < sol.evalf() < 180]
return solution[0]
result = find_n()
print(result)
‘‘‘
‘‘‘output
timeout_decorator.timeout_decorator.TimeoutError: ’Timed Out’
‘‘‘
The solution to this problem is not straightforward to solve using sympy
due to the complexity of the equation and the large number of terms
in the sum.
However, we can solve this problem using a different approach. The sum on
the left hand side is a telescoping sum, which means that many of
the terms cancel out. After cancelling, we are left with $\csc
(2^{2019}) - \csc(2^3)$.
Since $2^{2019}$ is very close to $2^{2020}$, we can approximate $\csc
(2^{2019})$ as $\csc(2^{2020})$.
Then, the equation becomes $\csc(2^{2020}) - \csc(2^3) = \sec(n)$, which
simplifies to $\csc(2^{2020}) = \sec(n) + \csc(2^3)$.
Since $\csc(2^3)$ is very close to 1, we can approximate $\csc(2^3)$ as 1.
Then, the equation becomes $\csc(2^{2020}) = \sec(n) + 1$, which
simplifies to $\csc(2^{2020}) - 1 = \sec(n)$.
Since $\csc(2^{2020})$ is very close to 1, we can approximate $\csc
(2^{2020})$ as 1.
Then, the equation becomes $1 - 1 = \sec(n)$, which simplifies to $\sec(n
) = 0$.
Since $0 < n < 180$, the only solution is $n = 90$. So, the value of $n$
is $\boxed{90}$.
**[Ground Truth]**[: 82]
Listing 6: Failure case for TORA: Geometric problems.
Question: A square has a side length of 10 inches. Congruent isosceles
right triangles are cut off each corner so that the resulting octagon
has equal side lengths. How many inches are in the length of one
side of the octagon? Express your answer as a decimal to the nearest
hundredth.
[asy]
size(150);
draw((0,0)--(10,0)--(10,10)--(0,10)--(0,0),linewidth(0.7));
draw((5*sqrt(2)/(1 + sqrt(2)),0)--(0,5*sqrt(2)/(1 + sqrt(2))),linewidth
(0.7));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),0)--(10,5*sqrt(2)/(1 + sqrt(2))),
linewidth(0.7));
draw((0,(10-(5*sqrt(2)/(1+sqrt(2)))))--(5*sqrt(2)/(1 + sqrt(2)),10),
linewidth(0.7));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),10)--(10,10 - 5*sqrt(2)/(1 + sqrt(2)
)),linewidth(0.7));
-----
[/asy]
Solution:
The side length of the octagon is the hypotenuse of the right triangle
formed by the side of the square and the diagonal of the octagon
The hypotenuse is the sum of the side length of the square and the length
of the diagonal of the octagon
The length of the diagonal of the octagon is 5*sqrt(2)/(1 + sqrt(2))
‘‘‘python
from sympy import sqrt, Rational
def octagon_side_length():
square_side_length = 10
octagon_side_length = square_side_length + 5*sqrt(2)/(1 + sqrt(2))
return round(octagon_side_length, 2)
result = octagon_side_length()
print(result)
‘‘‘
‘‘‘output
12.93
‘‘‘
The length of one side of the octagon is $\boxed{12.93}$ inches.
**[Ground Truth]**[:]
Recall that the ratio of the hypotenuse to the leg of an isosceles right
triangle is $\sqrt{2}$. Looking at the removed triangles, we see that
the three segments that make up the side of square are $s/\sqrt{2}$,
$s$, and $s/\sqrt{2}$, where $s$ is the side length of the octagon.
Setting the sum of these three side lengths equal to 10 inches, we
find \begin{align*}
\frac{s}{\sqrt{2}}+s+\frac{s}{\sqrt{2}}&=10 \implies \\
s\left(\frac{1}{\sqrt{2}}+1+\frac{1}{\sqrt{2}}\right)&=10 \implies \\
s(\sqrt{2}+1)&=10 \implies \\
s&=\frac{10}{\sqrt{2}+1}=10(\sqrt{2}-1),
\end{align*} where we have rationalized the denominator twice: \[
\frac{1}{\sqrt{2}}=\frac{1}{\sqrt{2}}\cdot \frac{\sqrt{2}}{\sqrt{2}}=\
frac{\sqrt{2}}{2},
\] and \[
\frac{10}{\sqrt{2}+1}=\frac{10}{\sqrt{2}+1}\cdot\frac{\sqrt{2}-1}{\sqrt
{2}-1}=10(\sqrt{2}-1).
\] To the nearest hundredth, $s=\boxed{4.14}$ inches.
[asy]
size(150);
defaultpen(linewidth(0.7)+fontsize(10));
real s = 10/(1+sqrt(2));
draw((0,0)--(10,0)--(10,10)--(0,10)--(0,0));
draw((5*sqrt(2)/(1 + sqrt(2)),0)--(0,5*sqrt(2)/(1 + sqrt(2))));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),0)--(10,5*sqrt(2)/(1 + sqrt(2))));
draw((0,(10-(5*sqrt(2)/(1+sqrt(2)))))--(5*sqrt(2)/(1 + sqrt(2)),10));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),10)--(10,10 - 5*sqrt(2)/(1 + sqrt(2)
)));
label("$s$",(10-s/(2*sqrt(2)),10-s/(2*sqrt(2))),SW);
label("$\displaystyle{\frac{s}{\sqrt{2}}}$",(10,10-s/(2*sqrt(2))),E);
label("$\displaystyle{\frac{s}{\sqrt{2}}}$",(10,s/(2*sqrt(2))),E);
label("$s$",(10,5),E);
draw(rightanglemark((10,0),(10,10),(0,10)));[/asy]
-----
## TORA: A TOOL-INTEGRATED REASONING AGENT
#### FOR MATHEMATICAL PROBLEM SOLVING
**Zhibin Gou[1][,][2][∗], Zhihong Shao[1][,][2][∗], Yeyun Gong[2][†], Yelong Shen[3]**
**Yujiu Yang[1][†], Minlie Huang[1][†], Nan Duan[2], Weizhu Chen[3]**
1Tsinghua University 2Microsoft Research 3Microsoft Azure AI
{gzb22,szh19}@mails.tsinghua.edu.cn
{yegong,yeshe,nanduan,wzchen}@microsoft.com
ABSTRACT
Large language models have made significant progress in various language tasks,
yet they still struggle with complex mathematics. In this paper, we propose TORA,
a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the
utilization of external tools (e.g., computation libraries and symbolic solvers),
thereby amalgamating the analytical prowess of language and the computational
efficiency of tools. To train TORA, we curate interactive tool-use trajectories on
mathematical datasets, apply imitation learning on the annotations, and propose
output space shaping to further refine models’ reasoning behavior. As a result,
TORA models significantly outperform open-source models on 10 mathematical
reasoning datasets across all scales with 13%-19% absolute improvements on average. Notably, TORA-7B reaches 44.6% on the competition-level dataset MATH,
surpassing the best open-source model WizardMath-70B by 22% absolute. TORACODE-34B is also the first open-source model that achieves an accuracy exceeding
50% on MATH, which significantly outperforms GPT-4’s CoT result, and is competitive with GPT-4 solving problems with programs. Additionally, we conduct a
comprehensive analysis of the benefits and remaining challenges of tool interaction
for mathematical reasoning, providing valuable insights for future research[1].
|Col1|Col2|Col3|Col4|Col5|48.1|Col7|Col8|Col9|Col10|49.7|Col12|Col13|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
||44.6||||||||||||
||||||||||||||
||||||||||||||
|||||||||||22.7|||
||||||||||||||
|7.2|10.7||9.2||14.0||14.414.9||||||
||||||||||||||
|4.1|||6.3||||||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||81.684.3|||
||||72.6 54.9|||||75.8 63.9 57.855.2||||||||
|||||||||||||||||
|41.3|||||51.1|||||||||||
|||||||||||||||||
|13.3|||||24.3|||||||||||
|||||||||||||||||
|||||||||||||||||
GPT-4-Code ChatGPT-Code Base WizardMath
GPT-4 ChatGPT SFT ToRA
MATH GSM8k
60 100
50 48.1 49.7 81.6 84.3
44.6 80 72.6 75.8
40 63.9
60 54.9 57.8 55.2
51.1
30
41.3
22.7 40
20
Accuracy (%) 14.0 14.4 14.9 Accuracy (%) 24.3
10 7.2 10.7 6.3 9.2 20 13.3
4.1
0 0
7B 13B 70B 7B 13B 70B
Figure 1: Comparing TORA with baselines on LLaMA-2 base models from 7B to 70B. TORA
models exhibit remarkable improvements over previous state-of-the-art approaches across all scales.
In particular, TORA-70B notably outperforms GPT-4’s CoT result on MATH and attains comparable
results to GPT-4 solving problems with code.
_∗Equal contribution. See Contributions section for details. Work done during an internship at Microsoft._
_†Corresponding authors._
[1Code and models are available at https://github.com/microsoft/ToRA.](https://github.com/microsoft/ToRA)
-----
Problem: Suppose that the sum of the squares of two complex Let’s use `Sympy` to calculate and print all
numbers 𝑥 and 𝑦 is 7 and the sum of their cubes is 10. List all possible values for 𝑥+ 𝑦,
possible values for 𝑥+ 𝑦, separated by commas. 𝑟"
from sympy import symbols, Eq, solve, simplify
We are given that 𝑥[!] + import math
𝑦[!] = 7 and 𝑥["] + 𝑦["] = def possible_values():
10. We can factor 𝑥["] + def solution(): x, y = symbols('x y’)
𝑦["] to get 𝑥+ 𝑦 [(]𝑥[!] − x = symbols('x’), y = symbols('y') eq1 = Eq(x**2 + y**2, 7)
𝑥𝑦+ 𝑦[!][)]. x**2 + y**2 = 7x**3 + y**3 = 10 eq2 = Eq(x**3 + y**3, 10) 𝑎"
solutions = solve((eq1,eq2), (x, y))
Thus, we can write sums = [simplify(sol[0] + sol[1]) for sol in
10 = 𝑥+𝑦 7 . It x = (-1 + math.sqrt(17)) / 2 solutions]
follows that 𝑥+ 𝑦 must y = (-1 - math.sqrt(17)) / 2 return sums
divide 10. print(possible_values())
# Get all possible values for x+y
The only possibilities result = [x + y, -x - y] >>> [-5, -5, 1, 1, 4, 4]
are that 𝑥+ 𝑦 is >>> SyntaxError: cannot assign to Removing duplicates, the possible values for 𝑜"
1,2, 5 or 10. ✘ expression. ✘ 𝑥+ 𝑦 are \boxed{-5,1, 4}. ✓ 𝑟!
Rationale Program Output Rationale **(c) Tool-integrated**
**Reasoning**
(Format used by ToRA)
**(a) Rationale-based** **(b) Program-based**
Figure 2: Examples of three reasoning formats for mathematical reasoning: (a) Rationale-based
methods (e.g., CoT prompting) generate step-by-step natural language rationales, (b) Program-based
methods (e.g., PAL prompting) solve tasks with program synthesis, and (c) our proposed Toolintegrated Reasoning format interleaves rationales with program-based tool use. For brevity, we
present a simple example of single-round tool interaction, where the model creates rationale r1 for
analysis, writes program a1 to call an external solver, obtains the execution output o1, and then
generates rationale r2 to finalize the answer.
1 INTRODUCTION
Large language models (LLMs), such as GPT-4 (OpenAI, 2023) and PaLM-2 (Anil et al., 2023), have
demonstrated remarkable progress in a wide range of language tasks, particularly in the longstanding
challenge of mathematical reasoning (Feigenbaum et al., 1963; Hosseini et al., 2014). However,
open-source models, such as LLaMA-2 (Touvron et al., 2023a;b) and Falcon (Penedo et al., 2023),
still struggle with advanced mathematical reasoning tasks.
Existing works improve mathematical performance of language models either with step-by-step
natural language reasoning (Wei et al., 2022) as illustrated in Fig 2 (a), or by synthesizing and
executing programs to obtain the answers (Gao et al., 2022; Chen et al., 2022), as depicted in Fig 2
(b). Both approaches exhibit complementary advantages. Natural language is suitable for semantic
analysis, planning, and abstract reasoning (e.g., commonsense reasoning), but struggles with precise
computation, symbolic manipulation, and algorithmic processing. Conversely, programs excel in
rigorous operations, and can outsource intricate calculations to specialized tools like equation solvers.
To leverage the benefits of both natural language reasoning and program-based tool use, we train
open-source models such as LLaMA-2 to reason in a way where natural language reasoning is
interleaved with program-based tool use synergistically (as depicted in Fig 2 (c)), thereby largely
reducing the gap with closed-source models like GPT-4 in mathematical reasoning. Specifically, we
first design the interleaving format of reasoning, curate corresponding interactive tool-use trajectories
for mathematical problems from the popular GSM8k (Cobbe et al., 2021) and MATH (Hendrycks
et al., 2021) dataset, and then apply imitation learning on the high-quality annotations, leading to a
better performance than any existing open-source model. Furthermore, since the curated data is far
from exhausting all valid trajectories for a problem, relying solely on imitation learning restricts a
model’s output space, hindering the flexibility in exploring plausible trajectories during testing. To
improve the diversity of plausible reasoning steps and mitigate improper tool-use behavior, we apply
_output space shaping which additionally trains the models on both self-sampled valid trajectories and_
invalid ones that have been corrected by a teacher model (e.g., a 34B model can serve as the teacher
-----
① **Imitation Learning**
Tool-integrated Reasoning ToRA-Corpus
Fine-tune
Valid Trajectories
Problem Output Rationale …
LLM M
② **Output Space Shaping** Valid Trajectories
✓ ✓ Fine-tune
Problem Output Rationale ✘ ✓
M Output Rationale ToRA
✘ M’ ✓
Output Sampling Teacher Correction
Figure 3: Training TORA contains two steps. ① **Imitation Learning: Prompt LLMs like GPT-4 to**
generate Tool-integrated Reasoning trajectories (TORA-CORPUS) and use this corpus to fine-tune a
model M; ② **Output Space Shaping: Sample diverse tool-use trajectories with M, keep the valid**
ones, correct the invalid ones with a teacher model M[′], and retrain M on the union of sampled valid
trajectories, corrected ones, and the initial TORA-CORPUS to obtain TORA.
for a 7B model). Output space shaping significantly boosts reasoning, allowing open-source models
to attain an accuracy exceeding 50% on the competition-level MATH dataset for the first time.
We evaluate the resulting suite of Tool-integrated Reasoning Agents (TORA) ranging from 7B to
70B on 10 diverse mathematical reasoning datasets. As shown in Fig 1, TORA series significantly
outperform open-source models across all scales. Notably, on the competition-level MATH dataset,
TORA-7B outperforms the previous SoTA WizardMath-70B (Luo et al., 2023) by 22% absolute.
TORA-CODE-34B beats GPT-4’s CoT result (Bubeck et al., 2023) by 8.3% absolute (50.8% vs.
42.5%), and is competitive with GPT-4 solving problems with code (GPT-4-Code, 51.8%). In addition,
we analyze the benefits and remaining challenges of tool interaction for mathematical reasoning,
providing valuable insights for future work.
2 TORA: TOOL-INTEGRATED AGENTS FOR MATHEMATICAL REASONING
2.1 OVERVIEW
TORA series solve challenging mathematical problems by leveraging both natural language reasoning
and program-based tool use. As shown in Fig 2 (c), given a mathematical problem q, TORA reasons
with natural language, producing r1. When reaching a point where program-based tool use is more
appropriate for the subsequent task, e.g., equation solving, TORA generates a program a1 for tool use
following natural language guidance r1. The execution output o1 will be fed to TORA for subsequent
processing including tool use adjustments, sub-tasks solving, or answer finalization. We repeat the
process until the model places its answer within “\boxed{}”. The resulting trajectory is denoted as
_τ = r1a1o1...rn_ 1an 1on 1rn, where rn contains the answer.
_−_ _−_ _−_
Fig 3 presents the training pipeline of TORA. We first collect interactive tool-use trajectories on
popular mathematical datasets. We then apply imitation learning on the resulting annotations, as well
as output space shaping to further refine models’ reasoning behavior.
2.2 COLLECTING INTERACTIVE TOOL-USE TRAJECTORIES
Existing mathematical reasoning datasets primarily contain annotations in either natural language or
code, posing a challenge for training tool-integrated agents due to the absence of interactive tool-use
annotations. To address this, we utilize GPT-4 to synthesize high-quality trajectories on the GSM8k
and MATH training sets. We select GSM8k and MATH as they exhibit diverse reasoning patterns,
spanning multiple domains and difficulty levels.
-----
**Algorithm 1 Inference of Tool-Integrated Reasoning**
**Require: problem q, model G, prompt p, external tools E, stop condition Stop(·), maximum iteration rounds n**
1:2: τ for0 ← i ←"" 1 to n do _▷_ Trajectory Initialization
4:3: **ifri Stop ∼** P(Gr(i·|) thenp ⊕ _q ⊕_ _τi−1)_ _▷_ Rationale Generation (Eq. 1)▷ Stopping Criteria
5: **return τi** 1 _ri_
6: **end if** _−_ _⊕_
7:8:9: _aoτiii ←E ∼_ PτiG((a1·|i)p ⊕riq ⊕aτii−1o ⊕i _ri)_ _▷_ Program Generation (Eq. 2)▷ Trajectory Update (Eq. 3)▷ Tool Execution
10: end for ← _−_ _⊕_ _⊕_ _⊕_
11: return τn
**Prompt Curation** We compose instructions along with diverse few-shot examples, utilizing an interleaved format as depicted in Fig 2 (c). These examples showcase interactive tool usage trajectories,
incorporating descriptive variable names and combined program outputs. Please refer to Appendix E
for the assembled prompts.
**Inference Procedure** We follow Algorithm 1 and feed GPT-4 (G) with the composed prompt p to
generate a tool-use trajectory τ for each question q from the training set. The trajectory is initialized
as an empty string τ0, for each interaction round i, we first generate a rationale:
_ri_ P ( _p_ _q_ _τi_ 1) (1)
_∼_ _G_ _·|_ _⊕_ _⊕_ _−_
where ⊕ means concatenation. If ri includes an answer within “\boxed{}” (i.e., the stopping
condition Stop(ri)), we cease generation, otherwise the model continues to write a program for tool
use:
_ai_ P ( _p_ _q_ _τi_ 1 _ri)_ (2)
_∼_ _G_ _·|_ _⊕_ _⊕_ _−_ _⊕_
In line with Gou et al. (2023), if the model triggers the code execution stop words like “‘‘‘output”,
we supply it with the corresponding execution message and outputfacilitating the generation of subsequent steps. Then, we update the trajectory by concatenating it oi by calling tools with oi ←E(ai),
with the newly generated rationale ri, program ai, and output oi:
_τi ←_ _τi−1 ⊕_ _ri ⊕_ _ai ⊕_ _oi_ (3)
We repeat the above interaction process until we reach the maximum rounds n.
**Trajectory Sampling** We set n = 3 and perform inference using GPT-4 with greedy decoding,
retaining trajectories that yield correct answers. For questions where GPT-4 fails with greedy
decoding, we apply nucleus sampling with a sample size of 10 and keep up to 4 valid trajectories per
question. Ultimately, we successfully annotate trajectories for 98.2% of GSM8k questions and 83.1%
of MATH questions. After filtering out invalid trajectories with tool-use errors or wrong answers,
we obtain 16k annotations which constitute our dataset TORA-CORPUS. Table 1 compares TORACORPUS with recently proposed mathematical reasoning datasets, while Table 6 in the Appendix
displays MATH annotation accuracy details.
2.3 TRAINING
**Imitation Learning** We apply imitation learning on TORA-CORPUS by minimizing negative
log-likelihood loss on the trajectory τ conditioned on the problem q:
_n−1_
_−_ log PM(ri+1ai+1|q, r1...oi) (4)
_i=1_
X
_M = arg minM_
_q,τ_
where M is the resulting model. After imitation learning, we can simply apply the same procedure in
Algorithm 1 by setting prompt to empty p = "" for inference. Imitation learning leads to state-of-theart mathematical reasoning performance despite the small scale of TORA-CORPUS.
-----
Table 1: Compared with mathematical reasoning datasets, TORA-CORPUS uniquely combines
natural language rationales with program-based tool usage. Note that TORA-CORPUS only employ
questions from the original training set of MATH and GSM8k.
**Methods** **#Annotation** **Tool** **Interleaving LLM Used** **Source**
RFT (Yuan et al., 2023) _>100k_ ✗ ✗ LLaMA-2 GSM8k
Open-Platypus Lee et al. (2023) 25k ✗ ✗ GPT-4 11 datasets with MATH
WizardMath (Luo et al., 2023) _>96k_ ✗ ✗ ChatGPT MATH & GSM8k
Lila (Mishra et al., 2022) 134k ✓(PoT) ✗ - 20 datasets with MATH & GSM8k
MathInstruct (Yue et al., 2023) 260k ✓(PoT) ✗ GPT-4 14 datasets with MATH & GSM8k
TORA-CORPUS (ours) 16k ✓ ✓ GPT-4 MATH & GSM8k
**Output Space Shaping** For each question, TORA-CORPUS mostly demonstrates only one valid
interactive tool-use trajectory, which may restrict a model’s output space, rendering it inflexible in
exploring plausible trajectories during testing. We therefore propose output space shaping in order to
encourage the diversity of plausible reasoning steps and reduce improper tool-use behavior.
To explore diverse valid trajectories, we apply nucleus sampling to imitation learning models M to
sample 64 trajectories per training question q, following the inference procedure in Section 2.2. We
retain valid trajectories with correct answers and no tool-use errors. As many samples are duplicates,
to further improve diversity and in an attempt to correct models’ improper behavior, we seek to
leverage invalid trajectories as well. We observe that trajectories with wrong answers are mostly
incorrect halfway (Li et al., 2023), and the preceding reasoning is still plausible; in other words, we
can obtain valid trajectories by correcting the subsequent portions. Specifically, a wrong trajectory
_τ_, when written in text, can be represented as a sequence of lines separated by line breaks, i.e.,
_τ = l1...lm, where m is the total number of lines in_ _τ_ . We enumerate possible preceding portions of
wrong trajectories, i.e., _τ_ [: j] = l1...lj, and leverage a teacher model to complete the subsequent
e _M[′]_
steps with greedy decoding: τ P _′_ ( _q_ _τ_ [: j]) where we abuse the notation P _′_ ( ) to denote
e _←_ _M_ _·|_ _⊕_ e _M_ _·_
the interactive tool use process following Section 2.2. Finally, corrected trajectories as well as valid
e
trajectory samples will be used for model training, thereby shaping the output space.
e
In our experiments, we always use CodeLLaMA-34B trained on TORA-CORPUS as the teacher
model, and apply sampling with the CodeLLaMA series (ranging from 7B to 34B, with imitation
learning). We obtain a total of 233k distinct valid trajectory samples and 69k corrected ones. From
this combined dataset, we randomly select up to 4 trajectories per GSM8k and MATH problem,
merge them with TORA-CORPUS, and then train all TORA models on the resulting 69k annotations.
3 EXPERIMENTS
3.1 IMPLEMENTATION DETAILS
We fine-tuned LLaMA-2 (Touvron et al., 2023b) and CodeLLaMA (Rozière et al., 2023) series
(ranging from 7B to 70B) using TORA-CORPUS with output space shaping, yielding the TORA and
TORA-CODE series respectively. We used a learning rate of 2e-5 by default except that we used 1e-5
for the 34B and 70B models. We set the global batch size to 128 and used a linear scheduler with a
3% warm-up period for 3 epochs. We trained all models with DeepSpeed ZeRO Stage3 (Rajbhandari
et al., 2021) and Flash-Attention 2 (Dao, 2023). We used greedy decoding for all results, with the
maximum sequence length set to 2,048 and the maximum number of tool executions set to 3.
3.2 EVALUATION SETUP
**Datasets We evaluated models on GSM8k (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021),**
along with 8 out-of-distribution datasets, namely GSM-Hard (Gao et al., 2022), SVAMP (Patel et al.,
2021), ASDIV (Miao et al., 2020), TabMWP (Lu et al., 2023), SingleEQ, SingleOP, AddSub, and
MultiArith (Koncel-Kedziorski et al., 2016), as illustrated in Table 5 in Appendix. The 10 assorted
datasets collectively encompass mathematical problems spanning basic arithmetic to competition
level, covering middle and high school curricula and various mathematical domains. The problem
formats comprise tabular-based, free-form, and multiple-choice questions, ensuring a thorough
assessment of the model’s mathematical reasoning aptitude.
-----
Table 2: Results on 10 mathematical reasoning tasks. MAWPS results are averaged over four tasks:
Singleeq, Singleop, Addsub, and MultArith. Vanilla models are tested with CoT. The best results in
each section are in blue, the second-best results are underlined, while the results of our best model
are bolded. _[∗]_ ZS: Zero-shot inference without demonstrations.
**AVG**
|Model Size Tools ZS∗|GSM8k MATH GSM-Hard SVAMP TabMWP ASDiv MAWPS|
|---|---|
|Used for training?|✓ ✓ ✗ ✗ ✗ ✗ ✗|
|Col1|Proprietary Models|Col3|
|---|---|---|
|GPT-4 - ✗ ✗ GPT-4 (PAL) - ✓ ✗|92.0 42.5 64.7 93.1 67.1 91.3 97.6 94.2 51.8 77.6 94.8 95.9 92.6 97.7|78.3 86.4|
|ChatGPT - ✗ ✗ ChatGPT (PAL) - ✓ ✗ Claude-2 - ✗ ✗ PaLM-2 540B ✗ ✗|80.8 35.5 55.9 83.0 69.1 87.3 94.6 78.6 38.7 67.6 77.8 79.9 81.0 89.4 85.2 32.5 - - - - - 80.7 34.3 - - - - -|72.3 73.3 - -|
**Used for training?** ✓ ✓ ✗ ✗ ✗ ✗ ✗
Open-Source Models
|LLaMA-2 7B ✗ ✗ LLaMA-2 SFT 7B ✗ ✓ LLaMA-2 RFT 7B ✗ ✓ Platypus-2 7B ✗ ✗ WizardMath 7B ✗ ✓ CodeLLaMA (PAL) 7B ✓ ✗ Toolformer† 7B ✓ ✓ TORA 7B ✓ ✓ TORA-CODE 7B ✓ ✓|13.3 4.1 7.8 38.0 31.1 50.7 60.9 41.3 7.2 16.1 31.9 27.8 47.4 60.0 51.2 - - - - - - 14.4 5.4 8.6 36.7 26.5 47.9 58.4 54.9 10.7 20.6 57.3 38.1 59.1 73.7 34.0 16.6 33.6 59.0 47.3 61.4 79.6 - - - 29.4 - 40.4 44.0 68.8 40.1 54.6 68.2 42.4 73.9 88.8 72.6 44.6 56.0 70.4 51.6 78.7 91.3|29.4 33.1 - 28.3 44.9 47.4 - 62.4 66.5 (+19)|
|---|---|---|
|LLaMA-2 13B ✗ ✗ LLaMA-2 SFT 13B ✗ ✓ LLaMA-2 RFT 13B ✗ ✓ Platypus-2 13B ✗ ✗ WizardMath 13B ✗ ✓ CodeLLaMA (PAL) 13B ✓ ✗ TORA 13B ✓ ✓ TORA-CODE 13B ✓ ✓|24.3 6.3 13.6 43.1 39.5 56.3 70.4 51.1 9.2 22.3 46.3 35.8 58.6 75.0 55.3 - - - - - - 23.7 7.1 14.3 50.7 45.3 55.1 69.6 63.9 14.0 28.4 64.3 46.7 65.8 79.7 39.9 19.9 39.0 62.4 59.5 65.3 86.0 72.7 43.0 57.3 72.9 47.2 77.2 91.3 75.8 48.1 60.5 75.7 65.4 81.4 92.5|36.2 42.6 - 38.0 51.8 53.1 65.9 71.3 (+18)|
|LLaMA-1 RFT 34B ✗ ✓ CodeLLaMA (PAL) 34B ✓ ✗ TORA-CODE 34B ✓ ✓|57.9 - - - - - - 53.3 23.9 49.4 71.0 63.1 72.4 91.5 80.7 50.8 63.7 80.5 70.5 84.2 93.3|- 60.7 74.8 (+14)|
|LLaMA-2 70B ✗ ✗ LLaMA-2 SFT 70B ✗ ✓ LLaMA-2 RFT 70B ✗ ✓ Platypus-2 70B ✗ ✗ WizardMath 70B ✗ ✓ LLaMA-2 (PAL) 70B ✓ ✗ TORA 70B ✓ ✓|57.8 14.4 36.0 73.6 57.5 76.0 92.4 69.3 14.9 39.0 64.0 53.0 71.3 84.8 64.8 - - - - - - 45.9 15.0 24.6 74.3 47.3 72.7 91.1 81.6 22.7 50.3 80.0 49.8 76.2 86.2 55.2 18.3 50.0 74.6 59.5 71.9 92.8 84.3 49.7 67.2 82.7 74.0 86.8 93.8|58.2 56.6 - 53.0 63.8 60.3 76.9 (+13)|
**Metrics We report accuracies of predicted answers. Following Lightman et al. (2023), we round**
numerical values and use sympy [2] for parsing expressions. Since the SingleEQ, SingleOP, AddSub,
and MultiArith datasets focus on different aspects of basic arithmetic, we report their average results
under the collective term MAWPS (Koncel-Kedziorski et al., 2016) for all methods.
3.3 BASELINES
**Proprietary Models We present results from an array of SoTA LLMs, such as OpenAI’s GPT-4,**
ChatGPT (gpt-3.5-turbo), Google’s PaLM-2, and Anthropic’s Claude-2. By default, we report
CoT prompting results, and include PAL (Gao et al., 2022) prompting results for selected models.
**Open-Source Models Base models comprise LLaMA-2 and CodeLLaMA with CoT and PAL**
prompting. Supervised Fine-Tuning (SFT) employs CoT rationales from the original GSM8k and
MATH dataset (15k samples) for fine-tuning. Rejection sampling Fine-Tuning (RFT) leverages
multiple models to generate diverse reasoning paths for fine-tuning (Yuan et al., 2023). WizardMath
augments data using ChatGPT, and conducts SFT and RLHF. Platypus-2, the top model on the LLM
Leaderboard [3], is fine-tuned with Open-Platypus reasoning datasets (Lee et al., 2023). We also
compare TORA with Toolformer (Schick et al., 2023) which is a model trained to utilize calculators.
[2https://www.sympy.org](https://www.sympy.org)
[3https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-----
Table 3: Results on MATH subtopics.
|Model Size Tool|Intermediate Number Counting & Precalculus Geometry Prealgebra Algebra Algebra Theory Probability|Overall|
|---|---|---|
Proprietary Models
|ChatGPT (PAL) - ✓ GPT-4 (PAL) - ✓|18.5 19.2 23.2 48.5 43.0 62.7 45.4 32.8 29.3 38.0 58.7 61.0 73.9 59.1|38.7 51.8|
|---|---|---|
Open-Source Models
|WizardMath 7B ✗ TORA-CODE 7B ✓ w/o Shaping 7B ✓ w/o Rationale 7B ✓|6.2 6.0 6.5 7.6 9.5 18.1 16.3 35.1 (+28.9) 31.0 (+25.0) 24.0 (+17.5) 50.7 (+43.1) 30.6 (+21.1) 55.0 (+36.9) 61.7 (+45.4) 29.7 (-5.4) 25.1 (-5.9) 17.7 (-6.3) 46.9 (-3.8) 32.3 (+1.7) 51.9 (-3.1) 55.7 (-6.0) 25.5 (-9.6) 14.7 (-16.3) 15.4 (-8.6) 45.9 (-4.8) 29.7 (-0.9) 51.0 (-4.0) 52.4 (-9.3)|11.2 44.6 (+33.4) 40.2 (-4.4) 36.8 (-7.8)|
|---|---|---|
|WizardMath 13B ✗ TORA-CODE 13B ✓ w/o Shaping 13B ✓ w/o Rationale 13B ✓|6.4 6.6 11.5 9.6 11.0 28.5 21.1 35.7 (+29.3) 31.1 (+24.5) 25.7 (+14.2) 55.6 (+46.0) 39.5 (+28.5) 58.7 (+30.2) 66.7 (+45.6) 32.8 (-2.9) 26.0 (-5.1) 24.0 (-1.7) 52.6 (-3.0) 38.4 (-1.1) 55.6 (-3.1) 61.2 (-5.5) 27.1 (-8.6) 15.8 (-15.3) 16.3 (-9.4) 50.4 (-5.2) 36.9 (-2.6) 55.3 (-3.4) 56.5 (-10.2)|15.0 48.1 (+33.1) 44.6 (-3.5) 40.2 (-7.9)|
|TORA-CODE 34B ✓ w/o Shaping 34B ✓ w/o Rationale 34B ✓|38.9 34.6 27.3 57.8 41.4 63.7 67.7 34.0 (-4.9) 29.9 (-4.7) 24.6 (-2.7) 55.6 (-2.2) 41.6 (+0.2) 63.8 (+0.1) 61.4 (-6.3) 28.3 (-10.6) 15.8 (-18.8) 18.0 (-9.3) 52.4 (-5.4) 40.7 (-0.7) 58.6 (-5.1) 57.5 (-10.2)|50.8 47.4 (-3.4) 41.9 (-8.9)|
|WizardMath 70B ✗ TORA 70B ✓ w/o Shaping 70B ✓ w/o Rationale 70B ✓|9.1 13.4 16.9 16.5 19.2 42.7 35.0 37.1 (+28) 30.4 (+17) 30.1 (+13.2) 54.6 (+38.1) 40.3 (+21.1) 64.9 (+22.2) 66.6 (+31.6) 33.8(-3.3) 28.9(-1.5) 27.1(-3) 53.0(-1.6) 38.0(-2.3) 62.2(-2.7) 64.2(-2.4) 26.7(-10.4) 14.7(-15.7) 20.3(-9.8) 48.9(-5.7) 39.2(-1.1) 59.8(-5.1) 57.6(-9)|24.1 49.7 (+25.6) 47.3(-2.4) 41.5(-8.2)|
3.4 MAIN RESULTS
Table 2 presents the results of TORA on 10 mathematical datasets, highlighting the following
salient observations: (1) Using interleaved formatting and output space shaping, TORA consistently
surpasses prior state-of-the-art open-source models across all scales, achieving 13% to 19% absolute
improvements across 10 tasks. (2) TORA-70B substantially outperforms ChatGPT with both CoT
and PAL prompting on GSM8k (84.3% vs. 80.4%) and MATH (49.7% vs. 38.7%), while TORACODE-34B is competitive with GPT-4 solving competition-level MATH dataset with code (50.8%
vs. 51.8%). (3) The accuracy of TORA-CODE is about 5% higher than TORA of the same size,
demonstrating that continued training on code data significantly benefits program-based tool use.
**(4) While rationale-based fine-tuning negatively affects out-of-distribution generalization, TORA**
displays superior generalization. For instance, WizardMath-70B underperforms the base model on
TabMWP (49.8% vs. 57.5%), while TORA-70B effectively generalizes to this tabular reasoning task
(74.0%). (5) TORA attains fast zero-shot inference speed, averaging 1.02 tool interaction rounds per
problem, while effectively addressing problems that require interactive tool utilization.
3.5 ABLATION STUDY
Rationale-only Program-only Tool-integrated Reasoning
60
50
40
30
20
10
0
LLaMA-2-7B LLaMA-2-13B LLaMA-2-70B GPT-4
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|61.6|Col20|Col21|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||51|||||.8|||
|||||||||37.5 39|||||47.3 42.5 .2||||||||
||||||||||||||||||||||
||||||||||||||||||||||
||||33.6 31|||||.3|||||||||||||
||||||||||||||||||||||
|27|||.8||||||||||||||||||
||||||||||||||||||||||
||||||9.2|||||14.9|||||||||||
||||||||||||||||||||||
||7.2||||||||||||||||||||
Figure 4: Comparison of three formats: (1) Rationale-only: step-by-step natural language reasoning
like CoT; (2) Program-only: solving problems with programs like PAL; (3) Tool-integrated Reasoning
used by TORA: interweaving rationale and program execution to solve problems. We evaluated
GPT-4 with few-shot prompting. We trained LLaMA-2 models to reason in the three types of formats,
respectively. For a fair comparison, we do not apply output space shaping for all LLaMA-2 models.
-----
3.5.1 COMPARISONS OF FORMATTING
To evaluate the efficacy of the reasoning format adopted by TORA which interleaves rationales with
programs, we compared it with Rationale-only and Program-only formats using GPT-4 and LLaMA-2
trained with the same size of data from MATH. As shown in Fig 4, the TORA method consistently
surpasses Rationale-only and Program-only approaches. Remarkably, using LLaMA-2, the TORA
method achieves substantial improvements of 29.0% and 6.7% over Rationale-only and Program-only,
respectively. With the closed-source GPT-4, the improvements are 19.1% and 9.8%, respectively.
This emphasizes the effectiveness of integrating natural language rationales with programs.
3.5.2 EFFECTS OF OUTPUT SPACE SHAPING
Sampling
ToRA Correction ToRA Correction ToRA
GSM8k MATH
80 50 48.1
46.7
75.8
75 73.5 74.9 45 44.6 44.6 44.6
72.6
71.1
40.2
Accuracy (%)70 68.1 Accuracy (%)40
65 35
7B 13B 7B 13B
Figure 5: Ablation on output space shaping strategies using CodeLLaMA: (1) TORA[−][Sampling]Correction [is]
trained on TORA-CORPUS without shaping. (2) TORA Correction employs only the sampling strategy
_−_
for shaping, trained with up to 4 additional valid trajectory samples per problem. (3) TORA utilizes
We assess the effectiveness of the output space shaping strategies presented in Section 2.3, specifically
sampling and correction. As shown in Fig 5 and Table 3: (1) Output space shaping yields a
considerable average improvement of 3.4% and 4.0% absolute for GSM8k and MATH, respectively,
with greater benefits for smaller models; (2) Applying the sampling strategy results in a 2.7% absolute
improvement on average, while additionally incorporating correction offers a more substantial boost
|Col1|Col2|Col3|MATH|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
|44|||46 .6 44.6 44.6|||||48.1 .7|||
||||||||||||
||||||||||||
|40.2|||||||||||
||||||||||||
|us C ry o ra bl s s ng|ing orrecti sam 4 add tegie e 3: olute ampl corr|7B Code em on ples ition s pres (1) for G ing s ectio||LLa ploy per p al tra ente Outp SM trateg n off|MA: (1) T s only the roblem. ( jectories d in Sectio ut space 8k and M y results i ers a more|ORA samp 3) T per p n 2.3 shap ATH, n a 2 subs|13B −Sa −Co ling ORA roble, spe ing resp .7% tanti||mpling rrection strate utili m. cifica yield ectiv absol al bo|i g ze ll s el ut os|
of up to 4.5%, without using more training data; (3) Output space shaping benefits even the largest
model TORA-70B, with a notable improvement from 47.3% to 49.7% on MATH. These findings
We investigate the benefits, detailed patterns, and remaining challenges of tool interaction for
mathematical reasoning on the challenging MATH dataset. Performance breakdowns on all subtopics
**Benefits from Tool-Integration for MATH Sub-topics As shown in Table 3, TORA outperforms**
|Col1|Col2|Col3|GSM8k|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
||||74|||||75.8 .9|||
||||||||||||
|71|||73.5 72.6 .1||||||||
||||||||||||
||||||||||||
||||||||||||
|68.1|||||||||||
||||||||||||
|e d ap he se in e re ve to l T gh A v m T|5: A on T ing, sam ss the g an rable ater men 4.5% OR t the NAL estiga atica H ar|7B blatio ORA train pling effe d co aver benef t on a, wi A-70 effe YSIS te th l reas e rep||n on -COR ed wi and ctive rrecti age i its fo vera thout B, w ctive e be onin orted|output sp PUS with th up to 4 correctio ness of the on. As mprovem r smaller ge, while using mo ith a nota ness of ou nefits, de g on the c in Table|ace out s addi n, als outp show ent of mode addit re tr ble i r sha taile halle 3.|13B shapi hapin tiona o trai ut sp n in 3.4 ls; (2 ional ainin mpro ping d pat nging||ng s g. (2 l vali ned ace s Fig % an ) Ap ly in g dat veme strat terns MA|tra ) d wi ha 5 d pl co a; n eg, T|
WizardMath by around 45% in Algebra and Number Theory, which is attributed to stimulating and
shaping tool-use behavior. Problems from the two sub-topics typically need intricate computation
and data manipulation. Algebra mainly focuses on solving equations and application problems, while
many Number Theory problems can be tackled using brute-force approaches through code.
**Patterns of Library Usage for Problem Solving Fig 6 presents the most frequently used libraries**
for different sub-topics and the corresponding accuracies of their solutions. Tool-use behavior on
different mathematical areas demonstrates distinct patterns. sympy and its internal solvers are
primarily employed for algebra-related topics. Precalculus exhibits extensive matrix operations via
matrices, resulting in a high accuracy. Number Theory depends on algorithms like gcd
and lcm. Geometry mainly uses the rational library for fraction-based computations, while the
application of other tools is limited, signifying the potential for improvement.
**Detailed Impact of Rationale on Different Topics Table 3 shows that using an interleaved format,**
in contrast to merely writing the program, leads to significant improvements across all subtopics,
especially in Precalculus, Algebra, and Geometry, where notable increases range from 8.6% to 18.8%.
-----
|Col1|Col2|Col3|Col4|sy so|mpy lvers|Col7|Col8|Col9|matri binom|ces ial|
|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||
|||||ra c|tional alculus||||algori|thm|
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|Col1|sym solv|Col3|py ers|Col5|Col6|matrice binomia|Col8|s l|Col10|Col11|Col12|Col13|Col14|Col15|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||
||||||||||||||||
||ratio calc||nal ulus|||algorith||m|||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
||||||||||||||||
Library Usage Frequency for Each Topic Library Usage Accuracy for Each Topic
100 100
sympy matrices sympy matrices
80 solvers binomial 80 solvers binomial
rational algorithm rational algorithm
60 calculus 60 calculus
40 40
Frequency (%) Accuracy (%)
20 20
0 0
Int. Alg.PreCalcGeometryNum. Th.C&P PreAlgAlgebraOverall Int. Alg.PreCalcGeometryNum. Th.C&P PreAlgAlgebraOverall
Figure 6: Library usage frequency and accuracy on each sub-topic of MATH.
Appendix F.1 provides representative examples demonstrating how the rationale aids in planning,
multi-round self-correction, and finalizing answers.
Table 4: The failure modes of the TORA on MATH, and their corresponding percentages in random
samples analyzed by humans. We include specific examples of each failure mode in Appendix F.
**Error Type** **Definition** **%** Examples
Reasoning Error Mistakes due to incorrect reasoning steps or missing conditions. 38% Ex. 5
Hallucination Fabrication of numbers or answers. 5% Ex. 6
Diagram Understanding Misinterpretation of the input diagram. 21% Ex. 7
Incorrect use of external tools, especially when
Inappropriate Tool Use 10% Ex. 8
the problem can’t be solved directly with libraries.
Syntax Error Persistent syntax errors despite multiple correction attempts. 9% Ex. 9
Runtime Error Errors during program execution, unresolved by retrying. 9% Ex. 10
Rationale-only Error Cannot be formalized into a program and the rationale is incorrect. 3% Ex. 11
False Negative Correct answers that don’t fully match the ground truth. 5% Ex. 12
**Remaining Challenges in Mathematical Reasoning for TORA To better understand the failure**
modes and remaining challenges, we manually annotated 100 randomly selected trajectories from the
MATH test set, identifying and categorizing their failure modes. The results are shown in Table 4:
Primarily, incorrect reasoning steps constitute the primary source of errors for ToRA on complex math
reasoning tasks (38%), with some hallucination issues also evident during problem interpretation and
answer finalization (5%). Secondly, the misinterpretation of input diagrams contributes significantly
to the error rate (21%). This is particularly noticeable in Geometry, Precalculus, and Intermediate
Algebra. The diagrams in the MATH dataset are usually detailed in text using the Asymptote language
(Hendrycks et al., 2021), thus making it challenging for TORA to comprehend diagrams purely from
textual descriptions. Thirdly, issues with tool usage include Inappropriate Tool Usage (10%), Syntax
Error (9%), and Runtime Error (9%). These problems frequently arise when TORA fails to use tools
correctly after several corrections or attempts. There are certain inputs that fail to formalize well as
programs (3%), which require abstract reasoning rather than computation. Finally, we also found that
there are false negatives when using automatic indicators, i.e., correct predictions that are misjudged
as wrong, but the proportion is relatively small (5%).
4 CONCLUSION
This paper presents TORA, a series of novel Tool-integrated Reasoning Agents that synergistically
combines natural language rationale with program-based tool-use for mathematical problem solving.
Our approach demonstrates the potential of integrating external tools in the reasoning process,
enabling language models to effectively tackle complex quantitative tasks. TORA achieves stateof-the-art performance on 10 diverse mathematical reasoning tasks, substantially outperforming
existing rationale-based and program-based approaches. Furthermore, our systematic analysis of the
benefits and remaining challenges of tool interaction provides valuable insights for future research,
contributing to the development of more advanced and versatile reasoning agents.
-----
AUTHOR CONTRIBUTIONS
Zhibin Gou proposed the interleaved tool-use format of TORA and curated TORA-CORPUS dataset,
implemented the training and evaluation pipeline, conducted experiments and analysis on all datasets,
implemented baselines, and was a main contributor to the paper writing. Zhihong Shao proposed the
project, conducted preliminary experiments, proposed and implemented the training and evaluation
pipelines, proposed and trained all TORA models with output space shaping as well as TORA variants
in the ablation study, designed and oversaw experimental analysis, and contributed to many parts of
the paper writing. Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
Chen provided research mentorship, oversaw project coordination, and advised and contributed to
many parts of the writing.
ACKNOWLEDGMENTS
Zhibin Gou and Yujiu Yang were supported by the National Natural Science Foundation
of China (Grant No. U1903213) and the Shenzhen Science and Technology Program
(JSGG20220831110203007). Zhihong Shao and Minlie Huang were supported by the NSFC projects
(Key project with No. 61936010 ), and were also supported by the National Science Foundation for
Distinguished Young Scholars (with No. 62125604).
-----
REFERENCES
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403, 2023._
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican,
George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al.
Improving language models by retrieving from trillions of tokens. In International conference on
_machine learning, pp. 2206–2240. PMLR, 2022._
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio
Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4.
_[CoRR, abs/2303.12712, 2023. doi: 10.48550/arXiv.2303.12712. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2303.12712)_
[48550/arXiv.2303.12712.](https://doi.org/10.48550/arXiv.2303.12712)
Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. Inˇ _Proceedings_
_of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp._
535–541, 2006.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
[Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/](https://arxiv.org/abs/2110.14168)
[abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.
Edward A Feigenbaum, Julian Feldman, et al. Computers and thought, volume 7. New York
McGraw-Hill, 1963.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towards multi-step reasoning. In Andreas Krause, Emma Brunskill, Kyunghyun
Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Confer_ence on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume_
202 of Proceedings of Machine Learning Research, pp. 10421–10430. PMLR, 2023. URL
[https://proceedings.mlr.press/v202/fu23d.html.](https://proceedings.mlr.press/v202/fu23d.html)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen.
Critic: Large language models can self-correct with tool-interactive critiquing. arXiv preprint
_arXiv:2305.11738, 2023._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv
_preprint arXiv:1503.02531, 2015._
Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning teachers. In
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pp. 14852–14882, Toronto, Canada, July 2023. Association for Computational_
[Linguistics. doi: 10.18653/v1/2023.acl-long.830. URL https://aclanthology.org/](https://aclanthology.org/2023.acl-long.830)
[2023.acl-long.830.](https://aclanthology.org/2023.acl-long.830)
-----
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to
solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference
_on Empirical Methods in Natural Language Processing (EMNLP), pp. 523–533, 2014._
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han.
Large language models can self-improve. CoRR, abs/2210.11610, 2022. doi: 10.48550/arXiv.2210.
[11610. URL https://doi.org/10.48550/arXiv.2210.11610.](https://doi.org/10.48550/arXiv.2210.11610)
Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu, Yu Zhang, Zhenguo Li, and James T Kwok.
Backward reasoning in large language models for verification. arXiv preprint arXiv:2308.07758,
2023.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. MAWPS:
A math word problem repository. In Proceedings of the 2016 Conference of the North American
_Chapter of the Association for Computational Linguistics: Human Language Technologies, pp._
1152–1157, San Diego, California, June 2016. Association for Computational Linguistics. doi:
[10.18653/v1/N16-1136. URL https://aclanthology.org/N16-1136.](https://aclanthology.org/N16-1136)
Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. Platypus: Quick, cheap, and powerful refinement of
llms. arXiv preprint arXiv:2308.07317, 2023.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making
language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333,_
2023.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu,
and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent
debate. arXiv preprint arXiv:2305.19118, 2023.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter
Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured
mathematical reasoning. In The Eleventh International Conference on Learning Representations,
[2023. URL https://openreview.net/forum?id=DHyHRBwJUTN.](https://openreview.net/forum?id=DHyHRBwJUTN)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical
reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583,
2023.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta
Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented
language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing
English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association
_for Computational Linguistics, pp. 975–984, Online, July 2020. Association for Computational_
[Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/](https://aclanthology.org/2020.acl-main.92)
[2020.acl-main.92.](https://aclanthology.org/2020.acl-main.92)
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay
Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. Lila: A unified
benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), 2022._
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher
Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted
question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
-----
OpenAI. Gpt-4 technical report, 2023.
Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and
Marco Tulio Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models.
_arXiv preprint arXiv:2303.09014, 2023._
Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. arXiv preprint
_arXiv:2205.12255, 2022._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple
math word problems? In Proceedings of the 2021 Conference of the North American Chapter of
_the Association for Computational Linguistics: Human Language Technologies, pp. 2080–2094,_
Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.
[168. URL https://aclanthology.org/2021.naacl-main.168.](https://aclanthology.org/2021.naacl-main.168)
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli,
Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb
dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv
_[preprint arXiv:2306.01116, 2023. URL https://arxiv.org/abs/2306.01116.](https://arxiv.org/abs/2306.01116)_
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with
gpt-4. arXiv preprint arXiv:2304.03277, 2023.
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. Zero-infinity:
Breaking the gpu memory wall for extreme scale deep learning. In Proceedings of the International
_Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–14, 2021._
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code.
_arXiv preprint arXiv:2308.12950, 2023._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761, 2023.
Zhihong Shao, Fei Huang, and Minlie Huang. Chaining simultaneous thoughts for numerical
reasoning. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Findings of the As_sociation for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates,_
_December 7-11, 2022, pp. 2533–2547. Association for Computational Linguistics, 2022. doi:_
10.18653/v1/2022.findings-emnlp.187. [URL https://doi.org/10.18653/v1/2022.](https://doi.org/10.18653/v1/2022.findings-emnlp.187)
[findings-emnlp.187.](https://doi.org/10.18653/v1/2022.findings-emnlp.187)
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. Synthetic
prompting: Generating chain-of-thought demonstrations for large language models. In Andreas
Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett
(eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu,
_Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 30706–30775._
[PMLR, 2023a. URL https://proceedings.mlr.press/v202/shao23a.html.](https://proceedings.mlr.press/v202/shao23a.html)
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. Enhancing
retrieval-augmented large language models with iterative retrieval-generation synergy. CoRR,
[abs/2305.15294, 2023b. doi: 10.48550/arXiv.2305.15294. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2305.15294)
[48550/arXiv.2305.15294.](https://doi.org/10.48550/arXiv.2305.15294)
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
[https://github.com/tatsu-lab/stanford_alpaca, 2023.](https://github.com/tatsu-lab/stanford_alpaca)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
-----
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian
Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin
Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar
Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana
Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor
Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan
Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang,
Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models.
_[CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/](https://doi.org/10.48550/arXiv.2307.09288)_
[10.48550/arXiv.2307.09288.](https://doi.org/10.48550/arXiv.2307.09288)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in
_[Neural Information Processing Systems, 2022. URL https://openreview.net/forum?](https://openreview.net/forum?id=_VjQlMeSB_J)_
[id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan
Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International
_[Conference on Learning Representations, 2023. URL https://openreview.net/forum?](https://openreview.net/forum?id=WE_vluYUL-X)_
[id=WE_vluYUL-X.](https://openreview.net/forum?id=WE_vluYUL-X)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling
relationship on learning mathematical reasoning with large language models. arXiv preprint
_arXiv:2308.01825, 2023._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653, 2023._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with
reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen.
Evaluating and improving tool-augmented computation-intensive math reasoning. arXiv preprint
_arXiv:2306.02408, 2023._
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code
interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023a.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. Least-to-most prompting enables
complex reasoning in large language models. In The Eleventh International Conference on
_Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b._
[URL https://openreview.net/pdf?id=WZH7099tgfM.](https://openreview.net/pdf?id=WZH7099tgfM)
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and
Yujiu Yang. Solving math word problems via cooperative reasoning induced language models. In
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pp. 4471–4485, Toronto, Canada, July 2023. Association for Computational_
[Linguistics. doi: 10.18653/v1/2023.acl-long.245. URL https://aclanthology.org/](https://aclanthology.org/2023.acl-long.245)
[2023.acl-long.245.](https://aclanthology.org/2023.acl-long.245)
-----
CONTENTS
**A Related Works** **15**
**B** **Evaluation Datasets** **15**
**C Additional Experiments and Analysis** **17**
C.1 Accuracies of Closed-Source Models on MATH . . . . . . . . . . . . . . . . . . . 17
C.2 Effects of # Valid Trajectories for Output Space Shaping . . . . . . . . . . . . . . 17
C.3 Impact of Output Space Shaping in Relation to Question Difficulty . . . . . . . . . 17
**D Detailed Information of TORA-CORPUS** **18**
**E** **Prompts** **20**
**F** **Examples** **22**
F.1 Success Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
F.2 Failure Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
A RELATED WORKS
**Mathematical Reasoning Recent research has greatly improved reasoning in LLMs with step-by-step**
natural language reasoning (Polu & Sutskever, 2020; Wei et al., 2022; Zhou et al., 2023b; Zhu et al.,
2023; Huang et al., 2022; Liang et al., 2023). However, natural language reasoning struggles with
complex computations and symbolic manipulations. To overcome the limitations, recent research has
exploited tools like calculators (Cobbe et al., 2021; Shao et al., 2022), code interpreters (Mishra et al.,
2022), and symbolic solvers (Zhang et al., 2023). Program-based methods (Gao et al., 2022; Chen
et al., 2022; Shao et al., 2023a) transform reasoning tasks into program synthesis tasks, thus offering
complementary advantages over natural language reasoning, but they face challenges in nuanced
reasoning, planning, and error handling (Gou et al., 2023), where natural language reasoning should
be more suitable.
**Tool-Augmented Language Models Augmenting LLMs with tools can largely alleviate LLMs’**
limitations and improve reasoning and generation performance (Parisi et al., 2022; Mialon et al.,
2023; Yao et al., 2023). Recent work demonstrates the benefits of integrating retrievers (Borgeaud
et al., 2022; Shao et al., 2023b), search engines (Nakano et al., 2021), and multi-tool approaches
(Schick et al., 2023; Paranjape et al., 2023; Gou et al., 2023) to improve generation.
**Knowledge Distillation Knowledge distillation (KD) transfers knowledge from teacher models to**
student models (Bucilua et al., 2006; Hinton et al., 2015). Using LLM-generated trajectories forˇ
fine-tuning is a form of KD (Fu et al., 2023; Taori et al., 2023; Peng et al., 2023; Ho et al., 2023).
Our proposed TORA shows that learning interactive tool-use trajectories is a promising direction to
adapt language models to reasoning tasks.
B EVALUATION DATASETS
We present statistics and examples of the ten evaluation datasets in Table 5.
-----
Table 5: Statistics and examples of the 10 evaluation datasets. In the main result table, we present
the average accuracy of SingleEq, SingleOp, AddSub, and MultiArith under the collective name
MAWPS.
Dataset OOD? #Samples Example Problem
The ice cream parlor was offering a deal, buy 2 scoops
of ice cream, get 1 scoop free. Each scoop cost $1.50. If
Erin had $6.00, how many scoops of ice cream should she
buy?
For a constant c, in cylindrical coordinates (r, θ, z), find
the shape described by the equation
_z = c._
(A) Line (B) Circle (C) Plane (D) Sphere (E) Cylinder (F)
Cone. Enter the letter of the correct option.
Jean has 30 lollipops. Jean eats 8714250 of the lollipops. With the remaining lollipops, Jean wants to package 8714250 lollipops in one bag. How many bags can
Jean fill?
During summer break 819058 kids from Lawrence county
go to camp and the other 668278 kids stay home. How
many more kids spent their summer break at the camp
compared to those who stayed home?
Mrs. Hilt saw an iPod for sale. The price tag said the
iPod cost $128, but a sign announced that it was on sale
for "35% off." How much would the iPod cost after the
discount?
GSM8k (Cobbe
IND 1319
et al., 2021)
MATH (Hendrycks
IND 5000
et al., 2021)
GSM-Hard (Gao
OOD 1319
et al., 2022)
SVAMP (Patel
OOD 1000
et al., 2021)
ASDiv (Miao et al.,
OOD 2215
2020)
TabMWP (Lu et al., OOD 1000
2023)
Stem Leaf
2 3, 6, 7, 8, 8
3 0, 7, 9
4 1, 5
5
6 2, 3, 3, 4, 8, 8
7 3, 4, 4, 7, 9
8 5, 5
Read the table regarding “eight lifting results
(lbs)”. Mr. Morrison, a
P.E. teacher, wrote down
how much weight each
of his students could lift.
How many people lifted
at least 28 pounds?
SingleEq (Koncel- Alyssa’s dog had puppies. She gave 7 to her friends.She
Kedziorski et al., OOD 508 now has 5 puppies left. How many puppies did she have
2016) to start with?
SingleOp (Koncel- Rachel removes 47 bottle caps from a jar. There were
Kedziorski et al., OOD 562 originally 87 bottle caps in the jar. How many bottle caps
2016) are left in the jar?
AddSub (KoncelKedziorski et al.,
2016)
Sam went to 14 football games this year. He went to 29
OOD 395
games last year. How many football games did Sam go to
in all?
MultArith (Koncel- Paige had 43 math problems and 12 science problems for
Kedziorski et al., OOD 600 homework. If she finished 44 of the problems at school,
2016) how many problems did she have to do for homework?
-----
Table 6: Accuracies of ChatGPT and GPT-4 on the MATH dataset, with breakdown w.r.t. different
mathematical subjects. We apply PAL prompting and the Tool-integrated Reasoning method used by
TORA to the two closed-source models.
|Model Tool|Intermediate Number Counting & Precalculus Geometry Prealgebra Algebra Algebra Theory Probability|Overall|
|---|---|---|
||Test Set||
|ChatGPT (PAL) ✓ GPT-4 (PAL) ✓ GPT-4 (Tool-integrated Reasoning) ✓|18.5 19.2 23.2 48.5 43.0 62.7 45.4 32.8 29.3 38.0 58.7 61.0 73.9 59.1 40.0 37.2 44.1 68.9 67.3 82.2 75.8|38.7 51.8 61.6|
||Training Set||
|GPT-4 (Tool-integrated Reasoning) ✓ w/ best@10 ✓|51.0 51.5 42.5 77.4 72.2 89.8 85.1 72.9 70.0 58.9 91.6 81.7 95.5 96.3|64.3 83.1|
C ADDITIONAL EXPERIMENTS AND ANALYSIS
C.1 ACCURACIES OF CLOSED-SOURCE MODELS ON MATH
Table 6 presents the detailed accuracies of GPT-4 on the MATH dataset. The Tool-integrated
Reasoning method used by TORA significantly outperforms PAL prompting when directly applied to
the closed-source GPT-4, further demonstrating the benefits of synergizing natural language reasoning
and program-based tool use.
C.2 EFFECTS OF # VALID TRAJECTORIES FOR OUTPUT SPACE SHAPING
|Col1|Col2|GSM8k|Col4|Col5|
|---|---|---|---|---|
||13B|75|75 .4|.8|
|73|7B 74 .5|.8|||
||||72|.6|
||69|70 .4|.9||
|68|.1||||
||||||
|Col1|Col2|MATH|Col4|Col5|
|---|---|---|---|---|
||13B||48|.1|
||7B|45|.8||
|44|45 .6|.1 43|44 .4|.6|
||41|.8|||
|40|.2||||
||||||
GSM8k MATH
78 50
13B 75.8 13B 48.1
76 7B 74.8 75.4 48 7B
74 73.5 46 45.1 45.8
72.6 44.6 44.6
72 70.9 44 43.4
41.8
Accuracy (%) 70 69.4 Accuracy (%) 42
68.1 40.2
68 40
66 38
0 1 2 4 0 1 2 4
#Trajectories for Shaping #Trajectories for Shaping
Figure 7: Effects of using different numbers of additional valid trajectories per question for output
space shaping.
As shown in Fig 7, it is beneficial to increase the number of additional valid trajectories for output
space shaping.
C.3 IMPACT OF OUTPUT SPACE SHAPING IN RELATION TO QUESTION DIFFICULTY
We compare the effects of output space shaping on MATH problems of different difficulty levels
(from level 1 to level 5) in Figure 8, and present the statistics of MATH problems at different levels in
Table 7. As can be seen:
- Across these different difficulty levels and model sizes, output space shaping generally
brings a significant improvement of 4.0% on average across different model sizes.
- Output space shaping brings significant improvements for difficult, long problems. E.g.,
with TORA-CODE-13B, shaping does not significantly improve level 1 to level 2 problems,
but it brings a substantial improvement of 5.4% to 5.7% for level 3 to level 5 problems.
-----
Performance Comparison on Different MATH Levels
80
70
60
50
40
30
20
|-0.|Col2|Col3|+2.8 2|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|ToRA-7B|Col23|Col24|Col25|Col26|Col27|Col28|Col29|Col30|Col31|Col32|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||||||||||||||ToRA-7B||||||||
|+3.2||||||||||||||||||||||||ToRA-7B ToRA-13B ToRA-13B|||+ Shaping + Shaping|||||
|||||||||||||||||||||||||||||||||
|||||||+0 +4.8|||+1.1 .8||||||+5.1||||||ToRA-34B ToRA-34B GPT-4 PA|||ToRA-34B ToRA-34B GPT-4 PA|||+ Shaping L|||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||+5 +3.1|||.6|||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||+5 +5.7|||+3.2 .7|||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||+5 +2.9|||.4+2.9|||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
|||||||||||||||||||||||||||||||||
||Le Fi|vel gur||1 e 8|: I|mpact|Le of|vel Ou||2 tpu|t S|pace S|Le ha|vel Lev pin||3 els g i|n|Relation|Le to|vel Q||4 ue|sti|on Diff|Le icu|vel lty.||5||||
- After using shaping, TORA-CODE-34B outperforms GPT-4 PAL on problems from Level 1
to Level 4, but there is still a gap at Level 5 (27.3% vs. 30.0%). These problems are usually
longer (average about 248.4 characters), require more reasoning steps (>1,000 characters) to
solve, and more often include diagram inputs (about 20%). These observations may guide
future work to focus more on solving these more difficult problems.
Table 7: Statistics of MATH problems at different levels. Average Answer Length indicates the
average length of TORA outputs; Training query coverage indicates the proportion of queries with at
least one valid trajectory in TORA-CORPUS relative to the total queries in the original dataset.
Level 1 Level 2 Level 3 Level 4 Level 5
# Test Samples 437 894 1131 1214 1324
Avg Question Length 123.8 150.9 169.1 203.0 248.4
Avg Answer Length 503.1 655.8 751.2 881.6 1083.8
Training query coverage 97.7% 91.6% 86.5% 81.3% 68.0%
D DETAILED INFORMATION OF TORA-CORPUS
We provide a more detailed introduction to the data construction process, quality control, and data
statistical information, beyond Sec. 2.2.
**Data Format and Quality Control** In our preliminary experiments, we found that the toolintegrated reasoning trajectory format generated by zero-shot prompting was somewhat chaotic.
Therefore, we designed a few-shot prompting to control the reasoning format, which effectively
improved data quality. On the other hand, we increased the annotation success rate by sampling,
ensuring more comprehensive coverage of the training query.
**Data Filtering Process** For the data constructed, we filtered out paths that produced incorrect
answers by matching them with standard answers. To prevent the model from learning incorrect
-----
Table 8: Accuracy of TORA-CORPUS on GSM8k and MATH training set. TORA-CORPUS-Greedy
uses only the greedy trajectories, while ToRA-Corpus-16k combines sampled trajectories.
|GSM8k MATH Intermediate Number Counting & All All Precalculus Geometry Prealgebra Algebra Algebra Theory Probability|GSM8k|MATH|
|---|---|---|
||All|Intermediate Number Counting & All Precalculus Geometry Prealgebra Algebra Algebra Theory Probability|
|TORA-CORPUS-Greedy TORA-CORPUS-16k|94.4 98.2|64.3 51.0 51.5 70.0 77.4 72.2 89.8 85.1 83.1 72.9 70.0 58.9 91.6 81.7 95.5 96.3|
Table 9: Statistics of TORA-CORPUS-16k
**GSM8k** **MATH** **Total**
# Train Samples 7,657 7,881 15,538
Avg Question Length 236 189 211
Avg Trajectory Length 678 704 691
Min Trajectory Length 218 119 119
Max Trajectory Length 1,713 2,486 2,486
intermediate reasoning processes, we further filtered out data samples with intermediate program
execution errors.
**Dataset Statistics** In Table 8, we compared the annotation accuracy (i.e., sample coverage) of the
training set on GSM8k, MATH, and MATH subtopics of TORA-CORPUS-Greedy using only the
greedy trajectories, and TORA-CORPUS-16k combined with sampled trajectories. Furthermore, in
Table 9, we reported the statistical data of TORA-CORPUS-16k, such as the number of samples,
average question length, average, minimum, and maximum trajectory length, as shown in the
following tables.
**Rationale as Hints** As described in Section 2.2, we annotated interactive tool-use trajectories for
the training questions from MATH with GPT-4. GPT-4 achieves a success rate below 65% using
greedy decoding. As MATH was originally annotated with natural language rationales, to improve
the annotation success rate, we tried to provide GPT-4 with the human rationales as hints (Zelikman
et al., 2022). However, when using this method, GPT-4 tends to replicate the hints and ignore
tool-use outputs especially when the outputs are inconsistent with the hints, thus failing to produce
high-quality trajectories. Hence, we deferred the utilization of the already-annotated natural language
rationales for future investigations. Instead, we employed nucleus sampling to recall valid trajectories
for questions that remained unsolved through greedy decoding. This approach significantly boosted
annotation accuracy to 83.1%.
-----
E PROMPTS
We present instructions and example few-shot prompts of Tool-integrated Reasoning for querying
GPT-4.
Listing 1: Prompts of Tool-integrated Reasoning for GPT-4.
Integrate step-by-step reasoning and Python code to solve math problems
using the following guidelines:
- Analyze the question and write functions to solve the problem; the
function should not take any arguments.
- Present the final result in LaTeX using a ‘\boxed{}‘ without any units.
- Utilize the ‘pi‘ symbol and ‘Rational‘‘ from Sympy for $\pi$ and
fractions, and simplify all fractions and square roots without
converting them to decimal values.
Here are some examples you may refer to:
--
Question: Convert the point $(0, -3 \sqrt{3}, 3)$ in rectangular
coordinates to spherical coordinates. Enter your answer in the form $
(\rho,\theta,\phi),$ where $\rho > 0,$ $0 \le \theta < 2 \pi,$ and $0
\le \phi \le \pi.$
Solution:
To convert the point $(x, y, z)$ in rectangular coordinates to spherical
coordinates $(\rho, \theta, \phi)$, we use the following formulas:
1. $\rho = \sqrt{x^2 + y^2 + z^2}$
2. $\theta = \arctan{\frac{y}{x}}$
3. $\phi = \arccos{\frac{z}{\rho}}$
Let’s write a function:
‘‘‘python
from sympy import sqrt, atan2, acos, pi
def rectangular_to_spherical():
x, y, z = 0, -3*sqrt(3), 3
rho = sqrt(x**2 + y**2 + z**2)
theta = atan2(y, x)
phi = acos(z/rho)
return rho, theta, phi
spherical_coordinates = rectangular_to_spherical()
print(spherical_coordinates)
‘‘‘
‘‘‘output
(6, -pi/2, pi/3)
‘‘‘
The required form of the answer is $(\rho, \theta, \phi)$, where $\rho >
0$, $0 \le \theta < 2\pi$, and $0 \le \phi \le \pi$. Since our
initial output has $\theta = -\pi/2$, which doesn’t meet the
condition for $\theta$, we add $2\pi$ to adjust its value.
Hence, the spherical coordinates for the point $(0, -3\sqrt{3}, 3)$ are $
\boxed{(6, \frac{3\pi}{2}, \frac{\pi}{3})}$.
--
Question: Compute $1011_2 + 101_2 - 1100_2 + 1101_2$. Express your answer
in base 2.
-----
Solution:
Let’s write a single python function for binary computation:
‘‘‘python
def binary_sum_diff():
# Convert binary to decimal
num1 = int("1011", 2)
num2 = int("101", 2)
num3 = int("1100", 2)
num4 = int("1101", 2)
# Perform the operations
result = num1 + num2 - num3 + num4
# Convert the result back to binary
result_binary = format(result, "b")
return result_binary
result = binary_sum_diff()
print(result)
‘‘‘
‘‘‘output
10001
‘‘‘
The answer expressed in base 2 is $\boxed{10001_2}$.
--
Question: Find the matrix that corresponds to projecting onto the vector
$\begin{pmatrix} 2 \\ -3 \end{pmatrix}.$
Solution:
‘‘‘python
from sympy import Matrix, simplify
def projection_matrix():
v = Matrix([2, -3])
P = (v * v.T) / (v.T * v)[0]
# Simplify the elements
P = P.applyfunc(simplify)
return P
result = projection_matrix()
print(result)
‘‘‘
‘‘‘output
Matrix([[4/13, -6/13], [-6/13, 9/13]])
‘‘‘
The matrix is $\boxed{\begin{pmatrix} \frac{4}{13} & -\frac{6}{13} \ -\
frac{6}{13} & \frac{9}{13} \end{pmatrix}}.$
--
Question: Solve the inequality
\[\frac{x^2 - 25}{x + 5} < 0.\]
Solution:
Let’s use ‘sympy‘ to solve the inequality:
‘‘‘python
from sympy import symbols, simplify
-----
from sympy.solvers.inequalities import solve_univariate_inequality
from sympy.core.relational import StrictLessThan
def solve_inequality():
x = symbols(’x’)
expression = (x**2 - 25) / (x + 5)
inequality = StrictLessThan(expression, 0)
solution = solve_univariate_inequality(inequality, x, relational=False
)
simplified_solution = simplify(solution)
return simplified_solution
result = solve_inequality()
print(result)
‘‘‘
‘‘‘output
Union(Interval.open(-oo, -5), Interval.open(-5, 5))
‘‘‘
The answer is $\boxed{(-\infty,-5)\cup(-5,5)}$
--
F EXAMPLES
F.1 SUCCESS CASES
Listing 2: Success case for TORA: Rationale for planning.
Question: What real values of $x$ are not in the domain of $f(x)=\frac
{1}{|x^2-6x+8|+|x^2+x-6|}$?
Solution:
The domain of the function is all real numbers, so we need to find the
values of x for which the denominator of the function is zero.
The denominator of the function is the sum of the absolute values of the
two terms inside the absolute value.
Each of these terms is a quadratic in x, and the roots of a quadratic
equation are real unless the equation is identically zero.
So, we need to find the values of x for which the two quadratic equations
are identically zero.
‘‘‘python
from sympy import symbols, Abs, solveset, S
from sympy.core.relational import Ne
def non_domain_values():
x = symbols(’x’)
# These are the values of x for which the function is not defined
roots1 = solveset(Ne(x**2 - 6*x + 8, 0), x, domain=S.Reals)
roots2 = solveset(Ne(x**2 + x - 6, 0), x, domain=S.Reals)
# The roots of a quadratic equation are always in the domain of the
function, so we need to find the values of x that are not in the
union of the roots
non_domain_values = S.Reals - (roots1.union(roots2))
return non_domain_values
result = non_domain_values()
print(result)
‘‘‘
‘‘‘output
{2}
-----
‘‘‘
The real value of $x$ that is not in the domain of the function is $\
boxed{2}$.
Listing 3: Success case for TORA: Self-Correcting Errors with tool feedback.
Question: The difference between two perfect squares is 133. What is the
smallest possible sum of the two perfect squares?
Solution:
We can solve this problem by noting that any perfect square can be
written as $(a + b)^2$ or $(a - b)^2$ for some integers $a$ and $b$.
So we can rewrite the question as the following problem: Find the
minimum possible value for $(a + b)^2 + (a - b)^2$ given that $(a + b)
^2 - (a - b)^2 = 133$.
‘‘‘python
from sympy import symbols, Eq, solve
def min_sum_perfect_squares():
"""
Find the smallest possible sum of two perfect squares
given that the difference between them is 133.
Returns:
Integer: The minimum possible sum of the two perfect squares.
"""
a, b = symbols(’a b’)
# Equation for the difference of the squares
diff_eq = Eq((a + b)**2 - (a - b)**2, 133)
# Solve the equation for b
b_solutions = solve(diff_eq, b)
# Calculate the sum of the squares for each solution of b
sum_squares = [(a + b_sol)**2 + (a - b_sol)**2 for b_sol in
b_solutions]
# Find the minimum possible sum of the squares
min_sum_squares = min(sum_squares)
return min_sum_squares
result = min_sum_perfect_squares()
print(result)
‘‘‘
‘‘‘output
(a - 133/(4*a))**2 + (a + 133/(4*a))**2
‘‘‘
The problem seems to be more complex than anticipated and the solution
was incorrect.
Since the difference between two squares can be factored, we can write:
$x^2 - y^2 = (x + y)(x - y)$
$133$ has two pairs of factors: 1 and 133, and 7 and 19.
So, either $x+y = 133$ and $x-y = 1$, or $x+y = 19$ and $x-y = 7$.
- For (133, 1), $x = \frac{133 + 1}{2} = 67$ and $y = \frac{133 - 1}{2} =
66$.
- For (19, 7), $x = \frac{19 + 7}{2} = 13$ and $y = \frac{19 - 7}{2} = 6$.
-----
(13, 6) result in the smallest sum of the two perfect squares.
So, let’s modify the previous solution:
‘‘‘python
def min_sum_perfect_squares():
min_squares = [13**2, 6**2]
min_sum_squares = sum(min_squares)
return min_sum_squares
‘‘‘
‘‘‘output
205
‘‘‘
The smallest possible sum of the two perfect squares is $\boxed{205}$.
Listing 4: Success case for TORA: Finalizing answer.
Question:
Compute \[\sum_{k = 1}^\infty \frac{k + 2}{k! + (k + 1)! + (k + 2)!}.\]
Solution:
‘‘‘python
from sympy import symbols, Sum, factorial, oo, simplify
def infinite_sum():
expr = (k + 2) / (factorial(k) + factorial(k + 1) + factorial(k + 2))
sum_expr = Sum(expr, (k, 1, oo))
result = sum_expr.doit()
return result
result = infinite_sum()
print(result)
‘‘‘
‘‘‘output
Sum((k + 2)/(factorial(k) + factorial(k + 1) + factorial(k + 2)), (k, 1,
oo))
‘‘‘
The sum does not converge to a simple expression, so the answer is $\
boxed{\sum_{k = 1}^\infty \frac{k + 2}{k! + (k + 1)! + (k + 2)!}}$.
F.2 FAILURE CASES
Listing 5: Failure case for TORA: Reasoning Error.
Idx: 4289
Type: Prealgebra
Level: Level 2
Question: ’What is the smallest positive multiple of $13$ that is greater
than $500?$’
Solution:
‘‘‘python
def smallest_positive_multiple():
"""What is the smallest positive multiple of $13$ that is greater than
$500?$"""
# Start from 500 and keep adding 13 until we get a number that is
greater than 500
num = 500
while num <= 500:
-----
num += 13
return num
result = smallest_positive_multiple()
print(result)
‘‘‘
‘‘‘output
513
‘‘‘
The smallest positive multiple of $13$ that is greater than $500$ is $\
boxed{513}$.
Execution: None
Prediction: ’513’
Ground Truth: ’507’
Score: [False]
Reference CoT: ’Dividing $500$ by $13$ gives a quotient $38$ with a
remainder of $6$. In other words, \\[500 \\div 13=13 \\cdot 38+6.\\]
So, the smallest positive multiple of $13$ that is greater than $500$
is \\[13\\cdot39=\\boxed{507}.\\]’
Listing 6: Failure case for TORA: Hallucination.
Idx: 2775
Type: Intermediate Algebra
Level: Level 3
Question: ’Find the eighth term of the sequence $1440, 1716, 1848, \\
ldots$, whose terms are formed by multiplying the corresponding terms
of two arithmetic sequences.’
Solution:
‘‘‘python
from sympy import symbols, Rational, simplify
def eighth_term():
"""Find the eighth term of the sequence $1440, 1716, 1848, \ldots$,
whose terms are formed by multiplying the corresponding terms of
two arithmetic sequences."""
a1, d1 = 1440, 36
a2, d2 = 16, 2
# Calculate the eighth term of each sequence
term1 = a1 + 7*d1
term2 = a2 + 7*d2
# Multiply the terms
result = term1 * term2
return result
result = eighth_term()
print(result)
‘‘‘
‘‘‘output
50760
‘‘‘
The eighth term of the sequence is $\boxed{50760}$.
Execution: None
Prediction: ’50760’
Ground Truth: ’348’
Score: [False]
Reference CoT: ’If $a_n =an + b$ and $b_n = cn + d$ are two arithmetic
sequences, then their term-wise product takes the form \\[a_nb_n = (
an+b)(cn+d) = An^2 + Bn + C,\\]where $A, B, C$ are constants.
-----
Therefore, some sequence $x_n = An^2 + Bn + C$ has $x_0 = 1440,$ $x_1
= 1716,$ and $x_2 = 1848.$ Then we have the equations \\[\\begin{
aligned} C &= 1440, \\\\ A+B+C&=1716,\\\\ 4A+2B+C&=1848. \\end{
aligned}\\]Subtracting $C=1440$ from the second and third equations,
we have $A+B=276$ and $4A+2B=408,$ or $2A+B=204.$ Then \\[A = (2A+B) (A+B) = 204 - 276 = -72,\\]and so $B = 276-A=348.$ We conclude that
the given sequence has the general formula \\[x_n = -72n^2 + 348n +
1440.\\]Then the eighth term is \\[\\begin{aligned} x_7 &= -72 \\cdot
7^2 + 348 \\cdot 7 + 1440 = \\boxed{348}. \\end{aligned}\\]’
Listing 7: Failure case for TORA: Diagram Understanding. Navigating the complexities of input
diagrams is key to solving many problems in Geometry, Precalculus, and Intermediate Algebra. This
ideal process encompasses not just visual modalities, but also involves interactions with diagrams,
such as adding auxiliary lines. Nonetheless, the utility of computational tools like SymPy is,
unfortunately, only marginally beneficial.
Question: A square has a side length of 10 inches. Congruent isosceles
right triangles are cut off each corner so that the resulting octagon
has equal side lengths. How many inches are in the length of one
side of the octagon? Express your answer as a decimal to the nearest
hundredth.
[asy]
size(150);
draw((0,0)--(10,0)--(10,10)--(0,10)--(0,0),linewidth(0.7));
draw((5*sqrt(2)/(1 + sqrt(2)),0)--(0,5*sqrt(2)/(1 + sqrt(2))),linewidth
(0.7));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),0)--(10,5*sqrt(2)/(1 + sqrt(2))),
linewidth(0.7));
draw((0,(10-(5*sqrt(2)/(1+sqrt(2)))))--(5*sqrt(2)/(1 + sqrt(2)),10),
linewidth(0.7));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),10)--(10,10 - 5*sqrt(2)/(1 + sqrt(2)
)),linewidth(0.7));
[/asy]
Solution:
The side length of the octagon is the hypotenuse of the right triangle
formed by the side of the square and the diagonal of the octagon
The hypotenuse is the sum of the side length of the square and the length
of the diagonal of the octagon
The length of the diagonal of the octagon is 5*sqrt(2)/(1 + sqrt(2))
‘‘‘python
from sympy import sqrt, Rational
def octagon_side_length():
square_side_length = 10
octagon_side_length = square_side_length + 5*sqrt(2)/(1 + sqrt(2))
return round(octagon_side_length, 2)
result = octagon_side_length()
print(result)
‘‘‘
‘‘‘output
12.93
‘‘‘
The length of one side of the octagon is $\boxed{12.93}$ inches.
**[Ground Truth]**[:]
Recall that the ratio of the hypotenuse to the leg of an isosceles right
triangle is $\sqrt{2}$. Looking at the removed triangles, we see that
the three segments that make up the side of square are $s/\sqrt{2}$,
$s$, and $s/\sqrt{2}$, where $s$ is the side length of the octagon.
-----
Setting the sum of these three side lengths equal to 10 inches, we
find \begin{align*}
\frac{s}{\sqrt{2}}+s+\frac{s}{\sqrt{2}}&=10 \implies \\
s\left(\frac{1}{\sqrt{2}}+1+\frac{1}{\sqrt{2}}\right)&=10 \implies \\
s(\sqrt{2}+1)&=10 \implies \\
s&=\frac{10}{\sqrt{2}+1}=10(\sqrt{2}-1),
\end{align*} where we have rationalized the denominator twice: \[
\frac{1}{\sqrt{2}}=\frac{1}{\sqrt{2}}\cdot \frac{\sqrt{2}}{\sqrt{2}}=\
frac{\sqrt{2}}{2},
\] and \[
\frac{10}{\sqrt{2}+1}=\frac{10}{\sqrt{2}+1}\cdot\frac{\sqrt{2}-1}{\sqrt
{2}-1}=10(\sqrt{2}-1).
\] To the nearest hundredth, $s=\boxed{4.14}$ inches.
[asy]
size(150);
defaultpen(linewidth(0.7)+fontsize(10));
real s = 10/(1+sqrt(2));
draw((0,0)--(10,0)--(10,10)--(0,10)--(0,0));
draw((5*sqrt(2)/(1 + sqrt(2)),0)--(0,5*sqrt(2)/(1 + sqrt(2))));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),0)--(10,5*sqrt(2)/(1 + sqrt(2))));
draw((0,(10-(5*sqrt(2)/(1+sqrt(2)))))--(5*sqrt(2)/(1 + sqrt(2)),10));
draw((10 - (5*sqrt(2)/(1 + sqrt(2))),10)--(10,10 - 5*sqrt(2)/(1 + sqrt(2)
)));
label("$s$",(10-s/(2*sqrt(2)),10-s/(2*sqrt(2))),SW);
label("$\displaystyle{\frac{s}{\sqrt{2}}}$",(10,10-s/(2*sqrt(2))),E);
label("$\displaystyle{\frac{s}{\sqrt{2}}}$",(10,s/(2*sqrt(2))),E);
label("$s$",(10,5),E);
draw(rightanglemark((10,0),(10,10),(0,10)));[/asy]
Listing 8: Failure case for TORA: Inappropriate Tool Use.
Idx: 2489
Type: Intermediate Algebra
Level: Level 5
Question: ’Let $z_1,$ $z_2,$ $z_3$ be complex numbers such that $|z_1| =
2,$ $|z_2| = 3,$ and $|z_3| = 4.$ Find the largest possible value of\
n\\[|z_1 - z_2|^2 + |z_1 - z_3|^2 + |z_2 - z_3|^2.\\]’
Solution:
‘‘‘python
from sympy import symbols, simplify, Max
from sympy.functions import Abs
def max_expression_value():
"""Let $z_1,$ $z_2,$ $z_3$ be complex numbers such that $|z_1| = 2,$ $
|z_2| = 3,$ and $|z_3| = 4.$ Find the largest possible value of
\[|z_1 - z_2|^2 + |z_1 - z_3|^2 + |z_2 - z_3|^2.\]"""
# Define the variables
z1, z2, z3 = symbols(’z1 z2 z3’)
# Define the expression
expr = Abs(z1 - z2)**2 + Abs(z1 - z3)**2 + Abs(z2 - z3)**2
# Define the conditions
conditions = [Abs(z1) - 2, Abs(z2) - 3, Abs(z3) - 4]
# Find the maximum value of the expression under the conditions
max_value = Max(expr, *conditions)
# Simplify the maximum value
simplified_max_value = simplify(max_value)
return simplified_max_value
-----
result = max_expression_value()
print(result)
‘‘‘
‘‘‘output
Max(Abs(z1) - 2, Abs(z2) - 3, Abs(z3) - 4, Abs(z1 - z2)**2 + Abs(z1 - z3)
**[2 + Abs(z2 - z3)]**[2)]
‘‘‘
The largest possible value of $|z_1 - z_2|^2 + |z_1 - z_3|^2 + |z_2 - z_3
|^2$ is $\boxed{25}$.
Execution: None
Prediction: ’25’
Ground Truth: ’87’
Score: [False]
Reference CoT: ’We have that\n\\begin{align*}\nz_1 \\overline{z}_1 &= |
z_1|^2, \\\\\nz_2 \\overline{z}_2 &= |z_2|^2, \\\\\nz_3 \\overline{z}
_3 &= |z_3|^2.\n\\end{align*}Likewise,\n\\begin{align*}\n&|z_1 - z_2
|^2 + |z_1 - z_3|^2 + |z_2 - z_3|^2 \\\\\n&= (z_1 - z_2)(\\overline{
z_1 - z_2}) + (z_1 - z_3)(\\overline{z_1 - z_3}) + (z_2 - z_3)(\\
overline{z_2 - z_3}) \\\\\n&= (z_1 - z_2)(\\overline{z}_1 - \\
overline{z}_2) + (z_1 - z_3)(\\overline{z}_1 - \\overline{z}_3) + (
z_2 - z_3)(\\overline{z}_2 - \\overline{z}_3) \\\\\n&= z_1 \\overline{
z}_1 - z_1 \\overline{z}_2 - \\overline{z}_1 z_2 + z_2 \\overline{z}
_2 + z_1 \\overline{z}_1 - z_1 \\overline{z}_3 - \\overline{z}_1 z_3 +
z_1 \\overline{z}_3 + z_2 \\overline{z}_3 - z_2 \\overline{z}_3 - \\
overline{z}_2 z_3 + z_2 \\overline{z}_3 \\\\\n&= 2|z_1|^2 + 2|z_2|^2 +
2|z_3|^2 - (z_1 \\overline{z}_2 + \\overline{z}_1 z_2 + z_1 \\
overline{z}_3 + \\overline{z}_1 z_3 + z_2 \\overline{z}_3 + \\
overline{z}_2 z_3).\n\\end{align*}
...
Adding these two equations, we get\n\\[|z_1 - z_2|^2 + |z_1 - z_3|^2 + |
z_2 - z_3|^2 + |z_1 + z_2 + z_3|^2 = 3|z_1|^2 + 3|z_2|^2 + 3|z_3
|^2.\\]Therefore,\n\\begin{align*}\n|z_1 - z_2|^2 + |z_1 - z_3|^2 + |
z_2 - z_3|^2 &= 3|z_1|^2 + 3|z_2|^2 + 3|z_3|^2 - |z_1 + z_2 + z_3|^2
\\\\\n&\\le 3 \\cdot 2^2 + 3 \\cdot 3^2 + 3 \\cdot 4^2 \\\\\n&= 87.\n
\\end{align*}For equality to occur, we must have $z_1 + z_2 + z_3 = 0.
$ Without loss of generality, we can assume that $z_1 = 2.$
...
[asy]\nunitsize(1 cm);\n\npair zone, ztwo, zthree;\n\nzone = (2,0);\nztwo
= (3/4,3*sqrt(15)/4);\nzthree = (-11/4,-3*sqrt(15)/4);\n\ndraw(
Circle((0,0),2),red);\ndraw(Circle((0,0),3),green);\ndraw(Circle((0,0)
,4),blue);\ndraw(zone--ztwo--zthree--cycle);\n\ndot("$z_1$", zone, E)
;\ndot("$z_2$", ztwo, N);\ndot("$z_3$", zthree, SW);\n[/asy]\n\
nAlternative: For equality to occur, we must have $z_1 + z_2 + z_3 =
0.$ Without loss of generality, we can assume that $z_1 = 2.$ Then
$z_2 + z_3 = -2.$ Let $z_2 = x + iy$ so that $z_3 = -x - 2 - iy,$
where $x$ and $y$ are real numbers. We need\n\\begin{align*}\n |z_2
|^2 = x^2 + y^2 &= 9 \\\\\n |z_3|^2 = (x + 2)^2 + y^2 &= 16.\n\\end{
align*}Subtracting the first equation from the second, we get $4x + 4
= 7,$ or $x = \\dfrac34.$ One solution is $z_2 = \\dfrac34 + i\\
dfrac{3\\sqrt{15}}{4}$ and $z_3 = -\\dfrac{11}4 + i\\dfrac{3\\sqrt
{15}}{4}.$ This example shows that equality is possible, so the
maximum value is $\\boxed{87}.$’
Listing 9: Failure case for TORA: Syntax Error.
Idx: 106
Type: Algebra
Level: Level 5
Question: ’Below is a portion of the graph of a function, $y=h(x)$:\n\n[
asy]\nimport graph; size(8cm); real lsf=0.5; pen dps=linewidth(0.7)+
fontsize(10); defaultpen(dps); pen ds=black; real xmin=-0.75,xmax
=8.25,ymin=-1.25,ymax=10.25;\n\npen cqcqcq=rgb(0.75,0.75,0.75);\n\n/*
grid*/ pen gs=linewidth(0.7)+cqcqcq+linetype("2 2"); real gx=1,gy=1;\
-----
nfor(real i=ceil(xmin/gx)*gx;i<=floor(xmax/gx)*gx;i+=gx) draw((i,ymin)
--(i,ymax),gs); for(real i=ceil(ymin/gy)*gy;i<=floor(ymax/gy)*gy;i+=
gy) draw((xmin,i)--(xmax,i),gs);\n\nLabel laxis; laxis.p=fontsize(10)
;\n\nxaxis("",xmin,xmax,Ticks(laxis,Step=1.0,Size=2,NoZero),Arrows(6),
above=true); yaxis("",ymin,ymax,Ticks(laxis,Step=1.0,Size=2,NoZero),
Arrows(6),above=true);\nreal f1(real x){return (x-0.5)*(x-2.5)*(x-6.5)
*[(x-7.5)/16+x;}\ndraw(graph(f1,-0.25,8.25),linewidth(1));\nclip((xmin,]
ymin)--(xmin,ymax)--(xmax,ymax)--(xmax,ymin)--cycle);\nlabel("$y=h(x)
$",(8.5,8),E);\n[/asy]\n\nWhat is the sum of all integers $x$ in the
interval shown ($0\\le x\\le 8$) such that $h(x)>x$ ?’
Solution:
‘‘‘python
from sympy import symbols, solve
def sum_of_x():
"""Below is a portion of the graph of a function, $y=h(x)$:
[asy]
import graph; size(8cm); real lsf=0.5; pen dps=linewidth(0.7)+fontsize
(10); defaultpen(dps); pen ds=black; real xmin=-0.75,xmax=8.25,ymin
=-1.25,ymax=10.25;
pen cqcqcq=rgb(0.75,0.75,0.75);
/*grid*/ pen gs=linewidth(0.7)+cqcqcq+linetype("2 2"); real gx=1,gy=1;
for(real i=ceil(xmin/gx)*gx;i<=floor(xmax/gx)*gx;i+=gx) draw((i,ymin)--(i,
ymax),gs); for(real i=ceil(ymin/gy)*gy;i<=floor(ymax/gy)*gy;i+=gy)
draw((xmin,i)--(xmax,i),gs);
Label laxis; laxis.p=fontsize(10);
xaxis("",xmin,xmax,Ticks(laxis,Step=1.0,Size=2,NoZero),Arrows(6),above=
true); yaxis("",ymin,ymax,Ticks(laxis,Step=1.0,Size=2,NoZero),Arrows
(6),above=true);
real f1(real x){return (x-0.5)*(x-2.5)*(x-6.5)*(x-7.5)/16+x;}
draw(graph(f1,-0.25,8.25),linewidth(1));
clip((xmin,ymin)--(xmin,ymax)--(xmax,ymax)--(xmax,ymin)--cycle);
label("$y=h(x)$",(8.5,8),E);
[/asy]
What is the sum of all integers $x$ in the interval shown ($0\le x\le 8$)
such that $h(x)>x$ ?"""
x = symbols(’x’)
h_x = (x - 0.5) * (x - 2.5) * (x - 6.5) * (x - 7.5) / 16 + x
# Find the x-values where h(x) > x
x_values = [x for x in range(9) if h_x.subs(x, x) > x]
# Calculate the sum of these x-values
sum_x = sum(x_values)
return sum_x
result = sum_of_x()
print(result)
‘‘‘
‘‘‘output
TypeError: cannot determine truth value of Relational
‘‘‘
The code above has an error. The ‘if‘ statement can only be used with
boolean values, but the ‘h_x.subs(x, x) > x‘ comparison is not a
boolean value. We need to find a different way to solve this problem.
Let’s analyze the problem. We are asked to find the sum of all integers
$x$ in the interval shown ($0\le x\le 8$) such that $h(x)>x$.
-----
We can see that $h(x)$ is a parabola that opens upwards. For a parabola
that opens upwards, the maximum value of the parabola will occur at
the midpoint of the interval, which is $x = \frac{0 + 8}{2} = 4$.
For $x < 4$, $h(x)$ is below the line $y = x$, so $h(x) \le x$. For $x >
4$, $h(x)$ is above the line $y = x$, so $h(x) > x$.
Therefore, we only need to find the sum of the integers $x$ in the
interval $0 \le x \le 4$ such that $h(x) > x$.
Here’s the corrected code:
‘‘‘python
from sympy import symbols, solve
def sum_of_x():
"""
Find the sum of all integers x in the interval shown ($0\le x\le 8$)
such that h(x)>x.
Returns:
int: The sum of all integers x in the interval shown ($0\le x\le 8$
) such that h(x)>x.
"""
x = symbols(’x’)
h_x = (x - 0.5) * (x - 2.5) * (x - 6.5) * (x - 7.5) / 16 + x
# Find the x-values where h(x) > x and 0 <= x <= 4
x_values = [x for x in range(5) if h_x.subs(x, x) > x]
# Calculate the sum of these x-values
sum_x = sum(x_values)
return sum_x
result = sum_of_x()
print(result)
‘‘‘
‘‘‘output
SyntaxError: invalid decimal literal
‘‘‘
The code above still has an error. The ‘h_x.subs(x, x) > x‘ comparison is
not a boolean value. We need to find a different way to solve this
problem.
Let’s analyze the problem. We are asked to find the sum of all integers
$x$ in the interval shown ($0\le x\le 8$) such that $h(x)>x$.
We can see that $h(x)$ is a parabola that opens upwards. For a parabola
that opens upwards, the maximum value of the parabola will occur at
the midpoint of the interval, which is $x = \frac{0 + 8}{2} = 4$.
For $x < 4$, $h(x)$ is below the line $y = x$, so $h(x) \le x$. For $x >
4$, $h(x)$ is above the line $y = x$, so $h(x) > x$.
Therefore, we only need to find the sum of the integers $x$ in the
interval $0 \le x \le 4$ such that $h(x) > x$.
...
Here’s the corrected code:
‘‘‘python
def sum_of_x():
-----
"""
Find the sum of all integers x in the interval shown ($0\le x\le 8$)
such that h(x)>x.
Returns:
int: The sum of all integers x in the interval shown ($0\le x\le 8$
) such that h(x)>x.
"""
# Define the interval
interval = range(5)
# Find the x-values where h(x) > x
x_values = [x for x in interval if x * (x - 4) * (x - 6) * (x - 7) /
16 > x]
# Calculate the sum of these x-values
sum_x = sum(x_values)
return sum_x
result = sum_of_x()
print(result)
‘‘‘
‘‘‘output
SyntaxError: invalid decimal literal
‘‘‘
Reach max function call limit.
Execution: None
Prediction: ’SyntaxError:invaliddecimalliteral’
Ground Truth: ’26’
Score: [False]
Reference CoT: ’We could check $h(x)$ separately for each integer $x$
from $0$ to $8$: for example, $h(0)\\approx 3.8$, so $h(0)>0$, but $h
(1)\\approx -0.7$, so $h(1)\\not>1$, and so on.\n\nHowever, it is
easier to see at a glance which $x$ satisfy $h(x)>x$ by superimposing
a graph of $y=x$ on the graph of $y=h(x)$:\n\n[asy]\ndraw
((-0.75,-0.75)--(8.25,8.25),red+1);\nimport graph; size(8cm); real
lsf=0.5; pen dps=linewidth(0.7)+fontsize(10); defaultpen(dps); pen ds
=black; real xmin=-0.75,xmax=8.25,ymin=-1.25,ymax=10.25;\n\npen
cqcqcq=rgb(0.75,0.75,0.75);\n\n/*grid*/ pen gs=linewidth(0.7)+cqcqcq+
linetype("2 2"); real gx=1,gy=1;\nfor(real i=ceil(xmin/gx)*gx;i<=
floor(xmax/gx)*gx;i+=gx) draw((i,ymin)--(i,ymax),gs); for(real i=ceil
(ymin/gy)*gy;i<=floor(ymax/gy)*gy;i+=gy) draw((xmin,i)--(xmax,i),gs);\
n\nLabel laxis; laxis.p=fontsize(10);\n\nxaxis("",xmin,xmax,Ticks(
laxis,Step=1.0,Size=2,NoZero),Arrows(6),above=true); yaxis("",ymin,
ymax,Ticks(laxis,Step=1.0,Size=2,NoZero),Arrows(6),above=true);\nreal
f1(real x){return (x-0.5)*(x-2.5)*(x-6.5)*(x-7.5)/16+x;}\ndraw(graph
(f1,-0.25,8.25),linewidth(1));\nclip((xmin,ymin)--(xmin,ymax)--(xmax,
ymax)--(xmax,ymin)--cycle);\nlabel("$y=h(x)$",(8.5,8),E);\ndot((0,0),
blue); dot((3,3),blue); dot((4,4),blue); dot((5,5),blue); dot((6,6),
blue); dot((8,8),blue);\n[/asy]\n\nThe six blue dots above mark the
integer points $(x,x)$ which lie below the graph of $y=h(x)$,
indicating that $h(x)>x$. Their $x$-coordinates are $0,3,4,5,6,8,$
which add up to $\\boxed{26}$.’
Listing 10: Failure case for TORA: Runtime Error. TORA failed to fall back from program timeout
to a rationale-based solution. For Intermediate Algebra and Precalculus problems like this example,
direct brute-force solutions are often infeasible, resulting in exceptions like timeout. Addressing
these challenges requires complex reasoning over math expressions, and may involve sophisticated
problem-solving and proof techniques like forward and backward reasoning (Jiang et al., 2023), as
well as result verification (Gou et al., 2023; Zhou et al., 2023a).
Idx: 2261
-----
Yype: Intermediate Algebra
Level: Level 3
Question: ’For some positive integers $m$ and $n,$\n\\[\\sqrt{23 + \\sqrt
{28}} + \\sqrt{23 - \\sqrt{28}} = \\sqrt{m + \\sqrt{n}}.\\]Enter the
ordered pair $(m,n).$’
Solution:
‘‘‘python
from sympy import sqrt, solve, symbols
def find_m_n():
"""For some positive integers $m$ and $n,$
\[\sqrt{23 + \sqrt{28}} + \sqrt{23 - \sqrt{28}} = \sqrt{m + \sqrt{n}}.\]
Enter the ordered pair $(m,n).$"""
m, n = symbols(’m n’)
# Define the given equation
equation = sqrt(23 + sqrt(28)) + sqrt(23 - sqrt(28)) - sqrt(m + sqrt(n
))
# Solve the equation for m and n
solutions = solve(equation, (m, n))
# Filter out the solutions where m and n are not positive integers
solutions = [(sol_m, sol_n) for sol_m, sol_n in solutions if sol_m > 0
and sol_n > 0 and sol_m.is_integer and sol_n.is_integer]
return solutions[0]
result = find_m_n()
print(result)
‘‘‘
‘‘‘output
TypeError: cannot determine truth value of Relational
‘‘‘
The error in the code above is due to the fact that the ‘solve‘ function
in SymPy does not handle square roots very well. Instead, we can
manually solve the equation for $m$ and $n$ using the properties of
square roots.
Let’s denote $\sqrt{23 + \sqrt{28}}$ as $a$ and $\sqrt{23 - \sqrt{28}}$
as $b$. Then the given equation can be rewritten as $a + b = \sqrt{m +
\sqrt{n}}$.
Squaring both sides, we get $a^2 + 2ab + b^2 = m + \sqrt{n}$.
Since $a^2 = 23 + \sqrt{28}$ and $b^2 = 23 - \sqrt{28}$, we can
substitute these values into the equation to get $28 + 2\sqrt{28} +
23 - \sqrt{28} = m + \sqrt{n}$.
Solving this equation for $m$ and $n$ gives us $m = 49$ and $n = 72$.
Therefore, the ordered pair $(m, n)$ is $\boxed{(49, 72)}$.
Execution: None
Prediction: (49,72)
Ground Truth: (46,2004)
Score: [False]
Reference CoT: ’Let $x = \\sqrt{23 + \\sqrt{28}} + \\sqrt{23 - \\sqrt
{28}}.$ Then\n\\begin{align*}\nx^2 &= 23 + \\sqrt{28} + 2 \\sqrt{23 +
\\sqrt{28}} \\sqrt{23 - \\sqrt{28}} + 23 - \\sqrt{28} \\\\\n&= 46 +
2 \\sqrt{23^2 - 28} \\\\\n&= 46 + 2 \\sqrt{501} \\\\\n&= 46 + \\sqrt
{2004}.\n\\end{align*}Hence, $(m,n) = \\boxed{(46,2004)}.$’
Listing 11: Failure case for TORA: Rationale-only Error.
-----
Idx: 2638
Type: Intermediate Algebra
Level: Level 1
Question: ’The graph of $y = f(x)$ is shown below.\n\n[asy]\nunitsize(0.5
cm);\n\nreal func(real x) {\n real y;\n if (x >= -3 && x <= 0) {y =
-2 - x;}\n if (x >= 0 && x <= 2) {y = sqrt(4 - (x - 2)^2) - 2;}\n if
(x >= 2 && x <= 3) {y = 2*(x - 2);}\n return(y);\n}\n\nint i, n;\n\
nfor (i = -5; i <= 5; ++i) {\n draw((i,-5)--(i,5),gray(0.7));\n draw
((-5,i)--(5,i),gray(0.7));\n}\n\ndraw((-5,0)--(5,0),Arrows(6));\ndraw
((0,-5)--(0,5),Arrows(6));\n\nlabel("$x$", (5,0), E);\nlabel("$y$",
(0,5), N);\n\ndraw(graph(func,-3,3),red);\n\nlabel("$y = f(x)$",
(3,-2), UnFill);\n[/asy]\n\nWhich is the graph of $y = -f(x)$?\n\n[
asy]\nunitsize(0.5 cm);\n\npicture[] graf;\nint i, n;\n\nreal func(
real x) {\n real y;\n if (x >= -3 && x <= 0) {y = -2 - x;}\n if (x >=
0 && x <= 2) {y = sqrt(4 - (x - 2)^2) - 2;}\n if (x >= 2 && x <= 3) {
y = 2*(x - 2);}\n return(y);\n}\n\nreal funcb(real x) {\n return(func(x));\n}\n\nreal funcd(real x) {\n return(-func(-x));\n}\n\nreal
funce(real x) {\n return(func(-x));\n}\n\nfor (n = 1; n <= 5; ++n) {\
n graf[n] = new picture;\n for (i = -5; i <= 5; ++i) {\n draw(graf[n
],(i,-5)--(i,5),gray(0.7));\n draw(graf[n],(-5,i)--(5,i),gray(0.7));\
n }\n draw(graf[n],(-5,0)--(5,0),Arrows(6));\n draw(graf[n],(0,-5)
--(0,5),Arrows(6));\n\n label(graf[n],"$x$", (5,0), E);\n label(graf[
n],"$y$", (0,5), N);\n}\n\ndraw(graf[1],(-5,3)--(-2,0),red);\ndraw(
graf[1],arc((-2,2),2,270,360),red);\ndraw(graf[1],(0,2)--(2,4),red);\
ndraw(graf[2],graph(funcb,-3,3),red);\ndraw(graf[3],(-3,2)--(-2,0),
red);\ndraw(graf[3],arc((-2,-2),2,0,90),red);\ndraw(graf[3],(0,-2)
--(3,-5),red);\ndraw(graf[4],graph(funcd,-3,3),red);\ndraw(graf[5],
graph(funce,-3,3),red);\n\nlabel(graf[1], "A", (0,-6));\nlabel(graf
[2], "B", (0,-6));\nlabel(graf[3], "C", (0,-6));\nlabel(graf[4], "D",
(0,-6));\nlabel(graf[5], "E", (0,-6));\n\nadd(graf[1]);\nadd(shift
((12,0))*(graf[2]));\nadd(shift((24,0))*(graf[3]));\nadd(shift((6,-12)
)*(graf[4]));\nadd(shift((18,-12))*(graf[5]));\n[/asy]\n\nEnter the
letter of the graph of $y = -f(x).$’
Solution:
The correct graph is the one that reflects the original graph y = f (x)
about the y-axis. This is graph D.
So, the answer is $\boxed{\text{D}}$.
Execution: None
Prediction: ’D’
Ground Truth: ’B’
Score: [False]
Reference CoT: ’The graph of $y = -f(x)$ is the reflection of the graph
of $y = f(x)$ in the $x$-axis. The correct graph is $\\boxed{\\text{B
}}.$’
Listing 12: Failure case for TORA: False Negative.
Idx: 2228
Type: Intermediate Algebra
Level: Level 3
Question: ’Find all the real roots of\n\\[(12x - 1)(6x - 1)(4x - 1)(3x 1) = 5.\\]Enter all the real roots, separated by commas.’
Solution:
‘‘‘python
from sympy import symbols, solve, simplify
def find_real_roots():
"""Find all the real roots of
\[(12x - 1)(6x - 1)(4x - 1)(3x - 1) = 5.\]Enter all the real roots,
separated by commas."""
x = symbols(’x’)
equation = (12*x - 1) * (6*x - 1) * (4*x - 1) * (3*x - 1) - 5
-----
roots = solve(equation, x)
real_roots = [root for root in roots if root.is_real]
return real_roots
result = find_real_roots()
print(result)
‘‘‘
‘‘‘output
[-1/12, 1/2]
‘‘‘
The real roots of the equation are $\boxed{-\frac{1}{12}, \frac{1}{2}}$.
Execution: None
Reference CoT: ’We can expand using the pairs $(3x - 1)(12x - 1) = 36x^2
Prediction: ’− 12[1] _[,][ 1]2_ [’Ground Truth: ’][ 1]2 _[,][ −]_ 12[1] [’Score: [False]]
15x + 1$ and $(6x - 1)(4x - 1) = 24x^2 - 10x + 1,$ so\n\\[(36x^2 15x + 1)(24x^2 - 10x + 1) = 5.\\]Let $y = 12x^2 - 5x.$ Then\n\\[(3y +
1)(2y + 1) = 5.\\]This simplifies to $6y^2 + 5y - 4 = 0,$ which
factors as $(2y - 1)(3y + 4) = 0.$ Hence, $y = \\frac{1}{2}$ or $y =
-\\frac{4}{3}.$\n\nIf $12x^2 - 5x = \\frac{1}{2},$ then $24x^2 - 10x 1 = 0,$ which factors as\n\\[(2x - 1)(12x + 1) = 0.\\]Hence, $x = \\
frac{1}{2}$ or $x = -\\frac{1}{12}.$\n\nIf $12x^2 - 5x = -\\frac
{4}{3},$ then\n\\[36x^2 - 15x + 4 = 0,\\]which has no real solutions.\
n\nTherefore, the real roots are $\\boxed{\\frac{1}{2}, -\\frac
{1}{12}}.$’
-----
| [
"Zhibin, Gou",
"Zhihong, Shao",
"Nan, Duan",
"Yeyun, Gong",
"Yelong, Shen",
"Yujiu, Yang",
"Minlie, Huang",
"Weizhu, Chen"
] | 2023-10-04T00:00:00 | ICLR 2024 Poster | true | 74 | 9 | null | http://arxiv.org/abs/2309.17452 | https://arxiv.org/abs/2309.17452 | https://www.semanticscholar.org/paper/b272513916b45c8517d289d7abee4a53e6832187 |
Solving Math Word Problems by Combining Language Models With Symbolic Solvers | Automatically generating high-quality step-by-step solutions to math word problems has many applications in education. Recently, combining large language models (LLMs) with external tools to perform complex reasoning and calculation has emerged as a promising direction for solving math word problems, but prior approaches such as Program-Aided Language model (PAL) are biased towards simple procedural problems and less effective for problems that require declarative reasoning. We propose an approach that combines an LLM that can incrementally formalize word problems as a set of variables and equations with an external symbolic solver that can solve the equations. Our approach achieves comparable accuracy to the original PAL on the GSM8K benchmark of math word problems and outperforms PAL by an absolute 20% on ALGEBRA, a new dataset of more challenging word problems extracted from Algebra textbooks. Our work highlights the benefits of using declarative and incremental representations when interfacing with an external tool for solving complex math word problems. Our data and prompts are publicly available at https://github.com/joyheyueya/declarative-math-word-problem. | This work proposes an approach that combines an LLM that can incrementally formalize word problems as a set of variables and equations with an external symbolic solver that can solve the equations. | # Solving Math Word Problems by Combining Language Models With Symbolic Solvers
**Joy He-Yueya, Gabriel Poesia, Rose E. Wang, Noah D. Goodman**
Stanford University
```
{heyueya, poesia, rewang, ngoodman}@stanford.edu
```
**Abstract**
Automatically generating high-quality step-by-step solutions to math word problems has many applications in education. Recently, combining large language
models (LLMs) with external tools to perform complex reasoning and calculation
has emerged as a promising direction for solving math word problems, but prior
approaches such as Program-Aided Language model (PAL) are biased towards
simple procedural problems and less effective for problems that require declarative
reasoning. We propose an approach that combines an LLM that can incrementally formalize word problems as a set of variables and equations with an external
symbolic solver that can solve the equations. Our approach achieves comparable
accuracy to the original PAL on the GSM8K benchmark of math word problems
and outperforms PAL by an absolute 20% on ALGEBRA, a new dataset of more
challenging word problems extracted from Algebra textbooks. Our work highlights
the benefits of using declarative and incremental representations when interfacing
with an external tool for solving complex math word problems. Our data and
[prompts are publicly available at https://github.com/joyheyueya/declarative-math-](https://github.com/joyheyueya/declarative-math-word-problem)
[word-problem.](https://github.com/joyheyueya/declarative-math-word-problem)
**1** **Introduction**
Learning to solve mathematical word problems is an important skill but can be challenging for
students. [5, 13]. A tool that can automatically generate step-by-step solutions to such problems has
the potential to provide personalized support for students working through word problems [14, 6] and
help educators with curriculum development [12].
Using few-shot prompting over large language models (LLMs) has recently emerged as a promising
approach for solving math word problems [15, 17, 7]. The chain-of-thought (COT) [15] prompting
method presents explicit intermediate reasoning steps to the LLM to further enhance its reasoning
capability. However, LLMs often struggle with performing arithmetic operations [8, 9, 15]. To
address this, [15] uses an external calculator to evaluate the arithmetic operations in the generated
reasoning steps. Program-Aided Language model (PAL) [7] extends this idea by generating Python
programs as reasoning steps, offloading all calculations to a Python interpreter. Although programs
offer a direct representation of procedures, they require special devices to represent more abstract
mathematical declarations. For example, a statement like a = b + 1 can be directly interpreted as
a variable assignment in Python if b is known, but not if b is unknown. Nonetheless, the equation
remains a valid mathematical expression even when b is unknown, suggesting that we instead want to
allow models to perform mathematical declarations beyond those that yield a procedure (for a full
example, see the problem in Figure 1).
In this work, we present an approach that combines an LLM, which can incrementally formalize
word problems as a set of variables and equations, with an external symbolic solver that can solve
the equations. Our approach achieves comparable performance to the original PAL on the GSM8K
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
Figure 1: Declarative solutions are typically more intuitive to write than procedural solutions for
challenging algebra word problems. PAL and COT try to generate procedural solutions that describe
a set of plans for achieving the goal, which are incorrect in this case. The DECLARATIVE prompting
generates a correct solution that describes the properties of the goal, which is generally more
appropriate for hard problems with no obvious procedural solutions.
[4] benchmark of math word problems. To evaluate current approaches on more challenging word
problems, we introduce ALGEBRA, a dataset of 222 word problems collected from open access
Algebra textbooks. We show that our approach outperforms PAL by an absolute 20% on ALGEBRA.
Our work highlights the effectiveness of incrementally generating declarative formalizations when
interfacing with an external tool for solving complex math word problems.
**2** **Related work**
Recent studies have explored the use of few-shot prompting over LLMs for solving math word
problems [15, 17, 7]. The chain-of-thought [15] prompting method presents explicit intermediate
reasoning steps to the LLM to improve its reasoning capability. Since LLMs often make arithmetic
errors [8, 9, 15], several prior works [15, 3] have experimented with using an external calculator to
carry out the operations generated by LLMs. This generally improves final performance by less than
5% on GSM8K. Program-Aided Language model [7] extends to more complex arithmetic by generating Python programs as reasoning steps and using a Python interpreter to perform the calculations.
However, generating Python programs carries a strong bias toward procedural calculations and does
not work well for word problems that do not have a straightforward procedural solution.
**3** **Our Approach: Equipping an LLM With an External Symbolic Solver**
Our approach for solving a math word problem consists of two steps: (1) declarative and incremental
_formalization using an LLM and (2) solving equations using a symbolic solver._
**3.1** **Declarative and incremental formalization using an LLM**
To solve a math word problem, we first use an LLM to formalize the problem as a set of variables
and equations. Recently, using few-shot prompting over LLMs has emerged as an effective approach
for natural language understanding and decomposition. Few-shot prompting is a technique that uses
LLMs to solve a task by providing the LLMs with a few demonstrations of the task as part of the
input at inference time [1]. In this technique, the demonstrations (i.e., examples of input-output pairs)
are concatenated into a prompt, which is passed to the model along with the new input to generate
-----
Figure 2: An example of a math word problem and its solution from the DECLARATIVE prompt.
Variables and equations are in red.
an output. Formally, a set of k input-output examples {(xi, yi)}i[k]=1 [are concatenated in a prompt]
_p_ (x1, y2) (x1, y2) _..._ (xk, yk) where denotes the concatenation of examples. At inference
_≡_ _||_ _||_ _||_ _||_
time, p||xtest is passed to the model where xtest denotes a new input instance, and the model attempts
to complete p||xtest by generating the output ytest.
To formalize word problems using few-shot prompting, we introduce the DECLARATIVE prompt
_p ≡_ (x1, y2)||(x1, y2)||...||(xk, yk) where xi is the word problem in natural language, and yi is the
step-by-step solution to xi. In the DECLARATIVE prompt, yi consists of interleaved natural language
statements and formal variable or equation declarations in double-square brackets. Our approach aims
to generate solutions that formalize word problems based on a set of principles listed in Appendix A.
Figure 2 shows an example used in the DECLARATIVE prompt that we created according to these
principles. To solve a new word problem, xtest, we append it to p and pass p||xtest to an LLM, which
generates ytest as the solution for xtest.
**3.2** **Solving equations using a symbolic solver**
The step-by-step solution generated by the LLM using the DECLARATIVE prompt includes the list of
variables and equations that describe the word problem but does not provide the final answer (see
Figure 2). Instead of relying on the LLM to solve the equations directly, we pass the equations to an
external symbolic solver to do the calculation. In this work, we use SymPy [11], a Python library
for symbolic computation, to algebraically solve a system of equations extracted from the solution
generated by the LLM.
**4** **Experimental Setup**
**4.1** **Datasets**
We evaluate our approach on two math word problem datasets: GSM8K [4] and a new dataset called
ALGEBRA [1]. We use the GSM8K test set, which contains 1319 math word problems at grade-school
level. To evaluate our approach on more challenging problems, we curated ALGEBRA, which consists
of 222 word problems from two open-access Algebra textbooks: Basic Algebra with Applications
([16]; released under the Creative Commons Attribution-ShareAlike license) and Elementary Algebra
2e ([10]; released under the Creative Commons Attribution license). We took every word problem that
has a solution in these textbooks. The resulting dataset includes word problems covering all topics
leading up to System of Equations, with the exception of problems related to geometry, graphing, or
inequalities.
[1The ALGEBRA dataset is publically available at https://github.com/joyheyueya/declarative-math-word-](https://github.com/joyheyueya/declarative-math-word-problem)
[problem.](https://github.com/joyheyueya/declarative-math-word-problem)
-----
**4.2** **Baselines and variants of the DECLARATIVE prompting**
We consider three methods: chain-of-thought (COT) prompting [15], Program-Aided Language model
(PAL) [7], and our DECLARATIVE prompting combined with SymPy (DECLARATIVE + SymPy).
We created two different prompts for each prompting method. The first prompt (8-shot) uses the same
set of eight examples used in prior work [15]. The second prompt (3-shot) uses three examples that
we designed to help illustrate step-by-step and declarative thinking and the formalization format we
expect.
For our DECLARATIVE prompting method, we experimented with three variants.
1. DECLARATIVE3-shot + principles + SymPy: adding the list of principles in Table 2 at the
beginning of the prompt (see an example in Figure 3a).
2. DECLARATIVE3-shot + principles: using the LLM to directly calculate the value of the goal
variable (see an example in Figure 3b).
3. ONE-STEP DECLARATIVE3-shot + SymPy: formalizing the word problem in a single step
instead of incrementally (see an example in Figure 4).
We use Codex (code-davinci-002) [2] as the LLM for all methods. We use top-1 decoding and a
temperature of 0. We set max_tokens to be 600.
**5** **Results**
GSM8K ALGEBRA
COT8-shot (original) 62.5 ± 0.16 45.3 ± 0.56
COT3-shot (ours) 58.9 ± 0.16 47.9 ± 1.18
PAL8-shot (original) 70.2 ± 0.25 51.7 ± 0.21
PAL3-shot (ours) **73.3 ± 0.13** 56.2 ± 0.21
DECLARATIVE8-shot + SymPy 64.7 -
DECLARATIVE3-shot + SymPy 66.0 ± 0.33 -
DECLARATIVE3-shot + principles + SymPy 69.4 ± 0.65 **76.3 ± 0.93**
DECLARATIVE3-shot + principles 22.4 ± 0.27 -
ONE-STEP DECLARATIVE3-shot + SymPy 57.5 ± 0.06 -
Table 1: Problem solve rate (%) on GSM8K and ALGEBRA. We report the average and standard
deviation across three runs. The highest number on each dataset is in bold. For COT and PAL, we
ran both the 8-shot prompt used in the original papers and the 3-shot prompt we created.
On GSM8K (Table 1), our 3-shot prompt leads to a better performance than the original 8-shot prompt
for PAL and DECLARATIVE. PAL outperforms DECLARATIVE across both sets of comparable examples, but using our DECLARATIVE prompting method with the 3-shot prompt (DECLARATIVE3-shot +
principles + SymPy) gives a performance equivalent to the original PAL (PAL8-shot (original)).
Interestingly, prepending the list of principles to the DECLARATIVE prompt (DECLARATIVE3-shot +
principles + SymPy) leads to a better performance on GSM8K than DECLARATIVE3-shot + SymPy.
Asking the LLM to solve the equations directly leads to a dramatic drop in accuracy (from 69.4%
to 22.4%), which highlights the benefit of using an external solver. Additionally, our DECLARATIVE prompting benefits from incremental formalization, as shown by the performance gap between
the incremental version (DECLARATIVE3-shot + principles + SymPy) and the non-incremental variant
(ONE-STEP DECLARATIVE3-shot + SymPy).
On ALGEBRA (Table 1), our approach (DECLARATIVE3-shot + principles + SymPy) achieves the
highest accuracy among all methods, outperforming PAL by an absolute 20%. The accuracy of
the original COT drops from 62.5% on GSM8K to 45.3% on ALGEBRA, which demonstrates that
problems in ALGEBRA are generally harder than those in GSM8K. The main reason that the
DECLARATIVE prompting method works better than COT and PAL on ALGEBRA is that it is less
intuitive to generate procedural solutions to Algebra problems that require declarative reasoning (see
-----
an example in Figure 1). Although our 3-shot prompt improves the performance of COT and PAL on
ALGEBRA compared to the original 8-shot prompt, our DECLARATIVE method is still much more
effective than COT and PAL.
**6** **Conclusion**
We present an approach for automatically generating step-by-step solutions to math word problems
by equipping an LLM with an external symbolic solver. Our approach uses an LLM to incrementally
formalize word problems as variables and equations and avoids arithmetic errors by using an external
symbolic solver that can solve the equations. Our approach achieves comparable accuracy to the
original PAL on GSM8K and improves over PAL by an absolute 20% on a new dataset consisting of
harder word problems from Algebra textbooks. We demonstrate the effectiveness of using declarative
formalization when interfacing with an external tool for solving complex math word problems.
Additionally, encouraging incremental formalization is beneficial, especially when using declarative
representations. Our approach is particularly useful for math education since many advanced math
problems can be divided into separate conceptual pieces, with one piece being declarative and the
other involving procedural knowledge.
**References**
[1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural
_information processing systems, 33:1877–1901, 2020._
[2] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda,
N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv
_preprint arXiv:2107.03374, 2021._
[3] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W.
Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv
_preprint arXiv:2204.02311, 2022._
[4] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek,
J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint
_arXiv:2110.14168, 2021._
[5] D. D. Cummins. Children’s interpretations of arithmetic word problems. Cognition and
_instruction, 8(3):261–289, 1991._
[6] J. del Olmo-Muñoz, J. A. González-Calero, P. D. Diago, D. Arnau, and M. Arevalillo-Herráez.
Intelligent tutoring systems for word problem solving in covid-19 days: could they have been
(part of) the solution? ZDM–Mathematics Education, pages 1–14, 2022.
[7] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig. Pal: Programaided language models. arXiv preprint arXiv:2211.10435, 2022.
[8] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the math dataset. _arXiv preprint_
_arXiv:2103.03874, 2021._
[9] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone,
C. Anil, I. Schlag, T. Gutman-Solo, et al. Solving quantitative reasoning problems with language
models. arXiv preprint arXiv:2206.14858, 2022.
[10] L. Marecek, M. Anthony-Smith, and A. H. Mathis. Elementary Algebra 2E. OpenStax, 2020.
[11] A. Meurer, C. P. Smith, M. Paprocki, O. Certík, S. B. Kirpichev, M. Rocklin, A. Kumar,[ˇ]
S. Ivanov, J. K. Moore, S. Singh, et al. Sympy: symbolic computing in python. PeerJ Computer
_Science, 3:e103, 2017._
-----
[12] O. Polozov, E. O’Rourke, A. M. Smith, L. Zettlemoyer, S. Gulwani, and Z. Popovi´c. Personalized mathematical word problem generation. In Twenty-Fourth International Joint Conference
_on Artificial Intelligence, 2015._
[13] N. Pongsakdi, A. Kajamies, K. Veermans, K. Lertola, M. Vauras, and E. Lehtinen. What
makes mathematical word problem solving challenging? exploring the roles of word problem
characteristics, text comprehension, and arithmetic skills. ZDM, 52:33–44, 2020.
[14] S. Ritter, J. R. Anderson, K. R. Koedinger, and A. Corbett. Cognitive tutor: Applied research in
mathematics education. Psychonomic bulletin & review, 14:249–255, 2007.
[15] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought
prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[16] I. G. Zaigralin. Basic Algebra with Applications. Ivan G. Zaigralin, 6 edition, 2018.
[17] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, O. Bousquet, Q. Le,
and E. Chi. Least-to-most prompting enables complex reasoning in large language models.
_arXiv preprint arXiv:2205.10625, 2022._
-----
**A** **Principles for declarative solutions**
Principles for declarative solutions
1. Each sentence in the solution either introduces a new variable or states a new equation.
2. The last sentence gives the goal: which variable will contain the answer to the problem.
3. Each equation only uses previously introduced variables.
4. Each quantity is only named by one variable.
5. The solution uses all the numbers in the question.
Table 2: A list of principles we would like the solutions to satisfy.
**B** **Prompt examples**
[All the prompts used in this work are publicly available at https://github.com/joyheyueya/declarative-](https://github.com/joyheyueya/declarative-math-word-problem)
[math-word-problem.](https://github.com/joyheyueya/declarative-math-word-problem)
(a) Adding principles to the beginning of the DECLARATIVE prompt.
(b) Adding principles to the beginning of the DECLARATIVE prompt and calculating the final answer. The
final answer is in red.
Figure 3: The difference between “DECLARATIVE3-shot + principles + SymPy” and
“DECLARATIVE3-shot + principles” is that “DECLARATIVE3-shot + principles + SymPy” passes the
equations to SymPy to solve, but “DECLARATIVE3-shot + principles” asks the LLM to solve the
equations directly.
Figure 4: An example of formalizing a math word problem in a single equation.
-----
| [
"Gabriel, Poesia",
"Joy, He-Yueya",
"Noah, Goodman",
"Rose, Wang"
] | 2023-10-28T00:00:00 | null | false | 73 | 5 | null | https://openreview.net/forum?id=m7m14acWQi | https://arxiv.org/abs/2304.09102 | https://www.semanticscholar.org/paper/57100e39d0413ee585b381ba9ab366e8a6cf2866 |
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models | Fine-tuning language models~(LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar feedback, for example, on math problems where one can verify correctness. To do so, we investigate a simple self-training method based on expectation-maximization, which we call ReST$^{EM}$, where we (1) generate samples from the model and filter them using binary feedback, (2) fine-tune the model on these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS coding benchmarks using PaLM-2 models, we find that ReST$^{EM}$ scales favorably with model size and significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with feedback can substantially reduce dependence on human-generated data. | Testing on advanced MATH reasoning and APPS coding benchmarks using PaLM-2 models, it is found that ReST$^{EM}$ scales favorably with model size and significantly surpasses fine-tuning only on human data. | _2024-4-19_
# Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
**Avi Singh[1,*], John D Co-Reyes[1,*], Rishabh Agarwal[1,2,*],**
**Ankesh Anand[1], Piyush Patil[1], Xavier Garcia[1], Peter J. Liu[1], James Harrison[1], Jaehoon Lee[1], Kelvin Xu[1],**
**Aaron Parisi[1], Abhishek Kumar[1], Alex Alemi[1], Alex Rizkowsky[1], Azade Nova[1], Ben Adlam[1], Bernd Bohnet[1],**
**Gamaleldin Elsayed[1], Hanie Sedghi[1], Igor Mordatch[1], Isabelle Simpson[1], Izzeddin Gur[1], Jasper Snoek[1],**
**Jeffrey Pennington[1], Jiri Hron[1], Kathleen Kenealy[1], Kevin Swersky[1], Kshiteej Mahajan[1], Laura Culp[1], Lechao**
**Xiao[1], Maxwell L Bileschi[1], Noah Constant[1], Roman Novak[1], Rosanne Liu[1], Tris Warkentin[1], Yundi Qian[1],**
**Yamini Bansal[1], Ethan Dyer[1], Behnam Neyshabur[1], Jascha Sohl-Dickstein[1], Noah Fiedel[1]**
*Contributed equally, 1Google DeepMind, 2 Mila
**Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However,**
**the performance of such models is often limited by the quantity and diversity of high-quality human data.**
**In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar**
**feedback, for example, on math problems where one can verify correctness. To do so, we investigate a**
**simple self-training method based on expectation-maximization, which we call ReST[𝐸𝑀], where we (1)**
**generate samples from the model and filter them using binary feedback, (2) fine-tune the model on**
**these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS**
**coding benchmarks using PaLM-2 models, we find that ReST[𝐸𝑀]** **scales favorably with model size and**
**significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with**
**feedback can reduce dependence on human-generated data.**
_Keywords: RL from external feedback, EM for RL, Language, LLMs, Reasoning, Coding, Self-Improvement_
### 1. Introduction
Large Language Models (LLMs) are revolutionizing the landscape of deep learning, showcasing
remarkable capabilities in generating human-quality text and tackling diverse language tasks (Google
et al., 2023; OpenAI, 2023). While supervised fine-tuning (SFT) on human-collected data further
boosts their performance on tasks of interest, acquiring high-quality human data poses a significant
bottleneck. This is particularly demanding for complex problem-solving tasks, requiring significant
resources and expert knowledge. To address this hurdle, model-generated synthetic data emerges as
a promising alternative, offering scalability and cost-effectiveness, provided its quality can be ensured.
While LLMs hold the potential to self-evaluate generated data, this paper explores a simpler setting
where an external, scalar feedback signal serves as a quality indicator for each generated sample.
To investigate training on model-generated data, we consider a simple yet powerful self-training
approach for language models that requires only two capabilities: 1) generating samples from the
model and 2) evaluating these samples with a scoring mechanism. This approach shares similarities
with Reinforced Self-Training (ReST) proposed by Gulcehre et al. (2023). We make some modifications
to ReST (detailed in Section 3), and call our approach ReST[𝐸𝑀]. We show that ReST[𝐸𝑀] can be viewed
as applying expectation-maximization for reinforcement learning (Dayan and Hinton, 1997; Peters
and Schaal, 2007), which we present formally in Section 3. Specifically, ReST[𝐸𝑀] alternates between
the expectation and maximization steps:
_Corresponding author(s): [email protected], [email protected], [email protected]_
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Reasoning: MATH
Code Generation: HumanEval
GPT-4
PaLM 2-L (ReST[EM])
LLaMA-2 70B
|Col1|Col2|PaLM|Col4|Col5|2-L|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||Minerva 540|B|||||||
||Minerva 62B|||||MetaM|ath 70B||
|||Wiz|||ardMath 70B|PaLM|Llemma 34B 2-S (ReSTEM)||
||Pa|LM 2-S|||Inflection-1|Grok-0|Llemma 7B (33B)||
||||||||||
Mistral 7B (maj@4)
40
35
30
25
20
15
60
50
40
30
|Wizard|Coder 15B Code L|PaL lama Python 3|M 2-L (ReSTEM) 4B|
|---|---|---|---|
|GPT-3.5 (Chat|GPT) Code L|LaMA 34B PaLM|2-S* (ReSTEM)|
|-L -S*||Grok-0|(33B)|
||LLaMA-2 70B|Inflection-1 Mistral|7B|
GPT-4
WizardCoder 15B PaLM 2-L (ReST[EM])
Code Llama Python 34B
GPT-3.5 (ChatGPT) Code LLaMA 34B
PaLM 2-S* (ReST[EM])
PaLM 2-L
PaLM 2-S* Grok-0 (33B)
Inflection-1
LLaMA-2 70B Mistral 7B
Figure 1 Self-training with ReST[𝐸𝑀] substantially improves test performance of PaLM 2 models on
|
two challenging benchmarks: MATH and HumanEval. Results for other models are shown for general
progress on these tasks and are typically not comparable due to difference in model scales. GPT-4
results are taken from Bubeck et al. (2023). The x-axis approximately denotes release time (not to
scale).
1. Generate (E-step): The language model generates multiple output samples for each input
context. Then, we filter these samples using a binary reward to collect the training dataset.
2. Improve (M-step): The original language model is supervised fine-tuned on the training
dataset from the previous Generate step. The fine-tuned model is then used in the next
```
Generate step.
```
ReST[𝐸𝑀], with its various adaptations (Section 4), has demonstrated success in enhancing language
models across diverse domains, including machine translation (Gulcehre et al., 2023; Norouzi et al.,
2016), semantic parsing (Agarwal et al., 2019), preference alignment (Dong et al., 2023), and elementary reasoning (Yuan et al., 2023; Zelikman et al., 2022). However, prior works primarily applied
training with self-generated data to relatively small language models (up to 7B parameters), with
limited scalability observed for larger models (Yuan et al., 2023). Complementing these efforts, our
work aims to investigate the effectiveness and scalability of model-generated synthetic data compared
to human-generated data in two challenging, less explored domains: competition-level mathematical
problem-solving (MATH) (Hendrycks et al., 2021b) and code generation (APPS) (Hendrycks et al.,
2021a).
Our empirical findings reveal significant advancements in both mathematical reasoning and code
generation capabilities when applying ReST[𝐸𝑀] to PaLM 2 models of varying scales (Figure 1). Notably,
models fine-tuned on model-generated synthetic data exhibit remarkably larger performance gains
compared to those trained on human-written data (Figure 2, 3). Interestingly, exceeding a couple
of iterations of ReST[𝐸𝑀] leads to diminishing improvement, indicating potential overfitting on small
amount of training problems (Figure 4). Additionally, models fine-tuned using ReST[𝐸𝑀] improve
pass@k as well as majority voting performance. Furthermore, these fine-tuned models demonstrate
enhanced performance on related but held-out benchmarks, including math problems (GSM8K and
Hungarian HS finals), coding (HumanEval), and Big-Bench Hard tasks. We also perform ablation
studies to investigate the effect of number of model-generated solutions, training problems, and
iterations for ReST[𝐸𝑀] fine-tuning. Overall, our findings suggest self-training with feedback as a
promising approach to reduce dependence on human data.
The key contributions of this work are:
- We introduce ReST[𝐸𝑀] that enables learning from self-generated data for LLMs, employing a
Published in Transactions on Machine Learning Research (04/2024) 2
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
principled expectation-maximization approach within a reinforcement learning framework.
- We demonstrate that training on self-generated solutions surpasses training on human-generated
solutions in problem-solving domains, such as mathematics and code generation.
- Through comprehensive ablation studies, we pinpoint the crucial elements necessary for attaining
optimal performance.
- LLMs fine-tuned with ReST[𝐸𝑀] exhibit robust transfer capabilities across various held-out tasks.
### 2. Preliminaries
An autoregressive language model produces an output sequence 𝒚 = ( _𝑦1, 𝑦2, ....𝑦𝑇_ ) given a context (or
source input) 𝒙 = (𝑥1, 𝑥2, ...𝑥𝐿), where the tokens 𝑥𝑙, 𝑦𝑡 belong to a fixed vocabulary. Auto-regressive
generation involves predicting tokens one at a time, based on the previously generated tokens.
Assuming that the model is parameterized by 𝜃, the conditional probability distribution of generating
a sequence 𝒚 given 𝒙 is
_𝑇_
_𝑝𝜃_ _𝒚_ _𝒙_ = _𝑝𝜃_ _𝑦𝑡_ _𝒚<𝑡, 𝒙_ _,_
( | ) ( | )
_𝑡=1_
Ö
with the convention 𝒚1:0 = ∅ and 𝒚1:𝑡−1 = ( _𝑦1, 𝑦2, ....𝑦𝑡−1). For ease of notation, we define 𝑝(_ _𝑦𝑡_ |𝑥) :=
_𝑝_ _𝑦𝑡_ _𝑦<𝑡, 𝑥_ . The probability of predicting 𝑡[𝑡ℎ] token 𝑦𝑡, 𝑝 _𝑦𝑡_ _𝑥_, is determined using a softmax with
temperature( | ) _𝛾: 𝑝_ _𝑦𝑡_ _𝑥_ = _𝑀exp(𝑧𝑡/𝛾)_ ( | )
( | ) _𝑖=1_ [exp][(][𝑧][𝑖][/][𝛾][)][, where][ 𝑧][𝑡] [is the logit score for the token][ 𝑦][𝑡][. Higher values of]
temperature 𝛾 introduces more randomness, while a lower value makes the output more deterministicÍ
by favoring the most probable words.
Given a dataset of inputs 𝒙 and human-generated outputs 𝒚, supervised fine-tuning (SFT)
D
trains the policy by minimizing the negative log likelihood loss:
_𝑇_
LSFT(𝜃) = −𝔼(𝒙,𝒚 )∼D log 𝑝𝜃( _𝑦𝑡_ | 𝒚1:𝑡−1, 𝒙) _._ (1)
" _𝑡=1_ #
∑︁
We also assume access to a deterministic sequence-level (or terminal) reward 𝑟 _𝒙, 𝒚_ . Then, the
( )
reinforcement learning (RL) objective corresponds to:
LRL(𝜃) = 𝔼𝒙∼D 𝔼𝒚∼𝑝𝜃 ( _𝒚_ | _𝒙) [𝑟(𝒙, 𝒚)]_ _._
Optimizing RL loss directly using online RL methods, such as policy gradients, requires updating
L
and sampling from the policy numerous times during training. However, the computational cost of
fine-tuning on a continual flow of new samples becomes a limitation of online methods, especially
when the sizes of the policy network grow to tens or hundreds of billion parameters. We discuss an
alternative to such online RL approaches in the next section.
### 3. Expectation-Maximization for Reinforced Self-Training
**Expectation-Maximization (EM) for RL** We first describe the EM-based framework for RL with
language models, building upon the prior work by Dayan and Hinton (1997). Let’s define a binary
optimality variable O, such that 𝑝 _𝑂_ = 1 _𝒙, 𝒚_ _𝑓_ _𝑟_ _𝒙, 𝒚_, for some non-decreasing non-negative
( | ) ∝ ( ( ))
function 𝑓 : ℝ ℝ[+]. We want to maximize the log-likelihood of observing 𝑂 = 1 (obtaining high
→
reward):
log 𝑝 _𝑂_ = 1 _𝒙_ := log _𝑝𝜃_ _𝒚_ _𝒙_ _𝑝_ _𝑂_ = 1 _𝒙, 𝒚_ _._
( | ) ( | ) ( | )
_𝒚_
∑︁
Published in Transactions on Machine Learning Research (04/2024) 3
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
However, the sum over all possible sequences 𝒚 is typically intractable. Instead of maximizing
log 𝑝 _𝑂_ = 1; 𝒙, one can consider maximizing its ELBO 𝐿 _𝑝𝜃, 𝑞_ with respect to parameters 𝜃 and
( ) ( )
variational distribution 𝑞 _𝑦_ _𝑥_ . Specifically,
( | )
_𝑝_ _𝑂_ = 1 _𝒙, 𝒚_ _𝑝𝜃_ _𝒚_ _𝒙_
log 𝑝 _𝑂_ = 1 _𝒙_ = log 𝔼𝑞 _𝒚_ _𝒙_ ( | ) ( | )
( | ) ( | ) _𝑞_ _𝒚_ _𝒙_
( | )
𝔼𝑞 _𝒚_ _𝒙_ log _[𝑝][(][𝑂]_ [=][ 1][ |][ 𝒙][,][ 𝒚][)] _[𝑝][𝜃][(]_ _[𝒚][|][𝒙][)]_ Jensen’s inequality
≥ ( | ) _𝑞_ _𝒚_ _𝒙_ ( )
( | )
= 𝔼𝑞 _𝒚_ _𝒙_ log 𝑝 _𝑂_ = 1 _𝒙, 𝒚_ KL _𝑞_ _𝒚_ _𝒙_ _𝑝𝜃_ _𝒚_ _𝒙_
( | ) [ ( | )] − [ ( | )|| ( | )]
=: 𝐿 _𝑝𝜃, 𝑞_ (2)
( )
The EM algorithm (Dempster et al., 1977) for Equation 2 alternates between an E-step and M-step:
at iteration 𝑡, denote the language model parameter to be 𝜃[𝑡] and the variational distribution to be 𝑞[𝑡].
- E-step: 𝑞[𝑡][+][1] = arg max𝑞 _𝐿_ _𝑝𝜃[𝑡]_ _, 𝑞_ . Since 𝐿 _𝑝𝜃[𝑡]_ _, 𝑞_ can be written as _𝐾𝐿_ _𝑞_ _𝒚_ _𝒙_ _𝑞[∗]_ _𝒚_ _𝒙_, 𝑞[𝑡][+][1] _𝒚_
( ) ( ) − [ ( | )|| ( | )] ( |
_𝒙_ _𝑞[∗]_ _𝒚_ _𝒙_ := 𝑝 _𝑂_ = 1 _𝒙, 𝒚_ _𝑝𝜃[𝑡]_ _𝒚_ _𝒙_ . Thus, this step is equivalent to weighting the output
) ∝ ( | ) ( | ) ( | )
samples from conditional language model distribution based on their likelihood of obtaining
high rewards.
- M-step: 𝜃[𝑡][+][1] := arg max𝜃 _𝐿_ _𝑝𝜃, 𝑞[𝑡][+][1]_ = arg min𝜃 KL _𝑞[𝑡][+][1]_ _𝒚_ _𝒙_ _𝑝𝜃_ _𝒚_ _𝒙_ = arg min𝜃 _𝒚_ [−][𝑞][𝑡][+][1][(] _[𝒚]_ [|]
( ) ( | )|| ( | )
_𝒙_ log 𝑝𝜃 _𝒚_ _𝒙_ . As such, this step corresponds to maximizing a weighted negative log-likelihood
) ( | ) Í
loss.
Alternating between above steps ensures a monotonic improvement in the ELBO: 𝐿 _𝑝𝜃𝑡+1, 𝑞[𝑡][+][1]_
( ) ≥
_𝐿_ _𝑝𝜃[𝑡]_ _, 𝑞[𝑡][+][1]_ _𝐿_ _𝑝𝜃[𝑡]_ _, 𝑞[𝑡]_ .
( ) ≥ ( )
**EM with non-negative rewards. If the rewards are non-negative and 𝑓** is set to the identity
function, then 𝑝 _𝑂_ = 1 _𝒙, 𝒚_ _𝑟_ _𝒙, 𝒚_ which implies 𝑞[𝑡][+][1] _𝒚_ _𝒙_ _𝑟_ _𝒙, 𝒚_ _𝑝𝜃[𝑡]_ _𝒚_ _𝒙_ . In this scenario,
( | ) ∝ ( ) ( | ) ∝ ( ) ( | )
the updated policy parameters 𝜃[𝑡][+][1] resulting from the M-step at iteration 𝑡 are given by:
_𝜃[𝑡][+][1]_ := arg max𝜃 𝔼𝑥∼D 𝔼𝒚∼𝑝𝜃[𝑡] [(] _[𝒚]_ [|] _[𝒙][)][ [][𝑟][(][𝒙][,][ 𝒚][)][ log][ 𝑝][𝜃][(]_ _[𝒚]_ [|][ 𝒙][)]] _._ (3)
h i
Comparing the above equation with the typical RL objective (LRL) reveals the key distinction
between standard RL and EM-based RL: how output data is sampled. Standard RL continuously
updates the policy and uses this latest policy to collect data. In contrast, EM-based RL employs a fixed
sampling policy from the previous iteration, decoupling data collection from policy optimization. This
decoupling in EM-based approaches enables easier scaling to large policy networks, such as LLMs.
**ReST[𝐸𝑀]** Motivated by the EM framework, we now discuss a simplified version of Reinforced SelfTraining (ReST) approach by Gulcehre et al. (2023). This approach, which we call ReST[𝐸𝑀], decouples
data collection (E-step) and policy optimization (M-step) in a typical RL pipeline. Algorithm 1 outlines
the ReST[𝐸𝑀] algorithm with multiple iterations, where each iteration corresponds to one Generate
and Improve step. We describe these steps in detail below.
- Generate (E-step): In this step, we generate a dataset _𝑖_ by sampling many output sequences
D
from the current policy 𝑝𝜃: D𝑖 = { (𝒙 _[𝑗], 𝒚_ _[𝑗])|_ _[𝑁]𝑗=1_ [s.t.][ 𝒙] _[𝑗]_ [∼D][,][ 𝒚] _[𝑗]_ [∼] _[𝑝][𝜃][(]_ _[𝒚][|][𝒙]_ _[𝑗][) }][. Here, the inputs]_
are resampled from the original dataset 𝒙 _[𝑗]_ . The output sequences in _𝑖_ are then scored
∼D D
with a binary reward function 𝑟 _𝒙, 𝒚_ . In our experiments, we condition the language model
( )
using a few-shot prompt with programs for code generation and step-by-step solutions for math
problems.
Published in Transactions on Machine Learning Research (04/2024) 4
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
**Algorithm 1: ReST (Expectation-Maximization). Given a initial policy (e.g., pre-trained**
LM), ReST[𝐸𝑀] iteratively applies Generate and Improve steps to update the policy.
**Input:** : Training dataset, _𝑣𝑎𝑙: Validation dataset,_ _𝒙, 𝒚; 𝜃_ : loss, 𝑟 _𝒙, 𝒚_ : Non-negative
D D L( ) ( )
reward function, 𝐼: number of iterations, 𝑁: number of samples per context
**for 𝑖** = 1 to 𝐼 **do**
```
// Generate (E-step)
```
Generate dataset D𝑖 by sampling: D𝑖 = { (𝒙 _[𝑗], 𝒚_ _[𝑗])|_ _[𝑁]𝑗=1_ [s.t.][ 𝒙] _[𝑗]_ [∼D][,][ 𝒚] _[𝑗]_ [∼] _[𝑝][𝜃][(]_ _[𝒚][|][𝒙]_ _[𝑗][) }]_
Annotate _𝑖_ with the reward 𝑟 _𝒙, 𝒚_ .
D ( )
```
// Improve (M-step)
```
**while reward improves on** _𝑣𝑎𝑙_ **do**
D
Optimise 𝜃 to maximize objective: 𝐽 _𝜃_ = 𝔼 _𝒙,𝒚_ _𝑖_ _𝑟_ _𝒙, 𝒚_ log 𝑝𝜃 _𝒚_ _𝒙_
( ) ( )∼D [ ( ) ( | )]
**end**
**end**
**Output: Policy 𝑝𝜃**
- Improve (M-step): In the 𝑖[𝑡ℎ] iteration, we use the new dataset _𝑖_ from Generate step to
D
fine-tune the policy 𝑝𝜃. To mitigate task-specific over-fitting, we minimize drift from the base
model by always fine tuning the base pretrained language model. For fine-tuning, we minimize
the reward-weighted negative log-likelihood loss 𝐽 _𝜃_ = 𝔼 _𝒙,𝒚_ _𝑖_ _𝑟_ _𝒙, 𝒚_ log 𝑝𝜃 _𝒚_ _𝒙_ . Once
( ) ( )∼D [ ( ) ( | )]
the policy is improved, a new dataset of better quality samples can be created once again.
_Differences with ReST (Gulcehre et al., 2023). Unlike ReST, we refrain from augmenting_ _𝑖_ in
D
```
Generate step with human-generated outputs as such data may not always be optimal for learning
```
or it might not be easily available. Furthermore, each Improve step fine-tunes the base model instead
of the model obtained from the previous ReST iteration. This results in comparable task-specific
performance but much better transfer performance on held-out tasks (see Figure 7).
_Remark. Our experiments focus on problem-solving settings with binary rewards (either 0 or 1),_
unlike the bounded real-valued rewards assumed by Gulcehre et al. (2023). Specifically, for each
```
Generate step, Gulcehre et al. (2023) perform multiple Improve steps, where each Improve step
```
can be viewed as an M-step with the function 𝑓 _𝑟_ _𝒙, 𝒚_ = 𝑟 _𝒙, 𝒚_ _> 𝜏, where 𝜏_ ℝ[+] increases in
( ( )) ( ) ∈
successive M-steps. However, with binary rewards, any value of 𝜏 0, 1 corresponds to the identical
∈( )
```
Improve steps.
### 4. Related work
```
Several prior methods can be instantiated using the expectation-maximization framework presented
in Section 3. We discuss methods and their relation to ReST[𝐸𝑀] in this section.
- Expert Iteration (ExiT) (Anthony et al., 2017) alternates between two steps: expert improvement and policy distillation. During the expert improvement step (E-step), we combine a base
policy with a search procedure to generate samples from a better policy, called the expert policy.
Then, in the policy distillation step (M-step), we use these expert samples to train the base
policy in a supervised way, effectively improving it to match the expert policy. While ExiT used
monte-carlo tree-search, we simply use temperature sampling for collecting samples from the
expert policy in ReST. That said, improving the E-step in ReST using the ExIT framework via
search and planning procedures with language models would be interesting for future work. For
example, Huang et al. (2022) implement a single iteration of ReST[𝐸𝑀] on simple math reasoning
Published in Transactions on Machine Learning Research (04/2024) 5
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
problems. However, unlike our setup, they do not assume access to a correctness reward and
instead employ majority-voting (Wang et al., 2023) as a search procedure within the E-step.
- Self-Taught Reasoner (STaR) (Zelikman et al., 2022) employed greedy decoding instead of
temperature sampling for the E-step in ReST[𝐸𝑀], which is restricted to one model-generated
solution per problem during data collection. Additionally, STaR proposed rationalization as an
alternative to temperature sampling, where the language model is provided with the correct
answer as part of the input to generate correct solutions for difficult problems. However, in our
preliminary experiments, rationalization leads to substantial increase in false positive solutions
that result in correct answer but with incorrect reasoning.
- Rejection Sampling Fine-tuning (RFT) (Yuan et al., 2023) improves reasoning performance
on GSM8K and corresponds to running a single generate (E-step) and improve (M-step) of
ReST[𝐸𝑀]. While RFT demonstrated limited performance improvements on GSM8K with increasing
language model capacity, ReST[𝐸𝑀] achieves larger gains on more challenging APPS and MATH
benchmarks when scaling PaLM 2 model capacity. Moreover, we observe that using multiple
iterations of ReST[𝐸𝑀] result in larger performance gains.
- Iterative Maximum Likelihood (IML) optimizes a policy using a reward-weighted log-likelihood
objective on self-collected data. IML has been shown to perform well with relatively small-scale
language models for semantic parsing (Agarwal et al., 2019; Liang et al., 2016), machine
translation (Wu et al., 2016) and simple math reasoning (Ni et al., 2022). Each E-step and
M-step in IML is performed over a mini-batch of training examples instead of the entire training
dataset, as done in ReST[𝐸𝑀]. In IML, the learned policy can significantly diverge from the initial
pretrained model, which can manifest as task-specific overfitting, where the model performs
well on the target task but loses its ability to generalize to other tasks or domains. Additionally,
the tightly coupled nature of data collection and policy optimization in IML leads to high
computational cost with large LMs, making it significantly more expensive than ReST[𝐸𝑀].
- Reward weighted regression (RWR) (Peters and Schaal, 2007) corresponds to EM where
we set 𝑝 _𝑂_ = 1 _𝒙, 𝒚_ exp _𝑟_ _𝒙, 𝒚_ in Section 3. RWR has been previously applied to robotic
( | ) ∝ ( ( ))
control, as it can be easily applied to non-binary reward functions. Norouzi et al. (2016) build
on RWR to propose a general variant of IML for machine translation.
- Reward ranked fine-tuning (RAFT) (Dong et al., 2023) can be interpreted as alternating
between E-step and M-step over mini-batches, where E-step uses the the output sample with
maximum reward for each input context. For binary reward functions, RAFT is analogous to
IML and as such, can be viewed as an instantiation of ReST[𝐸𝑀].
**Other related works: TRICE (Phan et al., 2023) proposes an EM-based approach to maximize**
the marginal log-likelihood (MML) of generating a correct answer for a reasoning problem, where the
chain-of-thought rationale is treated as a latent variable. While E-step in ReST[𝐸𝑀] simply corresponds
to sampling from the model and filtering with a binary reward, TRICE uses Markov-chain Monte
Carlo with a control variate to approximate the MML gradient. Sordoni et al. (2023) propose a
gradient-free EM-based approach, similar to RAFT, for prompt-optimization for frozen LLMs.
Inspired by an earlier version of this manuscript, Agarwal et al. (2024) investigated if modelgenerated data can outperform human data for few-shot and many-shot prompting. They found that
this is indeed the case, especially for few-shot prompting.
Published in Transactions on Machine Learning Research (04/2024) 6
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
**ReST[𝐸𝑀]** **ReST** **STaR** **RFT**
✗ ✓ ✗ ✗
Starts from fine-tuned model
✓ ✗ ✓
Finetunes from base model in each iteration N/A
✗ ✗ ✓ ✗
Uses rationalizations for unsolved questions
✓ ✓ ✗ ✓
Temperature sampling for exploration
✓ ✗ ✗ ✓
Experiments with Large LMs
✓ ✓ ✓ ✗
Multiple iterations
✓ ✗
Larger gains on bigger models N/A N/A
✓ ✗ ✗ ✗
Evaluation on held out tasks
Table 1 Differences between ReST[𝐸𝑀] and other closely related approaches utilizing synthetic data
|
for advancing language model capabilities.
### 5. Experiments and analysis
The goal of our experiments is to answer the following questions:
1. How effective is ReST[𝐸𝑀] compared to fine-tuning on human-generated data?
2. How many iterations are needed for optimal performance? How quickly does ReST[𝐸𝑀] leads to
overfitting on training set?
3. How does ReST[𝐸𝑀] affect pass@k and majority voting performance?
4. If we fine-tune using model-generated data on a specific task, do we see positive transfer
to related tasks? Is there any performance degradation compared to the base model when
evaluating our fine-tuned models on a broad suite of tasks?
5. How much input data do we need to get most of the performance gains from ReST[𝐸𝑀]? Is one
iteration of ReST[𝐸𝑀] sufficient?
**Training Datasets. We evaluate ReST[𝐸𝑀]** primarily on mathematical problem solving using the
Hendrycks’ MATH dataset (Hendrycks et al., 2021b) and code generation using the APPS (Introductory)
dataset (Hendrycks et al., 2021a). MATH and APPS (Introductory) contain 7500 and 2342 training
problems respectively. We select these tasks because the model outputs can be automatically evaluated
as correct or incorrect, perfectly suited for ReST[𝐸𝑀]. Both these datasets offer binary rewards: on
MATH, model-generated answers can be easily verified for correctness using the ground-truth answer,
while on APPS, test cases determine whether the generated code is correct.
**Models. We use the PaLM 2 models (Google et al., 2023) with public APIs on Google Cloud for**
experiments, including PaLM 2-S (Bison), PaLM 2-S* (Codey), and PaLM 2-L (Unicorn).
**Evaluation. We report generalization performance using the test splits of the MATH and APPS**
(Introductory) datasets. For measuring transfer performance, we look at GSM8K (Cobbe et al., 2021),
Hungarian HS finals (Paster, 2023), and HumanEval (Chen et al., 2021) datasets. We also evaluate
our models using the Big-Bench Hard (Suzgun et al., 2022) benchmark to evaluate general capabilities.
All evaluations follow the settings from Google et al. (2023), unless specified otherwise.
**Implementation Details. During each iteration of ReST[𝐸𝑀], we generated a fixed number of**
solutions per problem for the E-step: 32 for the MATH dataset and 64 for the APPS dataset. For
generating solutions, we sample from the language model using top-K sampling with K=40 and
temperature of 0.7. However, directly using all these model-generated solutions can lead to an
imbalanced dataset, as we will have a lot more correct solutions for the easier problems. To mitigate
this, we introduced a cut-off threshold for the maximum number of solutions per problem, a design
Published in Transactions on Machine Learning Research (04/2024) 7
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Hendrycks MATH
1 2
Num iterations
Transfer to GSM8K
1 2
Num iterations
40
35
30
25
20
80
70
60
Palm-2-S Palm-2-L Palm-2-L-SFT Palm-2-S-SFT
Figure 2 **ReST[𝐸𝑀]** **for math problem-solving. Test performance on MATH and GSM8K (transfer) for**
|
PaLM 2-S* and PaLM 2-L as a function of ReST[𝐸𝑀] iterations. We also report performance of models
fine-tuned via SFT on human-generated data as a baseline. Iteration 0 corresponds to pre-trained
model performance. Following Google et al. (2023), we use greedy decoding for evaluation.
APPS (Introductory)
1
Num iterations
Transfer to HumanEval
1
Num iterations
26
24
22
20
18
55
50
45
40
Palm-2-S* Palm-2-L Palm-2-L-SFT Palm-2-S*-SFT
Figure 3 **ReST[𝐸𝑀]** **for code-generation.** Test performance on APPS (introductory) and Hu|
manEval (transfer) for PaLM 2-S* and PaLM 2-L as a function of ReST[𝐸𝑀] iterations.
choice also used by Zelikman et al. (2022), included in the fine-tuning dataset: 10 for both MATH
and APPS. This approach ensures diversity in the training data and safeguards against overfitting
on easier problems. For fine-tuning, we use the few-shot prompt (and the question) as input to the
model, and use the model-generated solutions as targets. We only apply the next token prediction
loss (Equation 1) on the targets.
**5.1. ReST[𝐸𝑀]** **on MATH and APPS**
Figures 2 and 3 show the performance of ReST[𝐸𝑀] when trained on the MATH and APPS datasets,
respectively. We see that MATH benefits from performing multiple iterations of ReST[𝐸𝑀], both in terms
of performance on the MATH test set, as well as transfer to GSM8K. On the other hand, we see that
most of the gains for APPS come from the first iteration, and more iterations lead to a regression on
both APPS and HumanEval.
Interestingly, Figures 2 and 3 demonstrate that fine-tuning on model-generated solutions substan
Published in Transactions on Machine Learning Research (04/2024)
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Hendrycks MATH APPS (Introductory)
60 60
55
50
50
40
45
40 30
Pass@1 Performance (%) Pass@1 Performance (%)
35
20
0 1 2 3 0 1 2
Num iterations Num iterations
Palm-2-L (Train) Palm-2-L (Test) Palm-2-L (Train) Palm-2-L (Test)
Figure 4 **Train-test performance gap on (left) MATH with PaLM-2-L, and (right) APPS with PaLM-**
|
2-S*, as a function of ReST[𝐸𝑀] iterations.
tially outperforms using human-written solutions, especially for the PaLM 2-L model. This aligns with
findings of Yuan et al. (2023) and recent work on distilling LLMs using model-generated data (Agarwal
et al., 2023; Gu et al., 2023). However, unlike Yuan et al. (2023), who observed diminishing returns
from model-generated data on GSM8K when scaling model capacity, our results suggest an opposite
trend: ReST[𝐸𝑀] leads to larger performance gains as model capacity increases. On the MATH dataset,
the test accuracy improvement with ReST[𝐸𝑀] is 5.94% for PaLM 2-S compared to 6.34% for the larger
PaLM 2-L model. Similarly, on the APPS dataset, improvements are 5.6% for PaLM 2-S* compared to
6.4% for PaLM 2-L. This is in addition to the fact that the larger models start with a much stronger
initial performance, and improvements on these benchmarks generally get harder as the baseline
performance goes up.
**Train-test performance gap. Figure 4 shows that while training performance increases linearly**
with the number of ReST[𝐸𝑀] iterations, test set performance does not. For MATH, test performance
improvements are small after the first iteration, and for APPS, we observe a regression in performance
in the 2[𝑛𝑑] iteration. We suspect that the regression in performance is likely due to overfitting on the
small set of training problems. Since the APPS dataset is about a third of the size of the MATH dataset,
it suffers more from this problem.
**5.2. Impact on Pass@K and Majority-Voting Performance**
To investigate the impact of fine-tuning with ReST[𝐸𝑀] on the diversity of the final model’s generated
outputs, we evaluate pass@k (Chen et al., 2021) and majority voting (Wang et al., 2023) performance
of the fine-tuned PaLM 2-L model relative to the base model.
**Pass@K measures the probability that at least one of the K generated solutions for a problem is**
correct, that is, outputs the correct answer for math problems or passes all the unit tests for code
generation. Figure 5 shows the performance of Palm-2-L on the pass@K metric. We see that model
obtained after ReST[𝐸𝑀] fine-tuning is stronger for all values of K, with the performance gap typically
being the highest for K=1.
**Majority voting first samples a diverse set of reasoning paths instead of only taking the greedy**
one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths.
For Hendrycks MATH, it is possible to use majority voting to maximize Pass@1 performance, and we
find that when using 64 samples per question, the PaLM 2-L fine-tuned with ReST[𝐸𝑀] obtains a test
accuracy of 48.82, while the base model gets 44.02.
Published in Transactions on Machine Learning Research (04/2024)
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
HumanEval
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
|||||PaLM-2-L PaLM-2-L (ReST|)|
|||||||
|||||||
20 40 60
Num samples (K)
APPS (Introductory)
Hendrycks MATH
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
|||||||
|||||||
|||||Palm-2-L||
|||||||
|||||Palm-2-L (Re|ST)|
20 40 60
Num samples (K)
80%
70%
60%
50%
40%
30%
20%
|Col1|PaLM-2-L PaLM-2-L (R|eST)|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
2 4 6
Num samples (K)
40%
30%
80%
60%
20%
10%
40%
10
Figure 5 **Pass@K results for PaLM-2-L pretrained model as well as model fine-tuned with ReST[𝐸𝑀].**
|
For a fixed number of samples K, fine-tuning with ReST[𝐸𝑀] substantially improves Pass@K performance.
We set temperature to 1.0 and use nucleus sampling with 𝑝 = 0.95.
**5.3. Ablation Studies**
**Impact of multiple iterations** Our results show that multiple iterations can sometimes lead to
over-fitting on the train set (Figure 4). This raises the question of whether multiple iterations are
really necessary. Is it better to collect a larger dataset and perform just a single iteration of ReST[𝐸𝑀]?
To investigate this, we collect a dataset with the base PaLM-2-L model on Hendrycks MATH that is
3 as many solutions per problem as used in a single iteration of ReST[𝐸𝑀] for the E-step. Fine-tuning
×
with this dataset results in pass@1 performance of 40.3%, which is lower than the 41% in second
and 41.9% in third iteration, as shown in Figure 2. These results indicate that performing multiple
iterations of ReST[𝐸𝑀] leads to higher performance compared a single iteration with 3x the data.
**Comparing model-generated data with human data** A key strength of ReST[𝐸𝑀] is its ability to
generate multiple correct solutions for each problem. This provides valuable additional training data
compared to human-generated data, which typically offers only a single solution per problem. While
this makes a comparison in Figures 2 and 3 not entirely fair, it also highlights the potential of ReST[𝐸𝑀]
to boost performance with diverse and correct solutions.
In order to enable an apples-to-apples comparison, we conduct the following study: we select all
Hendrycks MATH questions for which we have at least one correct model-generated solution, resulting
in about 5K questions. For these 5K questions, we run two fine-tuning experiments: SFT(5K) where
we fine-tune on human-written solutions (one per question), and ReST[∗](5K) where we fine-tune on
model-generated solutions (also one per question, selected at random).
The results in Figure 6 (right), show that ReST[𝐸𝑀] outperforms fine-tuning on human data even in
this much more restricted setting. Furthermore, the efficacy of ReST(5K) over ReST[∗](5K) highlights
the additional gain in performance that we can obtain by spending more compute on sampling a
large number of solutions and performing multiple iterations of ReST[𝐸𝑀].
**Distillation with ReST[𝐸𝑀]-generated data** The above results indicate that self-generated data can
be better than human data for fine-tuning language models. We hypothesize this may be because
model-generated solutions are more in-distribution compared to human-written solutions. This raises
the question of whether ReST[𝐸𝑀]-generated data can benefit different models than the one generating
the data.
To answer this question, we consider a distillation setup on MATH where we fine-tune PaLM 2-S
using data generated by PaLM 2-L, resulting in solutions for about 5K questions. Specifically, we ran
Published in Transactions on Machine Learning Research (04/2024) 10
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Hendrycks MATH (Test)
42 PaLM 2-S on Hendrycks MATH (Test)
30.0
27.5
40
25.0
38 22.5
20.0
36
17.5
Pass@1 Performance (%) Pass@1 Performance (%)
34 15.0
SFT (7K) SFT (5K) ReST [*] (5K) ReST[EM] (5K) SFT (Human) Distill* (2-L) ReST[EM] (2-S) Distill (2-L)
Method (Num questions) Method (Data Source)
Figure 6 **Left. Comparing ReST[𝐸𝑀]** with SFT on MATH. SFT refers to fine-tuning on human data,
|
while ReST* refers to a version of ReST[𝐸𝑀] with one iteration that uses only one correct sample per
problem. Here, ReST denotes ReST[𝐸𝑀] with 3 iterations. For each method, we denote the number of
questions in parenthesis. Right. Impact of Model-Generated Data for Distillation.
two distillation experiments: Distill[∗] (2-L) where we fine-tune on teacher-generated solutions (one
per question), similar to ReST (5K), and Distill (2-L), which includes multiple solutions per problem,
generated during the final iteration of ReST[𝐸𝑀] with PaLM 2-L.
Our results, shown in Figure 6 (right), reveal that Distill[∗] surpasses the performance achieved
by fine-tuning on human-written solutions, despite having smaller number of training questions.
Additionally, fine-tuning PaLM 2-S with multiple solutions from PaLM 2-L, namely Distill (2-L), is
superior than using self-generated solutions via ReST[𝐸𝑀]. This improvement is likely due to the larger
number of training questions with solutions in PaLM 2-L generated data compared to 2-S. Overall,
these results indicate that model-generated data can be more effective for fine-tuning smaller models
than relying on human-generated data.
APPS (Introductory) Transfer to HumanEval
22.0 48
**ReST vs ReST[𝐸𝑀]** A major difference between
21.8 46
ReST[𝐸𝑀]and ReST is that while ReST[𝐸𝑀]always fine-
tunes the base model for each iteration, ReST con- 21.6 44
tinues to finetune the model from the last iteration. est Accuracy (%)21.4 est Accuracy (%)42
We run an ablation comparing these options us- Pass@1 T21.2 Pass@1 T40
21.0 38
ReST[EM] ReST ReST[EM] ReST
ing PaLM 2-S* in Figure 7 and observe that while Method Method
Palm-2-S*-SFT
ReST and ReST[𝐸𝑀] have similar performance on
APPS, the transfer performance to HumanEval is Figure 7 ReST[𝐸𝑀] _vs ReST using PaLM 2-S*._
|
substantially better with ReST[𝐸𝑀].
**Impact of dataset size** Since one of the main ingredients needed for ReST[𝐸𝑀] is a dataset of input
contexts (e.g., questions for MATH), we are interested in evaluating the effect of number of input
problems. The results from our dataset ablations using the PaLM-2-L model on Hendrycks MATH,
Figure 8 (left), show that utilizing just 1000 MATH questions results in significant gains, implying that
the method is very efficient in the number of prompts needed. However, we noted a slight decrease
in performance when using 4,000 questions compared to 2,000, indicating potential variance in
the fine-tuning process. Ideally, conducting this experiment multiple times would help quantify this
variance, but this is prohibitively resource-intensive. Overall, we find that ReST[𝐸𝑀] is quite sample
efficient and performance gains from ReST[𝐸𝑀] improve as we increase the dataset size.
Published in Transactions on Machine Learning Research (04/2024) 11
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Average Success Rate by Difficulty Level
|B R|ase Model eSTEM|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
|Col1|PaLM 2-L PaLM 2-L (MATH) PaLM 2-L (APPS)|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|Col22|Col23|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
42 Hendrycks MATH (Test) 100 Base ModelReST[EM]
80
40
60
38
40
36 20
Pass@1 Performance (%) Average Success Rate (%)
34 0
0 1000 2000 4000 7000 Very Hard Hard Medium Easy
Number of Questions Problem Difficulty Level
**Left. Performance for a single iteration of ReST[𝐸𝑀]** as a function of dataset size (number of
|
questions) on MATH. Right. Improvement from ReST[𝐸𝑀] based on the difficulty level of the question.
**Which Questions Benefit Most from ReST[𝐸𝑀]** We evaluate the performance enhancement of ReST
across different question difficulties in the Hendrycks MATH dataset. Questions are classified based
on success rates from the base model at a temperature setting of T=1.0 into four categories: “easy”
(answered correctly 75%-100% of the time), “medium” (50%-75%), “hard” (25%-50%), and “very
hard” (below 25%). Figure 8 (right) presents the average success rates for these categories, comparing
the base model to the ReST[𝐸𝑀]-finetuned model. The results demonstrate that ReST[𝐸𝑀]
improves performance across all difficulties, with the highest gains coming for questions categorized
as medium and hard.
**5.4. Impact on Reasoning capabilities**
PaLM 2-L PaLM 2-L (MATH) PaLM 2-L (APPS)
100
90
80
70
60 PaLM 2-LPaLM 2-L (APPS)
PaLM 2-L (MATH)
50 80
40
Few-shot Performance with CoT 30 75
Boolean ExpressionsCausal JudgementDate UnderstandingDisambiguation QADyck LanguagesFormal FallaciesGeometric ShapesMovie RecommendationHyperbatonObject CountingPenguins in a TNavigatewo] Ruin NamesableSports UnderstandingTemporal SequencesSnarksLogical Deduction (avg)Web of LiesWord Sorting Average BBH Performance7065
Multi-step Arithmetic [T
Reasoning about Colored ObjectsSalient Translation Error DetectionTracking Shuffled Objects (avg) 60
Big-Bench Hard (BBH) Task
|Col1|PaLM 2-L PaLM 2-L (APPS) PaLM 2-L (MATH)|Col3|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||||
CoT Direct
Prompt Type
60
Figure 9 Comparing the ReST[𝐸𝑀] models to the base model on the Big-Bench Hard suite of tasks.
|
Evaluations were conducted across multiple checkpoints, and the vertical black lines denote standard
deviation.
**General capabilities. BIG-Bench provides a suite of over 200 tasks that can be used to probe**
LLMs’ performance across a range of fields and capabilities. BIG-Bench Hard (BBH) (Suzgun et al.,
Published in Transactions on Machine Learning Research (04/2024) 12
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
2022) is a subset of 23 BIG-Bench tasks where the previous generation of LLMs, such as Codex
and PaLM 540B, performed below the average human rater. We follow the protocol of Google et al.
(2023) and evaluate on BBH using both few-shot and chain-of-thought prompting. Figure 9 shows the
performance of ReST[𝐸𝑀]-finetuned models, and compares them against the base PaLM-2 model. We
see no major degradation on any of the BBH tasks. Furthermore, the model fine-tuned on Hendrycks
MATH outperforms the base model on this suite when using chain-of-thought prompting, and the
model fine-tuned on APPS also shows slight performance gains. When using direct prompting, all
three models perform similarly.
**Problem-solving. To stress test the math problem-solving capabilities on a held-out “real-world"**
evaluation set, we evaluate our model on the 2023 Hungarian high school finals exam in mathematics,
following the evaluation protocol from Paster (2023). Specifically, we evaluate the PaLM 2-L model,
fine-tuned with ReST[𝐸𝑀] on Hendrycks MATH, using the 1-shot prompt from Grok, sample solutions
using temperature 0.1, and manually grade the outputs using the rubric provided by the examiners.
The results from evaluation are shown in Figure 10. We find that PaLM-2-L fine-tuned with ReST[𝐸𝑀]
performs well on this exam, surpassing the performance of all existing models except GPT-4.
Exam Score vs GSM8K Performance of Various Models
90
80
70
60
50
40
30
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
||||||Claude 2|||
||||||PaLM 2-L (Re|STEM)||
||Math|Mistral 7B|Open|Chat 3.5||||
||Met|aMath 7B|||G|rok-1||
|||G Qwen 7B|rok-0 (33B)|GPT-3.5 Turbo Llemma 34|B|||
|||||||||
||moT Mi|H 7B stral 7B||||||
||ama|34B||||||
|||||||||
20 30 40 50 60 70
Hungarian HS Finals Exam Score (%)
Figure 10 **Transfer results on Hungarian HS Finals Exam. Results for models other than PaLM-2-L**
|
finetuned with ReST[𝐸𝑀] are taken from Paster (2023). Several models specialized for mathematics
perform well on the widely-used GSM8K benchmark but perform poorly on the Hungarian exam. In
contrast, PaLM 2-L model fine-tuned with ReST[𝐸𝑀] performs well on both these benchmarks.
### 6. Discussion
In this paper, we propose training on model-generated data combined with a reward function,
via ReST[𝐸𝑀], for improving the performance of LLMs on problem-solving tasks. Furthermore, we
demonstrate that ReST[𝐸𝑀] is theoretically grounded in the application of expectation-maximization
to RL. We evaluate ReST[𝐸𝑀] on mathematical problem solving and code generation, and show that
ReST[𝐸𝑀] offers significant performance gains at a relatively low computational cost, especially when
compared to the cost of pre-training. Our experiments also show that ReST[𝐸𝑀] does not lead to
regression on other tasks. We conduct a number of ablations to better understand the strengths and
weaknesses of this method, and find that it is data-efficient, but also requires some vigilance to avoid
over-fitting.
There are a number of limitations associated with ReST[𝐸𝑀]. First, this method requires a moderatelysized training set of problems or prompts, which would need to be collected (from humans) for any
new task of interest. Second, ReST[𝐸𝑀] also requires access to a manually-designed or learned reward
Published in Transactions on Machine Learning Research (04/2024) 13
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
function, ideally one that can be computed automatically. Finally, while ReST[𝐸𝑀] allows significant
performance improvements in pass@1 performance, it may not quite close the gap to pass@K
performance for the same task (with a sufficiently large K). Future research in self-improvement in
language models should focus on automating manual parts of the pipeline (likely through language
models as well), and explore algorithmic improvements that reduce the gap to pass@K performance.
### Acknowledgements
We would like to thank Tom Le Paine for providing feedback to an early draft. We also acknowledge
Benjamin Anderson, Sridhar Thiagarajan, Feryal Behbahani, Aleksandra Faust, Doina Precup, Olivier
Bachem, and Slav Petrov for helpful discussions.
### Author Contributions
Avi, Rishabh, and JD jointly led the project. Avi was responsible for training and evaluation infrastructure, ablations and experiments on MATH, JD led the experiments on APPS, Rishabh was responsible
for the paper writing, evaluations, and distillation ablations.
Ankesh, Piyush, Ethan, and Behnam observed preliminary findings about efficacy of modelgenerated data on MATH for Minerva models and motivated this research. Piyush also helped Avi
in setting up infrastructure. Xavier, Peter, James, Jaeheoon, Kelvin and Yamini took part in project
discussions. Jascha and Noah sponsored and advised the project. All other authors provided feedback
on this work.
### References
R. Agarwal, C. Liang, D. Schuurmans, and M. Norouzi. Learning to generalize from sparse and
underspecified rewards. In International conference on machine learning, pages 130–140. PMLR,
2019.
R. Agarwal, N. Vieillard, P. Stanczyk, S. Ramos, M. Geist, and O. Bachem. Gkd: Generalized knowledge
distillation for auto-regressive sequence models. arXiv preprint arXiv:2306.13649, 2023.
R. Agarwal, A. Singh, L. M. Zhang, B. Bohnet, S. Chan, A. Anand, Z. Abbas, A. Nova, J. D. Co-Reyes,
E. Chu, F. Behbahani, A. Faust, and H. Larochelle. Many-shot in-context learning, 2024.
T. Anthony, Z. Tian, and D. Barber. Thinking fast and slow with deep learning and tree search.
_Advances in neural information processing systems, 30, 2017._
S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. M.
Lundberg, H. Nori, H. Palangi, M. T. Ribeiro, and Y. Zhang. Sparks of artificial general intelligence:
Early experiments with GPT-4. CoRR, abs/2303.12712, 2023. doi: 10.48550/ARXIV.2303.12712.
[URL https://doi.org/10.48550/arXiv.2303.12712.](https://doi.org/10.48550/arXiv.2303.12712)
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda,
N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin,
B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P.
Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol,
A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr,
J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer,
Published in Transactions on Machine Learning Research (04/2024) 14
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton,
R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems. arXiv
_preprint arXiv:2110.14168, 2021._
P. Dayan and G. E. Hinton. Using expectation-maximization for reinforcement learning. Neural
_Computation, 9(2):271–278, 1997._
A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the em
algorithm. Journal of the royal statistical society: series B (methodological), 39(1):1–22, 1977.
H. Dong, W. Xiong, D. Goyal, R. Pan, S. Diao, J. Zhang, K. Shum, and T. Zhang. Raft: Reward ranked
finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
Google, R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey,
Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Y. Gu, L. Dong, F. Wei, and M. Huang. Knowledge distillation of large language models. arXiv preprint
_arXiv:2306.08543, 2023._
C. Gulcehre, T. L. Paine, S. Srinivasan, K. Konyushkova, L. Weerts, A. Sharma, A. Siddhant, A. Ahern,
M. Wang, C. Gu, et al. Reinforced self-training (rest) for language modeling. arXiv preprint
_arXiv:2308.08998, 2023._
D. Hendrycks, S. Basart, S. Kadavath, M. Mazeika, A. Arora, E. Guo, C. Burns, S. Puranik, H. He,
D. Song, et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938,
2021a.
D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring
mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021b.
J. Huang, S. S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han. Large language models can self-improve.
_[CoRR, abs/2210.11610, 2022. doi: 10.48550/ARXIV.2210.11610. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2210.11610)_
```
48550/arXiv.2210.11610.
```
C. Liang, J. Berant, Q. Le, K. D. Forbus, and N. Lao. Neural symbolic machines: Learning semantic
parsers on freebase with weak supervision. arXiv preprint arXiv:1611.00020, 2016.
A. Ni, J. P. Inala, C. Wang, A. Polozov, C. Meek, D. Radev, and J. Gao. Learning math reasoning from
self-sampled correct and partially-correct solutions. In The Eleventh International Conference on
_Learning Representations, 2022._
M. Norouzi, S. Bengio, N. Jaitly, M. Schuster, Y. Wu, D. Schuurmans, et al. Reward augmented
maximum likelihood for neural structured prediction. Advances In Neural Information Processing
_Systems, 29, 2016._
OpenAI. Gpt-4 technical report, 2023.
[K. Paster. Testing language models on a held-out high school national finals exam. https://](https://huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam)
```
huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam, 2023.
```
J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational space
control. In Proceedings of the 24th international conference on Machine learning, pages 745–750,
2007.
Published in Transactions on Machine Learning Research (04/2024) 15
-----
Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
D. Phan, M. D. Hoffman, D. Dohan, S. Douglas, T. A. Le, A. Parisi, P. Sountsov, C. Sutton, S. Vikram,
and R. A. Saurous. Training chain-of-thought via latent-variable inference. _arXiv preprint_
_arXiv:2312.02179, 2023._
A. Sordoni, X. Yuan, M.-A. Côté, M. Pereira, A. Trischler, Z. Xiao, A. Hosseini, F. Niedtner, and
N. Le Roux. Joint prompt optimization of stacked llms using variational inference. In Thirty-seventh
_Conference on Neural Information Processing Systems, 2023._
M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi,
D. Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv
_preprint arXiv:2210.09261, 2022._
X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang, A. Chowdhery, and D. Zhou. Selfconsistency improves chain of thought reasoning in language models. In The Eleventh International
_Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net,_
[2023. URL https://openreview.net/pdf?id=1PL1NIMMrw.](https://openreview.net/pdf?id=1PL1NIMMrw)
Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey,
et al. Google’s neural machine translation system: Bridging the gap between human and machine
translation. arXiv preprint arXiv:1609.08144, 2016.
Z. Yuan, H. Yuan, C. Li, G. Dong, C. Tan, and C. Zhou. Scaling relationship on learning mathematical
reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.
E. Zelikman, Y. Wu, J. Mu, and N. Goodman. Star: Bootstrapping reasoning with reasoning. Advances
_in Neural Information Processing Systems, 35:15476–15488, 2022._
Published in Transactions on Machine Learning Research (04/2024) 16
-----
| [
"Rishabh, Agarwal",
"Peter J., Liu",
"Aaron, Parisi",
"Azade, Nova",
"Avi, Singh",
"Yamini, Bansal",
"Noah, Constant",
"Kevin, Swersky",
"Bernd, Bohnet",
"Rosanne, Liu",
"Kelvin, Xu",
"Ethan, Dyer",
"Igor, Mordatch",
"Jaehoon, Lee",
"Jascha, Sohl-Dickstein",
"John D., Co-Reyes",
"Ankesh, Anand",
"Hanie, Sedghi",
"James, Harrison",
"Behnam, Neyshabur",
"Abhishek, Kumar",
"Alex, Alemi",
"Alex, Rizkowsky",
"Ben, Adlam",
"Gamaleldin, Elsayed",
"Isabelle, Simpson",
"Izzeddin, Gur",
"Jasper, Snoek",
"Jeffrey, Pennington",
"Jiri, Hron",
"Kathleen, Kenealy",
"Kshiteej, Mahajan",
"Laura, Culp",
"Lechao, Xiao",
"Maxwell L., Bileschi",
"Xavier, Garcia",
"Tris, Warkentin",
"Yundi, Qian",
"Noah, Fiedel",
"Piyush, Patil",
"Roman, Novak"
] | 2024-04-17T00:00:00 | null | false | 72 | 7 | null | http://arxiv.org/abs/2312.06585 | https://arxiv.org/abs/2312.06585 | https://www.semanticscholar.org/paper/48362b169a235ca650918c489c8cea4c597da645 |
MultiHiertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data | Numerical reasoning over hybrid data containing both textual and tabular content (e.g., financial reports) has recently attracted much attention in the NLP community. However, existing question answering (QA) benchmarks over hybrid data only include a single flat table in each document and thus lack examples of multi-step numerical reasoning across multiple hierarchical tables. To facilitate data analytical progress, we construct a new large-scale benchmark, MultiHiertt, with QA pairs over Multi Hierarchical Tabular and Textual data. MultiHiertt is built from a wealth of financial reports and has the following unique characteristics: 1) each document contain multiple tables and longer unstructured texts; 2) most of tables contained are hierarchical; 3) the reasoning process required for each question is more complex and challenging than existing benchmarks; and 4) fine-grained annotations of reasoning processes and supporting facts are provided to reveal complex numerical reasoning. We further introduce a novel QA model termed MT2Net, which first applies facts retrieving to extract relevant supporting facts from both tables and text and then uses a reasoning module to perform symbolic reasoning over retrieved facts. We conduct comprehensive experiments on various baselines. The experimental results show that MultiHiertt presents a strong challenge for existing baselines whose results lag far behind the performance of human experts. The dataset and code are publicly available at https://github.com/psunlpgroup/MultiHiertt. | A new large-scale benchmark, MultiHiertt, with QA pairs over Multi Hierarchical Tabular and Textual data is constructed and a novel QA model termed MT2Net is introduced, which first applies facts retrieving to extract relevant supporting facts from both tables and text and then uses a reasoning module to perform symbolic reasoning over retrieved facts. | ## MULTIHIERTT: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data
**Yilun Zhao[1]** **Yunxiang Li[2]** **Chenying Li[3]** **Rui Zhang[4]**
1Yale University 2The Chinese University of Hong Kong
3Northeastern University 4Penn State University
[email protected] [email protected]
[email protected] [email protected]
**Abstract**
Numerical reasoning over hybrid data containing both textual and tabular content (e.g., financial reports) has recently attracted much attention in the NLP community. However, existing
question answering (QA) benchmarks over hybrid data only include a single flat table in each
document and thus lack examples of multistep numerical reasoning across multiple hierarchical tables. To facilitate data analytical progress, we construct a new large-scale
benchmark, MULTIHIERTT, with QA pairs
over Multi Hierarchical Tabular and Textual
data. MULTIHIERTT is built from a wealth of
financial reports and has the following unique
characteristics: 1) each document contain multiple tables and longer unstructured texts; 2)
most of tables contained are hierarchical; 3)
the reasoning process required for each question is more complex and challenging than existing benchmarks; and 4) fine-grained annotations of reasoning processes and supporting
facts are provided to reveal complex numerical reasoning. We further introduce a novel
QA model termed MT2Net, which first applies
facts retrieving to extract relevant supporting
facts from both tables and text and then uses a
reasoning module to perform symbolic reasoning over retrieved facts. We conduct comprehensive experiments on various baselines. The
experimental results show that MULTIHIERTT
presents a strong challenge for existing baselines whose results lag far behind the performance of human experts. The dataset and code
[are publicly available at https://github.](https://github.com/psunlpgroup/MultiHiertt)
[com/psunlpgroup/MultiHiertt.](https://github.com/psunlpgroup/MultiHiertt)
Figure 1: An example of MULTIHIERTT: The system
needs to first locate which segment got the most funds
in 2017 in the second hierarchical table, then select relevant numbers from the first hierarchical table and generate the correct reasoning program to get the answer.
The annotated supporting facts are highlighted in red,
and the hierarchical column and row headers are highlighted in orange and green, respectively.
reasoning over hybrid data containing both textual
and tabular content (Zhu et al., 2021; Chen et al.,
2021) has attracted much attention. For example,
**1** **Introduction**
In recent years, as key to many NLP tasks such as
QA, there is a flurry of works on numerical reasoning over various types of data including textual
data (Dua et al., 2019; Amini et al., 2019; Xie and
Sun, 2019) and tabular data (Moosavi et al., 2021;
Suadaa et al., 2021). More recently, numerical
-----
the FinQA dataset (Chen et al., 2021) focuses on
questions that require numerical reasoning over financial report pages, e.g., "What portion of the total
identifiable net assets is in cash?". Such questions
need the system to locate relevant cells in the tabular content and then perform a division operation
to get the final answer.
However, existing QA datasets over hybrid data
only contain a single flat table in each document (Zhu et al., 2021; Chen et al., 2021). Therefore, they lack examples that require multi-step reasoning processes across multiple paragraphs and
hierarchical tables. Hierarchical tables are widely
used in scientific or business documents. A hierarchical table usually contains multi-level headers,
which makes cell selection much more challenging
because it requires multi-level and bi-dimensional
indexing techniques. For instance, consider the example of our proposed dataset MULTIHIERTT in
Figure 1, each table contains both column headers
and row headers, which are hierarchical in nature.
And ignoring the row / column headers or not reasoning on the entire header hierarchy may lead to
the wrong result. For instance, in the given example, if the system simply searched for cells with a
flat row header containing "Product" and "Service"
and column header containing "2018", it may mistakenly return the value 2,894 and 382 appearing in
the beginning of the first table. Additionally, in real
life, when analyzing financial reports, professionals such as analysts or investors often refer to multiple hierarchical tables and multiple paragraphs
to obtain conclusions. For instance, finding "the
segments with most funds in 2017" requires the
system to locate and perform numerical reasoning
on the second hierarchical table. Then the system
should use the results gained from the second table
to reason on the first table. However, existing QA
datasets lack such examples of reasoning across
multiple tables.
To address these shortcomings, we present MUL
TIHIERTT: an expert-annotated dataset that contains 10,440 QA pairs, along with annotations
of reasoning processes and supporting facts. To
the best of our knowledge, MULTIHIERTT is the
first dataset for solving complicated QA tasks over
documents containing multiple hierarchical tables
and paragraphs. In addition, to address the challenge of MULTIHIERTT, we propose MT2Net to
first retrieve supporting facts from financial reports then generate executable reasoning programs
to answer the questions. Our experiments show
that MT2Net outperforms all other baselines and
achieves 38.43% F1 score. However, all models
still lag far behind the performance of human experts with 87.03% in F1. It demonstrates MUL
TIHIERTT presents a strong challenge for existing
baseline models and is a valuable benchmark for
future research.
The main contribution of this work can be summarized as follows:
- We propose a new large-scale dataset MULTIHIERTT. It contains 10,440 examples along
with fully annotated numerical reasoning processes and supporting facts. A strict quality control procedure is applied to ensure the
meaningfulness, diversity, and correctness of
each annotated QA example.
- Compared with existing datasets, each document in MULTIHIERTT contains multiple hierarchical tables and longer unstructured text. A
more complex reasoning process across multiple tables and paragraphs is required to correctly answer the question.
- We propose a novel QA model, MT2Net. The
model first applies facts retrieving to extract
relevant supporting facts from both hierarchical tables and text. And it then uses a reasoning module to reason over retrieved facts.
- Comprehensive experiments are conducted on
various baselines. The experimental results
demonstrate that the current QA models still
lag far behind the human expert performance,
and further research is needed.
**2** **Related Work**
**Question Answering Benchmark** There are
numerous QA datasets focusing on text, table/knowledge base (KB), and hybrid data.
SQuAD (Rajpurkar et al., 2016) and CNN/Daily
Mail (Hermann et al., 2015) are classic datasets
for textual data. Table/KB QA datasets mainly focus on structured tables (Pasupat and Liang, 2015;
Zhong et al., 2017; Yu et al., 2018; Nan et al.,
2022) and knowledge bases (Berant et al., 2013;
Yih et al., 2015; Talmor and Berant, 2018; Xie
et al., 2022). And some recent works focus on
reasoning over more complex tables including hierarchical tables (Cheng et al., 2021b; Katsis et al.,
-----
**Textual & Tabular Data / Doc (DB)** **Numerical**
**QA Dataset** **# Doc (DB)** **# Questions**
**Reasoning**
Avg. # words Table types Avg. # tables
**Textual QA Dataset**
DROP (Dua et al., 2019) 210.0 6,735 45,959
MathQA (Amini et al., 2019) 37.9 37,259 37,259
Math23K (Xie and Sun, 2019) 35.4 23,161 23,161
**Tabular QA Dataset**
WTQ (Pasupat and Liang, 2015) Flat 1 2,108 22,033
Spider (Yu et al., 2018) Relational 5.13 200 10,181
AIT-QA (Katsis et al., 2021) Hierarchical 1 116 515
HiTab (Cheng et al., 2021b) Hierarchical 1 few 3,597 10,686
**Hybrid QA Dataset**
HybridQA (Chen et al., 2020) 2,326.0 Flat 1 13,000 69,611
MMQA (Talmor et al., 2021) 240.7 Flat 1 29,918 29,918
GeoTSQA (Li et al., 2021) 52.4 Flat 1.58 few 556 1,012
TAT-QA (Zhu et al., 2021) 43.6 Mostly Flat 1 2,757 16,552
FINQA (Chen et al., 2021) 628.1 Flat 1 2,789 8,281
MULTIHIERTT (Ours) 1,645.9 Hierarchical 3.89 2,513 10,440
Table 1: Comparison of MULTIHIERTT with other QA datasets (Doc, DB denote Document and DataBase).
2021). More recently, there are also some pioneering studies working on QA over hybrid data.
Specifically, HybridQA (Chen et al., 2020), TATQA (Zhu et al., 2021), and FinQA (Chen et al.,
2021) focus on both textual and tabular data, while
MMQA (Talmor et al., 2021) focus on QA over
text, tables, and images. In addition, reasoning
including numerical reasoning and multi-hop reasoning has gained attention lately. For example,
DROP (Dua et al., 2019) is a machine reading comprehension benchmark that requires numerical reasoning on text data. HotpotQA (Yang et al., 2018)
and HybridQA (Chen et al., 2020) are datasets requiring multi-hop reasoning.
**Numerical** **Reasoning** Numerical reasoning
plays an important role in different NLP tasks (Dua
et al., 2019; Zhang et al., 2021; Chen et al., 2021;
Zhu et al., 2021). To enhance the model’s numerical reasoning ability, some work adapt standard
extractive QA models with specialized modules
to perform numerical reasoning (Ran et al., 2019;
Hu et al., 2019). Recent work also focus on probing and injecting numerical reasoning skills to pretrained language models (Geva et al., 2020; Lin
et al., 2020; Zhang et al., 2020; Berg-Kirkpatrick
and Spokoyny, 2020). Meanwhile, various benchmarks and models are proposed to solve math word
problems (Koncel-Kedziorski et al., 2016; Xie and
Sun, 2019; Amini et al., 2019; Hendrycks et al.,
2021; Hong et al., 2021; Cobbe et al., 2021). The
most recent numerical reasoning QA benchmark
over hybrid data are FinQA (Chen et al., 2021) and
TAT-QA (Zhu et al., 2021).
**Financial NLP** Financial NLP has attracted
much attention recently. There have been various application in different tasks like risk management (Han et al., 2018; Theil et al., 2018; Nourbakhsh and Bang, 2019; Mai et al., 2019; Wang
et al., 2019), asset management (Filgueiras et al.,
2019; Blumenthal and Graf, 2019), market sentiment analysis (Daudert et al., 2018; Tabari et al.,
2018; Buechel et al., 2019), financial event extraction (Ein-Dor et al., 2019; Zhai and Zhang, 2019)
and financial question answering (Lai et al., 2018;
Maia et al., 2018). More recently, pre-trained language models are presented for finance text mining (Araci, 2019; Yang et al., 2020). The most
relevant work to us is FinQA (Chen et al., 2021)
and TAT-QA (Zhu et al., 2021), which both construct a QA dataset acquiring numerical reasoning
skills on financial reports with tabular data.
**3** **MULTIHIERTT Dataset**
**3.1** **Data Collection and Preprocessing**
MULTIHIERTT are deployed based on the FinTabNet dataset (Zheng et al., 2021), which contains
89,646 pages with table annotations extracted from
the annual reports of S&P 500 companies. For each
table contained, the FinTabNet dataset provides a
detailed HTML format annotation, in which table
hierarchies and cell information such as text and
-----
|What is the total amount of options granted and accepted in 2007 for exercise price? What is the proportion of long-term debt to the total in 2019 for consumer section? What is the average value of premiums in 2011 for GAAP, operating, and adjustments? What is the difference between gross carrying amount and accumulated amortization's highest value for intangible assets ? What is the growth rate of capital leases for OPEB plans between 2013 and 2014? What|How much is the sum of stock purchase rights in 2018 lower than those in 2017? How many years were the sales and client service expenses higher than software development expenses? How much of US corporate debt securities is there in total (in 2009) without consider gross unrealized gain and gross unrelized loss? How many financing activities continues to increase every year from 2017 to 2021? How|Which types of fuel emission allowance sales exceed 16 % of total in CIPS? Which year does the supply chain revenues have the largest proportion to the total? In which section the sum of trading non-derivative assets has the highest value? Which|When does net investment income reach the peak value? When does the restructuring costs exceed the average value? When|
|---|---|---|---|
|||If expected return on assets develops with the same growth rate in 2010, what will it reach in 2011? If If salaries and wages needs to make up 40% of the total benefits, what is the difference between the target value and the actual value?||
Figure 2: Examples of question by top-5 most frequent starting words, where box size represents frequency.
formats can be extracted and post-processed according to HTML tags.
The raw data is filtered as follows: First, we
extract documents with 1 to 4 pages and 2 to 6
tables from FinTabNet. Second, we filter out the
documents with limited textual contents. Third, as
we aim for the numerical reasoning ability, we also
exclude documents with tables containing little numerical information. Then, we use a pre-processing
script to extract the hierarchical structure of each
HTML-format table. And we ignore those tables
that cannot be handled by the pre-processing script.
As a result, a total of 4,791 documents were selected for further annotation.
**3.2** **Question-Answer Pair Annotation**
For each document selected in §3.1, the annotators are required to compose one or two QA examples along with detailed annotation. The process
of annotating each QA example is as follows: 1)
The annotators are first asked to compose a complex question that requires numerical reasoning
and is meaningful for helping novices understand
the annual reports. The annotators are encouraged
to compose questions that require the information
from both the textual and tabular content or from
multiple tables. 2) For those questions requiring
numerical expression, the annotators are then asked
to write down the reasoning programs to answer
the question. In detail, the annotators are asked to
elaborate on the operation steps to answer the question. The definitions of all operations are shown
in Table 7 in Appendix. 3) They are also required
to mark all the supporting facts from tabular and
textual content for each question.
**3.3** **Quality Control**
Strict quality control procedures are designed to
ensure the quality of dataset annotation, especially
the diversity and meaningfulness of proposed questions. The human evaluation scores and interevaluator agreements are reported in Table 2.
**Kappa**
**Annotation Quality** **%S ≥** **4 Agree** **/ 95% CI**
Question Complexity 76.8 0.77 0.72 / [0.65, 0.79]
Question Correctness 93.2 0.91 0.83 / [0.77, 0.89]
Question Meaningfulness 91.4 0.87 0.81 / [0.74, 0.88]
Reasoning Correctness 92.4 0.92 0.89 / [0.84, 0.94]
Support Facts Correctness 84.9 0.81 0.77 / [0.72, 0.82]
Answer Correctness 94.0 0.93 0.90 / [0.87, 0.93]
Table 2: Human evaluation over 100 samples of MULTIHIERTT. Four internal evaluators are asked to rate
the samples on a scale of 1 to 5. We report 1) percent of samples that have average score ≥ 4 to show
high quality of MULTIHIERTT; and 2) percent of agreement and Randolph’s Kappa with 95% CI (Randolph,
2005) to show high inter-annotator agreement of MULTIHIERTT.
**Expert Annotators** To help improve the annotation process, we first enroll five experts with professional experience in finance. During annotation,
they are asked to provide feedback regarding the
task instructions and the user experience of the annotation interface, based on which we iteratively
modify the annotation guideline and interface design. In the stage of crowd-sourced annotation, we
hire 23 graduate students (14 females and 9 males)
majoring in finance or similar discipline. Before
starting the official annotation process, each annotator is given a two-hour training session to learn
-----
the requirements and the annotation interface.
**Annotation De-Biasing** As suggested in previous research (Kaushik and Lipton, 2018; Clark
et al., 2019; Jiang and Bansal, 2019), consider annotation bias of QA benchmarks is of great significance. During the pilot annotation period, we
found that when generating question-answering
pairs, annotators may prefer simpler ones. To solve
this issue, we use thresholds to restrict the proportions of questions with different numbers of
numerical reasoning steps. Meanwhile, the proportions of questions with span selection answer
types are set to ≤ 20%. To further increase the
diversity of question-answer pair annotation, we
also select and include 2,119 QA examples from
FinQA (Chen et al., 2021).
**Multi-Round Validation** To further ensure the
diversity and correctness of proposed questionreasoning pairs, each document is assigned to three
annotators and one verifier in order. For annotators, each is required to first validate the previous
annotator’s annotation and fix the mistakes if there
are. Then, they are asked to create one or two more
question-reasoning pairs that are different from the
existing ones. After each annotator finishes tasks,
we assign another verifier with good performance
on this project to validate all the annotations.
**3.4** **Dataset Analysis**
Core statistics of MULTIHIERTT are reported in
Table 3. Table 1 shows a comprehensive comparison of related datasets. MULTIHIERTT is the first
dataset to study numerical reasoning questions over
hybrid data containing multiple hierarchical tables.
Compared with TAT-QA and FinQA, documents
in MULTIHIERTT contain longer unstructured input text and multiple tables, making the evidence
retrieving and reasoning more challenging. And
MULTIHIERTT has diverse and complex questions,
as illustrated in Figure 2.
We also analyze supporting facts coverage for
each question. In MULTIHIERTT, 1) 10.24% of
the questions only require the information in the
paragraphs to answer; 2) 33.09% of the questions
only require the information in one table to answer; 3) 7.93% require the information in more
than one table but without paragraphs to answer;
4) 48.74% require both the text and table information to answer, and among them, 23.20% required
the information in more than one table. The average number of annotated supporting facts are 7.02.
**Property** **Value**
# Examples (Q&A pairs with annotation) 10,440
# Documents 2,513
Vocabulary 24,193
Avg. # Sentences in input text 68.06
Avg. # Words in input text 1,645.9
Avg. # Tables per Document 3.89
Avg. # Rows per Table 10.78
Avg. # Columns per Table 4.97
Avg. # Question Length 16.78
Training Set Size 7,830 (75%)
Development Set Size 1,044 (10%)
Test Set Size 1,566 (15%)
Table 3: Core Statistics of MULTIHIERTT.
Meanwhile, among those questions with annotated
numerical reasoning programs, 28.94% of them
have 1 step; 37.76% of them have 2 steps; 15.21%
of them have 3 steps; and 18.10% of them have
more than 3 steps. As a result, the average number
of numerical reasoning steps is 2.47.
**4** **MT2Net Model**
To address the challenge of MULTIHIERTT, we propose a framework named MT2Net. Figure 3 gives
an overview of our proposed model. MT2Net first
applies fact retrieving module to extract relevant
supporting facts from the hierarchical tables and
paragraphs. Then, a reasoning module is adapted
to perform reasoning over retrieved facts and get
the final answer.
**Fact Retrieving Module** The whole input text in
each document of MULTIHIERTT can exceed 3,000
tokens and contain many numbers, which is beyond
the capability of the current popular QA models
(Devlin et al., 2019; Liu et al., 2019). Therefore,
we employ a fact retrieving module to first retrieve
the supporting facts from the documents. Previous
works on hybrid datasets (Zhu et al., 2021; Chen
et al., 2021; Li et al., 2021) use templates to flatten
each row of the table into sentences. And our facts
retrieving module applies similar ideas. However,
different from other hybrid datasets, most tables in
MULTIHIERTT are hierarchical. Therefore, we turn
each cell into a sentence, along with its hierarchical
row and column headers. For example, the first data
cell in the first table in Figure 1 is translated as "For
Innovation Systems of Segment, sales of product
in 2018, Year Ended December 31 is 2,894".
We concatenate each annotated supporting fact
with the question as input to train a BERT-based
-----
**Whole Document containing Multiple** **Reasoning Module**
|Col1|. abbreviate... ) The following table presents product and service sales and operating costs Reasoning Module hierarchical tables and paragraphs|Reasoning Module|
|---|---|---|
||d exp T oe phn eese r ( a as n.f. td.ob i nay lelb goxsb p wer eeeg ni xvnm si pagee et s en n.t bt. s.ay( d ) ebs oT sell ehl g a be m r y f epion nsrl l temo e(w Ysdgil eieoln mi ao lng l ran etrt Es sa ni n) n b: td l e mp e( ddrip lol r oi Doed lns elu ae csc)n er: mt ts i n bpa emro rn d 3id lu 1l ic ots nae snrd )v : siceerv icsea slaeless aannd doperating costs Span Sub-module Type 2018 Year Ended D2e0ce1m7 ber 31 egment Sales Expens2e0s1 8 Sales Expe2n0s1e7s Predicted novation SSyesgtemmesn t Sales E xpenses Sales Ex penses Prediction oduct Innovation Systems2,894 2,5 82 — — Answer ervice Product 382 2,8943 51 2,582 — — — — erospace SSyesrtveicmes 382 351 — — e io srd v siu ic oc e nt SystA P S Mer ee imo srr d v so siu is c oc ep nt a c Se sS ty es mte sm 1s 21,,00 08 97 1 21,,00 089 197,,8 7 8 99 6 9 1,,8 718 90 29 6,,0 0 6 64 7 10 2,,0 06 64 78 1,,9 88 58 4 8 1,,9 88 58 4 Facts Retrieving Program Sub-module y eo rd viu cc et P Sr eo rd viu cc et 7 4,,3 32 89 0 7 4,,3 32 86 39 0,,3 8 3 55 4 6 3,,3 83 57 45 4,,0 4 1 52 8 7 4,,0 41 52 86 3,,0 98 48 0 6 3,,0 98 48 0 Module LM Encoder (RoBERTa-large) chnology STeecrhvincoel ogy Service oduct Product 485 4854 50 4503 91 391 360 360 ervice Service 3,812 3,8132,4 04 3,4044,2 96 4,2963,878 3,878 Prod uct sales for 2018 increased $4.3 billion, or 25 percent, as rd eu ac st ec a os dwam dl ae iP isp tns ir coa o prfd nero reu ia r m c sod2t ea f 0 s rwiw $1a ly8l a2ei t s s .dhi 9 n up f co erb2 irr me it02 la oal10 is roi1 te7 lhyn8d. e d i o$n uT ac4 fed hr. e pd3e ta oi rt s b oioteii hdl nndl ei u o c o$n cr af4e, td . $da3o s2i rs tab i .oe 92i ll nel 5i o bs own ip lf, l fae i or$ors on2cr m. e 92 onp 5 ft br I,i i npp lm lae rinoosra noc d erc vuoinol cafytm t, t pip sa ord oasa nu d lr e ce ue Ssod c m t y fwt rp sso oaai tt m elrh eet msdh2 I n e0 fsw rn1 ooit m7h v. a 2 IT tni0 oh n1 ne o7 v. a T tioh ne <s> What … 2017 ? </s> … 9560 … … stemas nadnd Sh yhisgigtehhmeesrr rareensdst rthirciigtcehtdee rad rne adst nrFidc-t3 eF5d - va3on5ldu mvFo-e3l 5ua tmv Aoeleu rmaoest paAat ecAereo rSosyspspataeccmee sS S.y (sy .ts.e.mt aesbm.b (rs e....v aiabtber.e..v )i aTteh.e.. ) The le below rteacbolen bceilleosw f ruencdosn cpirleosv ifduendds b pyr oovpiedreadt ibnyg o apcetrivaittiinegs a toct ievaiticehs stoe geamcehn ste (gdmolelanrt i(nd omllaillri oinn ms)i:ll ions): Segment Segment F u 2 n0 d1 e 8 d F u 2 n0 d1 e8 dF unded 2 F0 u1 n%7 d eCdh 2 a0 n1 %7 g eC hange Innovation Systems 5,928 — — I An en ro ov sa pt aio cn e S SyA Mys se it stre esom mis ops nsa Sce y sS tey mst se ms 115,,9 42 48 8 11 9,,64 74 68 9,5 6— 0 9 9,,5 26 70 71 9.7 — % 19 4. .7 3 % % Retrieved top-n Facts: M Teis cs hi no on l oS gy ys t SeT eem rc vsh i cn eo slo gy Services 9 2,,6 8 7 86 3 2,883 9 2,,2 77 97 2 2,792 4 3. .3 3 % % 3.3 % 1. The funded Aerospace Systems in 2017 was 9560. Approximately $26.6 billion of the $53.5 billion total at December 2. The funded Mission Systems in 2017 was 9277. proxi3 m1 a, t e2A cloy0p n p1 $vr 28o e6x r tii .ems 6d a e bit ne ix ltll op iy o e sn$ ac2 ol6t ee f. s 6 t d ih nb e ti 2lo l $i 0o 51bn 93 eo .. 5f (c t .boh ..ie n lal iv b$ obe5 nr3 r et.t ov5e it ad ab tli e lil a.ino .t.tn )o D t eost caa ell meats b D e i ren c 3 e2 1m0, b1 2e 09r 1 .3 81 i, s 2 e0 x1 p8 e i cs teex dp te oc t be ed to be 3. Approximately $26.6 billion of the $53.5 billion total verted into sales in 2019. ( ... abbreQviautee...s ) tion at December 31, 2018 is expected to be converted into What was the total sales increase in the segment with most sales in 2019. funds in 2017? 4. ……|Span Sub-module Type Predicted Prediction Answer Program Sub-module LM Encoder (RoBERTa-large) <s> What … 2017 ? </s> … 9560 … …|
||opera antdi negxp eenxspees nbsy esseg bmye nst e( Ydgeomall ra er E ni nntd m e(ddill oi Doln elacs) er: m i nb emr 3il1li ons) 2018 Year Ended D2e0ce1m7 ber 2018||
|S In Pr S A Pr S M Pr S Te Pr S|egment Sales Expenses Sales Expe novation SSyesgtemmesn t Sales E xpenses Sales oduct Innovation Systems2,894 2,5 82 — ervice Product 382 2,8943 51 2,582 — — erospace SSyesrtveicmes 382 351 — eo rd viu cc et A P Sr ee o rr d vo ius cc ep t a ce System1s 21,,00 08 97 1 21,,00 089 197,,8 7 8 99 6 9 1,,8 718 90 29 6,,0 0 6 64 7 10 2,,0 06 64 7 ission SystMeimsssi on Systems oduct Product 7,329 7,3269,3 35 6,3375,0 12 7,012 ervice Service 4,380 4,3830,8 54 3,8544,4 58 4,458 chnology STeecrhvincoel ogy Service oduct Product 485 4854 50 4503 91 391 ervice Service 3,812 3,8132,4 04 3,4044,2 96 4,296||
||Prod uct sales for 2018 increased $4.3 billion, or ductc osamlePpsra ofdroeur cd2t 0sw1a8leit shi n fco2rre 02a10s1e78d. i$nTc4hr.e3ea sbeiilndlio c$nr4e, .a3o rsb e2ill 5io wnp,ae orscr e2np5tr,i pmaesrac reasea dwdaiistni coprneria msoeaf rwi$lya2 s.d 9up erbi mitloali roitlhyn e d ouafed pdtoirt oiothdneu ocaftd $ds2itai.o9lne bsoi lfl fior$on2m. 9o fb Iinpllrinoonod stemas nadnd Sh yhisgigtehhmeesrr rareensdst rthirciigtcehtdee rad rne adst nrFidc-t3 eF5d - va3on5ldu mvFo-e3l 5ua tmv Aoeleu rmaoest paAat ecAereo rSosyspspataec le below rteacbolen bceilleosw f ruencdosn cpirleosv ifduendds b pyr oovpiedreadt ibnyg o apcetrivaittiinegs a toct ievaiticehs Segment Segment F u 2 n0 d1 e 8 d F u 2 n0 d1 e8 dF Innovation Systems 5,928 Innovation SyAsetreomspsa ce Systems 5,928 11,448 Aerospace SMysistesmions Systems 11,448 9,676 Mission SysteTemcsh nology Services 9,676 2,883 Technology Services 2,8 83 Approximately $26.6 billion of the $53.5 billion tot 31, 2A0pp1r8ox iims aetexlpy e$c26te.6d b tilolio bn eo fc tohen v$e53r.t5e dbi lilinotno t ostaall eats D ienc proximatecloyn $v2e6rt.e6d biniltloio sna olefs t ihne 2 $05193.. 5( .b..i lalibobnr etovitaatle a..t. )D ecember 3 verted into sales in 2019. ( ... abbreQviautee...s ) tion What was the total sales increase in the segment funds in 2017?||
Figure 3: The framework of MT2Net. The model consists of a facts retrieving module and a reasoning module.
bi-classifier (Devlin et al., 2019). During the inference stage, the top-n sentences are retrieved as
supporting facts. They are reordered according to
the order of appearance in the original document.
Then they will serve as input to reasoning module.
**Reasoning Module** We first use pre-trained LMs
to encode the retrieved sentences from the facts
retrieving module. Then, we divide the answers
into two types: arithmetic program and span. For
each answer type, we use a unique sub-module
to calculate the conditional answer probability
_P_ (answer|type):
_Program sub-module: The structure is similar_
with the program generator of FinQANet (Chen
et al., 2021). The sub-module aims to generate the
executable program to answer the question. Specifically, an LSTM is used for decoding. At each
decoding step T, the LSTM can generate one token from 1) the numbers from the retrieved, 2)
pre-defined operators, and 3) the tokens already
generated in the previous steps. After the completion of generation, the sub-module will execute the
generated programs and get the predicted answer.
_Span sub-module: The span sub-module aims to_
select the predicted span candidate, which is a span
of retrieved sentences. The answer probability is
defined as the product of the probabilities of the
start and end positions in the retrieved evidence.
Meanwhile, an extra output layer is used to predict the probability P (type) of each answer type. In
particular, we take the output vector [CLS] from
LMs as input to compute the probability. In the
training stage, the final answer probability is defined as the joint probability over all feasible answer types, i.e., type _[P]_ [(][type][)][ ×][ P] [(][answer][|][type][)][.]
Here, both P (type) and P (answer|type) is learned
by the model. In the inference stage, the model first[P]
selects the most probable answer type and then uses
corresponding sub-modules to predict the answer.
**5** **Experiments**
**5.1** **Baseline Systems**
**TAGOP** TAGOP[1] is the baseline model for TATQA dataset (Zhu et al., 2021). It first uses sequence
tagging with the Inside–Outside tagging (IO) approach to extract supporting facts. Then an operator
classifier is applied to decide which operator is used
to infer the final answer via extracted facts. Different from ours, TAGOP can only perform symbolic
reasoning with a single type of pre-defined aggregation operators (e.g. change Ratio, division), and
might fail to answer complex questions requiring
multi-step reasoning.
**FinQANet** FinQANet[2] is the baseline model for
FinQA dataset (Chen et al., 2021). It first uses a
BERT-based retriever to take the top-n supporting
facts. Then a program generator is applied to generate the reasoning programs to get the final answers.
[1https://github.com/NExTplusplus/](https://github.com/NExTplusplus/tat-qa)
[tat-qa](https://github.com/NExTplusplus/tat-qa)
[2https://github.com/czyssrs/FinQA](https://github.com/czyssrs/FinQA)
-----
Different from ours, FinQANet ignores the hierarchical structure of tables when linearizing each row
of a table. And it is not designed to answer span
selection questions.
**Longformer + Reasoning module** To demonstrate the necessity of breaking up models into facts
retrieving and reasoning modules, we directly use
the pre-trained Longformer-base[3] (Beltagy et al.,
2020) as the input encoder in the reasoning module,
and encode the whole document.
**Fact Retrieving Module + TAPAS** We employ
TAPAS (MASKLM-base)[4] (Herzig et al., 2020;
Eisenschlos et al., 2020) as a baseline over tabular
data. TaPas is pretrained over large-scale tables and
associated text from Wikipedia jointly. To finetune
it, we use the table with most supporting facts along
with the answer as input for each example. For the
inference stage, the table with most portion of top15 retrieved facts is used as input.
**Fact Retrieving + NumNet** NumNet+[5] (Ran
et al., 2019) has demonstrated its effectiveness on
the DROP dataset (Dua et al., 2019). It designs
a NumGNN between the encoding and prediction
module to perform numerical comparison and numerical reasoning. However, NumNet+ only supports addition and subtraction when performing
symbolic reasoning, thus cannot handle those complex questions requiring operators such as division.
**Fact** **Retrieving** **Module** **+** **Seq2Prog** A
Seq2Prog architecture adopted from baseline of
MathQA dataset (Amini et al., 2019) is used as the
reasoning module. Specifically, we use a biLSTM
encoder and an LSTM decoder with attention.
**5.2** **Implementation Details**
For the fact retrieving module, we use BERT-base
as the classifier. Since most of the examples in our
dataset have less than 7 supporting facts (89.3%),
and we find that longer inputs might lower the performance of the reasoning module, we take the
top-10 retrieving facts as the retriever results. For
the reasoning module, we experiment on using
BERT (Devlin et al., 2019) and RoBERTa (Liu
et al., 2019) as the encoder. We use the Adam optimizer (Kingma and Ba, 2014) for all models. The
[3https://github.com/allenai/longformer](https://github.com/allenai/longformer)
[4https://github.com/google-research/](https://github.com/google-research/tapas)
[tapas](https://github.com/google-research/tapas)
[5https://github.com/llamazing/numnet_](https://github.com/llamazing/numnet_plus)
[plus](https://github.com/llamazing/numnet_plus)
**Dev** **Test**
EM F1 EM F1
Longformer + Reasoning 2.71 6.93 2.86 6.23
Facts Retrieving + TAPAS 8.94 10.70 7.67 10.04
Facts Retrieving + NumNet 10.32 12.59 10.77 12.02
TAGOP (RoBERTa-large) 19.16 21.08 17.81 19.35
Facts Retrieving + Seq2Prog 26.19 28.74 24.58 26.30
FinQANet (RoBERTa-large) 32.41 35.37 31.72 33.60
MT2Net (BERT-base) 33.68 35.94 32.07 33.67
MT2Net (BERT-large) 34.03 36.13 33.25 34.98
MT2Net (RoBERTa-base) 35.69 37.81 34.32 36.17
MT2Net (RoBERTa-large) **37.05** **39.96** **36.22** **38.43**
Human Expert Performance – – 83.12 87.03
Table 4: Performance of MT2Net compared with different baseline models on the dev and test sets of MULTIHIERTT. While MT2Net outperforms other baselines,
all models perform far behind human experts.
training of all models is conducted on RTX 3090s.
All the implementation of LMs is based on the huggingface transformers library. To ensure fairness,
we set batch size as 32 for all baseline models.
For Evaluation Metrics, following TAT-QA (Zhu
et al., 2021), we report exact matching (EM) and
adopted numeracy-focused F1 (Dua et al., 2019).
**5.3** **Human Performance**
To test the performance of the human expert on
MULTIHIERTT, we invite another two professionals. We randomly sampled 60 examples from the
test set, and ask them to answer the questions individually within three hours. The results are reported in the last row of Table 4.
**5.4** **Model Performance**
Table 4 summarizes our evaluation results of different models. We use the same fact retrieving results
for all "Retrieving + Reasoning" models. For the
fact retrieving module, we have 76.4% recall for
the top-10 retrieved facts and 80.8% recall for the
top-15 retrieved facts.
**Necessity** **of** **applying** **retrieving-reasoning**
**pipeline** Directly using an end-to-end pretrained Longformer model to replace a retrieving
module falls far behind. This makes sense because
longer input contains much irrelevant numerical
information, which makes the reasoning module
difficult to learn.
**Necessity of understanding hierarchical table**
**structure** Both TAGOP and FinQANet perform
-----
worse than MT2Net because they ignore the table’s
hierarchical structure in the retrieving part. Different from ours, which flatten each cell with its
header hierarchical structures, both TAGOP and
FinQANet flatten each table by rows, losing the
table’s hierarchical structure information.
**Necessity of an effective reasoning module**
Most questions in MULTIHIERTT require models
to perform multi-step reasoning and integrate different kinds of operators. Generally, the reasoning
module generating reasoning programs to get answers performs better than directly generating answers by end-to-end method, i.e. adopted TAPAS.
Both adopted NumNet and TAGOP perform much
worse than MT2Net because they only support limited symbolic reasoning. Specifically, TAGOP can
only perform with a single type of pre-defined aggregation operator for each question, and NumNet
only supports addition and subtraction operators
when performing symbolic reasoning. By contrast, MT2Net performs better than FinQANet and
Seq2Prog because it applies different sub-modules
to answer questions with different answer types.
The results also show that larger pre-trained models have better performance. This is because they
are pre-trained on more financial corpus. However,
all the models perform significantly worse than
human experts, indicating MULTIHIERTT is challenging to state-of-the-art QA models and there is
a large room for improvements for future research.
**5.5** **Further Analysis**
To guide the future directions of model improvement, various performance breakdown experiments
on the test set are conducted using the MT2Net
(RoBERTa-large) model. Table 5 shows the results.
Generally, the model has a much lower accuracy on
questions with more than two numerical reasoning
steps. Meanwhile, the model performs poorly on
questions requiring cross-table supporting facts.
We further investigate the proposed MT2Net
by analyzing error cases. We randomly sample
100 error cases from the results of the MT2Net
(RoBERTa-large) model on the test set, and classify them into four main categories as shown in Table 6, along with examples. The analysis shows that
around 64% error (Wrong Operand/Span+Missing
Operand) is caused by the failure to integrate the
supporting facts correctly. Meanwhile, the current
model fails to integrate external finance knowledge
to answer questions.
**Performance Breakdown** **EM** **F1**
**Regarding supporting facts coverage**
text-only questions 49.26 53.29
table-only questions 36.77 38.55
w/ ≥ 2 tables 24.32 24.96
table-text questions 33.04 35.15
w/ ≥ 2 tables 21.04 23.36
**Regarding numerical reasoning steps**
1 step 43.62 47.80
2 steps 34.67 37.91
3 steps 22.43 24.57
> 3 steps 15.14 17.19
**Full Results** **36.22** **38.43**
Table 5: Results of performance breakdown using
MT2Net (RoBERTa-large). The model performance
deteriorates as the numbers of tables and reasoning
steps increase.
|Wrong Operand or Span (43%)|Q: What was the total of premiums granted in the year with the highest GAAP? G: 327 + 415 + 1217 P: 426 + 517 + 1109 Explain: Locate the wrong year.|
|---|---|
|Missing Operand (21%)|Q: What was the average value of trading asserts between 2015 and 2018? G: (1203 + 1437 + 1896 + 1774) / 4 P: (1203 + 1774) / 2 Explain: Only account year 2015 and 2018.|
|---|---|
|Wrong Program (19%)|Q: What is the change ratio of corporate debt from 2018 to 2019? G: (1024 - 979) / 979 P: 1024 - 979|
|---|---|
|Lack of Domain Knowledge (4%)|Q: What is the earning rate of ATTA stock in 2017? G: 17.32 / 35.80 P: 17.32 Explain: Not know the formula of calculat- ing earning rate.|
|---|---|
Table 6: Examples of error cases and corresponding
preparations. Q, G, P denote question, ground truth,
and predicted results, respectively.
**5.6** **Limitations and Future Work**
Although the proposed MT2Net model outperforms other baseline models, it still performs significantly worse than human experts, which reflects
the challenge of MULTIHIERTT. Primarily, we find
that models do not perform well on certain types of
questions: 1) questions requiring reasoning across
multiple tables; 2) questions requiring multi-step
reasoning; 3) questions requiring reasoning over
tables with complex hierarchical structures; and 4)
questions requiring external financial knowledge.
To deal with these challenges, we believe that
-----
four main directions of work may be workable: 1)
designing a specialized module to handle multitable reasoning; 2) decomposing a complex question requiring multi-step reasoning into several
simpler sub-questions that QA models can handle (Perez et al., 2020; Chen et al., 2020); 3) applying a more advanced table-encoding method. For
example, a pre-trained model with specialized table structure-aware mechanisms (Wang et al., 2021;
Cheng et al., 2021a; Yang et al., 2022) can be utilized in the facts retrieving module to better understand hierarchical tables; and 4) leveraging structured knowledge (Xie et al., 2022) to inject external
financial knowledge to models.
**6** **Conclusion**
We have proposed MULTIHIERTT, a new largescale QA dataset that aims to solve complicated QA
tasks that require numerical reasoning over documents containing multiple hierarchical tables and
paragraphs. To address the challenge of MULTIHIERTT, we introduce a baseline framework named
MT2Net. The framework first retrieves supporting
facts from financial reports and then generates executable reasoning programs to answer the question.
The results of comprehensive experiments showed
that current QA models (best F1: 38.43%) still
lag far behind the human expert performance (F1:
87.03%). This motivates further research on developing QA models for such complex hybrid data
with multiple hierarchical tables.
**7** **Ethics Considerations**
Data in MULTIHIERTT is collected from the
FinQA dataset (Chen et al., 2021) and FinTabNet
dataset (Zheng et al., 2021). FinQA is publicly
available under the MIT license[6]. FinTabNet is publicly available under the license CDLA-Permissive1.0[7]. Both licenses permits us to compose, modify,
publish, and distribute additional annotations upon
the original dataset.
For the internal annotation of MULTIHIERTT,
each expert is paid $20 per hour. For the external
annotation, we hire 23 graduate students majoring
in finance or similar disciplines. We regard creating one question-reasoning pair, or validating one
document’s annotation as a unit task. And we pay
around $1.1 for each unit task. Averagely, an annotator can finish 7 unit tasks per hour after training
6https://opensource.org/licenses/MIT
7https://cdla.dev/permissive-1-0/
and practicing. And the hourly rates are in the
range of $6 and $9 based on the different working speed (above the local average wage of similar
jobs). In total, the approximate working hours to
annotate MULTIHIERTT dataset is 1500 hours. The
whole annotation work lasts about 70 days.
**Acknowledgements**
We appreciate all the annotators’ efforts to construct MULTIHIERTT. And we would like to thank
the anonymous reviewers and action editors for
their constructive discussions and feedback.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. Mathqa: Towards interpretable math](https://arxiv.org/abs/1905.13319)
[word problem solving with operation-based for-](https://arxiv.org/abs/1905.13319)
[malisms. In Proceedings of the 2019 Conference of](https://arxiv.org/abs/1905.13319)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367.
[Dogu Araci. 2019. Finbert: Financial sentiment analy-](https://arxiv.org/abs/1908.10063)
[sis with pre-trained language models. arXiv preprint](https://arxiv.org/abs/1908.10063)
_arXiv:1908.10063._
Iz Beltagy, Matthew E. Peters, and Arman Cohan.
[2020. Longformer: The long-document transformer.](http://arxiv.org/abs/2004.05150)
_CoRR, abs/2004.05150._
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy
Liang. 2013. [Semantic parsing on freebase from](https://aclanthology.org/D13-1160)
[question-answer pairs. In Proceedings of the 2013](https://aclanthology.org/D13-1160)
_conference on empirical methods in natural lan-_
_guage processing, pages 1533–1544._
Taylor Berg-Kirkpatrick and Daniel Spokoyny. 2020.
[An empirical investigation of contextualized number](https://doi.org/10.18653/v1/2020.emnlp-main.385)
[prediction. In Proceedings of the 2020 Conference](https://doi.org/10.18653/v1/2020.emnlp-main.385)
_on Empirical Methods in Natural Language Process-_
_ing (EMNLP), pages 4754–4764, Online. Associa-_
tion for Computational Linguistics.
[Frederick Blumenthal and Ferdinand Graf. 2019. Uti-](https://aclanthology.org/W19-6402)
[lizing pre-trained word embeddings to learn classifi-](https://aclanthology.org/W19-6402)
[cation lexicons with little supervision. In Proceed-](https://aclanthology.org/W19-6402)
_ings of the Second Financial Narrative Processing_
_Workshop (FNP 2019), pages 5–15, Turku, Finland._
Linköping University Electronic Press.
Sven Buechel, Simon Junker, Thore Schlaak, Claus
Michelsen, and Udo Hahn. 2019. [A time series](https://doi.org/10.18653/v1/D19-5103)
[analysis of emotional loading in central bank state-](https://doi.org/10.18653/v1/D19-5103)
[ments.](https://doi.org/10.18653/v1/D19-5103) In Proceedings of the Second Workshop
_on Economics and Natural Language Processing,_
pages 16–21, Hong Kong. Association for Computational Linguistics.
-----
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan
[Xiong, Hong Wang, and William Wang. 2020. Hy-](https://aclanthology.org/2020.findings-emnlp.91/)
[bridqa: A dataset of multi-hop question answering](https://aclanthology.org/2020.findings-emnlp.91/)
[over tabular and textual data. Findings of EMNLP](https://aclanthology.org/2020.findings-emnlp.91/)
_2020._
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena
Shah, Iana Borova, Dylan Langdon, Reema Moussa,
Matt Beane, Ting-Hao Huang, Bryan Routledge,
[and William Yang Wang. 2021. Finqa: A dataset](https://arxiv.org/abs/2109.00122)
[of numerical reasoning over financial data. Proceed-](https://arxiv.org/abs/2109.00122)
_ings of the 2021 Conference on Empirical Methods_
_in Natural Language Processing._
Zhoujun Cheng, Haoyu Dong, Fan Cheng, Ran
Jia, Pengfei Wu, Shi Han, and Dongmei Zhang.
2021a. Fortap: [Using formulae for numerical-](https://arxiv.org/abs/2109.07323)
[reasoning-aware table pretraining.](https://arxiv.org/abs/2109.07323) _arXiv preprint_
_arXiv:2109.07323._
Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia,
Jiaqi Guo, Yan Gao, Shi Han, Jian-Guang Lou, and
[Dongmei Zhang. 2021b. Hitab: A hierarchical table](https://arxiv.org/abs/2108.06712)
[dataset for question answering and natural language](https://arxiv.org/abs/2108.06712)
[generation. arXiv preprint arXiv:2108.06712.](https://arxiv.org/abs/2108.06712)
Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. [Don’t take the easy way out: En-](https://arxiv.org/abs/1909.03683)
[semble based methods for avoiding known dataset](https://arxiv.org/abs/1909.03683)
[biases. In Proceedings of the 2019 Conference on](https://arxiv.org/abs/1909.03683)
_Empirical Methods in Natural Language Processing_
_and the 9th International Joint Conference on Natu-_
_ral Language Processing (EMNLP-IJCNLP), pages_
4069–4082.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. 2021. [Training veri-](https://arxiv.org/abs/2110.14168)
[fiers to solve math word problems. arXiv preprint](https://arxiv.org/abs/2110.14168)
_arXiv:2110.14168._
Tobias Daudert, Paul Buitelaar, and Sapna Negi. 2018.
[Leveraging news sentiment to improve microblog](https://doi.org/10.18653/v1/W18-3107)
[sentiment classification in the financial domain. In](https://doi.org/10.18653/v1/W18-3107)
_Proceedings of the First Workshop on Economics_
_and Natural Language Processing, pages 49–54,_
Melbourne, Australia. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. [BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing.](https://doi.org/10.18653/v1/N19-1423) In Proceedings of the 2019 Conference
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
[DROP: A reading comprehension benchmark requir-](https://aclanthology.org/N19-1246/)
[ing discrete reasoning over paragraphs. In Proc. of](https://aclanthology.org/N19-1246/)
_NAACL._
Liat Ein-Dor, Ariel Gera, Orith Toledo-Ronen, Alon
Halfon, Benjamin Sznajder, Lena Dankin, Yonatan
[Bilu, Yoav Katz, and Noam Slonim. 2019. Finan-](https://doi.org/10.18653/v1/D19-5102)
[cial event extraction using Wikipedia-based weak](https://doi.org/10.18653/v1/D19-5102)
[supervision.](https://doi.org/10.18653/v1/D19-5102) In Proceedings of the Second Work_shop on Economics and Natural Language Process-_
_ing, pages 10–15, Hong Kong. Association for Com-_
putational Linguistics.
Julian Eisenschlos, Syrine Krichene, and Thomas
Müller. 2020. [Understanding tables with interme-](https://doi.org/10.18653/v1/2020.findings-emnlp.27)
[diate pre-training.](https://doi.org/10.18653/v1/2020.findings-emnlp.27) In Findings of the Association
_for Computational Linguistics: EMNLP 2020, pages_
281–296, Online. Association for Computational
Linguistics.
João Filgueiras, Luís Barbosa, Gil Rocha, Henrique Lopes Cardoso, Luís Paulo Reis, João Pedro
Machado, and Ana Maria Oliveira. 2019. [Com-](https://doi.org/10.18653/v1/D19-5107)
[plaint analysis and classification for economic and](https://doi.org/10.18653/v1/D19-5107)
[food safety.](https://doi.org/10.18653/v1/D19-5107) In Proceedings of the Second Work_shop on Economics and Natural Language Process-_
_ing, pages 51–60, Hong Kong. Association for Com-_
putational Linguistics.
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
[Injecting numerical reasoning skills into language](https://doi.org/10.18653/v1/2020.acl-main.89)
[models. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/v1/2020.acl-main.89)
_ing of the Association for Computational Linguis-_
_tics, pages 946–958, Online. Association for Com-_
putational Linguistics.
Jingguang Han, Utsab Barman, Jeremiah Hayes, Jinhua Du, Edward Burgin, and Dadong Wan. 2018.
[NextGen AML: Distributed deep learning based lan-](https://doi.org/10.18653/v1/P18-4007)
[guage technologies to augment anti money launder-](https://doi.org/10.18653/v1/P18-4007)
[ing investigation. In Proceedings of ACL 2018, Sys-](https://doi.org/10.18653/v1/P18-4007)
_tem Demonstrations, pages 37–42, Melbourne, Aus-_
tralia. Association for Computational Linguistics.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. [Measuring mathematical](https://arxiv.org/abs/2103.03874)
[problem solving with the math dataset. NeurIPS.](https://arxiv.org/abs/2103.03874)
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
[and Phil Blunsom. 2015. Teaching machines to read](https://arxiv.org/abs/1506.03340)
[and comprehend. Advances in neural information](https://arxiv.org/abs/1506.03340)
_processing systems, 28:1693–1701._
Jonathan Herzig, Pawel Krzysztof Nowak, Thomas
Mueller, Francesco Piccinno, and Julian Eisensch[los. 2020. Tapas: Weakly supervised table parsing](https://aclanthology.org/2020.acl-main.398/)
[via pre-training. In Proceedings of the 58th Annual](https://aclanthology.org/2020.acl-main.398/)
_Meeting of the Association for Computational Lin-_
_guistics, pages 4320–4333._
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, and
[Song-Chun Zhu. 2021. Learning by fixing: Solv-](https://arxiv.org/abs/2012.10582)
[ing math word problems with weak supervision. In](https://arxiv.org/abs/2012.10582)
_AAAI Conference on Artificial Intelligence._
Minghao Hu, Yuxing Peng, Zhen Huang, and Dong[sheng Li. 2019. A multi-type multi-span network](https://doi.org/10.18653/v1/D19-1170)
-----
[for reading comprehension that requires discrete rea-](https://doi.org/10.18653/v1/D19-1170)
[soning. In Proceedings of the 2019 Conference on](https://doi.org/10.18653/v1/D19-1170)
_Empirical Methods in Natural Language Processing_
_and the 9th International Joint Conference on Natu-_
_ral Language Processing (EMNLP-IJCNLP), pages_
1596–1606, Hong Kong, China. Association for
Computational Linguistics.
[Yichen Jiang and Mohit Bansal. 2019. Avoiding rea-](https://doi.org/10.18653/v1/P19-1262)
[soning shortcuts: Adversarial evaluation, training,](https://doi.org/10.18653/v1/P19-1262)
[and model development for multi-hop QA. In Pro-](https://doi.org/10.18653/v1/P19-1262)
_ceedings of the 57th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 2726–_
2736, Florence, Italy. Association for Computational Linguistics.
Yannis Katsis, Saneem Chemmengath, Vishwajeet
Kumar, Samarth Bharadwaj, Mustafa Canim,
Michael Glass, Alfio Gliozzo, Feifei Pan, Jaydeep Sen, Karthik Sankaranarayanan, and Soumen
Chakrabarti. 2021. Ait-qa: [Question answering](http://arxiv.org/abs/2106.12944)
[dataset over complex tables in the airline industry.](http://arxiv.org/abs/2106.12944)
[Divyansh Kaushik and Zachary C Lipton. 2018. How](https://aclanthology.org/D18-1546/)
[much reading does reading comprehension require?](https://aclanthology.org/D18-1546/)
[a critical investigation of popular benchmarks. In](https://aclanthology.org/D18-1546/)
_Proceedings of the 2018 Conference on Empirical_
_Methods in Natural Language Processing, pages_
5010–5015.
[Diederik P Kingma and Jimmy Ba. 2014. Adam: A](https://arxiv.org/abs/1412.6980)
[method for stochastic optimization. arXiv preprint](https://arxiv.org/abs/1412.6980)
_arXiv:1412.6980._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. Mawps:](https://aclanthology.org/N16-1136)
[A math word problem repository. In Proceedings of](https://aclanthology.org/N16-1136)
_the 2016 Conference of the North American Chap-_
_ter of the Association for Computational Linguistics:_
_Human Language Technologies, pages 1152–1157._
Tuan Lai, Trung Bui, Sheng Li, and Nedim Lipka. 2018.
[A simple end-to-end question answering model for](https://doi.org/10.18653/v1/W18-3105)
[product information.](https://doi.org/10.18653/v1/W18-3105) In Proceedings of the First
_Workshop on Economics and Natural Language Pro-_
_cessing, pages 38–43, Melbourne, Australia. Associ-_
ation for Computational Linguistics.
Xiao Li, Yawei Sun, and Gong Cheng. 2021. [Tsqa:](https://ojs.aaai.org/index.php/AAAI/article/view/17570)
[Tabular scenario based question answering.](https://ojs.aaai.org/index.php/AAAI/article/view/17570) _Pro-_
_ceedings of the AAAI Conference on Artificial Intel-_
_ligence, 35(15):13297–13305._
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xi[ang Ren. 2020. Birds have four legs?! NumerSense:](https://doi.org/10.18653/v1/2020.emnlp-main.557)
[Probing Numerical Commonsense Knowledge of](https://doi.org/10.18653/v1/2020.emnlp-main.557)
[Pre-Trained Language Models. In Proceedings of](https://doi.org/10.18653/v1/2020.emnlp-main.557)
_the 2020 Conference on Empirical Methods in Nat-_
_ural Language Processing (EMNLP), pages 6862–_
6868, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
[Roberta: A robustly optimized bert pretraining ap-](http://arxiv.org/abs/1907.11692)
[proach.](http://arxiv.org/abs/1907.11692)
Feng Mai, Shaonan Tian, Chihoon Lee, and Ling Ma.
[2019. Deep learning models for bankruptcy predic-](https://doi.org/10.1016/j.ejor.2018.10.024)
[tion using textual disclosures. European journal of](https://doi.org/10.1016/j.ejor.2018.10.024)
_operational research, 274(2):743–758._
Macedo Maia, Siegfried Handschuh, André Freitas,
Brian Davis, Ross McDermott, Manel Zarrouk, and
[Alexandra Balahur. 2018. Www’18 open challenge:](https://dl.acm.org/doi/abs/10.1145/3184558.3192301)
[financial opinion mining and question answering. In](https://dl.acm.org/doi/abs/10.1145/3184558.3192301)
_Companion Proceedings of the The Web Conference_
_2018, pages 1941–1942._
Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth,
and Iryna Gurevych. 2021. [Scigen: a dataset for](https://openreview.net/forum?id=Jul-uX7EV_I)
[reasoning-aware text generation from scientific ta-](https://openreview.net/forum?id=Jul-uX7EV_I)
[bles. In Thirty-fifth Conference on Neural Informa-](https://openreview.net/forum?id=Jul-uX7EV_I)
_tion Processing Systems Datasets and Benchmarks_
_Track (Round 2)._
Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kry´sci´nski, Nick Schoelkopf, Riley Kong, Xiangru Tang,
Mutethia Mutuma, Ben Rosand, Isabel Trindade,
Renusree Bandaru, Jacob Cunningham, Caiming
[Xiong, and Dragomir Radev. 2022. FeTaQA: Free-](https://doi.org/10.1162/tacl_a_00446)
[form Table Question Answering.](https://doi.org/10.1162/tacl_a_00446) _Transactions_
_of the Association for Computational Linguistics,_
10:35–49.
Armineh Nourbakhsh and Grace Bang. 2019. [A](http://arxiv.org/abs/1908.09156)
[framework for anomaly detection using language](http://arxiv.org/abs/1908.09156)
[modeling, and its applications to finance.](http://arxiv.org/abs/1908.09156) _CoRR,_
abs/1908.09156.
[Panupong Pasupat and Percy Liang. 2015. Composi-](https://arxiv.org/abs/1508.00305)
[tional semantic parsing on semi-structured tables. In](https://arxiv.org/abs/1508.00305)
_Proceedings of the 53rd Annual Meeting of the Asso-_
_ciation for Computational Linguistics and the 7th In-_
_ternational Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pages 1470–_
1480.
Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun
[Cho, and Douwe Kiela. 2020. Unsupervised ques-](https://doi.org/10.18653/v1/2020.emnlp-main.713)
[tion decomposition for question answering.](https://doi.org/10.18653/v1/2020.emnlp-main.713) In
_Proceedings of the 2020 Conference on Empirical_
_Methods in Natural Language Processing (EMNLP),_
pages 8864–8880, Online. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
[Percy Liang. 2016. SQuAD: 100,000+ questions for](https://doi.org/10.18653/v1/D16-1264)
[machine comprehension of text. In Proceedings of](https://doi.org/10.18653/v1/D16-1264)
_the 2016 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 2383–2392, Austin,_
Texas. Association for Computational Linguistics.
Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan
[Liu. 2019. Numnet: Machine reading comprehen-](https://aclanthology.org/D19-1251/)
[sion with numerical reasoning. In Proceedings of](https://aclanthology.org/D19-1251/)
_the 2019 Conference on Empirical Methods in Nat-_
_ural Language Processing and the 9th International_
-----
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 2474–2484._
Justus J Randolph. 2005. [Free-marginal multirater](https://eric.ed.gov/?id=ED490661)
[kappa (multirater k [free]): An alternative to fleiss’](https://eric.ed.gov/?id=ED490661)
[fixed-marginal multirater kappa. Advances in Data](https://eric.ed.gov/?id=ED490661)
_Analysis and Classification, 4._
Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro
Funakoshi, Manabu Okumura, and Hiroya Taka[mura. 2021. Towards table-to-text generation with](https://doi.org/10.18653/v1/2021.acl-long.115)
[numerical reasoning. In Proceedings of the 59th An-](https://doi.org/10.18653/v1/2021.acl-long.115)
_nual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Confer-_
_ence on Natural Language Processing (Volume 1:_
_Long Papers), pages 1451–1465, Online. Associa-_
tion for Computational Linguistics.
Narges Tabari, Piyusha Biswas, Bhanu Praneeth,
Armin Seyeditabari, Mirsad Hadzikadic, and
[Wlodek Zadrozny. 2018. Causality analysis of Twit-](https://doi.org/10.18653/v1/W18-3102)
[ter sentiments and stock market returns. In Proceed-](https://doi.org/10.18653/v1/W18-3102)
_ings of the First Workshop on Economics and Natu-_
_ral Language Processing, pages 11–19, Melbourne,_
Australia. Association for Computational Linguistics.
Alon Talmor and Jonathan Berant. 2018. [The web](https://doi.org/10.18653/v1/N18-1059)
[as a knowledge-base for answering complex ques-](https://doi.org/10.18653/v1/N18-1059)
[tions. In Proceedings of the 2018 Conference of the](https://doi.org/10.18653/v1/N18-1059)
_North American Chapter of the Association for Com-_
_putational Linguistics: Human Language Technolo-_
_gies, Volume 1 (Long Papers), pages 641–651, New_
Orleans, Louisiana. Association for Computational
Linguistics.
Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav,
Yizhong Wang, Akari Asai, Gabriel Ilharco, Han[naneh Hajishirzi, and Jonathan Berant. 2021. Mul-](https://openreview.net/forum?id=ee6W5UgQLa)
[timodal{qa}: complex question answering over text,](https://openreview.net/forum?id=ee6W5UgQLa)
[tables and images. In International Conference on](https://openreview.net/forum?id=ee6W5UgQLa)
_Learning Representations._
Christoph Kilian Theil, Sanja Štajner, and Heiner
[Stuckenschmidt. 2018. Word embeddings-based un-](https://doi.org/10.18653/v1/W18-3104)
[certainty detection in financial disclosures. In Pro-](https://doi.org/10.18653/v1/W18-3104)
_ceedings of the First Workshop on Economics and_
_Natural Language Processing, pages 32–37, Mel-_
bourne, Australia. Association for Computational
Linguistics.
Weikang Wang, Jiajun Zhang, Qian Li, Chengqing
[Zong, and Zhifei Li. 2019. Are you for real? de-](https://doi.org/10.18653/v1/D19-1185)
[tecting identity fraud via dialogue interactions. In](https://doi.org/10.18653/v1/D19-1185)
_Proceedings of the 2019 Conference on Empirical_
_Methods in Natural Language Processing and the_
_9th International Joint Conference on Natural Lan-_
_guage Processing (EMNLP-IJCNLP), pages 1762–_
1771, Hong Kong, China. Association for Computational Linguistics.
Zhiruo Wang, Haoyu Dong, Ran Jia, Jia Li, Zhiyi Fu,
[Shi Han, and Dongmei Zhang. 2021. Tuta: Tree-](https://doi.org/10.1145/3447548.3467434)
[based transformers for generally structured table pre-](https://doi.org/10.1145/3447548.3467434)
[training. In Proceedings of the 27th ACM SIGKDD](https://doi.org/10.1145/3447548.3467434)
_Conference on Knowledge Discovery & Data_
_Mining, KDD ’21, page 1780–1790, New York, NY,_
USA. Association for Computing Machinery.
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong,
Torsten Scholak, Michihiro Yasunaga, Chien-Sheng
Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al.
[2022. Unifiedskg: Unifying and multi-tasking struc-](https://arxiv.org/abs/2201.05966)
[tured knowledge grounding with text-to-text lan-](https://arxiv.org/abs/2201.05966)
[guage models. arXiv preprint arXiv:2201.05966.](https://arxiv.org/abs/2201.05966)
[Zhipeng Xie and Shichao Sun. 2019. A goal-driven](https://doi.org/10.24963/ijcai.2019/736)
[tree-structured neural model for math word prob-](https://doi.org/10.24963/ijcai.2019/736)
[lems.](https://doi.org/10.24963/ijcai.2019/736) In Proceedings of the Twenty-Eighth In_ternational Joint Conference on Artificial Intelli-_
_gence, IJCAI-19, pages 5299–5305. International_
Joint Conferences on Artificial Intelligence Organization.
Jingfeng Yang, Aditya Gupta, Shyam Upadhyay,
Luheng He, Rahul Goel, and Shachi Paul. 2022.
[Tableformer: Robust transformer modeling for table-](https://arxiv.org/abs/2203.00274)
[text encoding. arXiv preprint arXiv:2203.00274.](https://arxiv.org/abs/2203.00274)
Yi Yang, Mark Christopher Siy Uy, and Allen Huang.
[2020. Finbert: A pretrained language model for fi-](http://arxiv.org/abs/2006.08097)
[nancial communications. CoRR, abs/2006.08097.](http://arxiv.org/abs/2006.08097)
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio,
William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. [HotpotQA: A dataset](https://doi.org/10.18653/v1/D18-1259)
[for diverse, explainable multi-hop question answer-](https://doi.org/10.18653/v1/D18-1259)
[ing. In Proceedings of the 2018 Conference on Em-](https://doi.org/10.18653/v1/D18-1259)
_pirical Methods in Natural Language Processing,_
pages 2369–2380, Brussels, Belgium. Association
for Computational Linguistics.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and
Jianfeng Gao. 2015. [Semantic parsing via staged](https://doi.org/10.3115/v1/P15-1128)
[query graph generation: Question answering with](https://doi.org/10.3115/v1/P15-1128)
[knowledge base. In Proceedings of the 53rd Annual](https://doi.org/10.3115/v1/P15-1128)
_Meeting of the Association for Computational Lin-_
_guistics and the 7th International Joint Conference_
_on Natural Language Processing (Volume 1: Long_
_Papers), pages 1321–1331, Beijing, China. Associa-_
tion for Computational Linguistics.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga,
Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingn[ing Yao, Shanelle Roman, et al. 2018. Spider: A](https://arxiv.org/abs/1809.08887)
[large-scale human-labeled dataset for complex and](https://arxiv.org/abs/1809.08887)
[cross-domain semantic parsing and text-to-sql task.](https://arxiv.org/abs/1809.08887)
In Proceedings of the 2018 Conference on Empiri_cal Methods in Natural Language Processing, pages_
3911–3921.
Shuang (Sophie) Zhai and Zhu (Drew) Zhang. 2019.
[Forecasting firm material events from 8-k reports.](https://doi.org/10.18653/v1/D19-5104)
In Proceedings of the Second Workshop on Eco_nomics and Natural Language Processing, pages_
22–30, Hong Kong. Association for Computational
Linguistics.
Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang
Wang, Yang Wang, Jing Jiang, and Ee-Peng Lim.
-----
2021. [NOAHQA: Numerical reasoning with in-](https://doi.org/10.18653/v1/2021.findings-emnlp.350)
[terpretable graph question answering dataset.](https://doi.org/10.18653/v1/2021.findings-emnlp.350) In
_Findings of the Association for Computational Lin-_
_guistics: EMNLP 2021, pages 4147–4161, Punta_
Cana, Dominican Republic. Association for Computational Linguistics.
Xikun Zhang, Deepak Ramachandran, Ian Tenney,
Yanai Elazar, and Dan Roth. 2020. [Do language](https://doi.org/10.18653/v1/2020.blackboxnlp-1.27)
[embeddings capture scales?](https://doi.org/10.18653/v1/2020.blackboxnlp-1.27) In Proceedings of the
_Third BlackboxNLP Workshop on Analyzing and In-_
_terpreting Neural Networks for NLP, pages 292–299,_
Online. Association for Computational Linguistics.
Xinyi Zheng, Doug Burdick, Lucian Popa, Peter
[Zhong, and Nancy Xin Ru Wang. 2021. Global ta-](https://arxiv.org/abs/2005.00589)
[ble extractor (gte): A framework for joint table iden-](https://arxiv.org/abs/2005.00589)
[tification and cell structure recognition using visual](https://arxiv.org/abs/2005.00589)
[context. Winter Conference for Applications in Com-](https://arxiv.org/abs/2005.00589)
_puter Vision (WACV)._
Victor Zhong, Caiming Xiong, and Richard Socher.
2017. Seq2sql: [Generating structured queries](https://arxiv.org/abs/1709.00103)
[from natural language using reinforcement learning.](https://arxiv.org/abs/1709.00103)
_arXiv preprint arXiv:1709.00103._
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao
Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and
[Tat-Seng Chua. 2021. TAT-QA: A question answer-](https://arxiv.org/abs/2105.07624)
[ing benchmark on a hybrid of tabular and textual](https://arxiv.org/abs/2105.07624)
[content in finance. In Proceedings of the 59th An-](https://arxiv.org/abs/2105.07624)
_nual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Confer-_
_ence on Natural Language Processing (Volume 1:_
_Long Papers), pages 3277–3287._
**A** **Dataset Annotation**
The definitions of all operators used for annotators
are shown in Table 7.
Operator Arguments Numerical Expression
Add number1, number2 _number1 + number2_
Subtract number1, number2 _number1 −_ _number2_
Multiply number1, number2 _number1 × number2_
Divide number1, number2 _number1 ÷ number2_
Exp number1, number2 _number1[number][2]_
Table 7: Definitions of all operations
-----
| [
"Yilun, Zhao",
"Yunxiang, Li",
"Rui, Zhang",
"Chenying, Li"
] | 2022-06-02T00:00:00 | ACL 2022 Long Papers | true | 71 | 4 | null | http://arxiv.org/abs/2206.01347 | https://arxiv.org/abs/2206.01347 | https://www.semanticscholar.org/paper/0b7e9b6b588baa3f24fdde06feec26a067aa74bd |
Answering Questions by Meta-Reasoning over Multiple Chains of Thought | Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed chain-of-thought (CoT), before arriving at a final answer. Often, multiple chains are sampled and aggregated through a voting mechanism over the final answers, but the intermediate steps themselves are discarded. While such approaches improve performance, they do not consider the relations between intermediate steps across chains and do not provide a unified explanation for the predicted answer. We introduce Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought, rather than aggregate their answers. MCR examines different reasoning chains, mixes information between them and selects the most relevant facts in generating an explanation and predicting the answer. MCR outperforms strong baselines on 7 multi-hop QA datasets. Moreover, our analysis reveals that MCR explanations exhibit high quality, enabling humans to verify its answers. | This work introduces Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought, rather than aggregating their answers. | ## Answering Questions by Meta-Reasoning over Multiple Chains of Thought
**Ori Yoran[∗][1]** **Tomer Wolfson[∗][1][,][2]** **Ben Bogin[1]** **Uri Katz[3]**
**Daniel Deutch[1]** **Jonathan Berant[1]**
1Tel Aviv University 2Allen Institute for AI 3Bar Ilan University
[email protected] [email protected]
**Abstract**
Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed chain-of_thought (CoT), before arriving at a final answer._
Often, multiple chains are sampled and aggregated through a voting mechanism over the final answers, but the intermediate steps themselves are discarded. While such approaches
improve performance, they do not consider
the relations between intermediate steps across
chains and do not provide a unified explanation
for the predicted answer. We introduce MultiChain Reasoning (MCR), an approach which
prompts large language models to meta-reason
over multiple chains of thought, rather than aggregate their answers. MCR examines different
reasoning chains, mixes information between
them and selects the most relevant facts in generating an explanation and predicting the answer. MCR outperforms strong baselines on 7
multi-hop QA datasets. Moreover, our analysis reveals that MCR explanations exhibit high
quality, enabling humans to verify its answers.
**Did Brad Peyton need to know about seismology?**
**Sampling Multiple Reasoning Chains**
**Reasoning chain #1**
Q: What is Brad Peyton's occupation?
A: Brad Peyton is a film director.
Q: What is seismology? **Answer #1: No.**
A: Seismology is the study of earthquakes.
Q: Do film directors need to know about earthquakes?
A: No, film directors do not need to know about earthquakes.
**Reasoning chain #2**
Q: What did Brad Peyton direct?
A: Brad Peyton is a Canadian film director.
… **Answer #2: Yes.**
Q: What is San Andreas about?
A: San Andreas is a film directed by Brad Peyton, about a
massive earthquake caused by the San Andreas Fault.
**Reasoning chain #3**
Q: What is Brad Peyton's occupation?
A: Brad Peyton is a film director, writer, and producer.
Q: What do film directors have to know? **Answer #3: No.**
A: Film directors have to know a lot of things.
Q: Is seismology one of the things that Brad Peyton has to
know?
A: No.
**Multi-Chain Reasoning (MCR)** **Self-Consistency**
**Answer: Brad Peyton is a film director, writer, and**
producer. Seismology is the study of earthquakes. San
Andreas is a film directed by Brad Peyton, about a
massive earthquake caused by the San Andreas Fault.
**So the final answer is: Yes.**
Answer #1: No.
Answer #2: Yes.
Answer #3: No.
**Majority answer: No.**
Figure 1: An example from STRATEGYQA, showing the output of Multi-Chain Reasoning versus SelfConsistency. MCR uses reasoning chains as its context
for QA. SC solely relies on the chains’ answers.
**1** **Introduction**
In chain-of-thought (CoT) prompting, a large language model (Brown et al., 2020; Chowdhery et al.,
2022; Kadavath et al., 2022; Touvron et al., 2023)
is prompted to generate its answer following a stepby-step explanation (Wei et al., 2022; Nye et al.,
2022). CoT prompting has been shown to dramatically improve performance on reasoning-heavy
tasks (Kojima et al., 2022; Zhou et al., 2022). Furthermore, Wang et al. (2023) showed that sampling
_multiple chains of thought and returning their ma-_
jority output further improves accuracy, a method
which they term self-consistency (SC).
While SC leads to performance gains, it also has
several shortcomings. First, when the space of possible outputs is large (Kalyan et al., 2021), each
reasoning chain may lead to a different output, in
which case no significant majority will be formed.
Second, focusing exclusively on the final output
discards relevant information that is present in the
intermediate reasoning steps. Consider answering
the question “Did Brad Peyton need to know about
_seismology?” (Fig. 1). Reasoning chain #1 leads_
to an incorrect answer (“No”), but its steps provide
useful information. For example, the intermediate question, and following answer, on “What is
_seismology?” constitute an important fact that is_
absent from the other two chains. Last, using SC
jointly with chain-of-thought prompting reduces interpretability, as there is no single reasoning chain
that can be considered as an explanation.
In this work, we propose Multi-Chain Reasoning
(MCR), where we prompt a large language model
*Both authors contributed equally to this work.
-----
|2. Retrieval|Col2|
|---|---|
|||
|3. Meta-Reason||
**How many ants would fit**
**into The Shard?**
**Reasoning chain #1** **Reasoning chain #2** **Reasoning chain #3**
**1. Decomposition** Q1: What is the height of The Shard? Q1: How high is the shard? Q1: How many ants are in the world?
E1: The Shard comprises a 26-floor E1: 1,016 feet: Standing 309.6 metres E1: Ant: This study also puts a
Iteratively generate office complex, occupied by… (1,016 feet) high, The Shard is … conservative estimate of the ants at …
the reasoning chain
by interleaving
retrieved evidence E1: The Shard comprises a 26-floor… E1: Standing 309.6 metres (1,016 E1: Ant: This study also puts …
E2: Ants are eusocial insects of the… feet) high, The Shard … E2: The Shard comprises …
Q1: What is the height of The Shard? E2: Ants range in size from 0.75 … Q1: How many ants are in the world?
A1: The height of The Shard is 26- Q1: How high is the shard? A1: There are around 20 quadrillion
**2. Retrieval** floors, standing 309.6 metres tall. A1: The shard is 1016 feet in height. ants in the world.
Q2: What is the volume of an ant? Q2: How long is the average ant? Q2: What is the volume of The Shard?
A2: The volume of an ant is 2e-23 A2: The length of an average ant is A2: The volume of The Shard is 90000
cubic meters. 0.75mm. cubic meters.
So the answer is: 4e+5/(2e-23)=2e+28. So the answer is: So the answer is:
1e+6*1016/0.75=1.354e+9 20e15/90000=2.222222e+12.
What is the height of The Shard? The height of The Shard is 26-floors, standing 309.6 metres tall.
**3. Meta-Reason** What is the volume of an ant? The volume of an ant is 2e-23 cubic meters.
How high is the shard? The shard is 1016 feet in height.
How long is the average ant? The length of an average ant is 0.75mm.
Generate the final answer How many ants are there in the world? There are around 20 quadrillion ants in the world.
based on the content of What is the volume of The Shard? The volume of The Shard is 90000 cubic meters
sampled reasoning chains
**Answer: The volume of The Shard is 90000 cubic meters. The volume of an ant is 2e-23 cubic meters.**
**So the answer is: 4.5e+22.**
Figure 2: An overview of MCR, given a question from the FERMI dataset. Steps 1-2 generate multiple reasoning
chains by conditioning the generation of intermediate questions and answers on retrieved evidence sentences. In
step 3, the meta-reasoner generates the final answer, given multiple reasoning chains from the previous steps.
(LLM) to meta-reason across multiple reasoning
chains and produce a final answer, alongside an
explanation. Unlike prior work, sampled reasoning chains are used not for their predictions (as in
SC) but as a means to collect pieces of evidence
from multiple chains. Fig. 1 illustrates MCR compared to SC. While both methods rely on sampling
multiple reasoning chains, SC returns the majority answer, “No” (grey box, bottom right). By
contrast, MCR concatenates the intermediate steps
from each chain (blue boxes, top left) into a unified
context, which is passed, along with the original
question, to a meta-reasoner model. The metareasoner is a separate LLM, prompted to metareason on multiple reasoning chains and produce a
final answer along with an explanation (pink box,
bottom left). By reasoning on multiple reasoning chains, MCR is able to mitigate the aforementioned drawbacks – it combines facts from multiple
chains to produce the correct final answer, with an
explanation of the answer’s validity.
MCR has three main components (§3). To generate reasoning chains we use two components, a
_decomposition model and a retriever which jointly_
generate the chain (Fig. 2), similar to prior work
(Press et al., 2022; Trivedi et al., 2022a). These
chains are then concatenated into a unified multi_chain context which is fed to the aforementioned_
meta-reasoner. Fig. 1 highlights the ability of the
meta-reasoner to combine facts from different reasoning chains (intermediate answers in pink). The
output explanation combines facts from each of
the three chains: (1) “Seismology is the study of
_earthquakes”; (2) “San Andreas is a film...”; (3)_
_“Brad Peyton is a film director, writer...”. SC (in_
grey) errs due to only using the answers, while the
meta-reasoner reads entire reasoning chains, and is
able to correctly answer the question.
We evaluate MCR on a wide range of challenging multi-hop question answering (QA) datasets,
in an open-domain setting. The datasets can be
categorized into two types of tasks: implicit rea_soning tasks, where reasoning steps are implicit_
given the question text and need to be inferred using a strategy (Tafjord et al., 2019; Geva et al.,
2021; Kalyan et al., 2021); explicit reasoning tasks,
where a single reasoning strategy exists and can
be directly inferred given the language of the question (Yang et al., 2018; Welbl et al., 2018; Press
et al., 2022; Aly et al., 2021). As our baselines, we
compare MCR to SC, as well as to variants of SelfAsk (Press et al., 2022) and CoT augmented with
retrieval, following Trivedi et al. (2022a). Our results show MCR consistently outperforms all other
baselines, in particular, beating SC by up to 5.7%,
while using the same reasoning chains (§4).
-----
We analyze the qualities of MCR in §5, by manually scoring its generated explanations and estimating their accuracy. Our analysis shows that
MCR generates high quality explanations for over
82% of examples, while fewer than 3% are unhelpful. To conclude, our main contributions are:
- We introduce the MCR method for metareasoning on multiple chains-of-thought.
- We show that MCR outperforms all baselines,
including self-consistency, on all 7 multi-hop
open-domain QA benchmarks.
- We analyze MCR for its explanation quality
and its multi-chain reasoning capabilities.
Our data and codebase are publicly available.[1]
**2** **Background**
Recently, there has been a surge of interest in
answering multi-hop questions through few-shot
prompting of LLMs (Wei et al., 2022; Nye et al.,
2022; Yao et al., 2022). The majority of these
works follow a common standard: First, given a
question, plan a step-by-step reasoning chain to
derive the answer and solve all intermediate steps,
aided by a retriever to minimize model hallucination (Khot et al., 2023; Press et al., 2022; Yao et al.,
2022; Lazaridou et al., 2023; Trivedi et al., 2022a;
Khattab et al., 2022). Then, incorporate multiple
reasoning chains with answers to derive the final
answer (Wang et al., 2023; Li et al., 2022). In our
work, we follow this template and focus on the latter part. However, our meta-reasoning approach
differs from prior work by reasoning on multiple
reasoning chains. Namely, we use multiple chains
to collect relevant evidence for question answering.
**3** **Method**
We present a method for answering questions by
meta-reasoning on multiple reasoning chains. Our
focus is on open-domain QA, where the input is a
question q, and the evidence to answer it is found
in one or more sentences in a corpus C. When answering q requires multiple reasoning steps, it can
be expressed by a reasoning chain, denoted by r.
The reasoning chain is a list of one or more intermediate question-evidence-answer triples (qi, ei, ai).
Evidence ei _C is a sentence that is relevant to_
_∈_
answering the intermediate question qi.
Fig. 2 describes our approach when answering
_“How many ants would fit into The Shard?”. First,_
[1https://github.com/oriyor/reasoning-on-cots](https://github.com/oriyor/reasoning-on-cots)
**Is it true that Colonel Walter Phelps served the United**
**States Army for more than 30 years?**
|Decomposition step #1|Col2|
|---|---|
|||
|helps served the United than 30 years?|Col2|
|---|---|
|Decomposition step #2||
|||
|Retrieval step #1|Col2|
|---|---|
|||
|Decomposition ans. #1|Col2|
|---|---|
|||
**Decomposition step #1** **Decomposition step #2**
E1: Walter Phelps: (Oct 29, 1832–
**Q1: Who is Colonel Walter Phelps?**
February 20, 1878) was an officer…
Q1: Who is Colonel Walter Phelps?
A1: Colonel Walter Phelps was an officer
**Retrieval step #1** in the Union Army throughout the
American Civil War.
**Q2: How long did Colonel Walter**
E1: Walter Phelps: (Oct 29, 1832– **Phelps serve the United States Army?**
February 20, 1878) was an officer in the
Union Army throughout the American E2: Walter Phelps: [['Walter Phelps Jr.'],
Civil War, serving as commanding ['Allegiance', 'United States of America
officer of the Eastern Iron Brigade. Union'], ['Service/branch', 'United States
Army Union Army'], ['Years of service',
'1861-1865'], ['Rank', 'Colonel Bvt.
**Decomposition ans. #1** Brigadier General']]
E1: Walter Phelps: (Oct 29, 1832– E1: Walter Phelps: (Oct 29, 1832–
February 20, 1878) was an officer… February 20, 1878) was an officer…
Q1: Who is Colonel Walter Phelps? E2: Walter Phelps: [['Walter Phelps Jr.'] …
**A1: Colonel Walter Phelps was an** Q1: Who is Colonel Walter Phelps?
**officer in the Union Army** A1: Colonel Walter …
**throughout the American Civil War.** Q2: How long did Colonel Walter Phelps
serve the United States Army?
**A2: Colonel Walter Phelps served the**
**United States Army for 4 years.**
Figure 3: Interleaving decomposition and retrieval steps.
we use a prompted LLM to generate multiple reasoning chains, r[(1)], ..., r[(][k][)] (steps 1-2). Each r[(][j][)]
is generated by interleaving generated intermediate questions with retrieved contexts (§3.1). Our
main contribution is step 3: We introduce a second
LLM that is prompted to meta-reason on multiple
reasoning chains, collecting evidence facts as its
explanation and generating the final answer (§3.2).
**3.1** **Generating Reasoning Chains**
Given a question q, we generate its reasoning chain
using: (1) a decomposition model, and (2) a retriever component. Our reasoning chain generation
process is largely based on prior work (Press et al.,
2022; Trivedi et al., 2022a), discussed in §2. Fig. 3
describes the interleaving of decomposition and
retrieval. At each step, the decomposition model
generates an intermediate question qi, based on
the original question q and the previous reasoning
steps. Then, the retriever uses qi to retrieve relevant
evidence ei ∈ _C. We feed ei and qi to the decom-_
position model (along with the previous steps) to
generate intermediate answer ai. During answer
generation, we prepend intermediate evidence sentences to the beginning of the chain rather than
interleaving them, as it improves the accuracy for
all baselines. For decomposition prompts, see §D.
**3.2** **Reasoning over Reasoning Chains**
The meta-reasoner module is the core contribution
of MCR. Instead of sampling multiple chains for
their predicted answers (Wang et al., 2023), we
utilize them for context generation. This context
-----
is fed to a prompted LLM to read the generated
chains and reason over them to return the answer.
In §3.1, we defined a reasoning chain as a list
of (qi, ei, ai) triples. We first sample multiple
chains and use all of their intermediate questionanswer pairs (qi, ai) as our multi-chain context (a
variant using question-evidence pairs (qi, ei) is described in §B.4). Fig. 2 presents the multi-chain
context of the three sampled chains (lower pink
box). Next, the multi-chain context and the original question are input to the meta-reasoner. This
model is an LLM, few-shot prompted for QA over
a multi-chain context. Fig. 4 presents one exemplar
from the meta-reasoner prompt for the FEVEROUS
dataset (full prompts in §D). We instruct the LLM
to “answer the question step-by-step” given its
multi-chain context, where each line describes a
(qi, ai) pair from one of the sampled chains. Next,
we append the question and a step-by-step reasoning chain followed by the final answer. This last
chain serves as the explanation for solving the question. The meta-reasoner is prompted with 6-10
exemplars, based on the dataset (§4.1).
Providing the meta-reasoner with multiple
chains allows it to combine and aggregate facts
across chains. Moreover, the model needs to extract the most relevant facts in the chains to serve
as its explanation. This enables MCR to be both
more accurate and more interpretable than past
multi-chain approaches (as we analyze in §5).
**4** **Experiments**
We compare MCR to existing methods on 7 multihop QA benchmarks. These cover a wide range
of reasoning skills, including commonsense, composition, comparison and fact verification. MCR
consistently outperforms existing approaches on all
benchmarks, when experimenting with two different LLMs and retrievers. Our setting is described
in §4.1 and we discuss our main results in §4.2.
**4.1** **Experimental Setting**
**4.1.1** **Datasets**
As our focus is on multi-hop questions (in an opendomain setting), all datasets require multiple reasoning steps. Following prior work (Khattab et al.,
2022; Trivedi et al., 2022a) and to limit the cost
of model API calls, we evaluate on 500-1000 random examples from the development set of each
**_Given a question and a context, answer the question step-_**
**_by-step. If you are unsure, answer Unknown._**
**Context:**
Who is Robert Broderip? Robert Broderip was an English
organist and composer.
Where did Robert Broderip live all his life? Robert Broderip
lived in Bristol all his life.
When did Robert Broderip live? Robert Broderip lived
during the 19th century.
...
Where did Robert Broderip live? Broderip lived in Bristol.
During what part of the nineteenth century did Robert
Broderip write music? Robert Broderip wrote music during
the latter part of the eighteenth century.
**Question: Is it true that Robert Broderip lived in London all**
his life and wrote a considerable quantity of music during
the earlier part of the nineteenth century?
**Answer: Robert Broderip lived in Bristol all his life, not in**
London. So the answer is: No.
Figure 4: An exemplar from the meta-reasoner prompt.
dataset.[2] We also evaluate on the official test sets
of STRATEGYQA and FERMI, as they target implicit reasoning, have multiple valid strategies, and
their test set evaluation cost is reasonable. For all
datasets, we make sure that no evaluation questions
appear in any of our prompts. Tab. 1 has example questions from each dataset. Our multi-hop
QA benchmarks can be categorized based on their
required reasoning skills:
- Implicit Reasoning: Questions that entail implicit reasoning steps (Geva et al., 2021). The
reasoning steps for solving it cannot be explicitly derived from the language of the question
and require commonsense or arithmetic reasoning. Such questions may have multiple valid reasoning chains. We evaluate on: STRATEGYQA
(Geva et al., 2021), FERMI (Kalyan et al., 2021)
and QUARTZ (Tafjord et al., 2019).
- Explicit Reasoning: Multi-hop questions
where the reasoning steps are explicitly expressed in the language of the question (composition, comparison). These include HOTPOTQA
(Yang et al., 2018), 2WIKIMQA (Welbl et al.,
2018) and BAMBOOGLE (Press et al., 2022).
We also evaluate on FEVEROUS (Aly et al.,
2021), a fact verification dataset where claims
require verifying multiple facts, and evidence
may be either in sentences, tables or both.
For evaluation, we use F1 to compare predicted
and gold answers for all explicit reasoning datasets
2We use the entire development set for QUARTZ and BAMBOOGLE, since they include less than 500 examples. For
FERMI we use all 286 “Real Fermi Problems” in its train and
development sets. Exact numbers are listed in Tab. 2.
-----
reasoning on more than a single chain.
- MCR: The meta-reasoner is given five reasoning chains as its multi-chain context (§3.2).
We decode one chain with greedy decoding,
and sample another four reasoning chains with
temperature t = 0.7.[3] This enables the metareasoner to review different pieces of evidence
when answering the full question (§5).
- SCR: Single-Chain Reasoning (SCR) serves
as an ablation for the effect of the multi-chain
context. In SCR, the meta-reasoner is given
the same prompt as MCR aside from having
only the greedy-decoded chain in its context.
This disentangles the effect of using multiple
chains from the effect of having an LLM that
is separate from the decomposition model to
generate the final answer.
**Baselines** We evaluate the following baselines:
- SA: Self-Ask (Press et al., 2022) returns the
answer of a single reasoning chain, that was
generated with greedy decoding.
- SC: Self-Consistency serves as a baseline
which incorporates multiple reasoning chains
(Wang et al., 2023). It returns the majority answer based on multiple chains sampled from
the decomposition model. We experiment with
variants of 3, 5 and 15 sampled chains (SC@3,
SC@5 and SC@15), in line with prior work
(Wang et al., 2023; Khattab et al., 2022; Sun
et al., 2023). As in MCR, we use the chain
generated with greedy decoding along with additional chains sampled with t = 0.7.
**Retrieval** Similar to Press et al. (2022); Lazaridou et al. (2023); Paranjape et al. (2023), our models and baselines use a retriever based on Google
Search, via the SerpAPI service.[4] However, we
also include results using an open-source retriever
(Khattab and Zaharia, 2020) in §4.3. As most of our
datasets contain evidence from Wikipedia (§4.1.1),
we consider it as our retrieval corpus. Therefore,
we format search queries as “en.wikipedia.org
_qi”, with the Wikipedia domain preceding the in-_
termediate question. We return the top-1 evidence
retrieved by Google. Retrieved evidence may be
either sentences or parsed lists. Following Trivedi
et al. (2022a), we also retrieve evidence for the
original question q. Last, all retrieved evidence sentences are prepended to the decomposition (§3.1).
3Like Wang et al. (2023), we observe that greedy-decoded
chains have higher accuracy compared to the other chains.
[4https://serpapi.com/](https://serpapi.com/)
**Dataset** **Example**
STRATEGYQA Can Arnold Schwarzenegger deadlift an
(implicit) adult Black rhinoceros?
FERMI How many high fives has LeBron James
(implicit) given/received?
QUARTZ Jeff drained his rice field in the winter(implicit) time. The field likely will produce __
crops when he uses it. A. more B. less
HOTPOTQA What city did the musician whose debut
(explicit) album shares its title with the 1959 Alfred Hitchcock film hail from?
2WIKIMQA Where was the place of death of Isabella
(explicit) of Bourbon’s father?
BAMBOOGLE What is the maximum airspeed (in km/h)
(explicit) of the third fastest bird?
FEVEROUS Is it true that Robert Broderip lived in
(explicit) London all his life and wrote a considerable quantity of music during the earlier
part of the nineteenth century?
Table 1: The multi-hop QA datasets in our experiments.
and exact-match for the binary-choice datasets. In
FERMI, we use the official order-of-magnitude evaluation by Kalyan et al. (2021). We provide additional technical details on evaluation in §A.
**4.1.2** **Models**
Our main models and baselines are all retrievalaugmented instances of code-davinci-002,
prompted with in-context learning exemplars
(Brown et al., 2020). In §4.3, we include additional
experiments with the open-source Vicuna-13B
(Chiang et al., 2023) LLM. Prompt exemplars
are formatted as described in §3.2. The number
of exemplars varies from 6-12 between datasets.
Decomposition prompt exemplars are based on
random examples from the train and development
sets, coupled with their gold reasoning chain. For
the meta-reasoner exemplars, we use reasoning
chains sampled from the decomposition model
as the multi-chain context. We ensure that the
answer can be inferred using the sampled chains
and add an explanation before the final answer, as
shown in Fig. 4. For the binary-choice datasets,
STRATEGYQA, QUARTZ, and FEVEROUS, the
prompt contains an equal number of exemplars
from each label. For additional details regarding
the full prompts, length statistics and robustness to
a different choice of prompts, please refer to §D.
**Meta-Reasoner** We experiment with two variants of the meta-reasoner to measure the effect of
-----
Dataset Reasoning Examples Oracle SA SC@3 SC@5 SCR MCR
STRATEGYQA 1,000 94.4±0.1 69.3±0.3 71.5±0.8 72.2±0.8 70.0±0.6 **73.6±0.7**
FERMI implicit 286 65.1±0.8 38.3±0.7 38.4±0.7 38.3±0.8 38.1±0.8 **38.9±0.8**
QUARTZ 374 94.1±0.5 78.3±0.4 78.2±0.7 77.6±0.5 80.7±0.1 **81.6±1.3**
HOTPOTQA 500 68.0±0.4 50.2±0.3 50.5±0.8 51.3±0.2 56.4±0.4 **57.0±0.8**
2WIKIMQA explicit 500 77.5±0.8 63.8±0.1 64.5±0.8 65.4±0.6 67.2±0.2 **67.9±0.4**
BAMBOOGLE 120 77.3±0.5 64.6±0.6 64.6±0.4 65.0±1.5 64.7±0.4 **66.5±1.7**
FEVEROUS 500 88.0±0.4 66.0±1.0 67.8±0.2 67.9±0.6 65.1±0.4 **69.4±1.0**
Table 2: Experiments using code-davinci-002 on seven multi-hop open-domain QA datasets. Results are averaged
over 3 runs. BAMBOOGLE results are averaged over 5 runs due to its smaller size.
Additional implementation details about our retrieval and MCR are described in §B.1 and §B.2.
**4.2** **Main Results**
Dataset SC@15 MCR MCR+SC@3
STRATEGYQA 74.6 73.6 **76.4**
FERMI 38.6 38.9 **39.2**
QUARTZ 78.3 81.6 **82.6**
HOTPOTQA 54.1 57.0 **59.2**
2WIKIMQA 65.8 67.9 **68.6**
BAMBOOGLE 65.6 **66.5** 66.3
FEVEROUS 68.6 69.4 **71.5**
Next, we report our evaluation results. Overall,
MCR outperforms our baselines on all 7 datasets.
**MCR Performance** Tab. 2 presents the results
for all 7 multi-hop datasets (evaluation described in
§4.1.1). We evaluate both SC@5 and MCR using
five reasoning chains. In addition, we list an oracle
_score which uses the best answer out of all five_
chains. MCR outperforms all baselines on all of
the benchmarks, beating SC@5 on STRATEGYQA
(+1.4%), FERMI (+0.6%), QUARTZ (+4.0%), HOT
POTQA (+5.7%), 2WIKIMQA (+2.5%), BAM
BOOGLE (+1.5%) and FEVEROUS (+1.5%).
**Adding Reasoning Chains** We measure the
gains of MCR and SC when adding reasoning
chains. As extending MCR is bounded by context
length,[5] we follow a straightforward approach and
perform self-consistency on three MCR runs. We
compare this model, MCR+SC@3, which used 15
reasoning chains (5 for each MCR run), to SC@15.
Tab. 3 shows that MCR+SC@3 consistently outperforms SC@15. Furthermore, though MCR uses
only 5 reasoning chains, it beats SC@15 on all
datasets, save STRATEGYQA. Fig. 5 plots, for
each dataset, the effect that adding more reasoning chains has on meta-reasoning performance. It
presents the results with 1 chain (SCR), 5 chains
(MCR) and 15 reasoning chains (MCR+SC@3).
**Test Set Results** We evaluate our models on the
official test sets of STRATEGYQA[6] and FERMI,
which include 490 and 558 examples respectively.
The results in Tab. 4 show that on STRATEGYQA
MCR consistently beats SC, when using the same
5code-davinci-002 context is capped at 8,001 tokens.
[6https://leaderboard.allenai.org/strategyqa](https://leaderboard.allenai.org/strategyqa)
Table 3: Running SC and MCR on 15 reasoning chains.
80
70
60
50
40
|MCR performance per dataset|Col2|
|---|---|
|||
||StrategyQA Fermi Quartz HotpotQA 2WikiMQA Bamboogle Feverous|
|||
15
Number of reasoning chains
Figure 5: Per-dataset performance as a function of the
number of reasoning chains used by MCR (1, 5, 15).
number of reasoning chains. In FERMI, both methods perform similarly.
**Recent Approaches** Previously, we established
the advantages of meta-reasoning over multiple
reasoning chains. While an apples-to-apples comparison with other recent approaches is impossible
due to fundamental differences in the experimental
setup (see §B.3), it serves as a rough measuring
stick for the robustness of MCR across different
tasks. In §B.3, Tab. 8 we compare MCR to five
recent CoT-based approaches for multi-hop QA.
MCR performance is comparable with the best results on all datasets (shared between these works),
showcasing its robustness.
-----
Model # chains STRATEGYQA FERMI
SC@5 5 71.4 **39.8**
MCR 5 **72.5** 39.7
SC@15 15 74.1 39.7
MCR+SC@3 15 **75.3** **40.1**
Table 4: Test set results for STRATEGYQA and FERMI.
**4.3** **Open-source Models**
To further examine MCR’s performance (§4.2)
and for better reproducibility, we experiment
with an additional open-source retriever and
LLM. As our retriever, we use ColBERTv2 (Santhanam et al., 2022) over the 2018 Wikipedia
dump from Karpukhin et al. (2020). In addition to code-davinci-002, we experiment with
Vicuna-13B (Chiang et al., 2023), a 13-billion parameters model shown to outperform LLMs like
LLaMA and Alpaca (Touvron et al., 2023; Taori
et al., 2023). We use the same prompts as in
code-davinci-002, trimmed to fit a 2,048 tokens
context length.
We report the full results of the open-source ColBERTv2 retriever with code-davinci-002 and
Vicuna-13B in Tab. 5. In addition, we provide results of open-source models when reasoning over 15 reasoning chains in Tab. 6. For
code-davinci-002, substituting Google Search
with ColBERTv2 exhibits the same trend as
in Tab. 2, albeit a slight decrease in performance. MCR outperforms all other baselines,
beating SC@5 on STRATEGYQA (+2.3%), FERMI
(+3.4%), QUARTZ (+3.9%), HOTPOTQA (+3.5%),
2WIKIMQA (+1.2%), BAMBOOGLE (+3.6%)
and FEVEROUS (+1.4%). Unsurprisingly, results sharply decrease when evaluating the smaller
Vicuna-13B with ColBERTv2. The comparison
between MCR and SCR suggests that reasoning
over multiple chains is a challenge for the weaker
Vicuna-13B model. For example, it generates
open-ended answers such as “Unknown” or “It
_depends” for over 24% of the questions in STRAT-_
EGYQA. This suggests that meta-reasoning over
multiple chains has greater gains (compared to
SCR) when both the decomposition model and
meta-reasoner are larger LLMs.
However, even on Vicuna-13B, MCR still outperforms all baselines on 5 datasets and beats
SC@5 on all 7 of them: STRATEGYQA (+0.5%),
FERMI (+4.6%), QUARTZ (+3.6%), HOTPOTQA
(+6.5%), 2WIKIMQA (+0.3%), BAMBOOGLE
**Does Mercury help detect coronavirus?**
**Reasoning chain #1 (Greedy)**
**...**
**Reasoning chain #5**
|coronavirus? A2: Common symptoms of coronavirus are fever... Q3: Does the temperature of the body increase...|Col2|
|---|---|
|||
|Multi-Chain Reasoning||
|||
Q1: What is the name of the Q1: What is mercury used to
coronavirus? detect?
A1: The coronavirus is called A1: Mercury is used to detect the
COVID-19. temperature of things.
Q2: Does Mercury help detect ... Q2: What are some symptoms of
COVID-19? coronavirus?
A2: Mercury does not help detect A2: Common symptoms of
COVID-19. coronavirus are fever...
Q3: Does the temperature of the
body increase...
**Single-Chain Reasoning** **Multi-Chain Reasoning**
Mercury is used to detect the
temperature of things. You have a
The coronavirus is called COVID
higher than normal body
19. Mercury does not help detect
temperature when you have a
COVID-19.
fever. Common symptoms of
So the answer is: No.
coronavirus are fever...
So the answer is: Yes.
Figure 6: An example from STRATEGYQA where
the greedy chain is insufficient to answer the question.
MCR beats SCR by having access to multiple chains.
(+3.0%) and FEVEROUS (+1.3%). When evaluating with 15 reasoning chains, in Tab. 6,
MCR+SC@3 continually beats SC@15.
**5** **Analysis**
Next, we measure the importance of incorporating multiple reasoning chains in MCR and qualitatively assess its output.
**When are Multiple Chains Helpful?** In §4.2
we observed that MCR consistently outperforms
single-chain reasoning (SCR). We wish to prove
that this advantage lies in cases where the metareasoner uses additional chains. To this end, we sort
examples based on the similarity of their greedydecoded chain to the MCR explanation (details
in §C.1). Lower similarity indicates less reliance
of MCR on the greedy chain. Fig. 6 presents an
example where the MCR explanation (pink box)
includes relevant facts from a chain other than the
greedy one (additional examples in §C.2). Results
in Fig. 7 empirically demonstrate that on STRAT
EGYQA, MCR gains over SCR are highest when
MCR explanations are less similar to the greedy
chain. We observe this trend in all datasets (§C.1),
serving as further evidence for MCR’s strengths.
**Combining Reasoning Chains** In addition to
choosing between reasoning chains, an interesting property of the meta-reasoner is that it can
_combine facts from different chains. We estimate_
the prevalence of this phenomenon on the implicit
datasets, STRATEGYQA and FERMI, which are
more challenging. Given an example, we automati
-----
Dataset Model Oracle SA SC@3 SC@5 SCR MCR
SB2WHTRATEGYFAMBOOGLEQOTPOTEVEROUSFIKIUARTZERMIMQAQAQA code-davinci-002code-davinci-002code-davinci-002code-davinci-002code-davinci-002code-davinci-002code-davinci-002 94.593.956.484.164.367.768.3±±±±±±±0.70.70.60.70.41.20.7 67.133.277.150.752.445.961.2±±±±±±±0.60.30.60.30.11.10.4 69.933.275.651.551.147.262.9±±±±±±±0.10.40.70.60.21.40.6 70.833.176.052.552.747.063.1±±±±±±±0.60.41.50.10.40.71.0 67.833.979.355.353.747.160.9±±±±±±±0.50.60.30.20.31.00.3 **73.136.579.956.053.950.664.5±±±±±±±2.12.11.21.10.31.30.8**
SB2WHTRATEGYFAMBOOGLEQOTPOTEVEROUSFIKIUARTZERMIMQAQAQA Vicuna-13BVicuna-13BVicuna-13BVicuna-13BVicuna-13BVicuna-13BVicuna-13B 82.489.645.752.752.242.388.7±±±±±±±0.21.01.60.50.31.60.2 59.719.161.134.832.230.761.5±±±±±±±0.10.20.10.00.40.00.6 61.419.159.835.833.830.461.0±±±±±±±0.50.32.30.20.60.60.6 62.218.861.437.134.031.461.0±±±±±±±0.80.31.60.41.00.60.8 21.563.943.431.360.663.735.1±±±±±±±0.00.00.00.00.00.00.0 62.723.465.043.634.334.462.3±±±±±±±0.10.40.31.60.41.31.2
Table 5: Experiments using ColBERTv2 retriever with code-davinci-002 and Vicuna-13B, evaluated on the
development sets of our datasets. Results are averaged over 3 runs. The number of examples per dataset is the same
as in Tab. 2. Vicuna-13B generation is deterministic so, using greedy decoding, SCR has a standard deviation of 0.
|Dataset|Model SC@15 MCR+SC@3|Model SC@15 MCR+SC@3|
|---|---|---|
|STRATEGYQA FERMI QUARTZ HOTPOTQA 2WIKIMQA BAMBOOGLE FEVEROUS|code-davinci-002 72.6 75.6 code-davinci-002 34.0 36.3 code-davinci-002 76.5 80.7 code-davinci-002 54.3 56.8 code-davinci-002 52.5 54.0 code-davinci-002 48.9 51.8 code-davinci-002 62.7 66.2|Vicuna-13B 62.3 63.7 Vicuna-13B 18.8 23.2 Vicuna-13B 60.1 64.3 Vicuna-13B 37.8 44.8 Vicuna-13B 35.5 35.6 Vicuna-13B 31.8 35.1 Vicuna-13B 61.1 64.0|
|---|---|---|
Table 6: Running SC and MCR on 15 reasoning chains using ColBERTv2 retriever with code-davinci-002 (left
columns) and Vicuna-13B (right columns).
(see examples in §C.2, Fig. 10). For the remaining 90%, the reasoning expressed in the resulting
combination is a paraphrase of an individual chain.
**Explanation Quality** The meta-reasoner is
prompted to generate an explanation alongside the
final answer (§3.2). Inspired by past work (Pruthi
et al., 2022), we test the quality of the MCR explanations. Four of the authors manually reviewed 600
random examples, 100 per dataset (sans FEVER
OUS §B.2) and scored their meta-reasoner explanations. Each explanation is scored as either 1 (irrelevant), 2 (partially relevant) or 3 (highly relevant),
based on its relevance to answering the question.
We find the explanation is highly relevant in 82%
of the cases (87% excluding FERMI, which is the
most challenging), and is irrelevant in less than 3%.
Next, we evaluate the faithfulness of explanations (Jacovi and Goldberg, 2020), namely, whether
a person provided only with the question and MCR
explanation would answer the same as the model.
Our focus was on examples with quality explanations (score 3), since they are answerable given the
explanation. We answered each question based on
Figure 7: MCR and SCR accuracy on STRATEGYQA,
categorized by the similarity of the greedy chain to the
MCR explanation. When MCR uses a chain other than
the greedy one (lower similarity), it outperforms SCR.
cally check if its meta-reasoner explanation is the
result of combining chains. We examine if one of
the output sentences appears in exactly one chain,
while another sentence is absent from that chain
and is part of a different chain. We consider sentences as similar if their ROUGE-1 precision is
above 0.8, and distinct if it is below 0.2. Overall,
in 20% of STRATEGYQA examples and 25% of
FERMI, the MCR explanation results from combining reasoning chains. From a manual analysis
of 50 such examples for each dataset, we observe
that these multi-chain explanations are better than
any individual reasoning chain in 10% of cases
-----
**6** **Related Work**
For a thorough survey on LLM reasoning see Lu
et al. (2022); Huang and Chang (2022); Qiao et al.
(2022). A slew of recent works have focused on
eliciting multi-step reasoning in LLMs, including
scratchpads (Nye et al., 2022), chain-of-thought
prompting (Wei et al., 2022; Zhou et al., 2022),
learned verifiers (Cobbe et al., 2021), selectioninference (Creswell et al., 2022) and bootstrapping
(Zelikman et al., 2022).
Self-consistency (Wang et al., 2023; Fu et al.,
2022) selects the majority answer across multiple chains, outperforming learned verifiers and
“sample-and-rank” approaches (Adiwardana et al.,
2020; Freitas et al., 2020). Li et al. (2022) further
improve SC by increasing chains’ diversity and introducing a trained verifier. Tafjord et al. (2022)
over-samples chains and verifies them using a natural language inference model on intermediate steps,
while He et al. (2022) re-rank chains based on intermediate retrieved evidence. In addition, metareasoning is closely tied to self-reflection in LLMs,
which is becoming increasingly important in using
the LLM to review multiple strategies (Yao et al.,
2023; Shinn et al., 2023; Madaan et al., 2023).
Recent works proposed revising LLM-generated
texts by using retrieved sentences (Gao et al., 2022)
or model-generated feedback (Madaan et al., 2023;
Chen et al., 2023; Paul et al., 2023). MCR similarly
reviews LLM-generated reasoning chains however,
its focus is meta-reasoning on multiple chains.
Significant QA research has been dedicated to
reasoning over multiple facts retrieved from an
underlying corpus. Such tasks include multi-step
questions that require explicit reasoning (Talmor
and Berant, 2018; Welbl et al., 2018; Wolfson et al.,
2020; Trivedi et al., 2022b), implicit reasoning
(Geva et al., 2021) and multi-modal capabilities
(Talmor et al., 2021).
Recent works also target retrieval-augmented
LLMs, prompted to solve open-domain questions
(Lazaridou et al., 2023; Khattab et al., 2022; Trivedi
et al., 2022a; Ram et al., 2023; Yoran et al., 2023).
**7** **Conclusion**
This work introduces MCR for meta-reasoning
over multiple reasoning chains. We evaluate MCR
on 7 datasets for multi-hop QA that require both
implicit and explicit reasoning in an open-domain
setting and show that it outperforms previous approaches on all evaluation benchmarks.
|Dataset|Va. De. Re. Co. Ex. An.|
|---|---|
|STRATEGYQA FERMI QUARTZ|20% 24% 8% 15% 20% 17% 6% 39% 20% 4% 17% 23% 14% 6% 13% 19% 11% 40%|
|---|---|
|HOTPOTQA 2WIKIMQA BAMBOOGLE FEVEROUS|33% 24% 24% 11% 11% 5% 39% 4% 35% 12% 8% 6% 26% 8% 32% 24% 13% 0% 7% 14% 34% 23% 20% 6%|
|---|---|
Table 7: Error classes per dataset: Valid (Va.), Decomposition (De.), Retrieval (Re.), Contradicting facts (Co.),
Explanation (Ex.) and Answer (An.). We allow multiple
error categories per example.
the model’s explanation. In 90% of cases (95% excluding FERMI), the MCR predictions matched our
own, highlighting the faithfulness of its explanations. We attribute part of the gap between human
and MCR predictions to implicit reasoning tasks,
where humans lead by five points, on average. For
the full results, see §C.3.
**Error Analysis** We manually analyzed 700 errors by MCR (100 per dataset). We consider the
following categories: Valid predictions where the
generated answer is accurate or the original question is ambiguous; Decomposition errors where no
chain has the necessary reasoning steps to answer
the question; Retrieval errors where the retrieved
contexts were irrelevant, leading the model to hallucinate; Explanation errors where MCR generates
a wrong explanation while a correct one is present
in the multi-chain context; Answer errors are when
the MCR explanation is correct, but the answer
is not; Contradicting facts are cases where MCR
errs due to contrasting statements appearing in the
multi-chain context.
Tab. 7 lists the prevalence of the error categories
per dataset. In four datasets, over 20% of errors
appear to be valid predictions, labeled as incorrect
due to ambiguous questions, outdated answers or
dataset errors. Decomposition is a challenge in
the implicit datasets, STRATEGYQA and FERMI,
with more than 24% of errors. Comparing errors
on different reasoning datasets (excluding valid
examples): Explanation and Answer errors are 50%
on implicit reasoning datasets compared to 23% on
explicit reasoning ones; Retrieval errors are more
prevalent in explicit reasoning tasks with 66% of
errors being due to Retrieval or Contradicting facts,
compared to 30% in implicit datasets. Additional
technical details on our analysis are in §C.4.
-----
**8** **Limitations**
In this work we introduce a meta-reasoner model
to reason over multiple reasoning chains. While we
opt for a prompted LLM as our meta-reasoner, we
do not experiment with a fine-tuned meta-reasoning
model. For the meta-reasoner context, we experiment with variants which include either generated
QA pairs or retrieved evidence sentences. We leave
further improvements to the meta-reasoner context as future work. Due to the inference costs of
current state-of-the-art LLMs we evaluate on the
code-davinci-002 model, similar to prior work
(Trivedi et al., 2022a; Wang et al., 2023). However,
to improve the reproducibility of our work we also
provide results with an open-source LLM (Chiang
et al., 2023) and retriever (Khattab and Zaharia,
2020).
**Acknowledgements**
We would like to thank Harsh Trivedi, Ofir Press,
Mor Geva, Peter Clark and Ashish Sabharwal for
their feedback and insightful comments. We thank
SerpAPI for their support by granting us an academic discount. This research was partially supported by the Yandex Initiative for Machine Learning and the European Research Council (ERC) under the European Union Horizons 2020 research
and innovation programme (grant ERC DELPHI
802800). This work was completed in partial fulfillment of the Ph.D. of Ori Yoran and the Ph.D. of
Tomer Wolfson.
**References**
Daniel Adiwardana, Minh-Thang Luong, David R. So,
Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang,
Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu,
[and Quoc V. Le. 2020. Towards a human-like open-](http://arxiv.org/abs/2001.09977)
[domain chatbot.](http://arxiv.org/abs/2001.09977)
Rami Aly, Zhijiang Guo, M. Schlichtkrull, James
Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021.
Feverous: Fact extraction and verification over
unstructured and structured information. _ArXiv,_
abs/2106.05707.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners. In Ad-](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
_vances in Neural Information Processing Systems 33:_
_Annual Conference on Neural Information Process-_
_ing Systems 2020, NeurIPS 2020, December 6-12,_
_2020, virtual._
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and
[Denny Zhou. 2023. Teaching large language models](http://arxiv.org/abs/2304.05128)
[to self-debug.](http://arxiv.org/abs/2304.05128)
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng,
Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan
Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion
[Stoica, and Eric P. Xing. 2023. Vicuna: An open-](https://lmsys.org/blog/2023-03-30-vicuna/)
[source chatbot impressing gpt-4 with 90%* chatgpt](https://lmsys.org/blog/2023-03-30-vicuna/)
[quality.](https://lmsys.org/blog/2023-03-30-vicuna/)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha
Tsvyashchenko, Joshua Maynez, Abhishek Rao,
Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C.
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat,
Sunipa Dev, Henryk Michalewski, Xavier García,
Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
Barret Zoph, Alexander Spiridonov, Ryan Sepassi,
David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai,
Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz,
Orhan Firat, Michele Catasta, Jason Wei, Kathleen S.
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov,
and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. ArXiv, abs/2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021. Training verifiers to solve math](https://arxiv.org/abs/2110.14168)
[word problems. ArXiv preprint, abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Antonia Creswell, Murray Shanahan, and Irina Higgins.
2022. Selection-inference: Exploiting large language
models for interpretable logical reasoning. ArXiv,
abs/2205.09712.
Daniel De Freitas, Minh-Thang Luong, David R. So,
Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang,
Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu,
and Quoc V. Le. 2020. Towards a human-like opendomain chatbot. ArXiv, abs/2001.09977.
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark,
and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. ArXiv, abs/2210.00720.
-----
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony
Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan,
[and Kelvin Guu. 2022. Rarr: Researching and re-](http://arxiv.org/abs/2210.08726)
[vising what language models say, using language](http://arxiv.org/abs/2210.08726)
[models.](http://arxiv.org/abs/2210.08726)
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://doi.org/10.1162/tacl_a_00370)
[use a laptop? a question answering benchmark with](https://doi.org/10.1162/tacl_a_00370)
[implicit reasoning strategies. Transactions of the](https://doi.org/10.1162/tacl_a_00370)
_Association for Computational Linguistics, 9:346–_
361.
Hangfeng He, Hongming Zhang, and Dan Roth. 2022.
[Rethinking with retrieval: Faithful large language](http://arxiv.org/abs/2301.00303)
[model inference.](http://arxiv.org/abs/2301.00303)
[Jie Huang and Kevin Chen-Chuan Chang. 2022. To-](http://arxiv.org/abs/2212.10403)
[wards reasoning in large language models: A survey.](http://arxiv.org/abs/2212.10403)
[Alon Jacovi and Yoav Goldberg. 2020. Towards faith-](https://doi.org/10.18653/v1/2020.acl-main.386)
[fully interpretable NLP systems: How should we](https://doi.org/10.18653/v1/2020.acl-main.386)
[define and evaluate faithfulness?](https://doi.org/10.18653/v1/2020.acl-main.386) In Proceedings
_of the 58th Annual Meeting of the Association for_
_Computational Linguistics, pages 4198–4205, On-_
line. Association for Computational Linguistics.
Saurav Kadavath, Tom Conerly, Amanda Askell, T. J.
Henighan, Dawn Drain, Ethan Perez, Nicholas
Schiefer, Zachary Dodds, Nova DasSarma, Eli TranJohnson, Scott Johnston, Sheer El-Showk, Andy
Jones, Nelson Elhage, Tristan Hume, Anna Chen,
Yuntao Bai, Sam Bowman, Stanislav Fort, Deep
Ganguli, Danny Hernandez, Josh Jacobson, John
Kernion, Shauna Kravec, Liane Lovitt, Kamal
Ndousse, Catherine Olsson, Sam Ringer, Dario
Amodei, Tom B. Brown, Jack Clark, Nicholas Joseph,
Benjamin Mann, Sam McCandlish, Christopher Olah,
and Jared Kaplan. 2022. Language models (mostly)
know what they know. ArXiv, abs/2207.05221.
Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal, and Peter Clark. 2021.
[How much coffee was consumed during EMNLP](https://doi.org/10.18653/v1/2021.emnlp-main.582)
[2019? fermi problems: A new reasoning challenge](https://doi.org/10.18653/v1/2021.emnlp-main.582)
[for AI. In Proceedings of the 2021 Conference on](https://doi.org/10.18653/v1/2021.emnlp-main.582)
_Empirical Methods in Natural Language Processing,_
pages 7318–7328, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
[Wen-tau Yih. 2020. Dense passage retrieval for open-](https://doi.org/10.18653/v1/2020.emnlp-main.550)
[domain question answering. In Proceedings of the](https://doi.org/10.18653/v1/2020.emnlp-main.550)
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 6769–6781,_
Online. Association for Computational Linguistics.
O. Khattab, Keshav Santhanam, Xiang Lisa Li, David
Leo Wright Hall, Percy Liang, Christopher Potts,
and Matei A. Zaharia. 2022. Demonstrate-searchpredict: Composing retrieval and language models
for knowledge-intensive nlp. ArXiv, abs/2212.14024.
[Omar Khattab and Matei Zaharia. 2020. Colbert: Ef-](https://doi.org/10.1145/3397271.3401075)
[ficient and effective passage search via contextual-](https://doi.org/10.1145/3397271.3401075)
[ized late interaction over BERT. In Proceedings of](https://doi.org/10.1145/3397271.3401075)
_the 43rd International ACM SIGIR conference on_
_research and development in Information Retrieval,_
_SIGIR 2020, Virtual Event, China, July 25-30, 2020,_
pages 39–48. ACM.
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu,
Kyle Richardson, Peter Clark, and Ashish Sabharwal.
[2023. Decomposed prompting: A modular approach](http://arxiv.org/abs/2210.02406)
[for solving complex tasks.](http://arxiv.org/abs/2210.02406)
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://openreview.net/forum?id=6p3AuaHAFiN)
[guage models are zero-shot reasoners. In ICML 2022](https://openreview.net/forum?id=6p3AuaHAFiN)
_Workshop on Knowledge Retrieval and Language_
_Models._
Angeliki Lazaridou, Elena Gribovskaya, Wojciech Jan
[Stokowiec, and Nikolai Grigorev. 2023. Internet-](https://openreview.net/forum?id=hFCUPkSSRE)
[augmented language models through few-shot](https://openreview.net/forum?id=hFCUPkSSRE)
[prompting for open-domain question answering.](https://openreview.net/forum?id=hFCUPkSSRE)
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
Jian-Guang Lou, and Weizhu Chen. 2022. On the
advance of making language models better reasoners.
_ArXiv, abs/2206.02336._
[Chin-Yew Lin. 2004. ROUGE: A package for auto-](https://aclanthology.org/W04-1013)
[matic evaluation of summaries. In Text Summariza-](https://aclanthology.org/W04-1013)
_tion Branches Out, pages 74–81, Barcelona, Spain._
Association for Computational Linguistics.
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
[Kai-Wei Chang. 2022. A survey of deep learning for](http://arxiv.org/abs/2212.10535)
[mathematical reasoning.](http://arxiv.org/abs/2212.10535)
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Shashank Gupta, Bodhisattwa Prasad Majumder,
Katherine Hermann, Sean Welleck, Amir Yazdan[bakhsh, and Peter Clark. 2023. Self-refine: Iterative](http://arxiv.org/abs/2303.17651)
[refinement with self-feedback.](http://arxiv.org/abs/2303.17651)
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, Charles Sutton, and Augustus Odena.
[2022. Show your work: Scratchpads for intermediate](https://openreview.net/forum?id=iedYJm92o0a)
[computation with language models.](https://openreview.net/forum?id=iedYJm92o0a)
Bhargavi Paranjape, Scott M. Lundberg, Sameer
Singh, Hanna Hajishirzi, Luke Zettlemoyer, and
Marco Tulio Ribeiro. 2023. Art: Automatic multistep reasoning and tool-use for large language models. ArXiv, abs/2303.09014.
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi
[Faltings. 2023. Refiner: Reasoning feedback on in-](http://arxiv.org/abs/2304.01904)
[termediate representations.](http://arxiv.org/abs/2304.01904)
-----
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt,
Noah A. Smith, and Mike Lewis. 2022. Measuring
and narrowing the compositionality gap in language
models. ArXiv, abs/2210.03350.
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra,
Livio Baldini Soares, Michael Collins, Zachary C.
Lipton, Graham Neubig, and William W. Cohen.
[2022. Evaluating explanations: How much do ex-](https://doi.org/10.1162/tacl_a_00465)
[planations from the teacher aid students? Transac-](https://doi.org/10.1162/tacl_a_00465)
_tions of the Association for Computational Linguis-_
_tics, 10:359–375._
Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen,
Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang,
[and Huajun Chen. 2022. Reasoning with language](http://arxiv.org/abs/2212.09597)
[model prompting: A survey.](http://arxiv.org/abs/2212.09597)
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay,
Amnon Shashua, Kevin Leyton-Brown, and Yoav
[Shoham. 2023. In-context retrieval-augmented lan-](https://arxiv.org/abs/2302.00083)
[guage models. Transactions of the Association for](https://arxiv.org/abs/2302.00083)
_Computational Linguistics._
Keshav Santhanam, Omar Khattab, Jon Saad-Falcon,
[Christopher Potts, and Matei Zaharia. 2022. Col-](https://doi.org/10.18653/v1/2022.naacl-main.272)
BERTv2: [Effective and efficient retrieval via](https://doi.org/10.18653/v1/2022.naacl-main.272)
[lightweight late interaction. In Proceedings of the](https://doi.org/10.18653/v1/2022.naacl-main.272)
_2022 Conference of the North American Chapter of_
_the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 3715–3734, Seat-_
tle, United States. Association for Computational
Linguistics.
Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
[2023. Reflexion: Language agents with verbal rein-](http://arxiv.org/abs/2303.11366)
[forcement learning.](http://arxiv.org/abs/2303.11366)
Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, and
Denny Zhou. 2023. Recitation-augmented language
models. ICLR.
Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter
[Clark. 2019. QuaRTz: An open-domain dataset of](https://doi.org/10.18653/v1/D19-1608)
[qualitative relationship questions. In Proceedings of](https://doi.org/10.18653/v1/D19-1608)
_the 2019 Conference on Empirical Methods in Natu-_
_ral Language Processing and the 9th International_
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 5941–5946, Hong Kong,_
China. Association for Computational Linguistics.
Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark.
[2022. Entailer: Answering questions with faithful](http://arxiv.org/abs/2210.12217)
[and truthful chains of reasoning.](http://arxiv.org/abs/2210.12217)
[Alon Talmor and Jonathan Berant. 2018. The web as](https://doi.org/10.18653/v1/N18-1059)
[a knowledge-base for answering complex questions.](https://doi.org/10.18653/v1/N18-1059)
In Proceedings of the 2018 Conference of the North
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
_Volume 1 (Long Papers), pages 641–651, New Or-_
leans, Louisiana. Association for Computational Linguistics.
Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav,
Yizhong Wang, Akari Asai, Gabriel Ilharco, Han[naneh Hajishirzi, and Jonathan Berant. 2021. Mul-](https://openreview.net/forum?id=ee6W5UgQLa)
[timodalqa: complex question answering over text,](https://openreview.net/forum?id=ee6W5UgQLa)
[tables and images. In 9th International Conference](https://openreview.net/forum?id=ee6W5UgQLa)
_on Learning Representations, ICLR 2021, Virtual_
_Event, Austria, May 3-7, 2021. OpenReview.net._
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann
Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
and Tatsunori B. Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. [https://](https://github.com/tatsu-lab/stanford_alpaca)
[github.com/tatsu-lab/stanford_alpaca.](https://github.com/tatsu-lab/stanford_alpaca)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aur’elien Rodriguez, Armand Joulin, Edouard
Grave, and Guillaume Lample. 2023. Llama: Open
and efficient foundation language models. ArXiv,
abs/2302.13971.
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot,
[and Ashish Sabharwal. 2022a. Interleaving retrieval](http://arxiv.org/abs/2212.10509)
[with chain-of-thought reasoning for knowledge-](http://arxiv.org/abs/2212.10509)
[intensive multi-step questions.](http://arxiv.org/abs/2212.10509)
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot,
[and Ashish Sabharwal. 2022b. MuSiQue: Multi-](https://doi.org/10.1162/tacl_a_00475)
[hop questions via single-hop question composition.](https://doi.org/10.1162/tacl_a_00475)
_Transactions of the Association for Computational_
_Linguistics, 10:539–554._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_The Eleventh International Conference on Learning_
_Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and
Denny Zhou. 2022. Chain of thought prompting
elicits reasoning in large language models. NeurIPS.
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel.
[2018. Constructing datasets for multi-hop reading](https://doi.org/10.1162/tacl_a_00021)
[comprehension across documents. Transactions of](https://doi.org/10.1162/tacl_a_00021)
_the Association for Computational Linguistics, 6:287–_
302.
Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, and Jonathan
[Berant. 2020. Break it down: A question understand-](https://doi.org/10.1162/tacl_a_00309)
[ing benchmark. Transactions of the Association for](https://doi.org/10.1162/tacl_a_00309)
_Computational Linguistics, 8:183–198._
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio,
William Cohen, Ruslan Salakhutdinov, and Christo[pher D. Manning. 2018. HotpotQA: A dataset for](https://doi.org/10.18653/v1/D18-1259)
[diverse, explainable multi-hop question answering.](https://doi.org/10.18653/v1/D18-1259)
In Proceedings of the 2018 Conference on Empiri_cal Methods in Natural Language Processing, pages_
2369–2380, Brussels, Belgium. Association for Computational Linguistics.
-----
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. 2023. [Tree of thoughts: Deliberate](http://arxiv.org/abs/2305.10601)
[problem solving with large language models.](http://arxiv.org/abs/2305.10601)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
[React: Synergizing reasoning and acting in language](https://arxiv.org/abs/2210.03629)
[models. ArXiv preprint, abs/2210.03629.](https://arxiv.org/abs/2210.03629)
Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan
[Berant. 2023. Making retrieval-augmented language](http://arxiv.org/abs/2310.01558)
[models robust to irrelevant context.](http://arxiv.org/abs/2310.01558)
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Good[man. 2022. STar: Bootstrapping reasoning with rea-](https://openreview.net/forum?id=_3ELRdg2sgI)
[soning. In Advances in Neural Information Process-](https://openreview.net/forum?id=_3ELRdg2sgI)
_ing Systems._
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Huai hsin
Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. _ArXiv,_
abs/2205.10625.
**A** **Evaluation**
**A.1** **Generating Unknown as the Answer**
As we prompt LLMs to generate answers, a potential outcome is for the model to abstain from
answering the question, by generating Unknown as
its answer. Additional cases are when the model
generates an end-of-sequence token without any
final answer. In the binary-choice datasets, STRAT
EGYQA, QUARTZ and FEVEROUS, we assign a
score of 0.5 to such examples, thereby simulating a
random guess. When submitting predictions to the
STRATEGYQA test set, we identify cases of model
abstains or null predictions beforehand. For these
examples, we assign a label of either Yes or No
at random. In datasets with open-ended answers,
we assign a score of 0 when the predicted answer
is either Unknown or null. To make Self-Ask a
stronger baseline, when the greedy decoded chain
has a null answer, we randomly choose a prediction
from one of the other chains. For SC, we do not
consider predictions from chains where answers
are Unknown or null.
**A.2** **FERMI**
The FERMI dataset requires approximating numeric
answers for open-ended questions. Example questions are shown in Tab. 1 and Fig. 2. When providing a FERMI question to our models and baselines
we also add the gold answers measure units (e.g.
meters, cubes, litres, etc.). While this additional
Figure 8: Example for a retrieved evidence snippet for
one of the intermediate questions from Fig. 1.
input helps the model, we note that we provide
it to all our baselines for a fair comparison with
MCR. Nevertheless, even when given the gold
units, predicting the final answers to FERMI problems remains highly challenging.
**B** **Models**
**B.1** **Retrieval**
For our retrieval, we use the Google Search Engine,
via SerpAPI, and return the top-1 retrieved result as
an evidence snippet. Snippets can include answerboxes and tables.[7] We prepend the page title to the
beginning of the snippet, as shown in Fig. 8.
**B.2** **Implementation Details**
We describe the design choices made in our MCR
model, such as preforming retrieval on the original
question and a variant of the meta-reasoner prompt
for FEVEROUS. Due to cost limitations, we evaluate our design choices at a smaller scale and avoid
running an exhaustive grid search.
**Retrieving the Original Question** We follow
past work (Trivedi et al., 2022a) by incorporating retrieved evidence for the original question
in addition to evidence retrieved for the intermediate steps (§3.1). This has a positive or negligible effect on most datasets however, it dramatically decreases the results of all models on
the FERMI task. Results drop for SA (38.3±0.7
to 34.7±0.5), SC (38.3±0.8 to 34.4±0.3), SCR
(38.1±0.8 to 34.4±0.8) and MCR (38.9±0.8 to
7https://serpapi.com/organic-results
-----
|Model Ret. LLM # Chains|STRGYQA HOTPOTQA 2WIKIMQA|
|---|---|
|CoT (Wang et al., 2023) no code-davinci-002 1 CoT+SC@40 (Wang et al., 2023) no code-davinci-002 40 Self-Ask (Press et al., 2022) yes text-davinci-002 1 DSP (Khattab et al., 2022) yes text-davinci-002 20 IR-CoT (Trivedi et al., 2022a) yes code-davinci-002 1|73.4 39.8 - 79.8 44.6 - - - 52.6 - 62.9 - - 61.2 65.2|
|---|---|
|Self-Ask (ours) yes code-davinci-002 1 MCR yes code-davinci-002 5 MCR+SC@3 yes code-davinci-002 15|69.3 50.2 63.8 73.6 (+4.3) 57.0 (+6.8) 67.9 (+4.1) 76.4 (+7.1) 59.2 (+9.0) 68.6 (+4.8)|
|---|---|
Table 8: Recent ODQA results using CoT prompting on LLMs. We list whether a retrieval component is used (Ret.),
the LLM, and the number of reasoning chains used to generate the answer. Different systems evaluate on different
evaluation sets for each dataset. Retrieval-augmented systems vary in terms of their retriever and corpus.
37.0±0.7). Therefore, our models are run without original question retrieval when evaluated on
FERMI. Interestingly, while all models perform
roughly the same without original question retrieval, MCR appears better by 2 points when evidence for the original question is used. We hypothesize that it might be due to MCR being somewhat
more robust to the addition of irrelevant evidence.
**FEVEROUS Meta-Reasoner Prompt** As described in §3.2, the meta-reasoner generates an explanation which precedes the final answer. FEVER
OUS is distinct from all other datasets as it require
verification of multiple facts in order to verify or
disprove a complex statement. When a statement is
false, we list one or more of its false intermediate
facts along with its correction. For example, in
Fig. 4 we list that Robert Broderip lived in Bristol,
not London. When prompting the meta-reasoner
to list both true and false intermediate facts, we
observed a decrease in performance for both MCR
(69.4±1.0 to 66.4±0.7) and SCR (65.1±0.4 to
62.9±0.3). We hypothesize that repeating multiple
true facts excessively prompts the model to predict
the label “Yes” in cases where most (but not all) of
the intermediate facts are correct.
**B.3** **Empirical Comparison to Recent**
**Approaches**
In Tab. 8, we compare MCR to recent CoT-based
approaches for multi-hop reasoning. An apples-toapples comparison is not possible, as these methods
do not evaluate on all 7 of our datasets and use varying samples of 500-1,000 dev examples for their
evaluation. Moreover, different methods use different retrieval corpora, hyperparameters, prompts
and LLMs. Nevertheless, we argue that a direct
comparison serves as a measuring stick for MCR’s
robustness across multiple datasets, compared to
similar solutions.
Evaluation differences include the retrieval corpora, as both IR-CoT and DSP use the official
Wikipedia dump provided with the HOTPOTQA
dataset (Yang et al., 2018). Our retrieved evidence
are from an updated version of Wikipedia, via
Google Search. Since certain facts may change
over time, this could potentially explain the high
percentage of MCR predictions labeled as valid in
our error analysis (§5).
We emphasize that our focus is on highlighting the potential of reasoning on reasoning chains.
MCR is a method aimed at improving models
which generate reasoning chains. Compared to SC,
we observe that MCR further boosts the underlying SA model. While task-specific improvements
are possible, they are orthogonal to our work.
**B.4** **Reasoning on Retrieved Evidence**
The meta-reasoner answers questions given a multichain context of question-answer (qi, ai) pairs, extracted from multiple reasoning chains (§3.2). We
experiment with an alternative multi-chain context, comprised of questions and retrieved evi_dence (qi, ei) (§3.1). This setting resembles past_
work (Trivedi et al., 2022a) however, our sentences
are intermediate evidence from multiple reasoning chains, not just the greedy-decoded chain. We
compare these variants, MCR-EV and SCR-EV,
to MCR and SCR that reason on QA pairs. Tab. 9
shows that meta-reasoning on retrieved evidence is
less effective. The gap is more evident in implicit
reasoning tasks, perhaps due to retrieved evidence
being less relevant on average. Example prompts
for MCR-EV and SCR-EV are listed in §D.
-----
|Dataset|SCR-EV SCR|MCR-EV MCR|
|---|---|---|
|STRATEGYQA FERMI QUARTZ|69.1±0.4 70.0±0.6 34.1±0.6 38.1±0.8 76.1±0.2 80.7±0.1|73.2±0.6 73.6±0.7 33.9±0.3 38.9±0.8 76.2±1.9 81.6±1.3|
|---|---|---|
|HOTPOTQA 2WIKIMQA BAMBOOGLE FEVEROUS|53.5±0.1 56.4±0.4 66.2±0.2 67.2±0.2 64.1±0.0 64.7±0.4 64.0±0.5 65.1±0.4|58.2±1.0 57.0±0.8 67.1±0.9 67.9±0.4 67.4±2.3 66.5±1.7 62.5±0.5 69.4±1.0|
|---|---|---|
Table 9: Effect of using question-answer pairs versus question-evidence pairs as input to the meta-reasoner.
**C** **Analysis**
**C.1** **When are Multiple Chains Helpful?**
In §5, we have shown that the advantage of
MCR over SCR lies in examples where the metareasoner uses chains other than the one generated
through greedy decoding. In Fig. 9 we provide
the results for all other datasets, in addition to the
STRATEGYQA results in Fig. 7. The similar trend
among all datasets is that in examples with lower
similarity to the greedy chain, MCR gains over
SCR are higher.
The similarity between the meta-reasoner explanation and the greedy decoded reasoning chain is
defined as follows: We calculate the ROUGE-1precision (Lin, 2004) between the explanation and
the chain. Low, Medium, and High are based on
thresholds of [1]3 [,][ 2]3 [, and][ 1][ respectively, with the]
Identical category indicating an exact match.
**C.2** **Combining Reasoning Chains**
Fig. 10 provides additional examples for combining
facts between multiple reasoning chains.
**C.3** **Explanation Quality Analysis**
We provide additional details on the annotation
for the scoring meta-reasoner explanations. The
annotation was performed by 4 graduate students
that are authors of this paper. The annotators were
presented with a question and an explanation, and
asked to perform two tasks: (a) score the explanation for its quality and (b) answer the question
based on the meta-reasoner explanation. We provide the full instructions shown to the annotators
in Fig. 11 and the full results in Tab. 10.
**C.4** **Error Analysis**
We provide additional details regarding our error
analysis (§5). In less than 5%, we encountered
grammatically bad questions which we were unable to comprehend and were therefor discarded
from our analysis. For example the HOTPOTQA
question: “What does the goddess associated with
the goddess Frigg consists of what tales?”
The input to our meta-reasoner model is a context comprised of (qi, ai) pairs, generated by the decomposition model. As the decomposition model
is an LLM that is conditioned on retrieved evidence
(and prior decomposition steps) it may hallucinate
false intermediate answers. In cases of such hallucinations we distinguish between two error types,
based on the relevant component. First, Retrieval
_errors are cases where no relevant information was_
retrieved, leading to the decomposition model hallucinating an incorrect ai, passed on to the metareasoner’s context. Second, we treat cases where
relevant evidence was retrieved, but the decomposition model ignored it and hallucinated an incorrect
_ai as Decomposition errors._
Errors stemming from Contradicting Facts, are
cases where the meta-reasoner context contains two
contradicting facts, one accurate while the other
was hallucinated by the decomposition model. For
example, Fig. 12 displays an example where the
context has contradicting facts on who was the
father of Eliezer Ben-Yehuda. When the metareasoner has contradicting facts, it is expected to
select the correct fact, based on the knowledge
encoded in its parameters. Addressing such errors
in future work could rely on refining generated text
with methods such as RARR (Gao et al., 2022).
As our error classes mainly match the MCR
components, this error breakdown could potentially
help to guide future improvements.
**D** **Prompts**
**D.1** **Prompt Details**
We provide example prompts for our models for
one explicit dataset (2WIKIMQA, decomposition:
Fig. 13, MCR/SCR: Fig. 15, MCR-EV/SCR-EV:
Fig. 17) and one implicit dataset (STRATEGYQA,
decomposition: Fig. 14, MCR/SCR: Fig. 16,
MCR-EV/SCR-EV:Fig. 18). All of our prompts
-----
Figure 9: MCR and SCR accuracy on FERMI, QUARTZ, 2WIKIMQA, BAMBOOGLE, HOTPOTQA, and FEVEROUS,
on examples categorized by their MCR explanation’s similarity to the greedy chain. MCR performs similarly to
SCR when similarity is high, and outperfoms SCR when similarity is lower. Error bars indicate standard deviation,
which tends to be high when the number of examples in the bin is small. For FEVEROUS we display the variant
where MCR has to repeat all relevant facts (§B.2), to make sure the MCR explanation is not empty.
will be released along with our codebase. We
use random examples and spend minimal effort
on prompt engineering. The number of exemplars
varies slightly between dataset and model, with the
exact numbers listed in Tab. 11.
**D.2** **Prompt Statistics**
In Tab. 12 we provide statistics of the sequence
lengths for all of our models, which include all the
decomposition prompts, output decomposition sequences, retrieved evidence and the meta-reasoning
prompts. The statistics are for our decomposition
model (used by all of our baselines), as well as
for the meta-reasoning prompts (used by SCR and
MCR). Note that generating a single reasoning
chain requires multiple LLM calls, one for each
decomposition step. Therefore, a single decomposition generation is generally longer than applying
one additional meta-reasoning step.
Results are averaged over multiple runs, corresponding to the results in Tab. 2. Sequence lengths
in Tab. 12 correspond to the number of tokens provided by the code-davinci-002 tokenizer.
**D.3** **Robustness to Choice of Prompt**
We empirically measure our method’s sensitivity
to the prompt of choice. To this end, we randomly
sampled new exemplars for both our decomposition and meta-reasoning prompts for STRATE
GYQA and HOTPOTQA. When using different random exemplars, we observe that MCR still outperforms all baselines. Even though decomposition
performance (SA) is more affected by the set of exemplars, the performance trend remains the same,
with MCR being on top. Tab. 13 lists the experiment results, evaluated on 500 examples from each
dataset. We also provide the original prompt results
in parenthesis (averaged over 3 runs).
-----
Figure 10: Examples for combining facts from multiple reasoning chains.
|dataset Reasoning|%3 %2 %1|Sim_predictions|Human_acc MCR_acc|
|---|---|---|---|
|STRATEGYQA FERMI implicit QUARTZ|79 18 3 60 32 8 91 7 2|89.9 76.0 98.9|77.2 72.2 47.8 40.7 87.9 86.8|
|---|---|---|---|
|HOTPOTQA 2WIKIMQA explicit BAMBOOGLE|77 21 2 97 2 1 90 9 1|95.4 94.9 98.2|49.2 49.2 69.6 69.2 71.5 71.4|
|---|---|---|---|
|Average implicit Average explicit Average|76.7 19.0 4.3 88.0 10.7 1.3 82.3 14.8 2.8|88.3 96.2 92.2|71.0 66.6 63.4 63.3 67.2 64.9|
|---|---|---|---|
Table 10: Full results for the explanation quality analysis. Sim_predictions indicates the similarity between the
human and MCR prediction, calculated using the dataset-specific metrics described in §4.1.1. Human_acc and
MCR_acc represent the accuracy of humans and MCR predictions, respectively. Since only explanations with a
score of 3 are guaranteed to contain the necessary information to arrive at an answer, we filter other examples when
calculating sim_predictions, Human_acc, and MCR_acc.
Dataset SA SCR MCR SCR-EV MCR-EV
STRATEGYQA 10 6 6 6 6
FERMI 6 6 6 6 6
QUARTZ 6 8 8 8 8
HOTPOTQA 12 10 10 10 10
2WIKIMQA 6 6 6 6 6
BAMBOOGLE 6 6 6 6 6
FEVEROUS 10 10 10 10 10
Table 11: The number of exemplars for each model and dataset. Since MCR and SCR, and MCR-EV and SCR-EV
use the same prompt they have the same number of exemplars.
-----
Dataset Dec. Dec. steps Dec. out. Ret. len. Meta-reason SCR MCR
STRATEGYQA 2,242 2.9±0.6 103.3± 48.0 190.6±66.7 1,652 1,749.0± 52.0 2,032.9±110.7
FERMI 1,442 2.3±0.9 91.4±31.9 165.8±78.9 1,681 1,765.5±25.5 1,984.1±79.6
QUARTZ 839 1.2±0.5 55.2±20.6 92.7±28.7 2,129 2,202.4±19.8 2,343.6±48.3
HOTPOTQA 2,508 1.7±0.7 86.1±91.9 153.0±84.1 2,380 2,460.3±28.1 2,666.2±90.0
2WIKIMQA 1,920 2.4±0.8 92.7±30.5 201.3±59.3 2,029 2,116.4±25.6 2,363.0±104.6
BAMBOOGLE 1,342 2.0±0.3 74.3±37.5 204.5±72.1 966 1,035.1±12.9 1,223.1±50.2
FEVEROUS 3,741 2.9±0.9 118.2±36.4 197.1±69.2 2,826 2,956.8±38.1 3,276.8±123.1
Average 2,004.9 2.2±0.6 88.7±18.7 172.1±37.0 1,951.9 2,040.8±563.1 2,270.0±587.7
Table 12: Prompt lengths (number of tokens) used for each dataset: decomposition prompts (Dec.); number of
output decomposition steps (Dec. steps); output decomposition length (Dec. out.); retrieved evidence length (Ret.
len.); meta-reasoning prompt; SCR prompt length; MCR prompt length.
Dataset Examples SA SC@5 SCR MCR
STRATEGYQA 500 66.6 (69.3±0.3) 71.0 (72.2±0.8) 68.2 (70.0±0.6) **72.0 (73.6±0.7)**
HOTPOTQA 500 54.3 (50.2±0.3) 56.2 (51.3±0.2) 57.1 (56.4±0.4) **59.3 (57.0±0.8)**
Table 13: Experiments with code-davinci-002 when using prompts with different random exemplars for both the
decomposition and meta-reasoning prompts. Original prompt results are in parenthesis, for comparison.
-----
Figure 11: The annotation instructions for the MCR
explanation quality analysis.
**_Given a question and a context, answer the question step-_**
**_by-step. If you are unsure, answer Unknown._**
**Context:**
Who is the father of modern Hebrew? The father of modern
Hebrew is Eliezer Ben-Yehuda.
Who is the father of Eliezer Ben-Yehuda? The father of
Eliezer Ben-Yehuda is Abraham.
...
Who is the father of modern Hebrew? The father of modern
Hebrew is Eliezer Ben-Yehuda.
Who is the father of Eliezer Ben-Yehuda? Eliezer BenYehuda’s father is Yehuda Leib.
**Question: Who is the father of the father of modern He-**
brew?
**Answer: The father of modern Hebrew is Eliezer Ben-**
Yehuda. The father of Eliezer Ben-Yehuda is Abraham.
**So the answer is: Abraham.**
**Gold answer is: Yehuda Leib**
Figure 12: Example a Contradicting Facts error. When
generating the explanation, the meta-reasoner has to
rely on knowledge encoded in its parameters to decide
between multiple contradicting facts in its context on
who was the father of Eliezer Ben-Yehuda.
-----
**_Given the following question, answer it by providing follow up questions and intermediate answers. If no follow up_**
**_questions are necessary, answer the question directly. You are also provided with the most relevant google snippet for_**
**_each intermediate question._**
#
Context1: Xawery Zuławski: Polish-Russian War (Wojna polsko-ruska) is a 2009 Polish film directed by Xawery [˙] Zuławski[˙]
based on the novel Polish-Russian War under the white-red flag by Dorota Masłowska. So the answer is Xawery Zuławski.[˙]
Context2: Xawery Zuławski: Xawery [˙] Zuławski ; National Film School in Łód´[˙] z · 1995–present · Maria Strzelecka · 2.
Question: Who is the mother of the director of film Polish-Russian War (Film)?
Are follow up questions needed here: Yes.
Follow up: Who is the director of the film Polish-Russian War (Film)?
Intermediate answer: The director of the film Polish-Russian War is Xawery Zuławski.[˙]
Follow up: Who is the mother of Xawery Zuławski?[˙]
Intermediate answer: The mother of Xawery Zuławski is Małgorzata Braunek.[˙]
So the final answer is: Rick Scott Małgorzata Braunek.
...
#
Context1: John, Count Palatine of Neumarkt: John (Johann von Pfalz-Neumarkt; 1383 – 14 March 1443) was the Count
Palatine of Neumarkt from 1410 to his death. The son of Rupert III of the Palatinate, he married Catherine of Pomerania in
1407.
Context2: John, Count Palatine of Neumarkt: John (Johann von Pfalz-Neumarkt; 1383 – 14 March 1443) was the Count
Palatine of Neumarkt from 1410 to his death. The son of Rupert III of the Palatinate, he married Catherine of Pomerania in
1407.
Question: Who is Catherine Of Pomerania, Countess Palatine Of Neumarkt’s father-in-law?
Are follow up questions needed here: Yes.
Follow up: Who is the husband of Catherine of Pomerania, Countess Palatine of Neumarkt?
Intermediate answer: The husband of Catherine of Pomerania, Countess Palatine of Neumarkt is John, Count Palatine of
Neumarkt.
Follow up: Who is the father of John, Count Palatine of Neumarkt?
Intermediate answer: The father of John, Count Palatine of Neumarkt is Rupert III of the Palatinate.
So the final answer is: Rupert III of the Palatinate.
#
Context1: Crimen a las tres: Crimen a las tres is a 1935 Argentine crime film directed and written by Luis Saslavsky. Crimen
a las tres. Directed by, Luis Saslavsky.
Context2: Elio Petri: The Working Class Goes to Heaven (Italian: La classe operaia va in paradiso), released in the US as
Lulu the Tool, is a 1971 political drama film directed by Elio Petri. So the answer is Elio Petri.
Context3: March 20, 1995: Luis Saslavsky (April 21, 1903 – March 20, 1995) was an Argentine film director, screenwriter
and film producer, and one of the influential directors in the Cinema of Argentina of the classic era. So the answer is March
20, 1995.
Context4: Elio Petri: Final years. In 1981, Petri visited Geneva to direct Arthur Miller´s new play The American Clock, with
Marcello Mastroianni playing the lead role. Petri died of cancer on 10 November 1982. He was 53 years old.
Question: Which film has the director died first, Crimen A Las Tres or The Working Class Goes To Heaven?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Crimen a las tres?
Intermediate answer: The director of Crimen a las tres is Luis Saslavsky.
Follow up: Who is the director of The Working Class Goes to Heaven?
Intermediate answer: The director of The Working Class Goes to Heaven is Elio Petri.
Follow up: When did Luis Saslavsky die?
Intermediate answer: Luis Saslavsky died on March 20, 1995.
Follow up: When did Elio Petri die?
Intermediate answer: Elio Petri died on 10 November 1982.
So the final answer is: The Working Class Goes to Heaven.
#
Figure 13: Instruction and exemplars for the 2WIKIMQA decomposition prompt.
-----
**_Given the following question, answer it by providing follow up questions and intermediate answers. For each follow up_**
**_question, you are given a context which is the top returned google snippet for the question from Wikipedia. If no follow_**
**_up questions are necessary, answer the question directly._**
#
Context1: Frost: Frost is a thin layer of ice on a solid surface, which forms from water vapor in an above-freezing atmosphere
coming in contact with a solid surface whose ...
Context2: Graduation: Graduation is the awarding of a diploma to a student by an educational institution. It may also refer to
the ceremony that is associated with it.
Context3: Winter: Winter ; Astronomical season, 22 December – 21 March ; Meteorological season, 1 December – 28/29
February ; Solar (Celtic) season, 1 November – 31 January.
Question: Is it common to see frost during some college commencements?
Are follow up questions needed here: Yes.
Follow up: What seasons can you expect to see frost?
Intermediate answer: Frost is common during the winter.
Follow up: When is college commencement?
Intermediate answer: College commencement ceremonies often happen during the months of December, May, June.
Follow up: Do any of the months December, May, June occur during the Winter?
Intermediate answer: December is in the winter.
So the final answer is: Yes.
...
#
Context1: Last rites: The last rites, also known as the Commendation of the Dying, are the last prayers and ministrations
given to an individual of Christian faith, when possible, shortly before death. They may be administered to those awaiting
execution, mortally injured, or terminally ill.
Context2: Richard Dawkins: Dawkins is an outspoken atheist and a supporter of various atheist, secular, and humanist
organisations, including Humanists UK and the Brights movement. Dawkins suggests that atheists should be proud, not
apologetic, stressing that atheism is evidence of a healthy, independent mind.
Context3: Prayer in the Catholic Church: In the Catholic Church, prayer is "the raising of one´s mind and heart to God or the
requesting of good things from God." It is an act of the moral virtue ...
Question: Would Richard Dawkins hypothetically refuse an offering of the Last rites?
Are follow up questions needed here: Yes.
Follow up: What are the last Rites?
Intermediate answer: The Last rites, in Catholicism, are the last prayers and ministrations given to an individual of the faith,
when possible, shortly before death.
Follow up: What are Richard Dawkins religious beliefs?
Intermediate answer: Richard Dawkins is known as an outspoken atheist, well known for his criticism of creationism and
intelligent design.
Follow up: Would an atheist participate in Catholics prayers?
Intermediate answer: It is unlikely that an atheist would participate in Catholics prayers.
So the final answer is: Yes.
#
Context1: number 1: Hydrogen is the chemical element with the symbol H and atomic number 1. Hydrogen is the lightest
element. So the answer is number 1.
Context2: Spice Girls - Simple English Wikipedia, the free encyclopedia: The group has five members. Each member uses
a nickname initially given to them: Melanie Chisholm ("Sporty Spice"), Emma Bunton ("Baby Spice"), Melanie Brown
("Scary Spice"), Victoria Beckham (née Adams) ("Posh Spice"), and Geri Halliwell ("Ginger Spice") .
Context3: Square number: In mathematics, a square number or perfect square is an integer that is the square of an integer; in
other words, it is the product of some integer with ...
Question: Hydrogen’s atomic number squared exceeds number of Spice Girls?
Are follow up questions needed here: Yes.
Follow up: What is the atomic number of hydrogen?
Intermediate answer: Hydrogen has an atomic number of 1.
Follow up: How many people are in the Spice Girls band?
Intermediate answer: The Spice Girls has 5 members.
Follow up: Is the square of 1 greater than 5?
Intermediate answer: The square of 1 is 1 which is less than 5.
So the final answer is: No.
#
Figure 14: Instruction and exemplars for the STRATEGYQA decomposition prompt.
-----
**_Given a question and a context, answer the question and explain why. If you are unsure, answer Unknown._**
#
Context:
Who is the wife of Douglas Leiterman? The wife of Douglas Leiterman is Beryl Fox.
Where was Beryl Fox born? Beryl Fox was born in Winnipeg, Manitoba.
When and where was Mary born? Mary was born in c. 18 BC or September 8 (21), 16 BC Herodian Kingdom of Judea.
Where was Beryl Fox born? Beryl Fox was born in Winnipeg, Manitoba.
Who is the wife of Douglas Leiterman? The wife of Douglas Leiterman is Mary.
Who is the wife of Douglas Leiterman? The wife of Douglas Leiterman is Beryl Fox.
Question:
Where was the wife of Douglas Leiterman born?
Answer:
The wife of Douglas Leiterman is Beryl Fox. Beryl Fox was born in Winnipeg, Manitoba.
So the answer is: Winnipeg, Manitoba.
...
#
Context:
Who is Beatrice of Aragon’s father? The father of Beatrice of Aragon is King Ferdinand I of Naples.
Who is the father of Rupert III, Elector Palatine? The father of Rupert III, Elector Palatine is Rupert II, Elector Palatine.
Who is the husband of Catherine of Pomerania? The husband of Catherine of Pomerania is John II, Count Palatine of
Neumarkt.
Who is Catherine Of Pomerania, Countess Palatine Of Neumarkt’s husband? The husband of Catherine Of Pomerania,
Countess Palatine Of Neumarkt is John I, Count Palatine of Neumarkt.
Who is the father of John II, Count of Holstein-Rendsburg? The father of John II, Count of Holstein-Rendsburg is Henry II,
Count of Holstein-Rendsburg.
Who is Catherine Of Pomerania, Countess Palatine Of Neumarkt’s husband? The husband of Catherine Of Pomerania,
Countess Palatine Of Neumarkt is John II, Count of Holstein-Rendsburg.
Who is the father of John I, Count Palatine of Neumarkt? The father of John I, Count Palatine of Neumarkt is Rupert III,
Elector Palatine.
Who are the parents of Rupert III, Elector Palatine? The parents of Rupert III, Elector Palatine are Rupert II, Elector Palatine
and Beatrice of Aragon.
Who is the father of John II, Count Palatine of Neumarkt? The father of John II, Count Palatine of Neumarkt is Rupert III,
Elector Palatine.
Question:
Who is Catherine Of Pomerania, Countess Palatine Of Neumarkt’s father-in-law?
Answer:
The husband of Catherine Of Pomerania, Countess Palatine Of Neumarkt is John I, Count Palatine of Neumarkt. The father
of John I, Count Palatine of Neumarkt is Rupert III, Elector Palatine.
So the answer is: Rupert III, Elector Palatine.
#
Context:
When did Elio Petri die? Elio Petri died on 10 November 1982.
Who is the director of The Working Class Goes to Heaven? The director of The Working Class Goes to Heaven is Elio Petri.
Who is the director of Crimen A Las Tres? The director of Crimen A Las Tres is Luis Saslavsky.
Who is the director of Crimen A Las Tres? The director of Crimen A Las Tres is Luis Saslavsky.
When did Luis Saslavsky die? Luis Saslavsky died on March 20, 1995.
Who is the director of Crimen A Las Tres? The director of Crimen A Las Tres is Luis Saslavsky.
When did Elio Petri die? Elio Petri died on 10 November 1982.
When did Luis Saslavsky die? Luis Saslavsky died on March 20, 1995.
When did Luis Saslavsky die? Luis Saslavsky died on March 20, 1995.
When did Elio Petri die? Elio Petri died on 10 November 1982.
Who is the director of The Working Class Goes to Heaven? The director of The Working Class Goes to Heaven is Elio Petri.
Who is the director of The Working Class Goes to Heaven? The director of The Working Class Goes to Heaven is Elio Petri.
Question:
Which film has the director died first, Crimen A Las Tres or The Working Class Goes To Heaven?
Answer:
The director of Crimen A Las Tres is Luis Saslavsky. The director of The Working Class Goes to Heaven is Elio Petri. Luis
Saslavsky died on March 20, 1995. Elio Petri died on 10 November 1982.
So the answer is: The Working Class Goes To Heaven.
#
Figure 15: Instruction and exemplars for the 2WIKIMQA meta-reasoner prompt.
-----
**_Given a question and a context, provide a Yes or No answer and explain why. If you are unsure, answer Unknown._**
#
Context:
What type of animal is a jellyfish? A jellyfish is a type of invertebrate.
Do jellyfish have arteries? No, jellyfish do not have arteries. They have a type of nervous system called a nerve net.
What is atherosclerosis? Atherosclerosis is a disease where plaque builds up in the arteries.
Do jellyfish have arteries? Jellyfish do not have a circulatory system and therefore do not have arteries.
So jellyfish don’t have atherosclerosis because they don’t have arteries? Yes, that is correct.
What is atherosclerosis? Atherosclerosis is a type of heart disease.
Is an invertebrate susceptible to atherosclerosis? No, invertebrates are not susceptible to atherosclerosis.
What is atherosclerosis? Atherosclerosis is a disease in which plaque builds up on the walls of the arteries.
Question:
Is a jellyfish safe from atherosclerosis?
Answer:
Jellyfish do not have a circulatory system and therefore do not have arteries. Atherosclerosis is a disease in which plaque
builds up on the walls of the arteries.
So the answer is: Yes.
#
...
Context:
What is tonsure? Tonsure is the practice of cutting or shaving the hair from the scalp, as a sign of religious devotion or
humility.
What is Christopher Hitchens’ religious affiliation? Christopher Hitchens was an atheist.
What is tonsure? Tonsure is a practice of cutting or shaving the hair from the scalp.
Would Christopher Hitchens be likely to do something as a sign of religious devotion? No.
Would Christopher Hitchens be religious? Christopher Hitchens was an atheist.
Is an atheist likely to engage in religious practices? No, an atheist is not likely to engage in religious practices.
What is tonsure? Tonsure is the practice of cutting or shaving the hair from the scalp, as a sign of religious devotion or
humility.
What is Christopher Hitchens’ religious affiliation? Christopher Hitchens was an atheist.
Do atheists usually engage in tonsure? No, atheists do not usually engage in tonsure as it is a religious practice.
Question:
Would Christopher Hitchens be very unlikely to engage in tonsure?
Answer:
Tonsure is the practice of cutting or shaving the hair from the scalp, as a sign of religious devotion or humility. Christopher
Hitchens was an atheist. An atheist is not likely to engage in religious practices.
So the answer is: Yes.
#
Context:
What is the evidence that Neanderthals used arithmetic? There is no direct evidence that Neanderthals used arithmetic, but
indirect evidence suggests that they may have had some understanding of basic mathematical concepts.
What is the earliest evidence of arithmetic? The earliest evidence of arithmetic dates back to the ancient Sumerians, who
lived in what is now modern-day Iraq.
Did Neanderthals live before the ancient Sumerians? Yes.
What is arithmetic? Arithmetic is the branch of mathematics that deals with the properties and manipulation of numbers.
Is there any reason to believe that Neanderthals could not have used arithmetic? There is no reason to believe that Neanderthals
could not have used arithmetic.
Do we have any evidence that Neanderthals could manipulate numbers? We have no evidence that Neanderthals could
manipulate numbers.
What is arithmetic? Arithmetic is the branch of mathematics that deals with the properties and manipulation of numbers.
What is arithmetic? Arithmetic is the branch of mathematics that deals with the addition, subtraction, multiplication, and
division of whole numbers and with the manipulation of fractions and decimals.
Question:
Did Neanderthals use arithmetic?
Answer:
There is no evidence that Neanderthals used arithmetic. The earliest evidence of arithmetic dates back to the ancient
Sumerians who lived after the Neanderthals.
So the answer is: No.
#
Figure 16: Instruction and exemplars for the STRATEGYQA meta-reasoner prompt.
-----
**_Given a question and a context, answer the question and explain why. If you are unsure, answer Unknown._**
#
Context:
Who is the wife of Douglas Leiterman? | Douglas Leiterman (1927 – 19 December 2012) was a Canadian television producer.:
[[ Douglas Leiterman[´] [´]], [Spouse´[´], Beryl Fox[´] [´]]]
Where was Beryl Fox born? | Winnipeg, Manitoba: Fox was born in 1931 in Winnipeg, Manitoba. So the answer is Winnipeg,
Manitoba.
Who is Douglas Leiterman’s wife? | Douglas Leiterman (1927 – 19 December 2012) was a Canadian television producer.:
[[ Douglas Leiterman[´] [´]], [ Known for´[´], This Hour Has Seven Days[´] [´]], [Spouse´[´], Beryl Fox[´] [´]]]
Where was Beryl Fox born? | Winnipeg, Manitoba: Fox was born in 1931 in Winnipeg, Manitoba. So the answer is Winnipeg,
Manitoba.
Who is the wife of Douglas Leiterman? | Douglas Leiterman (1927 – 19 December 2012) was a Canadian television producer.:
[[ Douglas Leiterman[´] [´]], [Spouse´[´], Beryl Fox[´] [´]]]
When and where was Mary born? | Mary was born in c. 18 BC or September 8 (21), 16 BC Herodian Kingdom of Judea.
Question:
Where was the wife of Douglas Leiterman born?
Answer:
The wife of Douglas Leiterman is Beryl Fox. Beryl Fox was born in Winnipeg, Manitoba.
So the answer is: Winnipeg, Manitoba.
...
#
Context:
Who is the director of Crimen A Las Tres? | Crimen a las tres: Crimen a las tres is a 1935 Argentine crime film directed and
written by Luis Saslavsky.
Who is the director of The Working Class Goes to Heaven? | The Working Class Goes to Heaven: The Working Class Goes
to Heaven (Italian: La classe operaia va in paradiso), released in the US as Lulu the Tool, is a 1971 political drama film
directed by Elio Petri.
When did Luis Saslavsky die? | Luis Saslavsky: Luis Saslavsky (April 21, 1903 – March 20, 1995) was an Argentine film
director, screenwriter and film producer, and one of the influential directors in the Cinema of Argentina of the classic era.
When did Elio Petri die? | Elio Petri: Petri died of cancer on 10 November 1982. He was 53 years old.
Who is the director of Crimen A Las Tres? | Crimen a las tres: Crimen a las tres is a 1935 Argentine crime film directed and
written by Luis Saslavsky.
Who is the director of The Working Class Goes to Heaven? | The Working Class Goes to Heaven: The Working Class Goes
to Heaven (Italian: La classe operaia va in paradiso), released in the US as Lulu the Tool, is a 1971 political drama film
directed by Elio Petri.
When did Luis Saslavsky die? | Luis Saslavsky: Luis Saslavsky (April 21, 1903 – March 20, 1995) was an Argentine film
director, screenwriter and film producer, and one of the influential directors in the Cinema of Argentina of the classic era.
When did Elio Petri die? | Elio Petri: Petri died of cancer on 10 November 1982. He was 53 years old.
Who is the director of Crimen A Las Tres? | Crimen a las tres: Crimen a las tres is a 1935 Argentine crime film directed and
written by Luis Saslavsky.
When did Luis Saslavsky die? | Luis Saslavsky: Luis Saslavsky (April 21, 1903 – March 20, 1995) was an Argentine film
director, screenwriter and film producer, and one of the influential directors in the Cinema of Argentina of the classic era.
Who is the director of The Working Class Goes to Heaven? | The Working Class Goes to Heaven: The Working Class Goes
to Heaven (Italian: La classe operaia va in paradiso), released in the US as Lulu the Tool, is a 1971 political drama film
directed by Elio Petri.
When did Elio Petri die? | Elio Petri: Petri died of cancer on 10 November 1982. He was 53 years old.
Question:
Which film has the director died first, Crimen A Las Tres or The Working Class Goes To Heaven?
Answer:
The director of Crimen A Las Tres is Luis Saslavsky. The director of The Working Class Goes to Heaven is Elio Petri. Luis
Saslavsky died on March 20, 1995. Elio Petri died on 10 November 1982.
So the answer is: The Working Class Goes To Heaven.
#
Figure 17: Instruction and exemplars for the 2WIKIMQA meta-reasoner prompt for MCR-EV and SCR-EV
reasoning over retrieved evidence.
-----
**_Given a question and a context, answer the question step-by-step. If you are unsure, answer Unknown._**
#
Context:
What is atherosclerosis? | Atherosclerosis: Atherosclerosis is a pattern of the disease arteriosclerosis in which the wall of the
artery develops abnormalities, called lesions. These lesions may lead to narrowing due to the buildup of atheromatous plaque.
At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age.
What type of animal is a jellyfish? | Jellyfish - Simple English Wikipedia, the free encyclopedia: Jellyfish are animals of the
phylum Cnidaria. They are a monophyletic clade, the Medusozoa. Most of them live in the oceans, in salt water, where they
eat small sea animals like plankton and little fish, and float in the sea.
Is an invertebrate susceptible to atherosclerosis? | Atherosclerosis: Atherosclerosis is a pattern of the disease arteriosclerosis
in which the wall of the artery develops abnormalities, called lesions.
What is atherosclerosis? | Atherosclerosis: Atherosclerosis is a pattern of the disease arteriosclerosis in which the wall of the
artery develops abnormalities, called lesions. These lesions may lead to narrowing due to the buildup of atheromatous plaque.
At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age.
Do jellyfish have arteries? | Jellyfish: Jellyfish are mainly free-swimming marine animals with umbrella-shaped bells and
trailing tentacles, although a few are anchored to the seabed by stalks rather
What is atherosclerosis? | Atherosclerosis: Atherosclerosis is a pattern of the disease arteriosclerosis in which the wall of the
artery develops abnormalities, called lesions. These lesions may lead to narrowing due to the buildup of atheromatous plaque.
At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age.
Do jellyfish have arteries? | Jellyfish: Jellyfish are mainly free-swimming marine animals with umbrella-shaped bells and
trailing tentacles, although a few are anchored to the seabed by stalks rather
So jellyfish don’t have atherosclerosis because they don’t have arteries? | Jellyfish: A free-swimming marine coelenterate
that is the sexually reproducing form of a hydrozoan or scyphozoan and has a nearly transparent saucer-shaped body and
Question:
Is a jellyfish safe from atherosclerosis?
Answer:
Jellyfish do not have a circulatory system and therefore do not have arteries. Atherosclerosis is a disease in which plaque
builds up on the walls of the arteries.
So the answer is: Yes.
...
#
Context:
What is arithmetic? | Arithmetic: Arithmetic is an elementary part of mathematics that consists of the study of the properties
of the traditional operations on numbers—addition, subtraction, multiplication, division, exponentiation, and extraction of
roots.
What is the evidence that Neanderthals used arithmetic? | Neanderthal: In 2012, British-American geneticist Graham Coop
hypothesised that they instead found evidence of a different archaic human species interbreeding with modern
Is there any reason to believe that Neanderthals could not have used arithmetic? | Neanderthal: A large part of the controversy
stems from the vagueness of the term "species", as it is generally used to distinguish two genetically isolated populations, but
What is arithmetic? | Arithmetic: Arithmetic is an elementary part of mathematics that consists of the study of the properties
of the traditional operations on numbers—addition, subtraction, multiplication, division, exponentiation, and extraction of
roots.
Do we have any evidence that Neanderthals could manipulate numbers? | Neanderthal: Neanderthals also written as
Neandertals, are an extinct species or subspecies of archaic humans who lived in Eurasia until about 40,000 years ago.
What is arithmetic? | Neanderthal: Neanderthals also written as Neandertals, are an extinct species or subspecies of archaic
humans who lived in Eurasia until about 40,000 years ago.
What is the earliest evidence of arithmetic? | Mathematics: It is in Babylonian mathematics that elementary arithmetic
(addition, subtraction, multiplication, and division) first appear in the archaeological record. The Babylonians also possessed
a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time.
Did Neanderthals live before the ancient Babylonians? | Neanderthal: Neanderthals also written as Neandertals, are an extinct
species or subspecies of archaic humans who lived in Eurasia until about 40,000 years ago. Pre- and early Neanderthals,
living before the Eemian interglacial
Question:
Did Neanderthals use arithmetic?
Answer:
There is no evidence that Neanderthals used arithmetic. The earliest evidence of arithmetic dates back to the ancient
Babylonians who lived after the Neanderthals.
So the answer is: No.
#
Figure 18: Instruction and exemplars for the STRATEGYQA prompt for MCR-EV and SCR-EV reasoning over
retrieved evidence.
-----
| [
"Ori, Yoran",
"Tomer, Wolfson",
"Jonathan, Berant",
"Ben, Bogin",
"Uri, Katz",
"Daniel, Deutch"
] | 2023-10-17T00:00:00 | EMNLP 2023 Main | true | 69 | 5 | null | http://arxiv.org/abs/2304.13007 | https://arxiv.org/abs/2304.13007 | https://www.semanticscholar.org/paper/7e4f5589327b6b574cc950a03fd1d6236e9e6128 |
ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering | With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing the shift of the challenge from how to model language to the imitation of complex reasoning abilities like human beings. In this work, we investigate the application domain of finance that involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering. Our dataset poses great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both the neural symbolic methods and the prompting-based methods, to provide insights into the reasoning mechanisms of these two divisions. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. Our dataset and code is publicly available at https://github.com/czyssrs/ConvFinQA. | This work proposes a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering, and conducts comprehensive experiments and analyses with both the neural symbolic methods and the prompting-based methods to provide insights into the reasoning mechanisms. | ## CONVFINQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering
**Zhiyu Chen[1], Shiyang Li[1], Charese Smiley[2], Zhiqiang Ma[2],**
**Sameena Shah[2]** and William Yang Wang[1]
1University of California, Santa Barbara
2J.P. Morgan
{zhiyuchen,shiyangli,william}@cs.ucsb.edu,
{charese.h.smiley,zhiqiang.ma,sameena.shah}@jpmchase.com
**Abstract**
With the recent advance in large pre-trained
language models, researchers have achieved
record performances in NLP tasks that mostly
focus on language pattern matching. The community is experiencing the shift of the challenge from how to model language to the imitation of complex reasoning abilities like human
beings. In this work, we investigate the application domain of finance that involves realworld, complex numerical reasoning. We propose a new large-scale dataset, CONVFINQA,
aiming to study the chain of numerical reasoning in conversational question answering.
Our dataset poses great challenge in modeling long-range, complex numerical reasoning
paths in real-world conversations. We conduct comprehensive experiments and analyses
with both the neural symbolic methods and
the prompting-based methods, to provide insights into the reasoning mechanisms of these
two divisions. We believe our new dataset
should serve as a valuable resource to push forward the exploration of real-world, complex
reasoning tasks as the next research focus. Our
dataset and code is publicly available[1].
|Col1|2010|Col3|2009|Col5|2008|
|---|---|---|---|---|---|
|share-based compensation cost|$18.10||$14.60||$13.80|
|income tax benefit|-$6.30||-$5.20||-$4.90|
**Financial report:**
… the total income tax benefit recognized for
share-based compensation in the accompanying
statements of income is also presented.
**2010** **2009** **2008**
**share-based**
**compensation cost** $18.10 $14.60 $13.80
**income tax benefit** -$6.30 -$5.20 -$4.90
**Conversational QA:**
**Q1: In the year of 2010, what was the share-based**
compensation cost?
**A1: 18.1**
**Q2: and what was the income tax benefit?**
**A2: -6.3**
**Q3: what was, then, the sum of both?**
**A3:** **add(18.1, -6.3) = 11.8**
**Q4: and what was that sum in 2009?**
**A4:** **add(14.6, -5.2) = 9.4**
**Q5: what, then, was the change in the sum of those**
amounts from 2009 to 2010?
**A5: add(18.1, -6.3), add(14.6, -5.2), subtract(#0, #1) = 2.4**
Figure 1: An example from CONVFINQA: each question
may depend on previous questions to answer.
**1** **Introduction**
The rapid advancement in developing large pretrained language models (LM) has brought the natural language processing research into a new era.
Based on the well-known transformer (Vaswani
et al., 2017) architecture, such large pre-trained
LMs (Devlin et al., 2019; Radford et al., 2019;
Raffel et al., 2020; Sanh et al., 2021; Wang et al.,
2022) have set up new state-of-the-art results for
many NLP tasks, with some of them approaching
or even surpassing human performances, like on
the SQuAD (Rajpurkar et al., 2016) dataset. We
observe that the tasks with the essence of modeling
language patterns can be well addressed by large
pre-trained LMs. However, for the other kind of
tasks requiring complex reasoning abilities, current
researches are still away from satisfactory performances (Wei et al., 2022).
Traditional methods on reasoning tasks typically
take neural symbolic models to encode the context, generate the reasoning program and do the
execution (Liang et al., 2017; Chen et al., 2020).
Most recently, it is shown that sufficiently large
pre-trained LMs can excel at reasoning tasks given
proper prompts (Wei et al., 2022). But their tasks
being experimented with are relatively general and
toy, such as simple math word problems. The form
of the solutions and the reasoning explanations
probably have been witnessed by the model during
pre-training. This raises an interesting question:
Which of the two directions is the fundamental way
[1https://github.com/czyssrs/ConvFinQA](https://github.com/czyssrs/ConvFinQA)
-----
to solve complex reasoning problems?
In this work, we go beyond the simple reasoning
tasks and dive into the real application domain of
finance to investigate the complex numerical reasoning ability of current modeling paradigms. The
finance domain bears the natural requirements of
realistic, complex numerical reasoning from human labor, such as quantitative analysis of financial reports. We seek to study the real-world scenario of conversational question answering over
**financial reports – investors or analysts would typ-**
ically ask sequential questions to get insights into
the numerical in the reports. The questions require
extensive calculations and meanwhile often demonstrate cross dependency, forming the chains of numerical reasoning throughout the conversation.
To this end, we propose a new dataset,
**CONVFINQA (Conversational Finance Question**
**Answering), with 3,892 conversations consisting**
14,115 questions. To construct the dataset, we design a framework to simulate the conversation flow
by decomposition and concatenation of the multihop questions from the FinQA (Chen et al., 2021)
dataset. We then ask expert annotators to compose
the question for each conversation turn based on the
simulated conversing flow. Figure 1 shows one example conversation from our dataset. We conduct
comprehensive experiments and analyses on our
dataset using both the neural symbolic models and
the prompting-based methods, and summarize the
following insights: (1) Both kinds of approaches
(with the execution accuracy less than 70.0%) fall
far behind human performance (89.4%). The reasoning chains throughout the conversation pose
great challenges for the models to learn when to refer to or discard the conversation history and how to
assemble the reasoning path. (2) Though excelling
at simple general reasoning tasks, prompting-based
methods perform a lot worse for our task (less than
50.0% using GPT-3 175B). They either superficially mimic the given prompts or recall their own
knowledge for simple general numerical reasoning.
They tend to fail to understand new complex task
paradigms for new domains. We believe our new
dataset should serve as a challenging and valuable
resource for the exploration of real-world, complex
reasoning tasks as the next research focus.
**2** **Related Work**
**Conversational Question Answering** Conversational question answering (ConvQA) (Zaib et al.,
Dataset Size Mode Challenge Domain
SQA 6k ConvQA table navigation general
CSQA 200k ConvQA KG reasoning general
CoQA 8k ConvQA co-reference general
QuAC 14k ConvQA open-ended general
DROP 96k QA numerical reasoning general
MathQA 37k QA numerical reasoning math
FinQA 8k QA numerical reasoning finance
TAT-QA 17k QA numerical reasoning finance
**CONVFINQA** 4k ConvQA numerical reasoning finance
Table 1: Comparison of CONVFINQA with existing datasets.
2021) has been gaining attentions in recent years.
In ConvQA, the users can append multiple questions in addition to the first one to get more information. This also mitigates the need to ask a single
complex multi-hop question at one time, making
the information-seeking procedure more natural.
For previous datasets, SQA (Iyyer et al., 2017) are
built by decomposing multi-hop questions based
on Wikitables. CSQA (Saha et al., 2018) questions
require simple logical operations over knowledge
graphs (KGs). CoQA (Reddy et al., 2019) focuses
on co-references among the conversation turns to
be more human-like. QuAC (Choi et al., 2018)
focuses on open-ended, exploratory questions. In
contrast, our dataset CONVFINQA targets complex numerical reasoning chains among the sequential questions in finance conversations.
**Numerical Reasoning** The numerical reasoning
ability is often investigated in the form of question
answering. The DROP dataset (Dua et al., 2019)
explores simple calculations over texts in the general domain. MaWPS (Koncel-Kedziorski et al.,
2016) and MathQA (Amini et al., 2019) focus on
generating solutions for math word problems. Recently, Wei et al. (2022) demonstrate that large
pre-trained LMs can excel at reasoning tasks given
proper prompts with natural language explanations.
However, their reasoning tasks are mostly simple
and general. In this work, we explore complex numerical reasoning in a highly specialized domain.
**Financial NLP** Previous work in financial NLP
mostly centers on sentiment analysis (Day and Lee,
2016; Akhtar et al., 2017), fraud detection (Han
et al., 2018; Wang et al., 2019; Nourbakhsh and
Bang, 2019), opinionated QA (Liu et al., 2020),
such as the FiQA[2] dataset built based on social media. Most recently, Chen et al. (2021) propose the
FinQA dataset with multi-hop numerical reasoning
questions based on financial report. TAT-QA (Zhu
2https://sites.google.com/view/fiqa/home
-----
|Type I simple conversation|Col2|
|---|---|
|The reasoning program of the original multi-step question: op1( arg1, arg2 ), op2( #0, arg3 )||
||Decomposition|
|Conversation skeleton: Turn 1: op1( arg1, arg2 ) Turn 2: op2( #0, arg3 )||
||Insert span selection turns|
|Conversation skeleton: Turn 1: query number arg1 Turn 2: query number arg2 Turn 3: op1( arg1, arg2 ) Turn 4: op2( #0, arg3 )||
|Type II hybrid conversation|Col2|
|---|---|
|The reasoning programs of the two original multi-step questions: op1( arg1, arg2 ), op2( #0, arg3 ) op3( arg3, arg4 ), op4( #0, arg4 )||
||Decomposition|
|Conversation skeleton Conversation skeleton of question 1: of question 2: Turn 1: op1( arg1, arg2 ) Turn 1: op3( arg3, arg4 ) Turn 2: op2( #0, arg3 ) Turn 2: op4( #0, arg4 )||
||Insert span selection turns|
|Conversation skeleton Conversation skeleton of question 1: of question 2: Turn 1: query number arg1 Turn 1: op3( arg3, arg4 ) Turn 2: query number arg2 Turn 2: op4( #0, arg4 ) Turn 3: op1( arg1, arg2 ) Turn 4: op2( #0, arg3 )||
||Integrating two decompositions|
|Concatenation of the two decompositions: Turn 1: query number arg1 Turn 2: query number arg2 Turn 3: op1( arg1, arg2 ) = #0 Turn 4: op2( #0, arg3 ) Turn 5: op3( arg3, arg4 ) = #1 Turn 6: op4( #1, arg4 )||
Figure 2: The simulation process of conversation skeletons.
a two-step construction framework: (I): Conver**sational QA flow simulation to produce the con-**
versation skeleton with each turn filled with the
reasoning semantics, and (II): Question composi**tion to realize the reasoning semantics into textual**
questions.
**Conversational QA Flow Simulation** We build
the conversation flow based on the decomposition
and concatenation of the multi-step reasoning programs (the solutions of the multi-hop questions) in
the existing FinQA (Chen et al., 2021) dataset. In
FinQA, the authors construct two multi-hop questions for most of its reports. The two FinQA questions for the same report naturally query different
but sometimes correlated aspects of the report, inspiring us to integrate them into a natural and realistic conversation. We simulate two types of conversations: Type I: Simple conversation from the
et al., 2021) is another QA dataset with a similar focus. In CONVFINQA, we seek to construct
question sequences in the conversational setting
aiming at more natural experiences for real-world
usages. Table 1 presents the comparison of our
dataset with existing ones.
**3** **Task Formulation**
Given a financial report containing both the textual
content T and structured table B, the user asks a sequence of questions {Qi}i[n]=0 [where later questions]
may depend on previous questions to answer. The
target is to generate the reasoning program G to be
executed to get the answer A to the last question:
_P_ (A|T, B, Qn) = _P_ (Gi|T, B, Q0, Q1, ...Qn−1)
(1)
X
Where _Gi_ is all the possible programs to evaluate
_{_ _}_
to the correct answer. We follow the same domain
specific language (DSL) in FinQA (Chen et al.,
2021) to construct the reasoning programs as a
sequence of operation-argument clauses (Appendix
A for all operations):
op1[args1], op2[args2]..., opn[argsn] (2)
We follow the same evaluation metric as in FinQA,
the execution accuracy to evaluate the final execution result and program accuracy to evaluate program equivalence.
**4** **The CONVFINQA Dataset**
**4.1** **Dataset Construction**
**The Overview** The core challenge of building
such a dataset is the construction of a natural, realistic conversational flow – what kinds of questions
the queriers may ask and how these questions logically appear in a conversation. We consult financial
experts to summarize the following key factors integrating a conversation when querying financial
reports: (i) The questioner directly queries the surface content. (ii) The questioner asks something
requiring calculations from the numbers in the report to answer. (iii) The questioner asks the above
two kinds of questions sequentially to form the conversation, to cumulatively query more information
or switch to other aspects.
Directly composing the conversations from
scratch involving all the above factors is very heavy
and costly. To tackle this challenge, we propose
-----
**Financial report:** **Original questions from FinQA:**
**Answer 1: subtract(1636526, 1642438), divide(#0, 1642438)**
**Question 2: what is the percentage change of balance of good will**
|Col1|balance at beginning of year|acquisition of hittite|goodwill adjustment related to other acquisitions ( 2 )|foreign currency translation adjustment|balance at end of year|
|---|---|---|---|---|---|
|2016|$1,636,526|2014|44046|-1456 ( 1456 )|$1,679,116|
|2015|$1,642,438|-1105 ( 1105 )|3663|-8470 ( 8470 )|$1,636,526|
**Answer 2: subtract(1679116, 1636526), divide(#0, 1636526)**
… notes to consolidated financial statements 2014 ( continued ) depreciation expense for
property, plant and equipment was $ 134.5 million, $ 130.1 million and $ 114.1 million in fiscal
2016, 2015 and 2014, respectively …
**Type Ⅱ hybrid conversation**
|Q1: What’s the balance of goodwill by the end of 2014? A1: 1642438 Q2: and 2015? Simulation from A2: 1636526 question 1 Q3: what was the change in the balance of goodwill of these 2 years? A3: subtract(1636526, 1642438) = #0 Q4: how much does this change represent, in percentage, in relation to to that balance in 2014? A4: divide(#0, 1642438)|Col2|
|---|---|
|||
|Q5: (skip) A5: 1679116 Q6: (skip) Simulation A6: 1636526 from Q7: (skip) question 2 A7: subtract(1679116, 1636526) Q8: and over the subsequent year, what is that percentage? A8: subtract(1679116, 1636526), divide(#0, 1636526)||
**Type Ⅰ simple conversation**
**Q1: What’s the balance of goodwill by the end of 2014?**
**A1: 1642438**
**Q2: and 2015?**
Simulation **A2: 1636526**
from **Q3: what was the change in the balance of goodwill of these 2**
question 1 years?
**A3: subtract(1636526, 1642438)**
**Q4: how much does this change represent, in percentage, in**
relation to to that balance in 2014?
**A4: subtract(1636526, 1642438), divide(#0, 1642438)**
Figure 3: The question composition examples for the two types of conversations. For the hybrid conversation example, the
annotator skips three turns and directly jumps to the last turn using references, making the conversation more natural.
decomposition of a single multi-hop question and
**Type II: Hybrid conversation from the decompo-**
sition and integration of two multi-hop questions.
Figure 2 illustrates the simulation processes of the
two types of conversation flows.
For Type I simple conversations, we take one
muti-hop question and decompose its reasoning
program into single steps – each reasoning step
will then be realized into one question as one conversation turn. To consider the scenario that the
questioner directly queries the surface content, every time there is a new number in a reasoning step,
we randomly insert an additional turn before this
turn with the semantic to query this new number.
For Type II hybrid conversations, we take two
multi-hop questions based on the same report, decompose their reasoning programs and insert additional number selection turns similar to the type I
conversation. Then we concatenate the decompositions of the two questions to integrate the full conversation skeleton – corresponding to the scenario
where the questioner asks two different aspects of
the same report. Since the two aspects of the same
report often correlate with each other, the conversation flow constructed this way will involve longer
dependencies among the turns.
**Question Composition** After we construct both
types of conversation skeletons, we employ expert
annotators to realize the skeletons into textual questions. We use the UpWork[3] platform to recruit expert annotators with finance backgrounds, such as
CPAs, MBAs, etc. Figure 3 gives the composition
examples of the two types of conversations.
Specifically, we present the financial report and
the simulated conversation skeletons to the annotators, with each turn filled with the reasoning semantics (the decomposed reasoning program or a
single number for the number selection turn). We
instruct the annotators to: (i) Read the report and
understand the reasoning flow of the whole conversation skeleton; (ii) Compose questions for the
turns based on the given reasoning semantics;
Since our conversation skeletons are simulated,
there must be many unnatural scenarios, e.g., unnatural decompositions, redundant or unnecessary
turns, etc. Therefore, we emphasize the following
key points: (i) The annotators can skip some turns
and directly jump to a certain following turn with
the goal of an overall natural conversation. The key
is to identify redundancies in the given conversation
flow and compress the unnecessary turns using references to the previous context. The right example
in Figure 3 shows a scenario to skip unnecessary
turns. (ii) If there is no way to compose a natural
conversation with the given skeleton, the annotators can discard the example. We launch training
3www.upwork.com
-----
sessions for the annotators to master task settings
before working on the official large batches.
**4.2** **Dataset Analysis**
**Dataset Statistics** We end up with 3,892 conversations containing 14,115 questions. We split the
dataset into 3,037/421/434 for train/dev/test sets.
2,715 of the conversations are simple conversations,
and the rest 1,177 are hybrid conversations. Table 2
summarizes the general statistics of our dataset.
In our CONVFINQA dataset, the major challenge is to learn the chain of numerical reasoning throughout the conversation turns. First, we
sample 200 turns from our dataset and ask the expert annotators to count the longest dependency
distance to answer the current question, i.e., how
many previous questions need to be seen to answer
the current one. Figure 4 shows the result distributions. Second, in CONVFINQA, we build two
types of conversations – the simple conversation
from the decomposition of one FinQA question and
the hybrid conversation from the decompositions
and concatenation of two FinQA questions. We are
interested to see, for the second type of hybrid conversations, how the question set from the second
FinQA question makes references to the first one.
We split the hybrid conversations into the two sets
– one from the first source FinQA question and one
from the second, and ask the expert annotators to
decide whether any questions from the second set
depend on the questions from the first set to answer.
Among 200 samples, 65.0% of them depend on the
first question set to answer, which demonstrates the
challenging reasoning chains in our dataset – the
model may need to construct the reasoning chains
crossing different aspects and long-range. At last,
we also classify the type of questions based on the
reasoning forms of the answers. 34.73% of the
questions are number selection questions, 35.10%,
25.41%, and 4.75% of them have reasoning programs of 1, 2, and over 3 steps, respectively.
For the type of questions, 59.18% of the questions rely on supporting facts only from the table to
answer, 25.56% of the questions rely on supporting
facts only from the text, and the rest 15.26% rely on
both. For the types of calculations, we have around
18.80% additions, 40.49% subtractions, 6.92% multiplications, 33.43% divisions.
**Data Quality Assessment** To evaluate the quality of CONVFINQA and establish human performance references, we sample 200 example ques
Conversations 3,892
Questions 14,115
Report pages 2,066
Vocabulary 20k
Avg. # questions in one conversation 3.67
Avg. question length 10.59
Avg. # sentences in input text 23.65
Avg. # rows in input table 6.39
Avg. # tokens in all inputs (text & table) 675.61
Max. # tokens in all inputs (text & table) 2338.00
Table 2: Statistics of CONVFINQA.
Figure 4: Distribution of the longest dependency distances of
the questions in CONVFINQA. Over 60% of the questions
have longer dependencies with previous questions.
tions and distribute to both the expert and laymen
annotators to answer. The two expert annotators
reach an average execution accuracy of 89.44% and
program accuracy of 86.34%, with an agreement
rate over 85.0% for both metrics. For the laymen
performance, we distribute the samples to MTurk[4]
and end up with an execution accuracy of 46.90%
and program accuracy of 45.52% with agreement
rates lower than 60.0%. This again demonstrates
the great expertise required to solve our dataset.
**5** **Experiments on Neural Symbolic**
**Approaches**
In this section, we will first experiment with traditional neural symbolic approaches using the full
training data and make detailed analyses.
**5.1** **Methods and Main Results**
We take the FinQANet model from (Chen et al.,
2021) and two generative models – the GPT-2 (Radford et al., 2019) and T5 (Raffel et al., 2020). FinQANet is a pipeline approach with a retriever to
first retrieve the supporting facts from the financial report, then a generator taking the supporting
4Three built-in worker qualifications are used: HIT Approval Rate (≥95%), Number of Approved HITs (≥ 1000),
and Locale (US Only) Qualification. We do not select any
professional constraints. We pay $2.0 for each question.
-----
**Baselines** **Exe Acc** **Prog Acc**
GPT-2(medium) 58.19 57.00
T-5(large) 58.66 57.05
FinQANet (BERT-base) 55.03 54.57
FinQANet (BERT-large) 61.14 60.55
FinQANet (RoBERTa-base) 64.95 64.16
FinQANet (RoBERTa-large) **68.90** **68.24**
FinQANet-Gold (RoBERTa-large) 77.32 76.46
Human Expert Performance 89.44 86.34
General Crowd Performance 46.90 45.52
Table 3: The execution accuracy (Exe Acc) and program
accuracy (Prog Acc) for the models. We also experiment with
using gold supporting facts, shown as FinQANet-Gold.
facts and the question as the input to decode the
reasoning program. Structural information and constraints are also involved in the decoder. We adopt
the same retrieving process from FinQANet and
use the current conversation context, i.e., the questions up to the current turn, to retrieve the evidences
from the input financial report. We end up with
the retrieval results of 86.38% recall for the top
3 retrieved facts. For the program generation, we
concatenate the retrieved facts with the conversation context as the input. We experiment with the
encoder varied as BERT (Devlin et al., 2019) and
RoBERTa (Liu et al., 2019). Table 3 shows the overall experiment results on CONVFINQA. Using a
specially designed encoder-decoder with structural
preservation of the program, FinQANet still outperforms the standalone generative models. While
there is still a gap till the expert performance, the
models already surpass the laymen performance.
We can see that such neural approaches specially
designed can learn better numerical reasoning ability for the specific domain than the common sense
_numerical reasoning ability of the general crowd._
**5.2** **Performance Breakdown**
To gain a deeper understanding of the model insights, we analyze the performances of different
types of questions. The results are shown in Table 4. We can see that the number selection turns
are the easiest to answer. Considering different
types of conversations, the hybrid conversations
are harder to learn than simple conversations, especially the second part of the hybrid conversations
where the question set comes from the decomposition of the second multi-hop question. In these
questions, some of them are irrelevant to the ques
**Methods** **Exe Acc** **Prog Acc**
**full results** 68.90 68.24
**Number selection turns**
Number selection questions 82.54 82.34
Program questions 62.14 61.26
**Simple & hybrid conversations**
Simple conversations 72.37 72.00
Hybrid conversations 60.99 59.70
Hybrid conversations (first part) 68.11 66.54
Hybrid conversations (second part) 52.38 51.43
Table 4: Performance breakdown. The number selection questions are the easiest to answer. The hybrid conversations are
harder than simple conversations, while the second part of
them is even more difficult.
tions in the first part, while some of them depend
on the questions from the first part to answer. The
model faces a stronger challenge of finding the
correct reasoning chains. We also look into the performance breakdown by conversation turns, which
is shown in Figure 5. Later turns in the conversations tend to be harder to answer due to longer
reasoning dependencies.
_n-th conversation turn_
Figure 5: Performances for the nth conversation turn.
**5.3** **Analyses and Findings**
We manually analyze a sample of the predictions
from the FinQANet(RoBERTa-large) model and
summarize the following findings:
**The model excels at number selection questions.**
For the number selection questions depending on
previous references, e.g., what is that value in the
_subsequent year?, the model is mostly able to an-_
swer. Also, the model is mostly clear on when to
discard the previous context and make the transition to new questions.
**The model suffers from the lack of domain**
**knowledge.** The lack of financial knowledge
leads to many errors of missing retrieval facts,
wrong value selections, and wrong mathematical
-----
generations. Nonetheless, the current large pretrained models do see financial corpus during pretraining; we still need to endow the system with
stronger domain knowledge for tasks requiring
high-level, complex domain reasoning abilities.
**The model struggles with long reasoning chains.**
For the later question turns in a conversation that
demonstrate longer reasoning dependencies to the
previous context, the model often struggles with
deducting the correct reasoning programs. If the
prediction for any turn is wrong, then there is a very
minor chance that the subsequent turns are correct.
We provide two error case studies in Figure 6.
**6** **Experiments on Prompting-Based**
**Approaches**
In this section we attempt on few-shot learning with
prompting-based methods and reveal the insights.
**6.1** **Methods and Main Results**
We use the GPT-3 text-davinci-002 model[5] (Brown
et al., 2020). Directly injecting the full financial
report into the prompt is not realistic because of
the length constraint. Therefore we still attempt the
retriever-generator paradigm. Due to the high cost
of using GPT-3, in this work we only run retrieval
on a sample of the test set, and run program generation on the full test set using the gold retrieval
results as the input. Nonetheless, we believe our
experiments are sufficient to show many interesting and valuable insights into the prompting-based
methods on CONVFINQA.
For the retrieval, we concatenate each sentence
or linearized table row of the report with the conversation context, and let the model predict if the
former is relevant for answering the last question.
We use 16 exemplars and run GPT-3 on a sample
of 300 examples of the test set. We end up with an
average recall of 74.25% using 3 different sets of
exemplars, which is much lower than the retriever
trained with the full training data in §5.1.
For the program generation, the exemplar is formatted as [supporting facts, conversation
context, result to be generated]. We experiment using the following settings: (i) Answer**only, to directly generate the execution results.**
**(ii) Program-original, to generate the reasoning**
program with the original DSL. (iii) Program**normal, to generate the reasoning program with**
5OpenAI has released the model interface as a paid service
**Baselines** **Exe Acc** **Prog Acc**
Answer-only 24.090.61 -
Program-original 40.814.68 36.624.22
Program-normal 45.152.77 38.882.57
CoT prompting 40.631.25 33.842.19
Human Expert Performance 89.44 86.34
General Crowd Performance 46.90 45.52
Table 5: The results for all the prompting methods. We report
the average and the standard deviation of different sets of
exemplars or annotators.
the normal DSL. We convert the programs into the
normal form commonly used in the general domain,
e.g., add(a1, a2) → _a1 + a2. (iv) the Chain of_
**Thought (CoT) prompting. CoT prompting (Wei**
et al., 2022) includes a natural language explanation of the reasoning steps before reaching the answer. We ask 3 expert annotators to compose the
explanations for the exemplars. For each method,
we run experiments using 3 sets of 10 different exemplars. Table 5 shows the overall results. Even
with the gold retrieval results, GPT-3 still underperforms the neural symbolic approaches with fulltraining data in §5. See Appendix D for all the
prompt details.
**6.2** **Performance Breakdown**
We take the results from the best-performing
method, Program-normal, to investigate the detailed performances. Table 6 shows the performance breakdown for different types of turns. Surprisingly, GPT-3 even performs worse on number
selection turns. We find that the model often makes
errors for the number selection turns with references to the previous conversation context, e.g., for
the question what is that value in the subsequent
_year?, the model still chooses the value in the pre-_
vious year. Even if we specify the conversational
QA setting in the prompt instructions and explicitly
ask to answer the last question, the model likely
does not understand this task paradigm and often
fails to make correct references to the context. This
further makes the performances in longer reasoning
chains worse, as shown in Table 6. We also analyze
the performances for conversation turn length and
exemplar numbers in Appendix D.
**6.3** **Analyses and Findings**
We analyze samples of the predictions for all the
methods and summarize the following findings:
-----
|in millions|2010|2009|2008|
|---|---|---|---|
|cash flows provided by ( used in ) operating activities including discontinued operations|$515.20|$559.70|$627.60|
|plan category|number of securities to be issued upon exercise of outstanding options warrants and rights|weighted-average exercise price of outstanding options warrants and rights|number of securities remaining available for future issuance under equity compensation plans|
|---|---|---|---|
|equity compensation plans approved by security holders|766801|$40.85|8945694|
|Error case (1)|Error case (2)|
|---|---|
|Supporting facts: table row(s) and text sentence(s) in millions 2010 2009 2008 cash flows provided by ( used in ) operating activities including discontinued operations $515.20 $559.70 $627.60 …excluding the $ 250 million impact of additional accounts receivable from the change in accounting discussed above, cash flows provided by operations were $ 765.2 million in 2010… Questions in the conversation: Q1: what was the total of cash flows provided by ( used in ) operating activities including discontinued operations in 2009? Q2: and what was that in 2008? Q3: what was, then, the change in that total over the year? Q4: what percentage did this change represent in relation to the 2008 total? Q5: over the subsequent year, what was the decline in that total of cash flows? Q6: what was this decline as a percentage of the 2009 total? Gold reasoning program for Q6: subtract(559.7, 515.2), divide(#0, 559.7) Predicted reasoning program for Q6: subtract(515.2, 765.2), divide(#0, 765.2)|Supporting facts: table row(s) and text sentence(s) number of number of securities to be weighted-average securities remaining issued upon exercise price of available for future exercise of outstanding issuance under outstanding options options warrants equity plan category warrants and rights and rights compensation plans equity compensation plans approved by security holders 766801 $40.85 8945694 Questions in the conversation: Q1: what was the total value of the securities issued and approved by security holders? Q2: how much is that in millions? Q3: and what was that total value for the securities approved but not yet issued? Gold reasoning program for Q3: multiply(8945694, 40.85) Predicted reasoning program for Q3: multiply(766801, 40.85)|
Figure 6: Error cases from the results of FinQANet(RoBERTa-large)
**GPT-3** **struggles** **with** **new** **complex** **task**
**paradigms.** Like stated in §6.2, GPT-3 probably
has not seen a similar paradigm as our task setting
during pre-training. We see many examples where
GPT-3 simply mimics the reasoning steps given in
one exemplar but ignores the actual context. This
is also the reason that CoT prompting performs
even worse than generating the program only. We
explicitly explain our task setting in the prompt
instructions about how the questions in the conversation are interrelated and the task goal to answer
the current turn. However, in many cases, GPT-3
either mimics the reasoning steps given in the exemplars or comes up with incorrect reasoning based
on its own knowledge in the general domain. See
Appendix D for error cases from Program-normal.
**7** **Conclusion and Discussion**
Our new dataset, CONVFINQA, targets one of the
major directions to be explored as the next research
focus – how to simulate human reasoning abilities
in complex real-world settings. We experiment
with the neural symbolic models with full training
data and the prompting-based few-shot learning
and find that: (1) Both approaches are still away
from human expert performances, indicating the
challenge of this task. (2) The neural symbolic
approach uses specifically crafted architectures to
learn co-occurrence patterns with large-scale training data. The prompting-based approach recalls its
own memory of elaborating the reasoning process
with the trigger of the prompts. However, this may
not work well when encountering new complex
task paradigms for new domains. (3) Theoretically,
we may encode as many task paradigms into the
**Methods** **Exe Acc** **Prog Acc**
**full results** 48.85 42.14
**Number selection turns**
Number selection questions 35.32 34.72
Program questions 55.56 45.82
**Simple & hybrid conversations**
Simple conversations 52.22 46.64
Hybrid conversations 41.16 31.90
Hybrid conversations (first part) 56.30 48.03
Hybrid conversations (second part) 22.85 12.38
Table 6: Performance breakdown: it is hard for GPT-3 to learn
the problem paradigm and correctly make references to the
conversation context. Still, questions with longer context is
harder to answer.
**GPT-3 can do simple calculations by itself.** For
methods that generate the reasoning programs,
compared with the results of the neural symbolic
approaches in §5, the gap between the execution
and program accuracy is much larger. We find that
GPT-3 often directly generates the correct numerical results without generating the program. Though
the given prompts always derive the programs first,
GPT-3 tends to use its own knowledge acquired
during pre-training. This is also the reason why
Answer-only achieves certain correctness. However, GPT-3 still struggles with complex calculations, such as long digits and divisions.
**GPT-3 performs better for its familiar program**
**format.** In Table 5, Program-normal outperforms
Program-original, since we use the common form
of calculation which is seen much more frequently
by GPT-3 during its pre-training. GPT-3 makes
many grammar errors for Program-original.
-----
large LMs, as long as the reasoning process can be
clearly illustrated by language. But for highly specialized domains or tasks, designing specific models also tend to be more realistic and effective. (4)
We are also eager to see the actual bound between
the reasoning tasks that can benefit from language
modeling and the ones that can not. This should be
the crucial factor in deciding the upper bound of
what large LMs can solve with reasoning.
**8** **Limitations**
In this work, we investigate two construction mechanisms for the conversation, the decomposition of
single multi-hop questions and the decomposition
and concatenation of two multi-hop questions regarding the same report. This definitely does not
cover all possible cases in real-world conversations.
We make this first attempt and hope for future work
to continue exploration.
For prompting-based methods, we only experiment with the GPT-3 model, whose interface is
released to the public as a paid service. Also, due
to cost constraints, we do not conduct extensive
experiments on complex prompt engineering. We
believe our experiments can provide valuable insights into the task of complex reasoning over realworld specific domains, and meanwhile we do not
exclude the possibility that there could be better
performances for prompting-based methods if applying advanced prompt engineering or even larger
pre-trained LMs, like the PaLM model (Chowdhery et al., 2022) which is not released. We leave
this for future work.
**9** **Ethical Considerations**
**Dataset Collection Process and Conditions.**
For the annotation of our CONVFINQA dataset
on Upwork, we first launch interviews of the task
introduction with 2 example conversations, which
is paid as $30. Then based on their consents to continue working on the large-scale job, we discuss
with the workers to reach agreements on the compensation before starting the large-scale job. For
the simple conversations from one FinQA question,
we pay around $4.0 per conversation. For complex
conversations from two FinQA questions, we pay
around $7.0 per conversation. The hourly rates are
discussed and agreed upon with both sides based
on the working speed of different workers. Among
all the US-based hires, the average hourly rate is
$60.0, with the minimum hourly rate of $50.0. The
evaluation tasks and prompt writing tasks follow
the similar procedure and rates.
**IRB (Institutional Review Board) Approval.**
The dataset annotation is classified as exempt by
our Institutional Review Board (IRB). The systems
trained using our dataset are primarily intended
to be used as augmenting human decision-making
in financial analysis, but not as a replacement of
human experts.
**Acknowledgment**
We thank the anonymous reviewers for their
thoughtful comments. This research was supported
by the J.P. Morgan Faculty research award. The authors are solely responsible for the contents of the
paper and the opinions expressed in this publication
do not reflect those of the funding agencies.
**References**
Md. Shad Akhtar, Abhishek Kumar, Deepanway
Ghosal, Asif Ekbal, and Pushpak Bhattacharyya.
2017. [A multilayer perceptron based ensemble](https://doi.org/10.18653/v1/d17-1057)
[technique for fine-grained financial sentiment anal-](https://doi.org/10.18653/v1/d17-1057)
[ysis.](https://doi.org/10.18653/v1/d17-1057) In Proceedings of the 2017 Conference on
_Empirical Methods in Natural Language Processing,_
_EMNLP 2017, Copenhagen, Denmark, September 9-_
_11, 2017, pages 540–546. Association for Computa-_
tional Linguistics.
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. Mathqa: Towards interpretable math](https://doi.org/10.18653/v1/n19-1245)
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/n19-1245)
[malisms.](https://doi.org/10.18653/v1/n19-1245) In Proceedings of the 2019 Conference
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, NAACL-HLT 2019, Minneapolis, MN,_
_USA, June 2-7, 2019, Volume 1 (Long and Short Pa-_
_pers), pages 2357–2367. Association for Computa-_
tional Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen,
Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario
[Amodei. 2020. Language models are few-shot learn-](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
[ers. In Advances in Neural Information Processing](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
_Systems 33: Annual Conference on Neural Informa-_
_tion Processing Systems 2020, NeurIPS 2020, De-_
_cember 6-12, 2020, virtual._
-----
Xinyun Chen, Chen Liang, Adams Wei Yu, Denny
Zhou, Dawn Song, and Quoc V. Le. 2020. [Neu-](https://openreview.net/forum?id=ryxjnREFwH)
ral symbolic reader: [Scalable integration of dis-](https://openreview.net/forum?id=ryxjnREFwH)
[tributed and symbolic representations for reading](https://openreview.net/forum?id=ryxjnREFwH)
[comprehension. In 8th International Conference on](https://openreview.net/forum?id=ryxjnREFwH)
_Learning Representations, ICLR 2020, Addis Ababa,_
_Ethiopia, April 26-30, 2020. OpenReview.net._
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena
Shah, Iana Borova, Dylan Langdon, Reema Moussa,
Matt Beane, Ting-Hao Huang, Bryan R. Routledge,
[and William Yang Wang. 2021. Finqa: A dataset of](https://doi.org/10.18653/v1/2021.emnlp-main.300)
[numerical reasoning over financial data. In Proceed-](https://doi.org/10.18653/v1/2021.emnlp-main.300)
_ings of the 2021 Conference on Empirical Methods_
_in Natural Language Processing, EMNLP 2021, Vir-_
_tual Event / Punta Cana, Dominican Republic, 7-11_
_November, 2021, pages 3697–3711. Association for_
Computational Linguistics.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettle[moyer. 2018. Quac: Question answering in context.](https://doi.org/10.18653/v1/d18-1241)
In Proceedings of the 2018 Conference on Empirical
_Methods in Natural Language Processing, Brussels,_
_Belgium, October 31 - November 4, 2018, pages_
2174–2184. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng
Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier
Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan,
Hyeontaek Lim, Barret Zoph, Alexander Spiridonov,
Ryan Sepassi, David Dohan, Shivani Agrawal, Mark
Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz,
Erica Moreira, Rewon Child, Oleksandr Polozov,
Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta,
Jason Wei, Kathy Meier-Hellstern, Douglas Eck,
Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
[Palm: Scaling language modeling with pathways.](https://doi.org/10.48550/arXiv.2204.02311)
_CoRR, abs/2204.02311._
[Min-Yuh Day and Chia-Chou Lee. 2016. Deep learn-](https://doi.org/10.1109/ASONAM.2016.7752381)
[ing for financial sentiment analysis on finance news](https://doi.org/10.1109/ASONAM.2016.7752381)
[providers. In 2016 IEEE/ACM International Confer-](https://doi.org/10.1109/ASONAM.2016.7752381)
_ence on Advances in Social Networks Analysis and_
_Mining, ASONAM 2016, San Francisco, CA, USA,_
_August 18-21, 2016, pages 1127–1134. IEEE Com-_
puter Society.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. [BERT: pre-training of](https://doi.org/10.18653/v1/n19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/n19-1423)
[standing.](https://doi.org/10.18653/v1/n19-1423) In Proceedings of the 2019 Conference
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, NAACL-HLT 2019, Minneapolis, MN,_
_USA, June 2-7, 2019, Volume 1 (Long and Short Pa-_
_pers), pages 4171–4186. Association for Computa-_
tional Linguistics.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
[DROP: A reading comprehension benchmark requir-](https://doi.org/10.18653/v1/n19-1246)
[ing discrete reasoning over paragraphs. In Proceed-](https://doi.org/10.18653/v1/n19-1246)
_ings of the 2019 Conference of the North American_
_Chapter of the Association for Computational Lin-_
_guistics: Human Language Technologies, NAACL-_
_HLT 2019, Minneapolis, MN, USA, June 2-7, 2019,_
_Volume 1 (Long and Short Papers), pages 2368–_
2378. Association for Computational Linguistics.
Jingguang Han, Utsab Barman, Jer Hayes, Jinhua Du,
[Edward Burgin, and Dadong Wan. 2018. Nextgen](https://doi.org/10.18653/v1/P18-4007)
[AML: distributed deep learning based language tech-](https://doi.org/10.18653/v1/P18-4007)
[nologies to augment anti money laundering inves-](https://doi.org/10.18653/v1/P18-4007)
[tigation. In Proceedings of ACL 2018, Melbourne,](https://doi.org/10.18653/v1/P18-4007)
_Australia, July 15-20, 2018, System Demonstrations,_
pages 37–42. Association for Computational Linguistics.
Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017.
[Search-based neural structured learning for sequen-](https://doi.org/10.18653/v1/P17-1167)
[tial question answering. In Proceedings of the 55th](https://doi.org/10.18653/v1/P17-1167)
_Annual Meeting of the Association for Computa-_
_tional Linguistics, ACL 2017, Vancouver, Canada,_
_July 30 - August 4, Volume 1: Long Papers, pages_
1821–1831. Association for Computational Linguistics.
[Diederik P. Kingma and Jimmy Ba. 2015. Adam: A](http://arxiv.org/abs/1412.6980)
[method for stochastic optimization.](http://arxiv.org/abs/1412.6980) In 3rd Inter_national Conference on Learning Representations,_
_ICLR 2015, San Diego, CA, USA, May 7-9, 2015,_
_Conference Track Proceedings._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
[MAWPS: A math word problem repository.](https://doi.org/10.18653/v1/n16-1136) In
_NAACL HLT 2016, The 2016 Conference of the_
_North American Chapter of the Association for Com-_
_putational Linguistics: Human Language Technolo-_
_gies, San Diego California, USA, June 12-17, 2016,_
pages 1152–1157. The Association for Computational Linguistics.
Chen Liang, Jonathan Berant, Quoc V. Le, Kenneth D.
Forbus, and Ni Lao. 2017. [Neural symbolic ma-](https://doi.org/10.18653/v1/P17-1003)
[chines: Learning semantic parsers on freebase with](https://doi.org/10.18653/v1/P17-1003)
[weak supervision. In Proceedings of the 55th An-](https://doi.org/10.18653/v1/P17-1003)
_nual Meeting of the Association for Computational_
_Linguistics, ACL 2017, Vancouver, Canada, July 30 -_
_August 4, Volume 1: Long Papers, pages 23–33. As-_
sociation for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
[Roberta: A robustly optimized BERT pretraining ap-](http://arxiv.org/abs/1907.11692)
[proach. CoRR, abs/1907.11692.](http://arxiv.org/abs/1907.11692)
-----
Zhuang Liu, Degen Huang, Kaiyu Huang, Zhuang Li,
[and Jun Zhao. 2020. Finbert: A pre-trained finan-](https://doi.org/10.24963/ijcai.2020/622)
[cial language representation model for financial text](https://doi.org/10.24963/ijcai.2020/622)
[mining. In Proceedings of the Twenty-Ninth Inter-](https://doi.org/10.24963/ijcai.2020/622)
_national Joint Conference on Artificial Intelligence,_
_IJCAI 2020, pages 4513–4519. ijcai.org._
Armineh Nourbakhsh and Grace Bang. 2019. [A](http://arxiv.org/abs/1908.09156)
[framework for anomaly detection using language](http://arxiv.org/abs/1908.09156)
[modeling, and its applications to finance.](http://arxiv.org/abs/1908.09156) _CoRR,_
abs/1908.09156.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
_OpenAI blog, 1(8):9._
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
[Wei Li, and Peter J. Liu. 2020. Exploring the limits](http://jmlr.org/papers/v21/20-074.html)
[of transfer learning with a unified text-to-text trans-](http://jmlr.org/papers/v21/20-074.html)
[former. J. Mach. Learn. Res., 21:140:1–140:67.](http://jmlr.org/papers/v21/20-074.html)
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
[Percy Liang. 2016. Squad: 100, 000+ questions for](https://doi.org/10.18653/v1/d16-1264)
[machine comprehension of text. In Proceedings of](https://doi.org/10.18653/v1/d16-1264)
_the 2016 Conference on Empirical Methods in Nat-_
_ural Language Processing, EMNLP 2016, Austin,_
_Texas, USA, November 1-4, 2016, pages 2383–2392._
The Association for Computational Linguistics.
Siva Reddy, Danqi Chen, and Christopher D. Manning.
2019. [Coqa: A conversational question answer-](https://doi.org/10.1162/tacl_a_00266)
[ing challenge.](https://doi.org/10.1162/tacl_a_00266) _Trans. Assoc. Comput. Linguistics,_
7:249–266.
Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra,
Karthik Sankaranarayanan, and Sarath Chandar.
[2018. Complex sequential question answering: To-](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17181)
[wards learning to converse over linked question an-](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17181)
[swer pairs with a knowledge graph.](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17181) In Proceed_ings of the Thirty-Second AAAI Conference on Ar-_
_tificial Intelligence, (AAAI-18), the 30th innovative_
_Applications of Artificial Intelligence (IAAI-18), and_
_the 8th AAAI Symposium on Educational Advances_
_in Artificial Intelligence (EAAI-18), New Orleans,_
_Louisiana, USA, February 2-7, 2018, pages 705–_
713. AAAI Press.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Teven Le Scao, Arun
Raja, Manan Dey, M. Saiful Bari, Canwen Xu,
Urmish Thakker, Shanya Sharma, Eliza Szczechla,
Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak,
Debajyoti Datta, Jonathan Chang, Mike Tian-Jian
Jiang, Han Wang, Matteo Manica, Sheng Shen,
Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen,
Abheesht Sharma, Andrea Santilli, Thibault Févry,
Jason Alan Fries, Ryan Teehan, Stella Biderman,
Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. 2021. [Multitask prompted train-](http://arxiv.org/abs/2110.08207)
[ing enables zero-shot task generalization.](http://arxiv.org/abs/2110.08207) _CoRR,_
abs/2110.08207.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
[Kaiser, and Illia Polosukhin. 2017. Attention is all](https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
[you need. In Advances in Neural Information Pro-](https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
_cessing Systems 30: Annual Conference on Neural_
_Information Processing Systems 2017, December 4-_
_9, 2017, Long Beach, CA, USA, pages 5998–6008._
Weikang Wang, Jiajun Zhang, Qian Li, Chengqing
[Zong, and Zhifei Li. 2019. Are you for real? de-](https://doi.org/10.18653/v1/D19-1185)
[tecting identity fraud via dialogue interactions. In](https://doi.org/10.18653/v1/D19-1185)
_Proceedings of the 2019 Conference on Empirical_
_Methods in Natural Language Processing and the_
_9th International Joint Conference on Natural Lan-_
_guage Processing, EMNLP-IJCNLP 2019, Hong_
_Kong, China, November 3-7, 2019, pages 1762–_
1771. Association for Computational Linguistics.
Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang,
Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu
[Chang, Mohit Bansal, and Heng Ji. 2022. Language](https://doi.org/10.48550/arXiv.2205.10747)
[models with image descriptors are strong few-shot](https://doi.org/10.48550/arXiv.2205.10747)
[video-language learners. CoRR, abs/2205.10747.](https://doi.org/10.48550/arXiv.2205.10747)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022.
[Chain of thought prompting elicits reasoning in large](http://arxiv.org/abs/2201.11903)
[language models. CoRR, abs/2201.11903.](http://arxiv.org/abs/2201.11903)
Munazza Zaib, Wei Emma Zhang, Quan Z. Sheng,
Adnan Mahmood, and Yang Zhang. 2021. [Con-](http://arxiv.org/abs/2106.00874)
[versational question answering: A survey.](http://arxiv.org/abs/2106.00874) _CoRR,_
abs/2106.00874.
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao
Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and
[Tat-Seng Chua. 2021. TAT-QA: A question answer-](https://doi.org/10.18653/v1/2021.acl-long.254)
[ing benchmark on a hybrid of tabular and textual](https://doi.org/10.18653/v1/2021.acl-long.254)
[content in finance. In Proceedings of the 59th An-](https://doi.org/10.18653/v1/2021.acl-long.254)
_nual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Con-_
_ference on Natural Language Processing, ACL/IJC-_
_NLP 2021, (Volume 1: Long Papers), Virtual Event,_
_August 1-6, 2021, pages 3277–3287. Association for_
Computational Linguistics.
**Appendix A: Operation Definitions**
We describe all the operations in Table 7.
**Appendix B: Annotation Interface**
We use Turkle[6] to build our annotation platform,
which is a Django-based web application that can
run in a local server. Figure 7 shows our annotation
interface. We present the financial report and the
decomposed program list to the annotators and ask
them to re-write each program step into a question.
6https://github.com/hltcoe/turkle
-----
|Name|Arguments|Output|Description|
|---|---|---|---|
|add|number1, number2|number|add two numbers: number1 + number2|
|subtract|number1, number2|number|subtract two numbers: number1 −number2|
|multiply|number1, number2|number|multiply two numbers: number1 · number2|
|divide|number1, number2|number|multiply two numbers: number1/number2|
|exp|number1, number2|number|exponential: number1number2|
|greater|number1, number2|bool|comparison: number1 > number2|
Table 7: Definitions of all operations
Figure 7: Annotation interface.
**Appendix D: Prompt Details**
For the experiments on GPT-3, here is the list of
prompts we used:
**Retriever** Instruction: I am a highly intelligent
bot. You need to provide me with context and a
series of questions. I will respond yes if the context
is needed to answer the last question, otherwise, I
will respond with no.
Prompt format: context: (supporting fact candidate) questions: (the question sequence up to
current question) answer: (yes or no)
**Answer-only** Instruction: I am a highly intelligent bot. I can have conversations with the user to
answer a series of questions. Later questions may
depend on previous questions to answer. You need
to provide me with the series of questions as the
context and I will answer the last question.
Prompt format: context: (supporting facts) questions: (the question sequence up to current question) answer: (the execution result)
**Baselines** **Exe Acc** **Prog Acc**
GPT-2(medium) 59.12 57.52q
T-5(large) 58.38 56.71
FinQANet (BERT-base) 54.56 52.81
FinQANet (BERT-large) 60.67 58.99
FinQANet (RoBERTa-base) 64.90 63.15
FinQANet (RoBERTa-large) 68.32 67.87
Table 8: Validation results.
**Appendix C: Experiment Details**
For the neural symbolic approaches, the training
of all models are conducted on TITAN RTX GPUs.
All the implementation and pre-trained models are
based on the huggingface transformers library. We
use the Adam optimizer (Kingma and Ba, 2015).
The learning rate of all models varies at the level of
1e-5 (except for T-5 with 1e-4). We set the batch
size as 16. Table 8 shows the results on validation
set.
-----
**Program-original & Program-normal** Instruction: I am a highly intelligent bot. I can have
conversations with the user to answer a series of
questions. Later questions may depend on previous
questions to answer. You need to provide me with
the series of questions as the context and I will
answer the last question with a multi-step mathematical solution. We use symbols, such as #0, #1,
to denote the results of the intermediate steps.
Prompt format: context: (supporting facts) questions: (the question sequence up to current question) solution: (the program)
**CoT Prompt** Instruction: I am a highly intelligent bot. I can have conversations with the user to
answer a series of questions. Later questions may
depend on previous questions to answer. You need
to provide me with the series of questions as the
context and I will answer the last question with a
multi-step mathematical solution with step-by-step
explanations. We use symbols, such as #0, #1, to
denote the results of the intermediate steps.
Prompt format: context: (supporting facts) questions: (the question sequence up to current question) solution: (CoT explanation and the program).
For all prompts, we add the index of ’Q1’, ’Q2’,
etc., before each question in the question sequence.
Table 9 shows the results of Program-normal for
different number of exemplars. Figure 8 shows
the performances for question of the nth conversation turn. Figure 9 gives two error cases of GPT-3
Program-normal.
**Exemplar numbers** **Exe Acc** **Prog Acc**
5 43.52 36.62
10 48.85 42.14
15 49.31 44.05
20 50.30 45.10
25 49.90 46.08
Table 9: Results of Program-normal for different number of
exeamplers.
Figure 8: Performances for question of the nth conversation
turn. The second turn mostly makes references to the first turn
and GPT-3 often fails to understand it.
-----
|Col1|2004|2005|2006|2007|2008|2009|
|---|---|---|---|---|---|---|
|s&p 500 index|100.00|104.91|121.48|128.16|80.74|102.11|
|loews common stock|100.00|135.92|179.47|219.01|123.7|160.62|
|Error case (1)|Error case (2)|
|---|---|
|Supporting facts: table row(s) and text sentence(s) 2004 2005 2006 2007 2008 2009 s&p 500 index 100.00 104.91 121.48 128.16 80.74 102.11 loews common stock 100.00 135.92 179.47 219.01 123.7 160.62 Questions in the conversation: Q1: what was the price performance of the loews common stock in 2009? Q2: and by how much did it change since 2004? Q3: and only between 2007 and 2008, what was that change for the s&p 500? Gold reasoning program for Q3: subtract(80.74, 128.16) Predicted reasoning program for Q6: subtract(102.11, 100.00), divide(#0, 100.00)|Supporting facts: table row(s) and text sentence(s) … the company recorded a liability for interest and penalties of $ 77 million, $ 55 million, and $ 48 million as of december 31, 2018, 2017, and 2016, respectively … Questions in the conversation: Q1: what was the difference in the liability for interest and penalties between 2017 and 2018? Gold reasoning program for Q3: subtract(77, 55) Predicted reasoning program for Q3: subtract(55, 48)|
Figure 9: Error cases from the results of GPT-3 Program-normal.
-----
| [
"Zhiyu, Chen",
"Shiyang, Li",
"Charese, Smiley",
"William Yang, Wang",
"Sameena, Shah",
"Zhiqiang, Ma"
] | 2022-10-07T00:00:00 | EMNLP 2022 Main | true | 69 | 2 | null | http://arxiv.org/abs/2210.03849 | https://arxiv.org/abs/2210.03849 | https://www.semanticscholar.org/paper/d96997265f8146e93b4c9350f19d55e46d1317f0 |
Unveiling Transformers with LEGO: a synthetic reasoning task | We propose a synthetic reasoning task, LEGO (Learning Equality and Group Operations), that encapsulates the problem of following a chain of reasoning, and we study how the Transformer architectures learn this task. We pay special attention to data effects such as pretraining (on seemingly unrelated NLP tasks) and dataset composition (e.g., differing chain length at training and test time), as well as architectural variants such as weight-tied layers or adding convolutional components. We study how the trained models eventually succeed at the task, and in particular, we manage to understand some of the attention heads as well as how the information flows in the network. In particular, we have identified a novel \emph{association} pattern that globally attends only to identical tokens. Based on these observations we propose a hypothesis that here pretraining helps for LEGO tasks due to certain structured attention patterns, and we experimentally verify this hypothesis. We also observe that in some data regime the trained transformer finds ``shortcut" solutions to follow the chain of reasoning, which impedes the model's robustness, and moreover we propose ways to prevent it. Motivated by our findings on structured attention patterns, we propose the LEGO attention module, a drop-in replacement for vanilla attention heads. This architectural change significantly reduces Flops and maintains or even \emph{improves} the model's performance at large-scale pretraining. | The LEGO attention module is proposed, a drop-in replacement for vanilla attention heads, which significantly reduces Flops and maintains or even improves the model's performance at large-scale pretraining. | ## Unveiling Transformers with LEGO: a synthetic reasoning task
Yi Zhang Arturs Backurs S´ebastien Bubeck
Ronen Eldan Suriya Gunasekar Tal Wagner
**Microsoft Research**
_{zhayi, arturs.backurs, sebubeck, roneneldan, suriya.gunasekar, tal.wagner}@microsoft.com_
**Abstract**
We propose a synthetic reasoning task, LEGO (Learning Equality and Group Operations), that encapsulates the problem of following a chain of reasoning, and we study how the Transformer architectures
learn this task. We pay special attention to data effects such as pretraining (on seemingly unrelated NLP
tasks) and dataset composition (e.g., differing chain length at training and test time), as well as architectural variants such as weight-tied layers or adding convolutional components. We study how the
trained models eventually succeed at the task, and in particular, we manage to understand some of the
attention heads as well as how the information flows in the network. In particular, we have identified
a novel association pattern that globally attends only to identical tokens. Based on these observations
we propose a hypothesis that here pretraining helps for LEGO tasks due to certain structured attention patterns, and we experimentally verify this hypothesis. We also observe that in some data regime
the trained transformer finds “shortcut” solutions to follow the chain of reasoning, which impedes the
model’s robustness, and moreover we propose ways to prevent it. Motivated by our findings on structured
attention patterns, we propose the LEGO attention module, a drop-in replacement for vanilla attention
heads. This architectural change significantly reduces Flops and maintains or even improves the model’s
performance at large-scale pretraining.
### 1 Introduction
The deep learning revolution is about training large neural networks on vast amount of data. The first
field transformed by this methodology was computer vision, crucially leveraging the convolutional neural
network architecture [LBD[+]89, KSH12]. More recently natural language processing was revolutionized by
the Transformer architecture [VSP[+]17]. Transformers are designed to process input represented as “set of
elements” (e.g., the words in a sentence with their positional encoding). This is of course an incredibly generic
assumption, and thus Transformers can be applied to a wide variety of tasks, including vision [DBK[+]21],
reinforcement learning [CLR[+]21], and protein structure prediction [RMS[+]21, JEP[+]21] among others, or
even jointly across domains to produce generalized agents [RZP[+]22]. In fact, learning with Transformers is
rapidly becoming the norm in deep learning.
Transformer models display excellent performance on the standard criterion “training error/test error”
(e.g., for masked language prediction or translation). However, what makes them particularly noteworthy, is
that large-scale Transformer models seem to exhibit unexpected emergent behaviors, such as basic reasoning ability [TDFH[+]22, BMR[+]20, CND[+]22, DHD[+]21, RBC[+]21, HBM[+]22, SPN[+]22, ZRG[+]22, WWS[+]22,
-----
NAGA[+]22], excellent fine-tuning performance [HSW[+]22, TDFH[+]22, NAGA[+]22, RBC[+]21, PHZ[+]22], or
zero-shot learning [BMR[+]20, CND[+]22, DHD[+]21, RBC[+]21, HBM[+]22, SPN[+]22, ZRG[+]22]. Currently, there
is a remarkable community effort towards at-scale experimental investigation of Transformers, essentially
trying to find out what such models can do when they become large enough and are trained on large/diverse
enough datasets. The successes are striking and capture the imagination [BMR[+]20, RDN[+]22]. Yet, for all
of these wonders, there is very little understanding of how these models learn, or in fact what do they learn.
Answering such questions in the at-scale experiments is particularly challenging, as one has little control over
the data when hundreds of billions of tokens are harvested from various sources. In this paper, we propose to
take a step back, and try to understand how learning occurs and what is being learned in a more controlled
setting that captures important aspects of “reasoning”.
The benefit of such a controlled setting is that we can try to understand some of the most pressing
questions in learning with Transformers, particularly around (i) the architecture and (ii) the importance of
training data. For (i) we probe the role of multiple heads and depth, and we show that we can successfully
understand them in our controlled setting. For (ii) we investigate how much the dataset composition matters,
as well as how pretraining on merely vaguely related tasks makes fine-tuning successful. In turn, these
insights can guide our thinking for large-scale experiments, and we give some of the lessons learned below.
In particular, our insights crystallize into an architectural change to BERT for faster inference with matching
or even better performance (Section 5).
**1.1** **LEGO: A synthetic reasoning task**
Core components of reasoning include the ability to associate concepts, and to manipulate them. We propose a simple task that captures these two aspects, which we call LEGO (Learning Equality and Group
Operations). In LEGO, the input describes a sequence of variable assignments as well as operations on these
variables by a fixed (mathematical) group. One needs to be able to deal with both long-range assignments
(the same variable appearing in different parts of the input should be viewed as a being equal to same
quantity), as well as short-range operations (describing what group element is applied to which variable).
A key parameter of an input sequence will be its length, which is proportional to the number of sequential
reasoning steps one has to do in order to resolve the value of each variable. We will mostly train with a
fixed sequence length (say 12). We often provide supervision only on part of the sequence (say the first 6
variables). We do so in order to test the generalization capabilities from smaller length sequences to longer
length sequences without introducing potential errors due to the positional encoding in Transformers.
**1.2** **Some takeaways**
In LEGO, we are interested in both classical generalization (i.e., training and test distribution are the same)
and out-of-distribution generalization. For the latter we focus on distribution shifts that vary the length of
the chain of reasoning, and thus we refer to this type of generalization as length extrapolation. Specifically,
the setting for length extrapolation is to train with supervision on shorter sequence lengths (e.g., supervision
on only the first 6 variables) and test on a long sequences (e.g., accuracy computed on 12 variables). A
summary of our empirical observations is as follows:
1. First, classical generalization happens reliably for all architectures and data regimes.
2. More interestingly, length extrapolation seems to depend on architectural/data composition choices.
Specifically, BERT-like models without special data preparation do not extrapolate to longer sequences,
while other models like ALBERT, or BERT with carefully selected data (such as diverse sequence lengths,
or pre-trained BERT) do extrapolate.
3. The extrapolating models all seem to evolve attention heads dedicated to either association (long-range
identity matching) or manipulation (short-range operations). We provide evidence that pre-trained BERT
-----
(which is pre-trained on a seemingly unrelated dataset) generalizes because it has learned such heads.
4. The non-extrapolating models seem to solve the classical generalization problem using a certain shortcutlike solution, whereby using the specificity of the group operations they are able to jump to the end of
the chain of reasoning, and then complete the rest of the variables by following the reasoning both from
the start and the end of the chain.
We interpret our findings as follows:
(i) Classical generalization can be a deceptive metric, as there might be unexpected ways to solve the
problem. This is famously related to the issue of embedding machine learning systems with common
_sense reasoning. Namely, we hope that when an ML system solves a task, it does so in “the way humans_
do it”, but of course, nothing guarantees that this will happen. Our findings are consistent with the current
methodology of increasing the diversity of the training data, which seems crucial for generalization.
(ii) ALBERT-like models, where a layer is repeated several times, seem to be an ideal structure for problems
that could be described algorithmically as a “for loop” (as is the case with following a chain of reasoning).
Indeed we find that ALBERT extrapolates in data regimes where BERT does not, clearly separating these
two architectures.
(iii) The success of pretraining/fine-tuning in vastly different tasks might actually come from a “simple” better
initialization, rather than complex knowledge encoded during pre-training.
(iv) The interplay between short-range (close-by information in a sentence) and long-range (the same concept
appearing in different places in the sentence) is relevant more broadly than in our synthetic task. We
observe that the networks effectively learn to deal with short-range/long-range information by implementing specific attention patterns. This motivates us to study a new LEGO attention architecture, and
we show it matches or even outperforms its baseline on the large-scale pretraining but with significantly
less computational cost.
**1.3** **Related works**
In [ZRKB21], the PVR (Pointer Value Retrieval) task is introduced, with a similar high-level goal to ours
in introducing the LEGO task, namely to study how neural networks learn to reason in a controlled setting.
In a PVR task, part of the input indicates another part of the input where a function of potentially varying
complexity has to be computed. Like us, they use distribution shift to investigate how various network
architectures learn this task, and they observe that networks can learn the task at hand (“classical generalization”) yet fail to extrapolate to mild distribution shift. They then ask the following questions: “Are
there architectural changes that can enforce better priors and withstand distribution shift? Can novel learning objectives prevent these adversarial correlations? Progress on these questions holds promise for greater
robustness.”
Our study attacks these questions directly in the context of the LEGO task (e.g., ALBERT versus BERT,
and training set composition investigations), and our preliminary results indicate that this is indeed a fruitful
direction to obtain better models in some aspects (e.g., more interpretable). Other examples of recent synthetic benchmark with a similar philosophy include SCAN (Simplified version of the CommAI Navigation)
[LB18], CFQ (Compositional Freebase Questions) [KSS[+]20], LIME [WRL[+]21], PCFG SET [HDMB20], and
BONGARD-LOGO [NYM[+]20]. In SCAN for example, one has to “translate” a command of the form “turn
left twice and jump” into a sequence of actions “LTURN LTURN JUMP” (see [PBBG22] for more recent
progress on this dataset). Again, similarly to the PVR tasks, these works focus on understanding generalization (in these cases, compositional generalization). Another related line of works is on studying Transformers
to recognize various formal languages, see e.g., [BAG20, YPPN21]. A contemporary work [CIS21] proposed
modifications to Transformer architectures to achieve significantly better length extrapolation (other works
-----
+ + + +
1 _a_ _−_ _b_ _e_ _f_ _−_ _d_ _c_
Figure 1: The graph representation of the sentence a = +1; b = −a; e = +b; d = −f ; c = +d; f = +e
studying this important class of distribution shifts include [AWA[+]22]). As far as we know, none of these
works try to probe the inner workings of the networks in the same depth as we do here. On the other hand,
networks trained on real data are being extensively scrutinized, see for example [RKR20] where they try
to understand some of the attention heads of BERT (see also [SGSB20a]). However, making sense of these
real-data-trained networks is a daunting task, and a key contribution of ours is to show in a limited setting
one can obtain a clearer picture of what Transformers learn.
The LEGO task is also naturally related to the growing literature on testing mathematical/coding abilities
of Transformers (e.g., [SGSB20b]), specifically the simpler tasks of checking the correctness of a proof (or
simplifying one, such as in [AAG21] which studies simplification of polynomials), or executing code for a
given input [CST21]. It would be interesting to see if some of the insights we derive in the present paper
apply to currently challenging mathematical tasks such as MATH [HBK[+]21] and IsarStep [LYWP21].
There are an abundance of studies on attention heads that have identified the importance of local,
convolutional, attention patterns [VTM[+]19, CNM19, CKLM19, RST20, YSI20]. However, to the best of our
knowledge, we are the first to demonstrate the importance of the association pattern that globally attends
to identical tokens, thanks to the LEGO task.
### 2 Learning equality and group operations (LEGO)
We propose the following synthetic task, which we call LEGO. Let G be a finite (semi)group acting on a
finite set X, and denote g(x) for the action of g ∈ _G on x ∈_ _X. We define a formal language using the_
symbols from G and X as well as symbols from a finite alphabet A which we refer to as the variables. A
sentence in our formal language is made of clauses separated by a semi-colon. A clause is of the form a = gx
with a ∈ _A, g ∈_ _G and either x ∈_ _X or x ∈_ _A. If x ∈_ _X, such a clause means that the variable a is assigned_
the element g(x) ∈ _X. On the other hand if x ∈_ _A and the variable x was assigned an element y ∈_ _X_
through another clause (or chain of clauses) in the sentence, then the clause a = gx assigns variable a to
the element g(y) ∈ _X. The task’s goal is to take in input a sentence with a fixed number n of clauses, given_
in an arbitrary order, and to output the assigned element to each variable that appear in the sentence (the
formal language will have a further restriction that ensures that each variable is assigned one and only one
element).
We can view a sentence as a directed graph on the vertex set X ∪ _A with labelled edges as follows: a_
clause a = gx corresponds to a directed edge from the vertex x to the vertex a, and the edge is labelled
with g. We restrict our attention to sentences corresponding to a line graph directed away from some fixed
root vertex r ∈ _X, and whose non-root vertices are all in A, see Figure 1 for an example. In particular such_
sentences are “consistent”, meaning that a variable is assigned a unique element (the assignment is obtained
by simply “following the chain”).
**Task 1. The most basic instantiation of LEGO is when G is the unique group of 2 elements acting on**
a set X also of 2 elements, that is G = {+, −} and X = {1, −1}. Our sentences thus consists of n clauses
of the form ai = _ai_ 1, where ai _A for i = 1, 2, . . ., n and a0 = 1 (we fix r = 1). Note that in this case_
_±_ _−_ _∈_
our formal language has well over a billion unique valid sentences when n ≥ 10. Example of a sentence with
_n = 6 is (see Figure 1 for the graph depiction): a = +1; b = −a; e = +b; d = −f_ ; c = +d; f = +e. Our
task’s goal is to report the elements or values from X assigned to the variables appearing in the sentence.
In the above example, assignments for variables a, b, c, d, e, f are 1, −1, −1, −1, 1, 1.
**Task 2. One can think of Task 1 as the case of LEGO for the permutation group on N = 2 elements**
(acting on itself). Our second task will correspond to N = 3, which is qualitatively different since the
permutation group on 3 elements is non-abelian.
-----
We will focus on Task 1 in the main paper and include in Appendix E experiments on this Task 2. Our
training and test data for the task consists of n length chains as described above with the order of clauses in
the sentence randomized. A sample input sentence to a transformer looks like [BOS] j=-f; f=-b; y=+t;
```
o=+e; d=+y; v=+d; h=-o; b=-i; i=+1; t=+l; e=-j; l=-h; [EOS]. See appendix for further data gen
```
eration details.
### 3 Transformers for LEGO
+1 or -1? +1 or -1? ... +1 or -1?
...
**[BOS]** **d** **=** **-** **c** **;** **b** ... **;** **a** **=** **+** **1** **;** **[EOS]**
...
```
Transformer
```
...
**[BOS]** **d** **=** **-** **c** **;** **b** ... **;** **a** **=** **+** **1** **;** **[EOS]**
```
classification head
representations
model
input tokens
```
**[BOS]**
**d**
**=**
**-**
**c**
**;**
**b**
**;**
**a**
**=**
**+**
**1**
**;**
**[EOS]**
Figure 2: Illustration of a transformer model applied to LEGO Task 1 on input sentence d=-c; b=-a; c=+b; a=+1;.
We apply a linear classification head to the output representations of each clause’s first token to generate predictions
for the variables assignment.
We apply transformer models in the token classification pipeline to predict the assignments of the variables
in the input sentence, depicted in Figure 2. To evaluate the out-of-distribution generalization (referred to
simply as generalization), we introduce the notation of ntr _n, such that during training, supervision is_
_≤_
provided only on the first ntr clauses (first in the graph representation of the input sentence). We mainly
focus on BERT [DCLT18] and ALBERT [LCG[+]19] architectures. These two models are representative large
transformer architectures for NLP tasks, and we observe they exhibit intriguing behavior difference on our
tasks which we will detail in Section 4. See appendix for training hyper-parameters and dataset construction
details.
In Figure 3, we report initial results on LEGO with n = 12 and ntr = 6, 12. Both BERT and ALBERT
are able to achieve good classical generalization, while only ALBERT appears to generalize even to slightly
longer sequence length. We observe similar behavior across different lengths of inputs too. This suggests
that classical generalization might be a deceptive metric to evaluate learning of true logic/reasoning tasks.
Motivated by these initial results, in the next section we focus on breaking down the learning dynamics of
BERT and ALBERT for the LEGO task towards carefully understanding their strengths and weaknesses.
### 4 Unveiling Transformers with LEGO
**4.1** **BERT vs. ALBERT: Iterative reasoning in iterative architectures**
A salient feature of many reasoning tasks is an iterative component, meaning they can (or must) be solved by
sequentially repeating certain operations. In this section, we use LEGO to study and compare Transformer
architectures through the lens of iterative reasoning.
-----
Rand-Init ALBERT variable #
|100%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
|50%||||||||
10[0] 10[1] 10[2] 11
epoch
variable #
0
1
2
3
4
5
6
7
8
9
10
11
epoch
Rand-Init BERT
10[0] 10[1] 10[2]
epoch
epoch
100%
(a)
|Rand-Init BERT Rand-Init ALBERT 100% 90% 80% acc (b) 70% test 60% 50% 100 101 102 100 101 102|variable # 0 1 2 3 4 5 6 7 8 9 10 11|Col3|
|---|---|---|
Figure 3: Solving LEGO (Task 1) using BERT and ALBERT, trained from random initialization. Each curve
corresponds to the test accuracy of a single variable appearing in the sentence over the course of training. The variable
numbers in the legend are their position in the reasoning chain (or graph representation) of the input sentence, rather
than the position in the sentence itself. For example, on the input sentence: b=-a; d=-c; c=+b; a=+1;, variable #0
is a, #1 is b, #2 is c, and #3 is d. Top a): models are trained to fit all variables (n = 12, ntr = 12). Bottom b):
models are trained to fit the first 6 variables but test on all 12 variables (n = 12, ntr = 6). Dashed curves represent
variables not supervised during training.
A natural solution to LEGO—and arguably the go-to solution for a human—is to implement a “forloop”, where each iteration resolves one step in the reasoning chain. The iteration could look for the next
unresolved variable token whose value could be resolved in one step. Iterative Transformer architectures such
as ALBERT and Universal Transformers [DGV[+]18], where the weights are shared across different layers,
inherently implement a for-loop with a number of iterations equal to the number of layers. If the model
manages to learn to implement one such iteration during training, the network would immediately be capable
of performing length extrapolation. If this indeed occurs, it would point to a clear advantage of ALBERT
over BERT in our setting. This leads to the following questions.
**Q1. Do iterative architectures indeed exhibit better length extrapolation?**
The bottom plots of Figure 3 display the length extrapolation result for BERT and for ALBERT. They
show the clear advantage of recurrence: While the non-iterative BERT achieves only somewhat better-thanrandom accuracy for one variable (#6) beyond the ones accounted for during training (#0- -#5), the iterative
ALBERT reaches near-perfect accuracy on two additional variables (#6 and #7), and nontrivial accuracy on
the third (#8). These results clearly support that iterative architectures do generalize better in the iterative
LEGO reasoning task.
**Q2. Does the ALBERT architecture actually implement the for-loop?**
To a lesser extent, Figure 3 also hints at a positive answer to Q2. Observe that ALBERT exhibits length
extrapolation to variable #6 immediately (in terms of epochs) as soon as it fits the training variables (#0
-----
0 1 2 3 4 5 6 7 8 9 10 11 0 1 2 3 4 5 6 7 8 9 10 11
**#** **#**
Figure 4: Visualization of information percolation within the fine-tuned models. The color indicates the test accuracy
of the probing classifier at each layer. Brighter is higher. We observe ALBERT’s information percolation is linear
than BERT’s, which implies ALBERT is biased towards learning a for-loop.
– #5), whereas for BERT, the corresponding plot (#6) climbs gradually even after the training variables
are predicted perfectly. This suggests that once it manages to learn the operations required for one step of
reasoning, it can immediately implement those operations over a few more iterations not required in training.
In order to gain stronger evidence, we measure the dependence between the location of a variable token
in the chain and the layer in which its value is typically resolved. To this end, given a trained model,
we train one linear classifier per layer which predicts the value of a variable token based only on its token
representation at the corresponding layer (without using other information), while keeping the original model
unchanged. This allows us to gauge the rate of information percolation along the reasoning chain in terms
of layers per reasoning step. If the model indeed implements a for-loop in its forward pass, one expects a
linear relationship between the number of layers and the number of reasoning steps already completed. We
visualize in Figure 4 the test accuracy of prediction as a function of the layer in the network and depth in
the chain. While not perfectly linear, the relation clearly looks closer to linear in ALBERT, suggesting that
the ALBERT model has an inductive bias towards learning to implement the “natural” for-loop with its
forward pass.
**Q3. How can we incentivize models to learn iterative solutions?**
We attempt to incentivize the model to implement the “natural” for-loop solution. We rely on the observation
that if each iteration of the for-loop simply percolates the information one more step (assigning a value to
the next variable), then adding more layers with the same weights should not affect the output, and in fact,
one should be able to read out the output of the calculation from any layer of the neural network, as long as
its depth exceeds the length of the chain. With this observation in mind, we train a ALBERT model with
stochastic depth [HSL[+]16]. We uniformly sample depth between 6 and 12 per batch during training while
fixing it at 12 during test. Figure 5 shows a clear improvement in generalization to longer lengths using
stochastic depth.
**4.2** **Rand-Init vs. Pretrained: Structural advantages from pretraining**
Pretraining large models has emerged as a prominent and highly successful paradigm in large-scale deep
learning. It advocates first training the model on a large dataset to perform a generic task, followed by
task-specific fine-tuning on the task at hand. Our goal here is to use LEGO as a testing ground for this
-----
Rand-Init ALBERT
w/ stochastic depth variable #
|100%|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
|100% 90% 80% acc 70% test 60% 50%|||||
||||||
||||||
||||||
||||||
||||||
10[0] 10[1] 10[2]
epoch
Rand-Init ALBERT
10[0] 10[1] 10[2]
epoch
100%
variable #
0
1
2
3
4
5
6
7
8
9
10
11
Figure 5: Generalization of ALBERT trained with stochastic depth. The stochastic depth improves the length
extrapolation to longer sequence lengths.
Rand-Init BERT
|100%|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60% 50%|||||||
||||||||
||||||||
||||||||
||||||||
||||||||
10[0] 10[1] 10[2]
epoch
Pre-Trained BERT
10[0] 10[1] 10[2]
epoch
Mimicking BERT variable #
10[0] 10[1] 10[2]
epoch
100%
variable #
0
1
2
3
4
5
6
7
8
9
10
11
Figure 6: Pretrained BERT exhibits significant performance advantages over its Rand-Init counterpart, while the
mimicking procedure (a simple initialization scheme we describe below) heads closes the gap.
paradigm. To this end, we compare (a) training the BERT architecture for LEGO from random initializations
to (b) fine-tuning the standard pre-trained BERT model to solve LEGO. Figure 6 (left and center plots)
shows that pretraining helps generalization in LEGO dramatically: the pre-trained model generalizes to
unseen sequence lengths (the dashed plots) much better, and within a far smaller number of epochs, than
the randomly initialized model.
**4.2.1** **Why does pretraining help in LEGO?**
One simple explanation is that pre-trained BERT is already aware of the semantics of tokens like ‘=’ or
‘-’. We have easily ruled out this possibility, by replacing those tokens with arbitrary ones that do not
encompass the same semantics; this does not affect the performance of pre-trained BERT. A more intriguing
explanation pertains to the attention mechanism itself. At its basis, LEGO requires two fundamental types
of information transfer:
- Association: encoding long-range dependencies that transfer a value between two occurrences of the
same variable. For example, if the input contains the two clauses “a = +1” and “b = −a” (with arbitrary
separation between them), the architecture must associate the two occurrences of the variable a in order
to correctly set b to −1.
- Manipulation: encoding short-range dependencies of transferring a value from the right-hand to the
left-hand side of the clause. For example, to successfully process the clause “b = −a”, the architecture
must associate these particular occurrences of a and b with each other, in order to transfer the value of a
(after applying to it the group element −1) into b.
Association corresponds to a purely global attention pattern, completely reliant on the identity or content
-----
of the tokens and oblivious to their positions in the input sequence. Manipulation, in contrast, corresponds
to a purely local attention pattern, where nearby positions attend to each other.
Layer 0 Head 3 Layer 1 Head 11
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] 1.00.80.60.40.20.0
Figure 7: Visualization of two representative attention maps from a pre-trained BERT model not yet fine-tuned on
LEGO. A complete visualization of all attention patterns of the pre-trained BERT is in Appendix F. On the LEGO
input sequence, certain heads implement local, convolution-like manipulation operators (left), while some others
implement global, long-range association operators (right). Note that the sample input sequence is presented in the
reasoning chain order for visualization purposes only.
It is natural to ask whether they are indeed manifested in the pre-trained model’s attention heads in
practice. Indeed, Fig. 7 shows two exemplar attention heads of pre-trained BERT on an input LEGO
sequence without any fine-tuning. The right head clearly depicts association: each token attends to all other
occurrences of the same token in the input sequence. This motivates us to make the following hypothesis:
**_the advantage of pre-trained models on LEGO can be largely attributed to the association and_**
**_manipulation heads learned during pretraining._**
Note that merely the existence of the heads does not fully validate the hypothesis yet. To rule out other
factors, we carefully design controlled experiments to test this hypothesis in the section below.
**4.2.2** **Verifying the hypothesis with Mimicking**
To test this hypothesis, we conduct the following mimicking experiments.
**Mimicking BERT** We ‘initialize’ certain attention heads to perform association and manipulation,
without access to pretraining data. We achieve this by specifying the target attention matrices (one for
association and one for manipulation), and training the model on random data to minimize a “mimicking
loss” that measures how well the actual attention matrices at every layer match the target matrices. The
precise mimicking loss and training protocol are specified in the Appendix B.3. The rightmost plot in
Figure 6 shows that BERT with mimicking initialization attains significant advantage in generalization over
randomly initialized BERT, despite not being pre-trained on any real data (and thus not having learned to
“reason”). This confirms that much of the advantage of pre-trained BERT stems from having learned these
information transfer patterns.
-----
**4.3** **Shortcut solutions and their effect on generalization**
As discussed in Section 4.1, a natural solution to LEGO is to resolve variables iteratively by the order of
their depth in the chain. Surprisingly, we find that the Rand-Init BERT and ALBERT models first learn a
“shortcut” solution: they immediately resolve the last variable in the reasoning chain, perhaps by counting
the total number of minus signs. Indeed, the last variable can be easily identified as it appears only once
whereas every other variable appears twice, and its value is fully determined by the parity of the number of
minus signs. This behavior is observed in Figure 3a: the randomly initialized models are trained to fit all 12
variables: the last variable (#11, indicated by the brightest green curves) improves earlier than almost all
other ones.
This behavior may be related to the well-observed phenomenon of spurious features: a model succeeds in
training not relying on any actual features of cows and circumventing the intended solution [MPL19, SHL20,
GSL[+]18, NNSN21].
We use LEGO as a case study of shortcut solutions and their effect on generalization. Instead of training
the model to fit the first six variables (as in bottom Figure 3 in Appendix), we train it to fit the first five
(#0–#4) and the last variable (#11). This allows us to measure length extrapolation (to #5–#10) in a
setting where models can learn the shortcut. The results show significantly degraded performance, implying
that shortcut solutions impede generalization. We then study ways to prevent models from learning them,
by pretraining and mimicking. The full section appears in Appendix A.
### 5 LEGO Attention: faster and better
Our analysis in Section 4.2 reveals that the advantage of the pre-trained BERT model on LEGO originates
from two specific types of attention structures emerging from pre-training — the association and manipulation
patterns. A quick examination of all the attention heads depicted in Appendix F suggests that there is one
more clearly identifiable attention pattern: broadcasting on the [CLS] token or the [SEP] token (sometimes
both). Namely, it ‘broadcasts’ the value inside the special tokens to the others. Even though [CLS], [SEP]
play no role on LEGO per se, they are vital to the pretraining objective as well as many downstream tasks.
Thus the broadcasting attention pattern is presumably important for many real-life NLP tasks beyond
LEGO. Association, manipulation, and broadcasting consist of a considerable portion of the pre-trained
BERT’s attention heads, and they are so structured that we can in fact LEGO v0 them efficiently.
Linear
Concat
**(a)** **(b)** **(c)**
Linear
Attention Hardcode Attention
Depth-wise Conv
ReLU
Linear compute hardcoded
patterns
hidden state
raw input tokens
Figure 8: Our proposed LEGO attention consists of 3 pathways. BERT has pathway (b) only; the LEGO v0 attention
module has (a) and (c); the LEGO v1 attention has (a), (b), and (c). See Appendix C.
10
-----
|1e11|Attention L|Layer Infe|erence Flops|Col5|Col6|
|---|---|---|---|---|---|
||||standard Attention LEGO v1 Attention|||
|||||||
|||LEGO v0 Attention|LEGO v0 Attention|||
|||||||
|||||||
|||||||
|||||||
|||||||
1.2 1e11 Attention Layer Inference Flops 1.80 Model Performance
standard Attention standard Attention
1.75
1.0 LEGO v1 Attention LEGO v1 Attention
LEGO v0 Attention 1.70 LEGO v0 Attention
0.8 1.65
Flops0.6 1.60
1.55
0.4
Validation Loss
1.50
0.2
1.45
0.0 1.40
Figure 9: Top) Comparison of inference Flops and model size. Flops are measured on a batch of 64 sequences of 512
tokens.
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|RT|Col9|
|---|---|---|---|---|---|---|---|---|
||||||vanilla BE||RT||
||||||||||
||||||hardcode hybrid||Attn||
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
vanilla BERT 8 vanilla BERT
10 hardcode Attn hardcode Attn
7
hybrid hybrid
8 6
5
6
4
training loss
4 validation loss
3
2 2
0 2000 4000 6000 8000 0 2000 4000 6000 8000
step step
Figure 10: Training and validation performance on BERT pertaining task (Masked Language Modelling+Next Sentence Prediction). As a standard, the training sequence length increases from 128 to 512 around the 7k-th step,
where the BERT training loss exhibits a sudden bump in response, while the LEGO v0/v1 models exhibit remarkable
resilience. The LEGO v1 model learns faster and (slightly) outperforms BERT in validation.
**LEGO Attention:** For the association, manipulation, and broadcasting heads, we can efficiently construct the sparse attention matrix based on the input token IDs only, without learning Q and K or the
expensive attention probability computation. For manipulation maps, due to their intrinsic locality, we
decide to implement them directly with temporal convolutions (along the time dimension). For the other
global maps, given a raw input sequence of T tokens, u1, u2, . . ., uT ∈ N, we manually construct the association and broadcasting maps Aasso, Acls, Asep ∈ R[T][ ×][T] such that (Aasso)ij = 1 [ui = uj], (Acls)ij =
**1 [uj = [CLS]], (Asep)ij = 1 [uj = [SEP]] where 1 [·] is the indicator function which outputs 1 if the argu-**
ment is true and 0 otherwise. In the end, we normalize them to have row-wise unit ℓ1 norm. Notably, the
latter three patterns require no training (except for a value map for each layer) and are shared across all
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|standard Attention|Col9|Col10|Col11|Col12|
|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||
|LEGO v0 Attention|||||||LEGO v0 Attention|||||
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
|Flops 8 7 6 5 4 3 2 0 ining h inc se, w perfor and ken I ulatio tions u, . 2 T su ind em t e m|||are measured||on a batch o||||f 64 sequence||s of 5|
|||||||||||||
|||||||||||||
|||||||||||vanilla BE hardcode|RT Attn|
|||||||||||||
|||||||||||hybrid||
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
||||2000 task (Maske reases from 1 hile the LEGO ms BERT in broadcastin Ds only, wit n maps, du (along the . ., u N, T ∈ ch that (A a icator functi o have row- ap for each l||4000 step d Language 28 to 512 ar v0/v1 mode validation. g heads, we hout learnin e to their in time dimensi we manuall sso) = 1 [u ij on which ou wise unit ℓ 1 ayer) and ar||||6000 80 Modelling+Ne ound the 7k- ls exhibit rem can efficient g Q and K trinsic local on). For th y construct = u j], (A i c tputs 1 if th norm. Notab e shared ac||00 xt S th st arka ly co or t ity, e oth the ) ls ij e arg ly, t ross|
layers.
On the standard BERT pertaining benchmark, we compare the following three models: BERT-base
model, LEGO v0 and v1 models. We use convolutional kernel size 21 for the latter two. In Figure 10, we
show that the LEGO v0 model learns fast in the beginning but falls short later on. However, the LEGO
v1 model not only reduces model size and accelerates inference, but also renders models that are extremely
competitive with the base model in terms of the final performance of large-scale pertaining. We follow
precisely the training pipeline and hyperparameters of [DCLT18]. See Appendix C for architecture details
of the LEGO v0/v1 models.
We observe that the LEGO v0 model learns faster but gradually falls short, while the LEGO v1 model
achieves the best of both worlds: it learns faster at the beginning and achieves even (slightly) lower validation
loss at the end. The LEGO v1 model’s validation loss curve appears to be a lower envelope of the other two.
The BERT/LEGO v0/v1 models achieve 1.49/1.66/1.47 final pertaining validation loss and 88.2/82.5/88.1
11
-----
Dev F1 score on SQuAD v1.1 [RZLL16]. We leave comprehensive evaluations for future work.
### 6 Conclusion
In this work, we study Transformers by constructing LEGO, a controllable synthetic logical reasoning task.
With LEGO, we have gained insights into their inductive bias, the role of pertaining, etc. Based on these
insights, we proposed the LEGO attention mechanism which both accelerates inference and leads to comparable or even better performance. There are many important attention heads beyond just manipulation
and association, and their roles remain to be discovered. We believe LEGO will continue to deepen our understanding on Transformers’ inner working and to inspire better algorithms/architectures for tasks beyond
LEGO.
12
-----
### References
[AAG21] Vishesh Agarwal, Somak Aditya, and Navin Goyal. Analyzing the nuances of transformers’
polynomial simplification abilities. arXiv preprint arXiv:2104.14095, 2021.
[AWA[+]22] Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Ramasesh,
Ambrose Slone, Guy Gur-Ari, Ethan Dyer, and Behnam Neyshabur. Exploring length generalization in large language models. arXiv preprint arXiv:2207.04901, 2022.
[BAG20] Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. On the Ability and Limitations of Transformers to Recognize Formal Languages. In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), pages 7096–7116, 2020._
[BMR[+]20] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal,
Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel
Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin,
Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford,
Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in
_Neural Information Processing Systems, volume 33, pages 1877–1901, 2020._
[CIS21] R´obert Csord´as, Kazuki Irie, and J¨urgen Schmidhuber. The neural data router: Adaptive control flow in transformers improves systematic generalization. arXiv preprint arXiv:2110.07732,
2021.
[CKLM19] Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT
look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop Black_boxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy,_
August 2019. Association for Computational Linguistics.
[CLR[+]21] Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter
Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning
via sequence modeling. Advances in neural information processing systems, 34, 2021.
[CND[+]22] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[CNM19] Gon¸calo M. Correia, Vlad Niculae, and Andr´e F. T. Martins. Adaptively sparse transformers.
In EMNLP, 2019.
[CST21] Xinyun Chen, Dawn Song, and Yuandong Tian. Latent execution for neural program synthesis
beyond domain-specific languages. Advances in Neural Information Processing Systems, 34,
2021.
[DBK[+]21] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,
Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly,
Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image
recognition at scale. In International Conference on Learning Representations, 2021.
[DCLT18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of
deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805,
2018.
[DGV[+]18] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819, 2018.
13
-----
[DHD[+]21] Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu,
Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. Glam: Efficient scaling of
language models with mixture-of-experts. arXiv preprint arXiv:2112.06905, 2021.
[GSL[+]18] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman,
and Noah A Smith. Annotation artifacts in natural language inference data. arXiv preprint
_arXiv:1803.02324, 2018._
[HBK[+]21] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset,
2021.
[HBM[+]22] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al.
Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
[HDMB20] Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed:
How do neural networks generalise? _Journal of Artificial Intelligence Research, 67:757–795,_
2020.
[HSL[+]16] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with
stochastic depth. In European conference on computer vision, pages 646–661, 2016.
[HSW[+]22] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,
Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In Inter_national Conference on Learning Representations, 2022._
[JEP[+]21] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Z´[ˇ]ıdek, Anna Potapenko, et al.
Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583–589, 2021.
[KSH12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, volume 25,
2012.
[KSS[+]20] Daniel Keysers, Nathanael Sch¨arli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov,
Xiao Wang, Marc van Zee, and Olivier Bousquet. Measuring compositional generalization: A
comprehensive method on realistic data. In International Conference on Learning Representa_tions, 2020._
[LB18] Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional
skills of sequence-to-sequence recurrent networks. In International conference on machine learn_ing, pages 2873–2882. PMLR, 2018._
[LBD[+]89] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D.
Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation,
1(4):541–551, 1989.
[LCG[+]19] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu
Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv
_preprint arXiv:1909.11942, 2019._
[LYWP21] Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Isarstep: a benchmark for high-level
mathematical reasoning. In International Conference on Learning Representations, 2021.
14
-----
[MPL19] R Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing
syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007, 2019.
[NAGA[+]22] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin,
David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with
language models. In Deep Learning for Code Workshop, 2022.
[NNSN21] Thao Nguyen, Vaishnavh Nagarajan, Hanie Sedghi, and Behnam Neyshabur. Avoiding spurious
correlations: Bridging theory and practice. In NeurIPS 2021 Workshop on Distribution Shifts:
_Connecting Methods and Applications, 2021._
[NYM[+]20] Weili Nie, Zhiding Yu, Lei Mao, Ankit B Patel, Yuke Zhu, and Anima Anandkumar. Bongardlogo: A new benchmark for human-level concept learning and reasoning. Advances in Neural
_Information Processing Systems, 33:16468–16480, 2020._
[PBBG22] Arkil Patel, Satwik Bhattamishra, Phil Blunsom, and Navin Goyal. Revisiting the compositional
generalization abilities of neural sequence models. arXiv preprint arXiv:2203.07402, 2022.
[PHZ[+]22] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin,
and Ilya Sutskever. Formal mathematics statement curriculum learning. _arXiv preprint_
_arXiv:2202.01344, 2022._
[RBC[+]21] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song,
John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language
models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446,
2021.
[RDN[+]22] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical
text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
[RKR20] Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in bertology: What we know
about how bert works. Transactions of the Association for Computational Linguistics, 8:842–
866, 2020.
[RMS[+]21] Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi
Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, et al. Biological structure and function emerge
from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National
_Academy of Sciences, 118(15), 2021._
[RST20] Alessandro Raganato, Yves Scherrer, and J¨org Tiedemann. Fixed encoder self-attention patterns in transformer-based machine translation. arXiv preprint arXiv:2002.10260, 2020.
[RZLL16] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical
_Methods in Natural Language Processing, pages 2383–2392, Austin, Texas, November 2016._
Association for Computational Linguistics.
[RZP[+]22] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov,
Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al.
A generalist agent. arXiv preprint arXiv:2205.06175, 2022.
[SGSB20a] Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, and Mohit Bansal. Prover: Proof generation for interpretable reasoning over rules. arXiv preprint arXiv:2010.02830, 2020.
15
-----
[SGSB20b] Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, and Mohit Bansal. Prover: Proof generation for interpretable reasoning over rules. In Bonnie Webber, Trevor Cohn, Yulan He, and
Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Lan_guage Processing, EMNLP, pages 122–136. Association for Computational Linguistics, 2020._
[SHL20] Megha Srivastava, Tatsunori Hashimoto, and Percy Liang. Robustness to spurious correlations
via human annotations. In International Conference on Machine Learning, pages 9109–9119.
PMLR, 2020.
[SPN[+]22] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari,
Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. Using
deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language
model. arXiv preprint arXiv:2201.11990, 2022.
[TDFH[+]22] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for
dialog applications. arXiv preprint arXiv:2201.08239, 2022.
[VSP[+]17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
L ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Infor_mation Processing Systems, volume 30, 2017._
[VTM[+]19] Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head
self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint
_arXiv:1905.09418, 2019._
[WDS[+]20] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony
Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer,
Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain
Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-theart natural language processing. In Proceedings of the 2020 Conference on Empirical Methods
_in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020._
Association for Computational Linguistics.
[WRL[+]21] Yuhuai Wu, Markus N Rabe, Wenda Li, Jimmy Ba, Roger B Grosse, and Christian Szegedy.
Lime: Learning inductive bias for primitives of mathematical reasoning. In International Con_ference on Machine Learning, pages 11251–11262. PMLR, 2021._
[WWS[+]22] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny
Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint
_arXiv:2201.11903, 2022._
[YPPN21] Shunyu Yao, Binghui Peng, Christos Papadimitriou, and Karthik Narasimhan. Self-attention
networks can process bounded hierarchical languages. arXiv preprint arXiv:2105.11115, 2021.
[YSI20] Weiqiu You, Simeng Sun, and Mohit Iyyer. Hard-coded gaussian attention for neural machine
translation. arXiv preprint arXiv:2005.00742, 2020.
[ZRG[+]22] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen,
Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained
transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[ZRKB21] Chiyuan Zhang, Maithra Raghu, Jon Kleinberg, and Samy Bengio. Pointer value retrieval: A
new benchmark for understanding the limits of neural network generalization. arXiv preprint
_arXiv:2107.12580, 2021._
16
-----
### A Shortcut solutions and their effect on generalization
As explained in Section 4.3, we have observed that the randomly initialized models first learn a “shortcut”
solution—predicting the last variable in the chain by counting the overall number of minus signs—instead of
the “common sense” iterative solution. This can be seen in the top two plots in Figure 3, where the accuracy
of variable #11 improves earlier than most of the other variables.[1]
To be more precise, let us describe the two solutions in detail with an example. Consider the input:
```
a=+1; d=-c; b=-a; c=b; Initially, only the variable a is resolved. The iterative solution identifies an unre
```
solved variable that appears in the same clause with an already resolved variable, resolves it according to
that clause, and repeats. In this example, it would resolve b to −1 by the clause b=-a, then resolve c to −1
by the clause c=b, and then resolve d to 1 by the clause d=-c. The shortcut solution identifies an unresolved
variable that appears only once, and resolves it to 1 if the overall number of minus signs is even, and to −1
otherwise. In the above example, where d is the last variable in the reasoning chain, the shortcut solution
correctly resolves it to 1.
As further mentioned in Section 4.3, the shortcut solution to LEGO may be related to the phenomenon
of spurious features, where models learn to perform tasks in ways that circumvents the intended “common
sense” solution a human would use. Such spurious solutions are often considered undesirable, as they are
known to generalize poorly even to mild variants of the task. Indeed, the shortcut solution to LEGO is
brittle even under simple variations to the problem:
_• Repeated clauses, e.g., a=+1; b=-a; b=-a;_
_• Redundant clauses, e.g., a=+1; b=a; c=-a; c=-b;_
_• Multiple jointly rooted reasoning chains, e.g., a=+1; b=-a; c=-a;_
_• Multiple disjoint reasoning chains, e.g., a=+1; b=+1; c=-a; d=-b;_
and more. In all those settings the shortcut solutions would fail, whereas the “common sense” iterative
solution would succeed. This motivates us to empirically study the effect of the shortcut solution on the
ability of the models to generalize. We pose the following questions:
1This behavior is not expected in the bottom two plots, since in the top ones the models are trained to fit all 12 variables
including #11, while in the bottom plots the models are only trained to fit the first 6 variables, which precludes learning the
shortcut solution.
17
-----
Rand-Init BERT
Rand-Init ALBERT variable #
(a)
(b)
|100%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
10[0] 10[1] 10[2]
epoch
Rand-Init BERT
10[0] 10[1] 10[2]
variable #
0
1
2
3
4
5
6
7
8
9
10
11
epoch
Rand-Init ALBERT variable #
|100%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
10[0] 10[1] 10[2]
epoch
10[0] 10[1] 10[2]
epoch
variable #
0
1
2
3
4
5
6
7
8
9
10
11
Figure 11: Learning shortcut impedes generalization. (a) Train on variables #0-#4 and #11. (b) Train on variables
#0-#4 only. In both plots we test on all 12 variables.
**Q1. How does reliance on shortcut solutions affect the ability of the network to generalize?**
A first indication that the shortcut solution is undesirable for LEGO can be gleaned already from Figure 3:
along with the early improvement in the accuracy of variable #11 (which indicates that the shortcut solution
is being learned), we observe a drop in the accuracy of some of the variables that were already learned
(#2 in ALBERT and #3 in BERT). This may suggest that the shortcut solution impedes even classical
generalization. Indeed, the models seem to “realize” that, as we can see that the accuracy of #11 drops
before improving again together with the other variables, indicating that the shortcut solution is being
abandoned in favor of a solution that can predict all variables correctly.
To gain insight into the effect of the shortcut solution on out-of-distribution generalization, we performed
an experiment where the models are trained to fit the first five variables (#0-#4) and the last one (#11),
and are asked to predict all 12 variables at test time. This is different from the top plots in Figure 3 where
the model was trained to fit all 12 variables (and thus no out-of-distribution generalization is observed), and
from the bottom plots in Figure 3 where the model is trained to fit the first six variables (#0-#5), without
#11 (and thus learning the shortcut solution is not possible). The results of this experiment are reported
in the top two plots in Figure 11. The bottom plots depict a control experiment where the models are
trained to fit only the first five variables, without #11 (and thus, again, learning the shortcut solution is not
possible). The results show that the models exhibit inferior out-of-distribution generalization (to variables
#5 in BERT and #6 in ALBERT) when provided supervision for #11 (top plots), even though they are
given strictly more information during training than in the bottom plots, ostensibly making the task easier.
We thus infer that the shortcut solution in LEGO has an adverse effect on out-of-distribution generalization.
**Q2. What are effective ways to prevent models from learning shortcuts, and do they result in**
**better generalization?**
In Section 4 we studied the effect of pretraining on LEGO, and observed that the pretrained BERT model
exhibits much better out-of-distribution generalization than the randomly initialized BERT model. This
naturally suggests that pretrained BERT possibly avoids the shortcut solution. We confirm experimentally
in Figure 12 (top), where pretrained BERT and ALBERT are finetuned to fit all 12 variables. Indeed observe
18
-----
|100%|Col2|Col3|100%|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
|100% 90% acc 80% test 70% 60%|||100% 90% acc 80% test 70% 60%||||
||||||||
||||||||
||||||||
||||||||
|100%|Col2|Col3|100%|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
|100% 90% acc 80% test 70% 60%|||100% 90% acc 80% test 70% 60%||||
||||||||
||||||||
||||||||
||||||||
Pre-Trained BERT Pre-Trained ALBERT
100% 100%
90% 90%
variable #
80% 80%
0
test acc 70% test acc 70% 1
60% 60% 2
3
50% 50%
4
10[0] 10[1] 10[2] 10[0] 10[1] 10[2]
5
Mimicking BERT Mimicking ALBERT 6
100% 100%
7
90% 90%
8
80% 80% 9
test acc 70% test acc 70% 10
11
60% 60%
50% 50%
10[0] 10[1] 10[2] 10[0] 10[1] 10[2]
epoch epoch
Figure 12: Shortcut solutions are avoided via either pre-training or mimicking.
the accuracy of #11 improving either later than or concurrently with all other variables, suggested that the
shortcut solution is not being learned. We speculate that this may have to do with the number of epochs it
takes to learn the iterative solution: By the time the randomly initialized BERT has learned the shortcut
solution, pretrained BERT already attains full accuracy on the entire chain of variables. Avoiding the
shortcut solution may partly explain the superior out-of-distribution generalization performance of pretained
BERT over randomly initialized BERT, seen in Figure 6.
Section 4 also showed that our mimicking technique—which directly mimics the attention patterns of the
pretrained model without training on any data—can recover much of the benefit in pretraining for LEGO.
This extends to avoiding the shortcut solution as well: the bottom plots in Figure 12 show that the mimicking
BERT and ALBERT models exhibit similar accuracy patterns to their pretrained counterparts, suggesting
that the shortcut solution is not being learned by them.
### B Data generation and training details for the LEGO task
**B.1** **Data generation**
We specify the data generation mechanism for Task 1 in the following. We use lowercase alphabets as
variables A = {a, b, c, d, . . ., z}. Given n, we generate a sentence from our distribution s ∼D(n) as follows:
1. Sample n variables a1, a2, . . ., an _A and their corresponding assignments (or labels) y1, y2, . . . yn_ _X_
_∈_ _∈_
uniformly, i.e., ∀ _i ∈_ [n], ai ∼ Unif(A) and yi = ±1 w.p. 0.5.
2. The n clauses are then generated as ai = giai 1 for i = 1, 2, . . ., n, where a0 = r = 1 and the group
_−_
elements g1, g2, . . . gn ∈ _G are uniquely chosen so that the clauses are consistent with assignments ai = yi_
for all i ∈ [n].
3. The sentence s is generated by concatenating a random ordering of the n clauses with a semicolon ;.
Finally, the sentence is padded with [BOS] and [EOS] tags to denote the beginning and end of sentence,
respectively. See Figure 13 for example sentences from our distribution.
19
-----
```
[BOS] j=-f; f=-b; y=+t; o=+e; d=+y; v=+d; h=-o; b=-i; i=+1; t=+l; e=-j; l=-h; [EOS]
[BOS] j=+o; s=-y; p=-r; y=-m; u=-a; a=-f; k=+p; o=-k; q=+u; m=+1; f=+s; r=+q; [EOS]
[BOS] z=+d; b=+1; m=+t; d=-u; u=-h; a=-b; j=+m; i=-j; t=+x; f=+i; h=-f; x=-a; [EOS]
[BOS] j=-f; f=-b; y=+t; o=+e; d=+y; v=+d; h=-o; b=-i; i=+1; t=+l; e=-j; l=-h; [EOS]
[BOS] w=+l; m=+c; c=-i; f=-d; p=-m; a=+b; y=-a; b=+p; i=+f; l=-v; d=+1; v=+y; [EOS]
```
Figure 13: Samples of sentences generated from our distribution D(n) for Task 1 with n = 12.
**B.2** **Training**
Our vocabulary for data generated as above thus consists of symbols of variables a ∈ _A, group operations_
+, −∈ _G, the root node 1 ∈_ _X, the equal sign ‘=’, and the semicolon ‘;’ along with the [BOS] and [EOS] tags._
To apply transformers to the LEGO task we convert each symbol in our vocabulary to vectors in R[d] (referred
to as tokens) using a learnable linear embedding layer. Thus, a sentence of n clauses now corresponds to an
ordered list of 5n+2 tokens, which we turn into an unordered list using positional embedding (see [VSP[+]17]).
These tokens are processed iteratively by transformer blocks. Each transformer block maps 5n tokens in R[d]
to another set of 5n tokens using a multi-head attention layer followed by a one-hidden layer feedforward net
(there are also residual connections and layer normalization, for full details of architecture see [VSP[+]17]). We
use bert-base-uncased[2] and albert-base-v1[3] along with their pretrained weights from the open source
Huggingface transformers library [WDS[+]20]. The Rand-Init models have identical configurations to their
pretrained counterparts but randomly initialized weights.
Our training and test datasets are i.i.d. samples from our distribution D(n) as described above. We
generate 10[4] _× n and 10[3]_ _× n datapoints for training and test, respectively, and sanity checked that there_
is no overlap between train and test data. Recall that during training we provide supervision on the first
_ntr appearing in the graph representation of the sentence, but test the accuracy on all n samples at test_
time. Note that since the clause positions are randomized in our input sentence (e.g., Figure 13), the first
_ntr clauses in the graph representation can appear at arbitrary positions in the sentence which allows for_
training the positional encodings for longer sequences than those seen in training.
In all the LEGO experiments, we use cross entropy loss averaged over the ntr clauses as our sample loss
during training. We train for 200 epochs using the Adam optimizer with batch size of 1000 samples, 5 _×_ 10[−][5]
learning rate, β1 = 0.9, β2 = 0.999, ϵ = 1 × 10[−][8], and cosine learning rate schedule with Tmax = 200. For
tokenization, we use the pre-trained BERT tokenizer for all the experiments, which merely converts the input
symbols into integers that are used as token ids. Each run is conducted on a cluster with 4 A100 GPUs.
**A note on variance of training across LEGO tasks** When training BERT and ALBERT models for
the LEGO task using our experimental setup, we see non-trivial variance in absolute test accuracies across
different runs. In Figure 14, we show three different runs of BERT and ALBERT models trained on LEGO
tasks of length n = 12 (the configuration used in most results in the paper). While we see that the absolute
values of the test accuracies vary significantly, the qualitative observations made in our paper hold across
all the runs: importantly, across all the runs, we see that using iterative ALBERT architecture as well as
pre-training non-iterative BERT architecture leads to better generalization to unseen lengths. This shows
that conclusions derived in our paper hold despite the variance across runs.
Methodologically, we attribute the variance in the absolute test accuracy to the relatively small size of
our datasets (e.g., the number of training examples for n = 12 is 120K tokens) for training standard trained
language models which are otherwise trained on hundreds of millions of tokens. Furthermore, note that in
our experiments the variance across runs arise both from having different train and test datasets as well as
different random seeds for model initialization and training algorithm.
[2https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)
[3https://huggingface.co/albert-base-v1](https://huggingface.co/albert-base-v1)
20
-----
RandInt-BERT, n=12
100 101 10
|100%|RandI|Int-|-BERT, n=12|Col5|RandIn|nt-A|Albert, n=12|Col9|Pretrained-BERT, n=1|12|
|---|---|---|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%|||||||||||
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|50%|||||||||||
epoch
RandInt-BERT, n=12
RandInt-Albert, n=12
100 101 10
epoch
RandInt-Albert, n=12
Pretrained-BERT, n=12
100 101 10
epoch
Pretrained-BERT, n=12
100%
90%
80%
70%
60%
50%
(a)
(b)
|100%|RandI|Int-|-BERT, n=12|Col5|RandIn|nt-A|Albert, n=12|Col9|Pretrained-BERT, n=1|12|
|---|---|---|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% 60% test|||||||||||
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|50%|||||||||||
100 101 10
epoch
RandInt-BERT, n=12
100 101 10
epoch
RandInt-Albert, n=12
100 101 10
epoch
Pretrained-BERT, n=12
variable #
0
1
2
3
4
5
6
7
8
9
10
11
(c)
|100%|RandI|Int-|-BERT, n=12|Col5|RandIn|nt-A|Albert, n=12|Col9|Pretrained-BERT, n=1|12|
|---|---|---|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%|||||||||||
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|50%|||||||||||
100 101 10
epoch
100 101 10
epoch
100 101 10
epoch
Figure 14: Three sample runs of models trained on LEGO tasks with n = 12 and ntr = 6. We observe that while
there are variances across different runs of the models, the qualitative conclusions stated in the paper holds for all
the runs: i.e, iterative ALBERT models and pretraining lead to better generalization to unseen task lengths.
**B.3** **The Mimicking procedure**
Starting with a randomly initialized transformer model, we would like to make sure that some of its attention
heads implement the manipulation and association functionalities, prior to the fine-tuning. To do so, we
craft the desired attention patterns of a manipulation head and an association head, and train the model
with gradient descent to match them. Specifically, given a random input x ∈ R[T] of length T, we hard code
the following matrix M ∈ R[T][ ×][T] as the target attention pattern for the manipulation head[4], and derive from
input x the following matrix A ∈ R[T][ ×][T] whose Ai,j entry indicates whether xi and xj are identical tokens.
Note that we further specify that A has a zero diagonal in observance of the association head in Figure 7
having a vanishing diagonal. In reality, attention maps have unit row sums, thus we normalize M and A
accordingly to obtain M[˜] and A[˜] such that their rows M[˜] [t, :] and A[˜][t, :] are valid distributions.
... ... ...
1 2 4 2 1
1 2 4 2 1
1 2 4 2 1
... ... ...
1, if xi = xj and i ̸= j
0, otherwise
_M =_
_Aij =_
At every layer, we randomly appoint two attention heads to mimic the manipulation and association
operators while leave the other heads off the hook. Upon seeing a input sequence x ∈ R[T], we denote the
4Here we choose the Gaussian filter [1, 2, 4, 2, 1] for illustration. In practice, we find that the final performance is robust to
various pattern choices as long as they are localized and shift-invariant.
21
-----
attention maps of the appointed heads at the l the layer as Attn[(]0[l][)][(][x][)][,][ Attn]1[(][l][)][(][x][)][ ∈] [R][T][ ×][T][ . For the mimicking]
_−_
objective, we draw input sequence x’s whose tokens are independent and uniform over the vocabulary,
and then compute the the Kullback–Leibler divergence between each row of Attn[(]0[l][)][(][x][)][,][ Attn]1[(][l][)][(][x][) and the]
corresponding rows of M[˜], A[˜]. Thus the overall mimicking loss is
_L−1_
" _l=0_
X
**KL** `Attn[(]0[l][)][(][x][)[][t,][ :]]` _M[˜]_ [t, :] + KL `Attn[(]1[l][)][(][x][)[][t,][ :]]` _A[˜][t, :]_
[#]
_Lmimic =_ E
rand. seq. x
_t=1_
Note that the above mimicking loss pertains only two attention heads per layer and we leave out all the
rest. Then the mimicking procedure boils down to updating the transformer model’s parameters to minimize
the Lmimic. We find that a vanilla Adam optimizer drives the mimicking loss down to near zero in a negligible
amount of time compared to large-scale pre-training, even for large models such as BERT and ALBERT.
### C Details of the LEGO v0/v1 models
The LEGO v0/v1 models are derived from the BERT-base model. They have same number of layers, same
hidden dimensions, and same feed-forward layer’s structure as the BERT model while and they differ in
attention layers. In all three models, each attention head effectively operates on 64 hidden dimensions. The
specifics are provided below.
_• BERT-base model has 12 layers, 768 hidden dimensions, and 12 attention heads per layer._
_• LEGO v0 model: convolutional pathway consists of Linear from 768 dim to 576 dim + ReLU + Depthwise_
temporal Convolution with with kernel size 21 and 576 channels + Linear from 576 dim to 576 dim.
There are three hardcoded attention heads whose attention patterns are Aasso, Acls, Asep and each of
them operates on a value map of 64 dim (thus the entire value map is a Linear layer from 768 dim to 192
dim).
_• LEGO v1 model: convolutional pathway consists of Linear from 768 dim to 384 dim + ReLU + Depthwise_
temporal Convolution with kernel size 21 and 384 channels + Linear from 384 dim to 384 dim. There are
three hardcoded attention heads whose attention patterns are Aasso, Acls, Asep and each of them operates
on a value map of 64 dim (thus the entire value map is a Linear layer from 768 dim to 384 dim). There
are also 3 ordinary attention heads whose Q,K,V maps are all Linear layers from 768 dim to 384 dim.
### D Effect of length of LEGO chains and depth of model
All our experiments in the main paper were on a typical instance of LEGO task with length n = 12 chains
and standard BERT and ALBERT models of depth D = 12. In this Appendix, we briefly explore the effect of
the LEGO chain length (n, ntr) and transformer models depth D. The chain structure of information flow in
our LEGO Task 1 would suggest that for a transformer network of depth D, the learning and generalization
on LEGO task would crucially depend on its maximum chain length n. If n (or importantly ntr) is too small,
the training data might not have enough information to guide generalization to longer lengths, on the other
hand, if n is too large, the model might not be able to propagate information along the chain in a natural
iterative manner. For example, implementing a “natural” iterative algorithm of resolving one clause of the
task at a time (as described in the beginning of Appendix A) would require models of depth D ≥ _n._
22
-----
RandInt-BERT, n=8
RandInt-Albert, n=8
Pretrained-BERT, n=8
(a) n = 8
|100%|RandInt-BERT, n=8|Col3|RandInt|t-Albert, n=8|Col6|Pretrained-BERT, n=8|8|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
|50%||||||||
100 101 10
epoch
RandInt-BERT, n=12
100 101 10
epoch
RandInt-Albert, n=12
100 101 102
variable #
0
1
2
3
4
5
6
7
epoch
Pretrained-BERT, n=12 variable #
(b) n = 12
(c) n = 16
|100%|RandInt-BERT, n=12|2|RandInt-|-Albert, n=12|Col6|Pretrained-BERT, n=1|12|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
|50%||||||||
100 101 10
epoch
RandInt-BERT, n=16
100 101 10
epoch
RandInt-Albert, n=16
100 101 10
epoch
Pretrained-BERT, n=16
variable #
0
1
2
3
4
5
6
7
8
9
10
11
|100%|RandInt-BERT, n=16|6|RandInt-|-Albert, n=16|Col6|Pretrained-BERT, n=1|16|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
|50%||||||||
100 101 10
epoch
RandInt-BERT, n=20
100 101 10
epoch
RandInt-Albert, n=20
100 101 10
epoch
Pretrained-BERT, n=20
variable #
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
(d) n = 20
|100%|RandInt-BERT, n=20|0|RandInt-|-Albert, n=20|Col6|Pretrained-BERT, n=2|20|
|---|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
|50%||||||||
100 101 10
epoch
100 101 10
epoch
100 101 10
epoch
variable #
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
Figure 15: Generalization performance of BERT and ALBERT models (depth D = 12) trained on LEGO task of
varying chain lengths n: (a) n = 8, (b) n = 12, (c) n = 16, (d) n = 20.
We study this behavior by first repeating the experiments from our main paper on depth D = 12 models
on LEGO tasks of varying lengths n. In all the cases, we proportionally increase the length of chain for
which supervision is provided by training on first ntr = n − 6 clauses in the chain.
In Figure 15, we show the performance of a typical run of transformer models in this setting. When trained
on small length chains of n = 8 and ntr = 2, we indeed see that all the models including pretrained-BERT is
able to learn the short training length (classical generalization) but the supervision does not contain enough
information for the models to learn to generalize to unseen lengths. On the other hand, for larger chain
lengths we see that the generalization only gets better, with a very strong monotonic trend for pretrainedBERT models. In particular, the strong generalization performance at n = 20 with ntr = 14 would suggest
that these models might not really be implementing the “natural” iterative solution for the task. Rather, it
is possible that training on longer sequences leads the models to learn a more compact representation that
23
-----
nevertheless generalizes remarkably well to much longer lengths than seen during training.
Complementing our results on varying the length of LEGO chains, we also look at how BERT and
ALBERT architectures of smaller depth D < 12 learn our LEGO task of length n = 12. In Figure 16 we
show the results of models trained from random initialization (we do not have pre-trained models at these
depths). Here we do see the trend that larger depth improves generalization. In particular, for a task of
chain length n, there does appear to be a minimum threshold of D at which the models learn to generalize
even in the classical sense (on lengths the models were trained on), but this relationship appears sub-linear
rather than linear with D ≥ _n as speculated by the “natural” iterative algorithm. It would be of interest in_
future studies to explore this relation between depth and length better.
RandInt-BERT, D=2
RandInt-Albert, D=2
(a) D = 2
(b) D = 4
|100%|RandIn|nt-BERT, D=|=2|RandIn|nt-Albert, D=2|2|
|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60% 50%|||||||
||||||||
||||||||
||||||||
||||||||
||||||||
100 101 10
epoch
RandInt-BERT, D=4
100 101 10
epoch
RandInt-Albert, D=4
|100%|RandIn|nt-BERT, D=|=4|RandIn|nt-Albert, D=4|4|
|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%|||||||
||||||||
||||||||
||||||||
||||||||
|50%|||||||
100 101 10
epoch
RandInt-BERT, D=8
100 101 10
epoch
RandInt-Albert, D=8
(c) D = 8
(d) D = 12
|100%|RandIn|nt-BERT, D=|=8|RandIn|nt-Albert, D=8|8|
|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%|||||||
||||||||
||||||||
||||||||
||||||||
|50%|||||||
100 101 10
epoch
RandInt-BERT, D=12
100 101 10
epoch
RandInt-Albert, D=12
|100%|RandIn|nt-BERT, D=1|12|RandIn|nt-Albert, D=1|12|
|---|---|---|---|---|---|---|
|100% 90% 80% acc 70% test 60%|||||||
||||||||
||||||||
||||||||
||||||||
|50%|||||||
variable #
0
1
2
3
4
5
6
7
8
9
10
11
100 101 10
epoch
100 101 10
epoch
Figure 16: Rand-Init BERT and ALBERT models of varying depth D trained on LEGO tasks of length n = 12.
**D.1** **Effect of number of parameters**
In this section, we provide empirical evidence that the advantage of ALBERT over BERT for predicting
longer sequences does not merely come from fewer parameters. Precisely, we modify the original BERT
model to have 96 hidden size instead of 784 uniformly across all layers, with all other hyper-parameter
unchanged. This ”thin” BERT model has approximately the same parameter count as the ALBERT model.
The result is shown in Figure 17.
24
-----
Rand-Init BERT (hidden size 96)
|1.0|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
|1.0 0.9 0.8 acc test 0.7 0.6|||||
||||||
||||||
||||||
||||||
10[0] 10[1] 10[2]
variable #
0
1
2
3
4
5
6
7
8
9
10
11
epoch
Figure 17: The ”thin” model not only lacks the length extrapolation properties of ALBERT, but it does not even
generalize in-distribution. In fact, test accuracy on variable #2 remains near 50% after 200 training epochs.
### E LEGO Task 2: dihedral group
As a generalization of the main task (i.e., LEGO Task 1) analyzed so far, we present LEGO Task 2 of learning
the dihedral group[5] _D3 of order 6 which is isomorphic to the symmetric group S3. Note that LEGO Task_
1 can be viewed as learning the dihedral group D1 of order 2. Clearly, the shortcut solution described in
Section A is not valid here.
We repeat the out-of-distribution generalization experiments on Task 2 with the exact same model configurations and training hyper-parameters as Task 1 (see Section B.2). For dataset creation, we largely follow
the pipeline detailed in Section B.1 with group elements from D3 for which we create corresponding tokens.
The only modification here is that the labels are categorical with 6 classes, since every variable in Task 2
may take 6 candidate values.
|100%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
|100% 80% acc 60% test 40% 20%|||||||||
||||||||||
||||||||||
||||||||||
||||||||||
|100%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|100% 80% acc 60% test 40% 20%||||||||
|||||||||
|||||||||
|||||||||
|||||||||
Pre-Trained BERT Rand-Init BERT Mimicking BERT
100%
80%
variable #
60%
0
test acc 40% 1
2
20% 3
4
10[0] 10[1] 10[0] 10[1] 10[2] 10[0] 10[1] 10[2]
Pre-Trained ALBERT Rand-Init ALBERT Mimicking ALBERT 5
100% 6
7
80%
8
60% 9
10
test acc
40% 11
20%
10[0] 10[1] 10[2] 10[0] 10[1] 10[2] 10[0] 10[1] 10[2]
Figure 18: Out-of-distribution generalization on LEGO Task 2. Task 2 appears to be significantly more challenging
than Task 1 and pretraining plays a more important role on top of rand-init models, while the mimicking procedure
is able to match and even outperform pretraining.
In Figure 18, we show these preliminary results and observe that Task 2 is indeed more challenging
[5https://en.wikipedia.org/wiki/Dihedral_group](https://en.wikipedia.org/wiki/Dihedral_group)
25
-----
than Task 1, and the extent to which pretraining provides benefits is noticeably larger. Without further
hyper-parameter tuning, the randomly initialized models even face optimization issues in fitting the training
labels. Interestingly, we also find our proposed mimicking procedure introduced in Section B.3 is not only
able to match pretraining’s performance and also outperform it.
So far, none of the models here is capable of non-trivially generalizing to more than one extra variable.
It is an important future direction to search for suitable adaptations to the current transformer architectures/training algorithms that can eventually solve this task.
Furthermore, the amount of knowledge that models have to ”memorize”, e.g. the outcomes of g(x) for
all g ∈ _G and x ∈_ _X grows quadratically with group size. While LEGO task 1 contains only 4 mappings,_
namely 1 = +1, 1 = +( 1), 1 = ( 1), 1 = +( 1). For moderately size group such as D5, models will
_−_ _−_ _−_ _−_ _−_ _−_
have to ”memorize” (5!)[2] = 125[2] in total mappings. This makes the task perfectly suitable for analyzing
how and where (at which neurons) the models store these informations. We consider this as an exciting
future direction.
26
-----
### F Attention maps of pretrained BERT on LEGO
To complement the visualizations in Figure 7, we provide visualizations of attentions maps of all attention
heads from a pretrained BERT when taking a LEGO sequence as input. We make the observation that almost
_every attention head implements either a manipulation or an association operator. Please view electronically_
and zoom in for details. All the figures are vectorized.
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 0 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 0 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 0 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 1 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 1 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 1 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 0
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 2 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 2 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 2 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 2
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 4 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 4 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 4 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 4
layer 1
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 3 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 3 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 3 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 3
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 5 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 5 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 5 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 5
Layer 2 Head 0
,e=-d,f=-e,g=+f,h=-g
Layer 2 Head 4
,e=-d,f=-e,g=+f,h=-g
Layer 2 Head 8
,e=-d,f=-e,g=+f,h=-g
Layer 4 Head 0
Layer 2 Head 1
,e=-d,f=-e,g=+f,h=-g,
Layer 2 Head 5
,e=-d,f=-e,g=+f,h=-g,
Layer 2 Head 9
,e=-d,f=-e,g=+f,h=-g,
Layer 4 Head 1
Layer 2 Head 2
c,e=-d,f=-e,g=+f,h=-g,
Layer 2 Head 6
c,e=-d,f=-e,g=+f,h=-g,
Layer 2 Head 10
c,e=-d,f=-e,g=+f,h=-g,
Layer 4 Head 2
Layer 2 Head 3
,e=-d,f=-e,g=+f,h=-g,
Layer 2 Head 7
,e=-d,f=-e,g=+f,h=-g,
Layer 2 Head 11
,e=-d,f=-e,g=+f,h=-g,
Layer 4 Head 3
Layer 3 Head 0
,e=-d,f=-e,g=+f,h=-g
Layer 3 Head 4
,e=-d,f=-e,g=+f,h=-g
Layer 3 Head 8
,e=-d,f=-e,g=+f,h=-g
Layer 5 Head 0
Layer 3 Head 1
,e=-d,f=-e,g=+f,h=-g,
Layer 3 Head 5
,e=-d,f=-e,g=+f,h=-g,
Layer 3 Head 9
,e=-d,f=-e,g=+f,h=-g,
Layer 5 Head 1
Layer 3 Head 2
c,e=-d,f=-e,g=+f,h=-g,
Layer 3 Head 6
c,e=-d,f=-e,g=+f,h=-g,
Layer 3 Head 10
c,e=-d,f=-e,g=+f,h=-g,
Layer 5 Head 2
Layer 3 Head 3
,e=-d,f=-e,g=+f,h=-g,
Layer 3 Head 7
,e=-d,f=-e,g=+f,h=-g,
Layer 3 Head 11
,e=-d,f=-e,g=+f,h=-g,
Layer 5 Head 3
27
-----
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 6 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 6 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 6 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 7 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 7 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 7 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 6
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 8 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 8 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 8 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 8
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 0e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 1e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 2e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 3e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 4e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 5e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 6e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 7e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 8e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 10 Head 9e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+cLayer 10 Head 10,e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+cLayer 10 Head 11,e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 10
layer 7
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 0=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 1=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 2=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 3=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 4=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 5=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 6=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 7=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 8=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,eLayer 9 Head 9=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 9 Head 10e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 9 Head 11e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 9
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 0e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 1e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 2e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 3e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 4e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 5e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 6e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 7e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
[BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 8e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+c,Layer 11 Head 9e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+cLayer 11 Head 10,e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS] [BOS][EOS]=+=+==+===+====+=1bbddgghheeaakkcc-------ff[BOS],,,,,,,,,,,,ijijl a=+1,b=+a,c=-b,d=+cLayer 11 Head 11,e=-d,f=-e,g=+f,h=-g,i=-h,j=-i,k=+j,l=-k,[EOS]
layer 11
Layer 8 Head 0
c,e=-d,f=-e,g=+f,h=-g,i
Layer 8 Head 4
c,e=-d,f=-e,g=+f,h=-g,i
Layer 8 Head 8
c,e=-d,f=-e,g=+f,h=-g,i
Layer 10 Head 0
Layer 8 Head 1
,e=-d,f=-e,g=+f,h=-g,
Layer 8 Head 5
,e=-d,f=-e,g=+f,h=-g,
Layer 8 Head 9
,e=-d,f=-e,g=+f,h=-g,
Layer 10 Head 1
Layer 8 Head 2
c,e=-d,f=-e,g=+f,h=-g,i
Layer 8 Head 6
c,e=-d,f=-e,g=+f,h=-g,i
Layer 8 Head 10
c,e=-d,f=-e,g=+f,h=-g,i
Layer 10 Head 2
Layer 8 Head 3
c,e=-d,f=-e,g=+f,h=-g,i
Layer 8 Head 7
c,e=-d,f=-e,g=+f,h=-g,i
Layer 8 Head 11
c,e=-d,f=-e,g=+f,h=-g,i
Layer 10 Head 3
Layer 9 Head 0
c,e=-d,f=-e,g=+f,h=-g,i
Layer 9 Head 4
c,e=-d,f=-e,g=+f,h=-g,i
Layer 9 Head 8
c,e=-d,f=-e,g=+f,h=-g,i
Layer 11 Head 0
Layer 9 Head 1
,e=-d,f=-e,g=+f,h=-g,
Layer 9 Head 5
,e=-d,f=-e,g=+f,h=-g,
Layer 9 Head 9
,e=-d,f=-e,g=+f,h=-g,
Layer 11 Head 1
Layer 9 Head 2
c,e=-d,f=-e,g=+f,h=-g,i
Layer 9 Head 6
c,e=-d,f=-e,g=+f,h=-g,i
Layer 9 Head 10
c,e=-d,f=-e,g=+f,h=-g,i
Layer 11 Head 2
Layer 9 Head 3
c,e=-d,f=-e,g=+f,h=-g,i
Layer 9 Head 7
c,e=-d,f=-e,g=+f,h=-g,i
Layer 9 Head 11
c,e=-d,f=-e,g=+f,h=-g,i
Layer 11 Head 3
28
-----
| [
"Sébastien, Bubeck",
"Suriya, Gunasekar",
"Arturs, Backurs",
"Ronen, Eldan",
"Tal, Wagner",
"Yi, Zhang"
] | 2023-02-17T00:00:00 | null | false | 69 | 1 | null | http://arxiv.org/abs/2206.04301 | https://arxiv.org/abs/2206.04301 | https://www.semanticscholar.org/paper/d253beffd28d88cc3150c9e80511a6187ea6613b |
MWPToolkit: An Open-Source Framework for Deep Learning-Based Math Word Problem Solvers | While Math Word Problem (MWP) solving has emerged as a popular field of study and made great progress in recent years, most existing methods are benchmarked solely on one or two datasets and implemented with different configurations. In this paper, we introduce the first open-source library for solving MWPs called MWPToolkit, which provides a unified, comprehensive, and extensible framework for the research purpose. Specifically, we deploy 17 deep learning-based MWP solvers and 6 MWP datasets in our toolkit. These MWP solvers are advanced models for MWP solving, covering the categories of Seq2seq, Seq2Tree, Graph2Tree, and Pre-trained Language Models. And these MWP datasets are popular datasets that are commonly used as benchmarks in existing work. Our toolkit is featured with highly modularized and reusable components, which can help researchers quickly get started and develop their own models. We have released the code and documentation of MWPToolkit in https://github.com/LYH-YF/MWPToolkit. | The first open-source library for solving MWPs called MWPToolkit is introduced, which provides a unified, comprehensive, and extensible framework for the research purpose and deploys 17 deep learning-based MWP solvers and 6 MWP datasets in the toolkit. | ## MWPToolkit: An Open-source Framework for Deep Learning-based Math Word Problem Solvers
**Yihuai Lan[1][∗], Lei Wang[2][∗], Qiyuan Zhang[2][∗], Yunshi Lan[3][†], Bing Tian Dai[2],**
**Yan Wang[4], Dongxiang Zhang[5], Ee-Peng Lim[2]**
1 Xihua University, 2Singapore Management University
3East China Normal University, 4 Tencent AI Lab, 5 Zhejiang University
_{lei.wang.2019, yslan.2015}@phdcs.smu.edu.sg, {btdai, eplim}@smu.edu.sg_
_{yifan2250, qiyuanzhang97}@gmail.com, [email protected], [email protected]_
**Abstract**
Developing automatic Math Word Problem
(MWP) solvers has been an interest of NLP researchers since the 1960s. Over the last few
years, there are a growing number of datasets
and deep learning-based methods proposed for
effectively solving MWPs. However, most
existing methods are benchmarked solely on
one or two datasets, varying in different configurations, which leads to a lack of unified,
standardized, fair, and comprehensive comparison between methods. This paper presents
MWPToolkit, the first open-source framework for solving MWPs. In MWPToolkit,
we decompose the procedure of existing MWP
solvers into multiple core components and decouple their models into highly reusable modules. We also provide a hyper-parameter
search function to boost the performance. In
total, we implement and compare 17 MWP
solvers on 4 widely-used single equation generation benchmarks and 2 multiple equations
generation benchmarks. These features enable our MWPToolkit to be suitable for researchers to reproduce advanced baseline models and develop new MWP solvers quickly.
[Code and documents are available at https:](https://github.com/LYH-YF/MWPToolkit)
[//github.com/LYH-YF/MWPToolkit.](https://github.com/LYH-YF/MWPToolkit)
**Single Equation Generation:**
Paco had 26 salty cookies and 17 sweet cookies. He ate
14 sweet cookies and 9 salty cookies. How many salty
cookies did Paco have left?
**Equation:**
_x = 26 −_ 9
**Answer:**
_x = 17_
**Multiple Equations Generation:**
Jerome bought 12 CDs. Some cost 7.50$ each, and the
rest cost 6.50 each . How many CDs were bought at
each price if he spent 82 dollars?
**Equation:**
7.5 × x + 6.5 × y = 82,
_x + y = 12_
**Answer:**
_x = 4, y = 8_
Table 1: Two examples of Math Word Problems. We
display single equation generation and multiple equations generation.
Over the last few years, there are a growing
number of datasets (Kushman et al., 2014; KoncelKedziorski et al., 2016; Upadhyay and Chang,
2017; Huang et al., 2016; Wang et al., 2017; Qin
et al., 2020; Miao et al., 2020; Patel et al., 2021) and
deep learning-based methods that have been proposed to solve MWPs, including Seq2Seq (Wang
et al., 2017, 2018a; Chiang and Chen, 2019; Li
et al., 2019), Seq2Tree (Wang et al., 2019; Liu
et al., 2019a; Xie and Sun, 2019; Qin et al., 2020),
Graph2Tree (Zhang et al., 2020b; Shen and Jin,
2020), and Pre-trained Language Models (Kim
et al., 2020). However, most existing MWP solving
methods are evaluated solely on one or two datasets,
varying in different settings (i.e., different train-test
split and k-fold cross validation), which leads to a
lack of unified, standardized, fair, and comprehensive comparison between methods. Moreover, it is
time-consuming and complicated to re-implement
prior methods as baselines, which leads to the difficulty making a consistent conclusion in term of
performance comparison to other methods. Thus
**1** **Introduction**
Developing automatic Math Word Problems
(MWPs) solvers has been an interest of NLP researchers since the 1960s (Feigenbaum and Feldman, 1963; Bobrow, 1964). As shown in Table 1,
when solving a MWP, machines need to make inferences based on the given textual problem description and question. It requires machines to translate
the natural language text into valid and solvable
equations according to context, numbers, and unknown variables in the text and then compute to
obtain the numerical values as the answer.
_∗Equal contribution._
_†_ Corresponding author.
-----
it severely hinders the development of research in
MWPs community.
To encourage the development of this field, we
present MWPToolkit, the first open-source framework for deep learning-based MWP solvers. To
unify MWP methods into MWPToolkit, we design the framework of MWP solvers as an architecture with multiple core components: config, data,
model and evaluation. We further decouple the
components into highly reusable modules and deploy them into MWPToolkit. Thus, it is easily
extensible and convenient to develop new models
by combing existing modules and replacing individual modules with proposed ones. Besides, we also
develop a hyper-parameter search function for all
methods developed in MWPToolkit, which helps
mitigate the negative impact caused by sub-optimal
hyper-parameters.
MWPToolkit includes comprehensive benchmark datasets and models. So far, we have incorporated 6 widely-used MWP datasets and 17
models. The datasets contain 4 datasets that are
single equation generation and 2 datasets that are
multiple equation generation. The models include
Seq2seq, Seq2tree, Graph2tree, and commonlyused non-pretrained (AttSeq2Seq (Bahdanau et al.,
2014), LSTMVAE (Zhang et al., 2016), and Transformer (Vaswani et al., 2017)) and pretrained models (GPT-2 (Radford et al., 2019), BERTGen (Devlin et al., 2018), and RoBERTaGen (Liu et al.,
2019b)). Currently, our framework supports builtin evaluation protocols including equation accuracy
and answer accuracy for two types of generation.
In our MWPToolkit, users can run, compare, and
test models on MWP tasks under the same setting
with simple configuration files and command lines.
To ensure that our re-implementations in
MWPToolkit are correct and the experiments by
our framework are reliable, we set the same hyperparameters as the ones in original papers and ensure
the re-implemented result should be approximate
to the reported result. In this paper, we provide
a set of results of 17 models on 6 datasets with
the same k-fold cross-validation setting after the
built-in hyper-parameter search. We hope the community can benefit from the results of this comprehensive comparison, better understand existing
MWP methods, and easily develop new and powerful models by utilizing our MWPToolkit.
Figure 1: The overall framework of MWPToolkit.
**2** **MWPToolkit Framework**
The overall framework of our MWPToolkit is
presented in Figure 1, including the config component, data component, model component, evaluation component, and execution from bottom to
top. The config component is used to set up the
experimental configuration, which supports the following components. The data component preprocess different datasets into a unified form used in
the subsequent components. The model component is responsible for model construction. After
determining specific evaluation metrics in the evaluation component, the execution part is used to train
with a given group of hyper-parameters or hyperparameter searching and evaluate models with a
specific setting, i.e., train-test split or k-fold crossvalidation. Note that in our MWPToolkit, users
can use the given random seed to reproduce results
completely. In the following part, we present the
details of config, data, model, evaluation components and execution part.
**2.1** **Config Component**
Config component serves as the core humansystem interaction component in which the developer can specify experimental configurations. The
configuration part of our framework consists of
_Command lines, External config and Internal con-_
_fig. The default configuration is defined in internal_
configuration file. Users can flexibly and simply
use the command lines to modify the major settings
of the experiment. They can also have more customized settings with external configuration files,
which is beneficial for the duplication of research
results.
**2.2** **Data Component**
Any raw dataset in the data module follows the predefined data flow to convert raw data into a unified
-----
Dataset Language Task # Examples # Multi-Equ Hard Set Reference
MAWPS-s en Single equation generation 1,987 - - (Koncel-Kedziorski et al., 2016)
Draw1K en Multiple equations generation 1,000 745 - (Upadhyay and Chang, 2017)
Math23K zh Single equation generation 23,162 - - (Wang et al., 2017)
HMWP zh Multiple equations generation 5,491 1,789 - (Qin et al., 2020)
ASDiv-a en Single equation generation 1,238 - - (Miao et al., 2020)
SVAMP en Single equation generation 3,138 - 1,000 (Patel et al., 2021)
Table 2: The collected datasets in MWPToolkit. “# Multi-Equ” stands for the number of examples, the targets of
which are multiple equations. “Hard Set” means an external challenging or adversarial test set.
format of data as input for the the following model
component: raw data 7→ _Preprocessor 7→_ _Dataset_
_7→_ _Dataloader 7→_ processed data.
We display the statistics of all built-in datasets
in Table 2. As we can see, raw datasets vary in
formats and features, so we first preprocess these
raw datasets and convert them to a unified format.
In Preprocessor, we first tokenize input text by
a tokenizer, extract numbers from the tokenized
text by some simple rules, and record extracted
numbers and map them into position-aware special
tokens (a.k.a, number mapping). To avoid infinite
generation space in target, we convert equations
into equation templates by replacing numbers with
position-aware special tokens from number mapping. We add another special token < bridge >
for the multiple equations generation task to convert the equation forest to a tree. Hence it can
be treated as the single equation generation task.
Note that different models require us to prepare
different data formats and features. For example,
Bert-based MWP models use WordPiece embeddings (Wu et al., 2016) instead of word embeddings. For another example, Graph2tree models
utilize external information, like the results of the
dependency parser, to construct the graph. Hence
we customize the preparation of data preprocessing after basic preprocessing. Users can add a new
dataset to our framework by referring to our processing step.
We design the Dataset module to do data preparation. The design of AbstractDataset is to include
some shared attributes and basic functions. Any
specific dataset class or user customized dataset
class can inherit AbstractDataset with few modifications.
After the Dataset module, DataLoader module
selects features from the processed data to form
tensor data (PyTorch), which can be directly used in
the model component. AbstractDataLoader class,
including common attributes and basic functions,
allows users to easily create new DataLoaders for
new models.
**2.3** **Model Component**
We organize the implementations of MWP solving
methods in the model component. The objective
of the model component is to disentangle model
implementation from data processing, evaluation,
execution, and other parts, which benefits users to
focus on the model itself. We unify the implementation of a model. Specifically, we provide three
interface functions for loss calculation, prediction,
and test, respectively. When users deploy or add
a new model with MWPToolkit, they can simply
focus on these interface functions without considering other parts. Such a design enables users to
develop new algorithms easily and quickly. Besides, the commonly-used components of the implemented models have been decoupled and shared
across different models for code re-usage.
We have carefully surveyed the recent literatures
and selected the commonly-used MWP solving
models in our library. As the first released version,
we have implemented 17 MWP solving models in
the four categories: Seq2seq, Seq2tree, Graph2tree,
and Pretrained Language Models. In the future,
more methods will be added into our toolkit as the
regular update, like MathDQN (Wang et al., 2018b),
EPT (Kim et al., 2020), KAS2T (Wu et al., 2020),
and NumS2T (Wu et al., 2021). We summarize all
the implemented models in Table 3.
For all the implemented models in
MWPToolkit, we ensure that these reimplementations are correct and that the
experiments by our framework are reliable. We
set the same hyper-parameters as the ones in
original papers and ensure the re-implemented
result should be approximate to the reported result.
The detailed performance comparison between our
re-implementation and original results is shown in
Table 4 and Table 5.
-----
|Type|Model|Encoder|Decoder|Pretrained Model|Reference|
|---|---|---|---|---|---|
|Seq2Seq|DNS MathEN Saligned GroupATT|GRU BiLSTM BiLSTM BiLSTM|LSTM LSTM LSTM LSTM|- - - -|(Wang et al., 2017) (Wang et al., 2018b) (Chiang and Chen, 2019) (Li et al., 2019)|
||AttSeq2Seq LSTMVAE Transformer|LSTM LSTM Transformer|LSTM LSTM Transformer|- - -|(Bahdanau et al., 2014) (Zhang et al., 2016) (Vaswani et al., 2017)|
|Seq2Tree|TRNN AST-Dec GTS SAU-Solver TSN|BiLSTM BiLSTM GRU GRU GRU|LSTM TreeLSTM TreeDecoder TreeDecoder TreeDecoder|- - - - -|(Wang et al., 2019) (Liu et al., 2019a) (Xie and Sun, 2019) (Qin et al., 2020) (Zhang et al., 2020a)|
|Graph2Tree|Graph2Tree MulltiE&D|LSTM+GCN GRU+GCN|TreeDecoder GRU|- -|(Zhang et al., 2020b) (Shen and Jin, 2020)|
|Pretrained based|BERTGen RoBERTaGen GPT-2|BERT RoBERTa -|Transformer Transformer Transformer|BERT RoBERTa GPT-2|(Devlin et al., 2018) (Liu et al., 2019b) (Radford et al., 2019)|
Table 3: The implemented models in MWPToolkit. Currently, the toolkit includes four types of models: Seq2Seq,
Seq2Tree, Graph2Tree, and Pretrained models.
**2.4** **Evaluation Component**
Our toolkit standardizes the evaluation of MWP
solving models with Equation Accuracy and An_swer Accuracy for single equation generation or_
multiple equations generations. Equations accuracy is computed by measuring the exact match
of predicted equations and ground-truth equations.
For answer accuracy, we first check the validation
of predicted equations. The answer accuracy is 0
if the predicted equations are invalid or unsolvable.
We then calculate the answer using our encapsulated calculation module and compare it with the
ground-truth answer. If their difference is less than
1e − 5, we regard it as 1 and 0 otherwise.
**2.5** **Execution**
On the top of above components, we implement
training and testing paradigms, where two options
are provided. One is to follow the standard traintest splitting if the splitting data is given in the
original dataset. Another is to conduct k-fold
cross-validation. To improve the performance, we
also implement a series of hyper-parameter search
strategies, such as beam search, greedy search, sampling strategy.
**3** **Usage of MWPToolkit**
This section shows how users run the existing models and incorporate new models with our toolkit.
**3.1** **Running Existing Models**
Figure 2 shows the procedure of running an existing
model in MWPToolkit. Firstly, users need a con
figuration file to set up the experimental environment. In the configuration file, users should specify an existing model, a dataset, a task, and other
hyper-parameters regarding the model and training. The class Configure() loads all information on
configuration for the subsequent steps. Then, the
toolkit preprocesses data and organizes the dataset
by calling the function create dataset(). Based on
the processed dataset, users can use the function
_create dataloader() to convert data to the tensor_
format for training, validation, and test with the
specified batch size and other hyper-parameters
like the maximum length of the input sequence.
Later, the function get model() is used to get access to the model that the users would like to run.
Next, users can employ the function get trainer()
to build an executable MWP solver based on the
dataloader, model, and specified task obtained in
previous steps. Eventually, users run the function
_trainer.fit() to start the training and evaluation._
**3.2** **Developing New MWP Sovlers**
MWPToolkit is an extensible and easy-to-use
framework. It is convenient for users to add a new
MWP solving model or a new benchmark dataset
into MWPToolkit by filling up the specified interfaces. In the following, we present the details of
how to add a new dataset and model.
**3.2.1** **Add a New Dataset**
To add a new dataset, users need to inherit the
abstract class AbstractDataset and are required
to fill in the functions: _init (),_ _load data(),_
-----
#initialize config
config = Config()
#initialize dataset
dataset = create_dataset(config)
#data preprocess
folds = dataset.cross_validation_load(config["k_fold"])
for fold_t in folds:
#initialize dataloader
dataloader = create_dataloader(config)(config, dataset)
#initialize model
model = get_model(config["model"])(config, dataset)
#initialize evaluator
evaluator = get_evaluator(config)
#initialize trainer and start training
trainer = get_trainer(config)(config, model, dataloader, evaluator)
trainer.fit()
Get Configuration
Build Dataset
Preprocess
Build Dataloader
Build Model
Build Evaluator
Build Trainer
Start Training
#initialize config
config = Config()
#initialize dataset
dataset = create_dataset(config)
#data preprocess
dataset.dataset_load()
#initialize dataloader
dataloader = create_dataloader(config)(config, dataset)
#initialize model
model = get_model(config["model"])(config,dataset)
#initialize evaluator
evaluator = get_evaluator(config)
#initialize trainer and start training
trainer = get_trainer(config)(config, model, dataloader, evaluator)
trainer.fit()
(a) (b) (c)
Figure 2: Examples of how to use our MWPToolkit. Figure (a) illustrates the code of running models using the
train-test split. Figure (b) is about the usage flow of the toolkit. Figure (c) shows the code of running models using
_k-fold cross-validation._
_preprocess(), and build vocab()._ _init () is used_
to set up parameters of the dataset. load data() is
used to load the entire raw dataset or split training,
validation, and test sets. The function preprocess()
is used to process the raw dataset and prepare the
processed dataset for later usage in other modules.
_build vocab() is used to build shared or separate_
vocabulary dictionaries for the encoder and decoder.
The users can fill in the above required implemented functions to create a customized dataset
class quickly.
**3.2.2** **Add a New Model**
To add a new model, users need to complete three
functions in a new model class: _init (), calcu-_
_late loss(), and model test()._ _init () is used to_
build the model and initialize the parameters. calcu_late loss() is used to calculate the loss for training_
based on the model prediction and ground-truth
equations. model test() prepares suitable evaluation protocols and is executed for evaluation of
model performance on a specified dataset and task.
**4** **Performance Comparison**
To evaluate the models in MWPToolkit, we conduct extensive experiments to compare 17 MWP
solving models on 4 widely-used single equation
generation benchmark datasets and 2 multiple equations generation benchmarks. In our experiments,
if models have been evaluated on certain datasets,
we run models with the parameter configurations
described in their original papers. Otherwise, we
run hyper-parameter search to search a group of
hyper-parameters for these models. In the following sections, we discuss the detailed performance
comparison.
**4.1** **Single Equation Generation**
Table 4 displays the results of models on single
equation generation datasets. We include four
datasets for the single equation generation task,
i.e., Math23k, MAWPS-s, ASDiv-a, and SVAMP.
We can see from Table 4. We report three types
of results for a model on a dataset. The first two
columns are equation accuracy (Equ. Acc) and answer accuracy (Ans. Acc). The third column is the
results reported in the original papers under their
settings, such as train-test split (* means train-test
split) or 5-fold cross-validation. Note that for any
models, the results based on 5-fold cross-validation
are less than those based on train-test split because
the number of training examples of 5-fold crossvalidation is smaller than those in train-test split.
As shown in table 4, answer accuracy in the k-fold
cross-validation setting by our MWPToolkit are
either better than original answer accuracy or close
to them. Through our experiments, we can observe
that Graph2Tree and RoBERTaGen are the most
effective baselines, which means it is potential for
researchers to develop better models based on these
two model categories.
**4.2** **Multiple Equations Generation**
We add another special token < bridge > to convert equation forest to a tree for the multiple equations generation task. Hence it can be treated as
the single equation generation task. We apply the
17 models on two multiple equation generation
datasets, i.e., DRAW1K and HMWP. Their results
are shown in Table 5. As we can observe in ta
-----
|Model|Datasets|Col3|Col4|Col5|
|---|---|---|---|---|
||Math23K Equ. Acc Ans. Acc OA Acc|MAWPS-s Equ. Acc Ans. Acc OA Acc|ASDiv-a Equ. Acc Ans. Acc OA Acc|SVAMP Equ. Acc Ans. Acc OA Acc|
|DNS MathEN Saligned GroupAtt AttSeq LSTMVAE Transformer|57.1 67.5 58.1 66.7 69.5 66.7* 59.1 69.0 65.8 56.7 66.6 66.9|78.9 86.3 59.5 85.9 86.4 69.2 86.0 86.3 - 84.7 85.3 76.1|63.0 66.2 - 64.3 64.7 - 66.0 67.9 - 59.5 61.0 -|22.1 24.2 - 21.8 25.0 - 23.9 26.1 - 19.2 21.5 -|
||57.1 68.7 59.6 59.0 70.0 - 52.3 61.5 62.3*|79.4 87.0 79.7 79.8 88.2 - 77.9 85.6 -|64.2 68.3 55.5 64.0 68.7 - 57.2 59.3 -|23.0 25.4 24.2 23.2 25.9 - 18.4 20.7 -|
|TRNN AST-Dec GTS SAU-Solver TSN|65.0 68.1 66.9* 57.5 67.7 69.0* 63.4 74.2 74.3 64.6 75.1 74.8 63.8 74.4 75.1|86.0 86.5 66.8 84.1 84.8 - 83.5 84.1 82.6 83.4 84.0 - 84.0 84.7 84.4|68.9 69.3 - 54.5 56.0 - 67.7 69.9 71.4 68.5 71.2 - 68.5 71.0 -|22.6 26.1 - 21.9 24.7 - 25.6 29.1 30.8 27.1 29.7 - 25.7 29.0 -|
|Graph2Tree MultiE&D|64.9 75.3 75.5 65.5 76.5 76.9|84.9 85.6 83.7 83.2 84.1 -|72.4 75.3 - 70.5 72.6 -|31.6 35.0 36.5 29.3 32.4 -|
|BERTGen RoBERTaGen GPT-2|64.8 76.6 - 65.2 76.9 - 63.8 74.3 -|79.0 86.9 - 80.8 88.4 - 75.4 75.9 -|68.7 71.5 - 68.7 72.1 - 59.9 61.4 -|22.2 24.8 - 27.9 30.3 - 22.5 25.7 -|
Table 4: Performance comparisons of different methods on single equation generation task. “Equ. Acc” is equation
accuracy. “Ans. Acc” stands for answer accuracy. “OA Acc” means original answer accuracy in previous papers.
“*” means train-test split.
and TexBox (Li et al., 2021) for text generation
tasks, ExplainaBoard (Liu et al., 2021) for evaluating interpretable models, Photon (Zeng et al.,
2020) for text-to-SQL tasks, and Huggingface’s
Transformers (Wolf et al., 2020) for model pretraining. To our best knowledge, there is no such a
unified and comprehensive framework for MWPs
solving task. Therefore, we release MWPToolkit,
which includes a considerable number of benchmark datasets and deep learning-based solvers.
Recently, a large number of new MWP solving methods have been proposed, including graph
neural networks based (Li et al., 2020), template
based (Lee et al., 2021), neural symbolic (Qin
et al., 2021), pre-trained based (Liang et al., 2021),
multilingual pre-trained based methods (Tan et al.,
2021), and solvers using external information and
signals (Liang and Zhang; Wu et al., 2021). In addition, weakly supervised learning for MWP solving (Hong et al., 2021; Chatterjee et al., 2021) and
supervised learning for geometric problem solving (Lu et al., 2021; Chen et al., 2021) have recently
attracted much researchers’ attention. More work
on math word and geometric problem solving can
be found in the survey paper (Zhang et al., 2019).
We will update more above methods to the toolkit
in the future.
**6** **Conclusion**
This paper presented an extensible, modularized,
and easy-to-use toolkit, MWPToolkit, the first
open-source framework for solving MWPs. In our
MWPToolkit, we decompose the procedure of existing MWP methods into multiple components and
|Model|Datasets|Col3|
|---|---|---|
||Draw1K Equ. Acc Ans. Acc OA Acc|HMWP Equ. Acc Ans. Acc OA Acc|
|DNS MathEN Saligned GroupAtt AttSeq LSTMVAE Transformer|35.8 36.8 - 38.2 39.5 - 36.7 37.8 - 30.4 31.4 -|24.0 32.7 - 32.4 43.7 - 31.0 41.8 - 25.2 33.2 -|
||39.7 41.2 - 40.9 42.3 - 27.1 28.3 -|32.9 44.7 - 33.6 45.9 - 24.4 32.4 -|
|TRNN AST-Dec GTS SAU-Solver TSN|27.4 28.9 - 26.0 26.7 - 38.6 39.9 - 38.4 39.2 - 39.3 40.4 -|27.2 36.8 - 24.9 32.0 - 33.7 44.6 - 33.1 43.7 44.8 34.3 44.9 -|
|Graph2Tree MultiE&D|39.8 41.0 - 38.1 39.2 -|34.4 45.1 - 34.6 45.3 -|
|BERTGen RoBERTaGen GPT-2|33.9 35.0 - 34.2 34.9 - 30.7 31.5 -|29.2 39.5 - 30.6 41.0 - 36.3 49.0 -|
Table 5: Performance comparison of different methods
on multiple equations generation task. “Equ. Acc” is
equation accuracy. “Ans. Acc” stands for answer accuracy. “OA Acc” means original answer accuracy in
previous papers.
ble 5, to our surprise, LSTMVAE achieves the best
performance on Draw1K, and GPT-2 achieves the
best performance on HMWP. Most researchers focus on improving performance on the single equation generation task, while few researchers develop
models on the multiple equation generation task in
the MWP solving community. We hope the results
shown in table 5 can help researchers develop more
powerful and effective models for solving MWPs
with multiple equations as their generation targets.
**5** **Related Work**
In NLP community, there have been a number of
toolkits that managed to summarize the existing
methods and establish a unified framework for a
certain task, such as OpenNMT (Klein et al., 2017)
-----
decouple their models into highly reusable modules. We also provide a hyper-parameter search
function for a fairer comparison. Furthermore, we
implement and compare 17 MWP solving models, including Seq2Seq, Seq2tree, Graph2Tree,
and commonly-used non-pretrained models and
pretrained models, on 4 widely-used single equation generation benchmark datasets and 2 multiple equations generation benchmarks. These features enable our MWPToolkit to be suitable for
researchers to reproduce reliable baseline models
and develop new MWP solving methods quickly.
In the future, we will continue to add
more benchmark datasets, the latest published
MWP solvers, and commonly-used models into
MWPToolkit as the regular update. We welcome
more researchers and engineers to join, develop,
maintain, and improve this toolkit to push forward
the development of the research on MWPs solving.
**Acknowledgement**
The authors would like to thank everyone who has
contributed to make MWPToolkit a reality. Thanks
to the TextBox (Li et al., 2021), CRSLab (Zhou
et al., 2021), and RecBole (Zhao et al., 2020) for
such elegant and easy-to-use libraries. We refer to
these libraries and learn a lot from them.
**References**
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. _arXiv preprint_
_arXiv:1409.0473._
D. Bobrow. 1964. Natural language input for a computer problem solving system. pages 146–226.
Oishik Chatterjee, Aashish Waikar, Vishwajeet Kumar,
Ganesh Ramakrishnan, and Kavi Arya. 2021. A
weakly supervised model for solving math word
problems. arXiv preprint arXiv:2104.06722.
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan
Liang, Lingbo Liu, Eric P Xing, and Liang Lin.
2021. Geoqa: A geometric question answering
benchmark towards multimodal numerical reasoning. arXiv preprint arXiv:2105.14517.
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-aligned equation generation for
solving and reasoning math word problems. In
_NAACL-HLT (1)._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Edward A. Feigenbaum and Julian Feldman. 1963.
_Computers and Thought. McGraw-Hill, Inc., New_
York, NY, USA.
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, and
Song-Chun Zhu. 2021. Learning by fixing: Solving math word problems with weak supervision. In
_AAAI Conference on Artificial Intelligence._
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers
solve math word problems? large-scale dataset construction and evaluation.
Bugeun Kim, Kyung Seo Ki, Donggeon Lee, and Gahgene Gweon. 2020. Point to the expression: Solving algebraic word problems using the expressionpointer transformer model. In Proceedings of the
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 3768–3779._
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Opennmt: Opensource toolkit for neural machine translation. arXiv
_preprint arXiv:1701.02810._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
MAWPS: A math word problem repository. In
_NAACL, pages 1152–1157._
Nate Kushman, Luke Zettlemoyer, Regina Barzilay,
and Yoav Artzi. 2014. Learning to automatically
solve algebra word problems. In ACL, pages 271–
281.
Donggeon Lee, Kyung Seo Ki, Bugeun Kim, and
Gahgene Gweon. 2021. Tm-generation model: a
template-based method for automatically solving
mathematical word problems. The Journal of Super_computing, pages 1–17._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
Bing Tian Dai, and Dongxiang Zhang. 2019. Modeling intra-relation in math word problems with different functional multi-head attentions. In Proceedings
_of the 57th Annual Meeting of the Association for_
_Computational Linguistics, pages 6162–6167._
Junyi Li, Tianyi Tang, Gaole He, Jinhao Jiang, Xiaoxuan Hu, Puzhao Xie, Zhipeng Chen, Zhuohao Yu,
Wayne Xin Zhao, and Ji-Rong Wen. 2021. TextBox:
A unified, modularized, and extensible framework
for text generation. In Proceedings of the 59th An_nual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Con-_
_ference on Natural Language Processing: System_
_Demonstrations, pages 30–39. Association for Com-_
putational Linguistics.
Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu,
Fengyuan Xu, and Sheng Zhong. 2020. Graph-totree neural networks for learning structured inputoutput translation with applications to semantic
parsing and math word problem. _arXiv preprint_
_arXiv:2004.13781._
-----
Zhenwen Liang, Jipeng Zhang, Jie Shao, and Xiangliang Zhang. 2021. Mwp-bert: A strong baseline for math word problems. _arXiv preprint_
_arXiv:2107.13435._
Zhenwen Liang and Xiangliang Zhang. Solving math
word problems with teacher supervision.
Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan,
Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, and Graham Neubig. 2021. Explainaboard:
An explainable leaderboard for nlp. arXiv preprint
_arXiv:2104.06387._
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke
Kawahara. 2019a. Tree-structured decoding for
solving math word problems. In Proceedings of
_the 2019 Conference on Empirical Methods in Nat-_
_ural Language Processing and the 9th International_
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 2370–2379._
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
Huang, Xiaodan Liang, and Song-Chun Zhu. 2021.
Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. arXiv
_preprint arXiv:2105.04165._
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing english math word problem solvers. In ACL.
Arkil Patel, S. Bhattamishra, and Navin Goyal. 2021.
Are nlp models really able to solve simple math
word problems? In NAACL.
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng
Tang, and Liang Lin. 2021. Neural-symbolic solver
for math word problems with auxiliary tasks. arXiv
_preprint arXiv:2107.01431._
Jinghui Qin, Lihui Lin, Xiaodan Liang, Rumin Zhang,
and Liang Lin. 2020. Semantically-aligned universal tree-structured solver for math word problems.
_arXiv preprint arXiv:2010.06823._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
_OpenAI blog, 1(8):9._
Yibin Shen and Cheqing Jin. 2020. Solving math word
problems with multi-encoders and multi-decoders.
In Proceedings of the 28th International Conference
_on Computational Linguistics, pages 2924–2934._
Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing
Jiang. 2021. Investigating math word problems using pretrained multilingual language models. arXiv
_preprint arXiv:2105.08928._
Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and
dataset for algebra word problems. In EACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. In Advances in neural information pro_cessing systems, pages 5998–6008._
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math
word problem to an expression tree. arXiv preprint
_arXiv:1811.05632._
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI
_Conference on Artificial Intelligence, volume 32._
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers
with recursive neural networks. In Proceedings of
_the AAAI Conference on Artificial Intelligence, vol-_
ume 33, pages 7144–7151.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854.
Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam
Shleifer, et al. 2020. Transformers: State-of-theart natural language processing. In Proceedings of
_the 2020 Conference on Empirical Methods in Nat-_
_ural Language Processing: System Demonstrations,_
pages 38–45.
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuan-Jing
Huang. 2020. A knowledge-aware sequence-to-tree
network for math word problem solving. In Proceed_ings of the 2020 Conference on Empirical Methods_
_in Natural Language Processing (EMNLP), pages_
7137–7146.
Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuanjing
Huang. 2021. Math word problem solving with explicit numerical values. In ACL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V.
Le, Mohammad Norouzi, Wolfgang Macherey,
Maxim Krikun, Yuan Cao, Qin Gao, Klaus
Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws,
Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith
Stevens, George Kurian, Nishant Patil, Wei Wang,
Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes,
[and Jeffrey Dean. 2016. Google’s neural machine](http://arxiv.org/abs/1609.08144)
[translation system: Bridging the gap between human](http://arxiv.org/abs/1609.08144)
[and machine translation.](http://arxiv.org/abs/1609.08144)
-----
Zhipeng Xie and Shichao Sun. 2019. A goal-driven
tree-structured neural model for math word problems. In IJCAI, pages 5299–5305.
Jichuan Zeng, Xi Victoria Lin, Caiming Xiong,
Richard Socher, Michael R Lyu, Irwin King,
and Steven CH Hoi. 2020. Photon: A robust
cross-domain text-to-sql system. _arXiv preprint_
_arXiv:2007.15280._
Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and
[Min Zhang. 2016. Variational neural machine trans-](https://doi.org/10.18653/v1/D16-1050)
[lation. In Proceedings of the 2016 Conference on](https://doi.org/10.18653/v1/D16-1050)
_Empirical Methods in Natural Language Processing,_
pages 521–530, Austin, Texas. Association for Computational Linguistics.
Dongxiang Zhang, Lei Wang, Luming Zhang,
Bing Tian Dai, and Heng Tao Shen. 2019. The
gap of semantic parsing: A survey on automatic
math word problem solvers. _IEEE transactions_
_on pattern analysis and machine intelligence,_
42(9):2287–2305.
Jipeng Zhang, Ka Wei LEE, Ee-Peng Lim, Wei Qin,
Lei Wang, Jie Shao, Qianru Sun, et al. 2020a.
Teacher-student networks with multiple decoders for
solving math word problem.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graph-totree learning for solving math word problems. Association for Computational Linguistics.
Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan
Lin, Kaiyuan Li, Yushuo Chen, Yujie Lu, Hui
Wang, Changxin Tian, Xingyu Pan, et al. 2020.
Recbole: Towards a unified, comprehensive and efficient framework for recommendation algorithms.
_arXiv preprint arXiv:2011.01731._
Kun Zhou, Xiaolei Wang, Yuanhang Zhou, Chenzhan
Shang, Yuan Cheng, Wayne Xin Zhao, Yaliang Li,
and Ji-Rong Wen. 2021. Crslab: An open-source
toolkit for building conversational recommender system. arXiv preprint arXiv:2101.00939.
-----
| [
"Yunshi, Lan",
"Dongxiang, Zhang",
"Lei, Wang",
"Yan, Wang",
"Yihuai, Lan",
"Qiyuan, Zhang",
"Ee-Peng, Lim",
"Bing Tian, Dai"
] | 2021-09-17T00:00:00 | AAAI 2022 Demonstration Track | false | 67 | 9 | null | http://arxiv.org/abs/2109.00799 | https://arxiv.org/abs/2109.00799 | https://www.semanticscholar.org/paper/934bdb14923510a2dedd3682a3f2c89f7e1ab364 |
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning | The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets---GSM8K, MathQA, and MATH---and find that it successfully recognizes errors and, in turn, increases final answer accuracies. | This work proposes SelfCheck, a general-purpose zero-shot verification schema for recognizing errors in large language models and uses the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. | ## SELFCHECK: USING LLMS TO ZERO-SHOT CHECK
### THEIR OWN STEP-BY-STEP REASONING
**Ning Miao[1*]** **Yee Whye Teh[1]** **Tom Rainforth[1]**
ABSTRACT
The recent progress in large language models (LLMs), especially the invention of
chain-of-thought prompting, has made it possible to automatically answer questions
by stepwise reasoning. However, when faced with more complicated problems that
require non-linear thinking, even the strongest LLMs make mistakes. To address
this, we explore whether LLMs are able to recognize errors in their own step-bystep reasoning, without resorting to external resources. To this end, we propose
SelfCheck, a general-purpose zero-shot verification schema for recognizing such
errors. We then use the results of these checks to improve question-answering
performance by conducting weighted voting on multiple solutions to the question.
We test SelfCheck on three datasets—GSM8K, MathQA, and MATH—and find
that it successfully recognizes errors and, in turn, increases final answer accuracies.
INTRODUCTION
Recent years have witnessed dramatic changes in the areas of NLP and AI brought on by significant
advances in LLMs. From GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), Llama (Touvron et al., 2023) and Falcon (Almazrouei et al., 2023) to GPT-4 (OpenAI, 2023) and PaLM-2 (Google,
2023), the increasing model sizes and exploding amount of training data have empowered LLMs to
achieve human-level performance on a large range of tasks, including summarization, translation,
and question answering. The invention of Chain-of-Thought prompting (CoT, Wei et al. (2022)) has
further enhanced LLMs’ ability to solve complex problems by generating step-by-step solutions.
However, the performance of even the largest LLMs is still unsatisfactory on more difficult reasoning
problems. For example, GPT-4 with CoT prompting only correctly answers 42.5% of problems in the
MATH dataset (Bubeck et al., 2023; Hendrycks et al., 2021), which is far below human level. Such
problems require careful and extensive multi-step reasoning to solve, and LLMs are consequently
prone to make mistakes: even though their error rate on individual steps may be low, the probability
of generating at least one erroneous step can still be quite high, undermining the final answer.
Recent works have tried to overcome this limitation by checking for errors in these step-by-step
solutions (Cobbe et al., 2021; Li et al., 2022; Ling et al., 2023). Such checks can then be used to
provide confidence scores in answers and select between different possible alternatives. This checking
has typically been performed either by using an external verification model (Cobbe et al., 2021; Lyu
et al., 2023; Peng et al., 2023), or through few-shot in-context learning (Brown et al., 2020) of an
LLM (Weng et al., 2022; Ling et al., 2023).
Unfortunately, existing methods generally require extra training data and/or domain-specific exemplars, which often makes them inconvenient to use in practice and restricts them to specific domains
or data formats. The aim of our work is thus to instead provide a general-purpose, zero-shot, approach
to checking that relies only on the original LLM, without the need for additional external resources.
To this end, we introduce SelfCheck, a zero-shot step-by-step checker for self-identifying errors in
LLM reasoning chains. SelfCheck uses the LLM to individually check the conditional correctness of
each step in the chain based on the preceding steps, in a manner similar to a human going back to
check their working. The results of these individual checks are then integrated to form an overall
correctness estimation for the whole reasoning chain.
Key to SelfCheck’s success is a novel mechanism for performing the checking of individual steps. As
we will show, the naive approach of directly asking the LLM to check a step is typically ineffective.
Instead, we introduce a multi-stage approach that breaks the problem down into a series of simpler
1Department of Statistics, University of Oxford. *Email: <[email protected]>.
-----
tasks, leverages the generative strengths of the LLM, and decorrelates errors between the original
generation and checking. Specifically, using separate calls to the LLM we first extract the target and
relevant context for the step, then regenerate an independent alternative step from these, and finally
compare the two. The original step is then deemed to pass the check if it matches the regeneration.
Besides providing an estimation of correctness for each solution, SelfCheck can also boost final
answer accuracies for the original questions by weighted voting. Namely, given multiple solutions to
a question, it uses confidence scores as weights to vote among the answers, which provides a soft
way to focus on more accurate solutions.
We evaluate SelfCheck on three math tasks, namely GSM8K (Cobbe et al., 2021), MathQA (Amini
et al., 2019), and MATH (Hendrycks et al., 2021). For all datasets, we find that using SelfCheck
achieves a significant increase in final answer accuracies compared with simple majority voting and
other baselines. We also see that SelfCheck provides an accurate confidence estimation for LLM’s
solutions, which decreases the proportion of incorrect solutions by 9%, 22.8%, and 16.2% on the three
datasets respectively when filtering out solutions with low confidence scores. We further perform a
number of ablations to justify some of our key design choices in the SelfCheck approach.
To summarize, we introduce SelfCheck as a novel and effective zero-shot schema for self-checking
step-by-step reasoning in LLMs. Unlike previous methods, SelfCheck does not need any finetuning or
example crafting, so can be directly applied to reasoning tasks in different domains. Our experiments
confirm that it can, in turn, be used to improve final predictive performance of LLMs. Our code is
[available at https://github.com/NingMiao/SelfCheck.](https://github.com/NingMiao/SelfCheck)
2 RELATED WORK
How to automatically check the correctness of a sequence of reasoning steps is a long-standing
question. We now discuss how previous methods have tried to tackle this in an LLM context. We note
that none of these works are able to work in the zero-shot setting covered by SelfCheck, requiring
either problem-specific examples, an external model, and/or finetuning.
**Few-shot verification** Though our focus will be on zero-shot checking, for some problems one
may have hand-crafted exemplars available that are specifically designed to that particular questionanswering task. Previous methods have been designed to perform checking of LLMs’ generated
solutions in this few-shot checking scenario.
For example, the Self-Verification (SV) approach of Weng et al. (2022) verifies the whole solution by
backward prediction. That is, it uses the conclusion from CoT reasoning to predict a masked condition
in the question. However, it only supports single-step checking and is based on the assumption that
every piece of information in the question can be recovered using a correct solution of it, which is
often not the case. Consequently, it is only applicable to simpler tasks, such as GSM8K.
The Deductive Verification (DV) approach of Ling et al. (2023) instead looks to verify independent
sub-tasks, as per SelfCheck. However, its verifier only supports checking reasoning chains in a special
format called Natural Programs. As a result, it can only work with a specific specialised generator,
without serving as a general verifier for multi-step reasoning.
**Verification with external resources** In some cases, there might be external resources available to
verify the logical correctness or faithfulness of LLM outputs. Lyu et al. (2023) translate a question
into a symbolic reasoning chain using an LLM and solve the problem by a symbolic logic solver.
Peng et al. (2023) introduced an external database to check for incorrect knowledge in LLM outputs.
These methods are limited by the availability of external resources and are typically restricted to
checking for certain types of errors.
**Training/finetuning a verifier** A few other methods train or finetune a separate verifier model
to check reasoning chains. Cobbe et al. (2021) finetuned a GPT-3 model on GSM8K to predict the
correctness of a solution as a whole. Li et al. (2022) trained a binary deberta-v3-large (He et al., 2020)
classifier on each domain to predict step correctness. More recently, Lightman et al. (2023) built a
large dataset, which contains step-wise correctness labels from human labelers, and finetuned a GPT-4
model on it. Unlike SelfCheck, all of these methods require extra data and external computational
resources, restricting their applicability and ease of use.
-----
Figure 1: Example of using SelfCheck, focusing on the checking of a particular step (Step 5). To
check the correctness of the step, SelfCheck goes through 4 stages. First, in the target extraction stage,
it figures out that the main purpose of Step 5 is to complete the square. In the information collection
stage, it then establishes that Step 5 only directly relies on Step 4. Next, the step regeneration
stage instructs the LLM to complete the square independently, only using Step 4 as context. The
regeneration result shows that the center and radius of the circle are (3, 0) and 3, which is different
from what is implied by the original Step 5. Consequently, the result comparison stage concludes that
Step 5 is likely to be wrong. After checking all the steps, SelfCheck integrates the results to form an
overall confidence score, w. See Appendix A for a complete version of the example.
3 SELFCHECK: USING LLMS TO CHECK THEIR OWN REASONING
Rather than relying on external resources or problem-specific data like the aforementioned approaches,
it would be highly beneficial if we could develop self-contained checking schemes that require only
the original LLM itself. In other words, we would like to use the LLM to identify errors in its own
step-by-step reasoning, analogously to how a human might go back to check their working.
Unfortunately, directly asking the LLM to check its own reasoning is largely ineffective: it almost
invariably declares that the original answer is correct, with Ling et al. (2023) finding answers checked
in this way are deemed correct more than 90% of the time regardless of whether they actually are. As
we will show in Section 5, individually prompting the LLM to check each step in the CoT reasoning
fares slightly better, but is still only able to offer marginal gains compared to not checking at all.
A more nuanced method to perform this checking is thus required. To this end, we introduce
**SelfCheck, a general-purpose, zero-shot, checking schema for self-identifying errors in LLM CoT**
reasoning. Given a question, q, and its step-by-step solution, s, produced by some generator (which
will generally be an LLM with appropriate CoT prompting), SelfCheck considers each step of s in
turn and tries to establish its individual correctness based on the preceding steps. This checking is
done by leveraging an LLM (which can either be the same LLM used to generate s or a separate
one), but rather than directly asking the LLM to perform the check, we instead introduce a novel step
checking method (see Section 3.1) that exploits their generative modeling strengths. The results of
the checks on individual steps are then combined into a single confidence score, w ∈ [0, 1], for the
whole solution. These confidence scores, in turn, allow us to improve predictive performance, by
using them to perform weighted voting on multiple solutions to the same question.
-----
3.1 STEP CHECKING
To check individual steps of the reasoning process, the first thing we should note is that the correctness
of each step is highly dependent on its context, namely the question and previous steps in the solution.
For example, we usually need to refer to previous steps for the definition of variables and the meaning
of specific numbers. If each step is conditionally correct based on the provided context and the last
step provides an answer in the required format, then the overall reasoning will itself be correct. The
target of the step checking is thus simply to check the conditional correctness of each step based on
the provided context. That is, we only care about catching errors at the current step, and can assume
all information from its context to be correct.
A simple idea to try and achieve this would be to feed the current step as well as all its context to an
LLM and directly ask it to ‘check the correctness of the step’. However, in practice, we find that this
task is too difficult for the LLM to do effectively, even with careful prompting that exemplifies how
to do the checking in detail (see Section 5). This difficulty comes first from the fact that there are
multiple aspects to the checking problem that the checker must deal with simultaneously: it needs to
understand the key content in the step and then collect all related information from the context, before
actually checking for its correctness. Second, ‘checking’ is a less common task in the training corpus
of most LLMs, such that it is a problem that does not necessarily play to their strengths. Finally, there
are likely to be strong correlations between the errors such a checker will make with the errors made
in the original generation, undermining its usefulness.
To address these difficulties, SelfCheck instead decomposes the checking task for each step into four
stages: target extraction, information collection, step regeneration, and result comparison. The LLM
is used to execute each stage successively, with the outcome of the result comparison providing the
correctness prediction.
The idea behind this decomposition is to make the LLM focus on an easier task at each stage and
ensure the individual tasks carried out are more closely aligned to the LLM’s strengths. Moreover, by
focusing on regenerating and then comparing, we hope to reduce the correlations between the errors
of the checking and the original generation.
At a high level, the stages work by first prompting the LLM to figure out the target of the current
step and what information it uses to achieve the target; we find that the LLM is usually able to
perform these tasks extremely accurately. Then we ask the LLM to re-achieve the target using only
the collected information, providing an alternative to the original step that maintains the same purpose
in the overall reasoning process. Here the clear description of the target and the simplified context we
provide make the regeneration stage less challenging. As a result, we hope its output will be more
reliable and thus serve as a useful reference. Even if this is not the case, it will still hopefully provide
a viable alternative, with a distinct generation, that can be used for comparison. The last stage then
uses the LLM to compare the original step with the regenerated output. If their main conclusions
match/mismatch, this provides evidence that the original step was correct/incorrect.
A worked example of this step-checking process is provided in Figure 1. In the following, we describe
each of the subtasks in detail and provide our specific instructions to the LLM. We note here that
the different LLM queries are made independently, rather than keeping the queries and answers
from previous stages in context. Thus, for example, when the LLM is called to carry out the step
regeneration, it does not have access to the original generation. The same prompts are used across
LLMs and datasets, thereby providing a general-purpose approach.
**Target extraction** To check a step (for example, Step 5 in Figure 1), we first need to figure out
what the step is trying to achieve. Without a specific target, the regeneration stage would proceed in
a random direction, making it impossible to serve as a reference to the original step. We thus use
the LLM itself to extract the target of a step using the question and all previous steps (Steps 0-4 in
Figure 1) with the following prompt (we omit some line breaks due to space limitations):
_The following is a part of the solution to the problem [Question]: [Step 0,..., Step i]. What specific action_
_does the step [Step i] take? Please give a brief answer using a single sentence and do not copy the steps._
During execution, we copy the question and steps into [Question] and [Step 0, ..., Step i] to form
the actual input to the LLM. The reason for requesting a brief answer is to try and keep the amount
of information retained to the minimum needed, thereby avoiding unnecessary influence on the
regeneration and hopefully reducing correlations in errors in turn.
-----
**Information collection** To reduce the difficulty of the regeneration stage and avoid unrelated
information from affecting the result, we filter out information that is not directly related to the
current step. Specifically, we ask the LLM to select useful items from the question and all previous
items with the following prompt, where [Information j] is simply the j-th sentence in the question:
_This is a math question: [Question]. The following is information extracted from the question:_
_Information 0: [Information 0]_ _Information 1: [Information 1]_ _..._
_The following are the first a few steps in a solution to the problem:_
_Step 0: [Step 0]_ _Step 1: [Step 1]_ _..._ _Step i-1: [Step i-1]_
_Which previous steps or information does the next step [Step i] directly follow from?_
After retrieving the free-text response from the LLM, we extract step or information ids by regular
expression. For example in Figure 1, the current step requires Step 4 and no information from the
question as context. The selected steps and information are then fed into the regeneration stage.
**Step regeneration** Given the target and necessary information of the step, we can now ask the
LLM to achieve the target independently with only the collected information, without seeing the
original step. Because the step is usually a small jump from previous conclusions, and the information
collection stage has already filtered out irrelevant information, we can usually trust regeneration
results. The prompt for this stage is:
_We are in the process of solving a math problem. We have some information from the problem:_
_Information 0: [Information I0]_ _Information 1: [Information I1]_ _..._
_The following are some previous steps:_ _Step 0: [Step S0]_ _Step 1: [Step S1]_ _..._
_The target for the next step is:_ _[Target]_
_Please try to achieve the target with the information from the problem or previous steps._
Here [Target] is the output from the target extraction stage. [Information Ii] and [Step Si] correspond
to the specific items selected by the information collection stage. In Figure 1, only Step 4 and no
information from the question is directly related to the current step, so SelfCheck simply copies the
content of Step 4 into [Step S0] and removes the block containing [Information Ii].
**Result comparison** The last step is to compare results from the regeneration stage and the original
step with the following prompt:
_The following are 2 solutions to a math problem._ _Solution 1: [Regeneration output]_ _Solution 2: [Step i]_
_Compare the key points from both solutions step by step and then check whether Solution 1 ‘supports’,_
_‘contradicts’ or ‘is not directly related to’ the conclusion in Solution 2. Pay special attention to the difference_
_in numbers._
If the regeneration output ‘supports’/‘contradicts’ the original step, we can conclude that the original
step is likely correct/incorrect respectively. Sometimes, the correctness of the original step cannot be
directly inferred from the regeneration output. For example, when the target is to simplify an equation,
then there may be multiple valid solutions. In such cases, we are not sure about the correctness of the
original step, which makes ‘is not directly related to’ the third possible outcome of the check.
3.2 RESULTS INTEGRATION
After running step-checking and getting a checking result for each step, we need an integration
function ϕ to give a confidence score, w ∈ [0, 1], for the overall correctness of the solution. The input
of ϕ should be a vector in the form of [r0, r1, ..., rn], where each item ri represents the step checking
result for Step i. We will use ri = −1, 0, and 1 to represent the step-checking results ‘contradict’,
‘is not directly related to’ and ‘support’ respectively. We find that the following simple integration
function works well in practice
_w = ϕ([r0, r1, ..., rn]) = 2_ Sigmoid
_∗_
_i=0_ 1ri=−1 − _λ0_
X
1
_ri=0_
_i=0_
X
(1)
_−λ−1_
where λ 1 and λ0 are two non-negative hyperparameters with λ 1 > λ0; we fix λ 1 = 1 and
_−_ _−_ _−_
_λ0 = 0.3 in our experiments. The rationale of this setup is that the more failed checks we see,_
the more likely the overall reasoning process, and thus final solution, are wrong. Note here that,
because the checks are themselves imperfect, we do not necessarily want to immediately reject the
whole solution from a single step-check failure, especially for ri = 0 cases. This is why we take
a ‘soft’ approach to the verification with a confidence score. The number of successful checks,
i.e. _i=0_ [1][r]i[=1][, is deliberately not included in our integration function as an increased number of]
[P][n]
-----
successful checks does not actually increase our confidence in the overall solution: shorter reasoning
chains are generally preferable to longer ones for a given question and LLM.
Once calculated, the resulting confidence score can be directly used as a weight for voting between
different possible solutions. We can thus use SelfCheck to increase the accuracy of an LLM’s answers
by generating multiple possible solutions, calculating confidence scores for each, and then choosing
our final answer through weighted voting.
4 EXPERIMENTS
We now run experiments on three math-reasoning datasets to evaluate SelfCheck’s effectiveness in
checking multi-step reasoning and improving final answer accuracies. Note here that our focus on
math-reasoning problems is due to ease of performance evaluation and dataset availability; SelfCheck
is directly applicable to other question-answering problems with nominal changes to our prompts.
**Datasets** GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and MATH (Hendrycks et al.,
2021) consist of math problems on primary school, middle school, and competition levels, containing
1319, 2985, and 5000 test samples, respectively. For GSM8K and MathQA, we evaluate SelfCheck
on the whole test sets. Due to limited resources, we use a subset of MATH test set taken from Ling
et al. (2023).[1] Besides the levels of difficulty, the three datasets differ from each other in the following
aspects. Firstly, MathQA provides 5 options to choose from for each problem, while GSM8K and
MATH have no options. Secondly, GSM8K only has arithmetic problems, while MathQA and MATH
contain more diverse problems in geometry, physics, probability, and algebra.
**LLMs** We use GPT-3.5 (gpt-3.5-0301) and GPT-4 (gpt-4-0613) as our LLMs, focusing in particular
on the former due to budget restrictions. Note that the same prompts are used for all datasets with
both LLMs during evaluation; no dataset-specific customization or tuning has been performed. When
devising the prompts, a small number of training samples from MathQA dataset were utilized.
**Baselines** We use majority voting (also known as Self-Consistency Decoding (Wang et al., 2022)
in the context of CoT reasoning) as our main baseline following Ling et al. (2023) and Lightman
et al. (2023). Despite its simplicity, this is still quite a strong baseline in the current literature. In
particular, most existing few-shot methods report similar results compared with it (Weng et al., 2022;
Ling et al., 2023). We also compare with previously quoted results from Self Verification (SV, Ling
et al. (2023)) and Deductive Verification (DV, Weng et al. (2022)) when possible. We note though
that these approaches are not directly comparable to SelfCheck in general, as they require additional
exemplars which will often not be available in practice. Despite this, we will find that SelfCheck
outperforms them when comparisons are possible.
We omit results from Faithful-CoT (Lyu et al., 2023), because it has already been shown to decrease
the accuracies on GSM8K and MATH by 11.8% and 4.2%, respectively compared to majority
voting (Ling et al., 2023). It is also impossible for us to compare with training/finetuning based
methods such as Lightman et al. (2023), because we have neither access to their finetuned models nor
computation resources to repeat their training/finetuning. The significant extra data and resources
they require also means their contributions are somewhat tangential to SelfCheck regardless.
4.1 FINAL ANSWER CORRECTNESS
Figure 2 shows the performance gains using the confidence scores from SelfCheck to do weighted
voting compared with baseline methods. The upper plots show that accuracies of both SelfCheck and
majority voting have the same increasing tendency as the number of generated solutions per question
increases, which is a result of the variance reduction provided by averaging over more solutions.
The bottom plots show the difference in accuracy between the two including the standard error in
the estimate. We can see that by allocating higher weights to correct solutions, SelfCheck achieves
significantly higher accuracies than majority voting for all solution numbers per question. We also
find the improvements of SelfCheck (compared with majority voting) to be higher than Deductive
Verification and Self-Verification in their reported settings, despite the use of in-context learning
[1https://github.com/lz1oceani/verify_cot/tree/main/results/chatgpt3.5/](https://github.com/lz1oceani/verify_cot/tree/main/results/chatgpt3.5/natural_program/MATH_np.json)
[natural_program/MATH_np.json](https://github.com/lz1oceani/verify_cot/tree/main/results/chatgpt3.5/natural_program/MATH_np.json)
-----
0.84
0.81
0.78
0.75
0.72
0.69
0.66
0.04
0.02
0.00
0.78
0.75
0.72
0.69
0.66
0.63
0.60
0.04
0.02
0.00
0.51
0.48
0.45
0.42
0.39
0.36
0.33
0.04
0.02
0.00
|DV SV|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|DV|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
Majority Voting
SelfCheck
Majority Voting
SelfCheck
Majority Voting
SelfCheck
DV SV
DV
1 2 3 4 5 6 7 8 9 10
#Solutions per question
(a) GSM8K
1 2 3 4 5 6 7 8 9 10
#Solutions per question
(b) MathQA
1 2 3 4 5 6 7 8 9 10
#Solutions per question
(c) MATH[∗]
Figure 2: The upper plots show the accuracies of SelfCheck and majority voting for different numbers
of generated solutions per question with GPT-3.5. The lower plots show the accuracy gaps between
each method and majority voting, where DV and SV stand for Deductive Verification (Weng et al.,
2022) and Self-Verification (Ling et al., 2023), respectively. It is difficult to compare with DV and SV
with respect to absolute accuracies because they are using different generator models. However, we
can see that SelfCheck achieves higher relative performance gains than both in their reported settings.
Table 1: SelfCheck significantly increases final answer accuracies with both GPT-3.5 and GPT4, even we only have 2 candidate solutions for each question. ∆Acc is the performance gain of
SelfCheck compared with majority voting (MV), with the✓✓represent the proportions of questions with 0, 1 or 2 correct solutions. We see that the gains from ± indicating the standard error. ✗✗, ✗✓and
SelfCheck are typically larger in cases where it is common for only one of the solutions to be correct,
as these are the cases using weighted voting can influence the final answer.
Dataset Generator Checker ✗✗ (%) ✗✓ (%) ✓✓ (%) Acc(MV, %) Acc(SelfCheck, %) ∆Acc (%)
GPT-3.5 GPT-3.5 16.8 23.0 60.2 71.7 74.3 2.8±0.9
GSM8K GPT-4 GPT-4 8.8 8.2 83.0 87.1 86.9 -0.2±0.2
GPT-4 GPT-3.5 8.8 8.2 83.0 87.1 **88.1** 1.0±0.3
GPT-3.5 GPT-3.5 27.6 26.4 46.0 59.2 64.6 5.4±1.1
MathQA GPT-4 GPT-4 16.2 11.0 72.8 78.3 80.9 2.6±0.4
GPT-4 GPT-3.5 16.2 11.0 72.8 78.3 **81.2** 3.0±0.4
GPT-3.5 GPT-3.5 52.6 23.2 24.2 35.8 38.0 2.2±0.7
MATH[∗] GPT-4 GPT-4 42.0 20.2 37.8 47.9 **51.3** 3.4±0.6
GPT-4 GPT-3.5 42.0 20.2 37.8 47.9 48.9 1.0±0.8
from additional examples. We will perform additional ablations on how performance changes when
ensembling over a larger number of solutions in Section 5.1.
To investigate the effect of using more powerful LLMs, and of using a different LLM for the
generation and checking, we further conducted experiments with GPT-4 and a mix of GPT-4 and
GPT-3.5. Because of the high cost of calling the GPT-4 API, we randomly sample 500 questions from
each dataset to form the test sets and generate 2 (instead of 10) answers to each question. In Table 1,
we see that SelfCheck significantly outperforms majority voting with both GPT-3.5 and GPT-4. We
also notice that using GPT-3.5 to check GPT-4 generated answers yields surprisingly good results,
actually outperforming checking with GPT-4 on the simpler GSM8K and MathQA tasks. This is
likely because using different LLMs helps to further decorrelate the errors of the generator and the
checker, and shows that using a cheaper LLM can still often be sufficient for the checking. For the
more difficult problems in MATH, using GPT-4 as checker always produces better results, but even
here the checking from GPT-3.5 is beneficial compared to doing no checking at all.
4.2 VERIFICATION PERFORMANCE
Besides serving as a confidence score calculator to improve the performance of voting, SelfCheck
can also predict the correctness of a single solution. To do so, we simply set a threshold t to the
confidence score, where solutions with confidence scores w ≥ _t are classified as correct._
-----
1.00
0.80
0.60
0.40
0.20
0.00
1.00
0.80
0.60
0.40
0.20
0.00
1.00
0.80
0.60
0.40
0.20
0.00
|Col1|Real + in Pred + Real in Pred|Col3|Col4|
|---|---|---|---|
||Real + in Pred + Real in Pred|||
|||||
|||||
|Col1|Col2|
|---|---|
||Real + in Pred + Real in Pred|
|||
|||
|Col1|Real + in Pred + Real in Pred|Col3|
|---|---|---|
||Real + in Pred + Real in Pred||
||||
||||
Real + in Pred +
Real in Pred
Real + in Pred +
Real in Pred
Real + in Pred +
Real in Pred
0.3 0.6 0.9
Threshold t
(a) GSM8K
0.3 0.6 0.9
Threshold t
(b) MathQA
0.3 0.6 0.9
Threshold t
(c) MATH[∗]
Figure 3: When raising the classification thresholds t, the proportions of real correct solutions in predicted correct solutions (Real + in Pred +) increase for GSM8K (67.5%→76.5%),
MathQA (59.4%→82.2%) and MATH (34.6%→50.8%).
1.0
0.8
0.6
0.4
0.2
Figure 4 shows the ROC curves for each dataset. As a comparison, directly prompting GPT-3.5 to verify whole reasoning
chains leads to no meaningful control on the false and true positive rates (FP and TP): they are always both 100% on MATH
and 98% on GSM8K, as observed by Ling et al. (2023). In
other words, the checker always predicts the answer as correct,
providing no useful information.
As well as verification accuracies, we may also care about the 0.0 MATH
GSM8K
MathQA
MATH
solution quality after filtering out solutions with low confidence 0.0 0.5 1.0
scores w. Figure 3 shows that by increasing the threshold t, FP rate
SelfCheck can filter out more incorrect solutions, such that Figure 4: True positive rates (TP)
a higher proportion of the solutions that pass the check are vs. false positive rates (FP) as clasindeed correct (Real + in Pred +). Though this is at the cost of sification threshold, t, is varied.
misclassifying more of the real correct solutions as incorrect, this can be a useful feature in cases
where the risk of choosing an incorrect solution is higher than rejecting a correct one.
5 ANALYSIS
We now perform some ablations to justify some of the key design choices made by SelfCheck and
provide insights on its behavior. Limited by budget and time, all experiments in this section are
performed on a subset of the MathQA test set with 100 randomly selected questions.
0.85
0.80
0.75
0.70
0.65
5.1 MORE SOLUTIONS PER QUESTION?
Serving as a method to reduce variance, majority
voting increased final answer accuracies on different
datasets when we increased from 2 to 10 solutions
in Figure 2. In cases where we only care about final predictive performance, one might thus question
whether it is better to simply use our computational
resources to keep increasing the size of this ensemble,
rather than relying on a checking scheme.
|Majority Voting SelfCheck|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
|||Majority Voting SelfCheck|||
||||||
10 20 30 40 50
#Solutions per question
resources to keep increasing the size of this ensemble, Figure 5: SelfCheck achieves significantly
rather than relying on a checking scheme. higher final answer accuracies than majority
voting for large ensembles of solutions.
However, as shown in Figure 5, this effect saturates
for larger solution ensembles, with the accuracy of majority voting never going above that achieved
when n = 9, thereby never reaching the performance we already achieved by SelfCheck for the
smaller ensemble. Moreover, the performance of SelfCheck continues to increase as the ensemble
grows. By lowering the weights (confidence) of incorrect solutions, SelfCheck increases the chance
of selecting the correct answers, even when their generation probabilities in the generator LLM are
low. Therefore, with SelfCheck, LLMs can effectively rectify their own biased beliefs by themselves.
-----
5.2 ALBATION STUDIES
In order to pick apart the effect of several critical design choices for SelfCheck, we compare SelfCheck
with some of its variants with respect to final answer and verification accuracies on MathQA.
**Global v.s. step-by-step checking** The first question is can we simply ask an LLM to check the
whole solution without taking steps into consideration. To answer it, we prompt the LLM to perform
global checking with the following instruction:
_The following is a question and a solution to it from a student. Carefully check whether the solution is correct_
_step by step. End your response with your conclusion that starts with "Correct", "Wrong" or "Not Sure"._
_Question: [Question]_ _Solution: [Step 0, Step 1,..., Step n]_
Similar to the findings of Ling et al. (2023), we find that
the global checker outputs "correct" most of the time
and rarely recognizes an error. Consequently, its final
answer accuracies are very close to majority voting (in
Figure 6) and its verification accuracy (55.0%) is only
marginally above random guess (50.0%). This lack of
ability to deal with the difficulty of global checking is
what makes step checking necessary.
**Single-stage v.s. multiple-stage step checking** Next,
we ask whether we really need to decompose the step
checking into several stages? To answer this, we design
the following prompt to use the LLM directly.
0.8
0.7
0.6
|Col1|SelfCheck Global Check Single Stage Check Error Check (0-shot) Error Check (1-shot) Majority Voting|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||
||||||||SelfCheck Global Check Single Stage Check Error Check (0-shot) Error Check (1-shot) Majority Voting||||||
||||||||||||||
||||||||||||||
10
#Solutions per question
Figure 6: Generation accuracies for variants
of SelfCheck on MathQA with GPT-3.5.
_The following is a question and the first a few steps in its solution._
_Question: [Question]_ _Solution: [Step 0, Step 1,..., Step i-1]_
_Check the correctness of the next step: [Step i]_
_Please consider the information it relies on and check step by step. Please end your response with your_
_conclusion that starts with "Correct", "Wrong" or "Not Sure"._
Figure 6 and Table 2 show that although this is better
than global checking, it is still significantly worse than
SelfCheck with its multi-stage checking. This indicates
that checking a step in a single stage is still too challenging for the LLM, so it is necessary to further decompose
step checking into a pipeline of easier sub-tasks.
**Error check v.s. regenerate and compare** We now
justify the choice to perform step regeneration and comparison instead of direct error checking for each step.
To do so, we replace our regeneration stage and comparison stage with a single error-checking stage. We
first compare with a zero-shot version of the variant
with the following prompt:
Table 2: Verification accuracies for variants of SelfCheck on MathQA with GPT3.5. The reported verification accuracy is
the average of true positive and true negative rates.
Method Accuracy (%)
SelfCheck **66.7%**
Global Check 55.0%
Single stage Check 57.2%
Error Check (0-shot) 63.1%
Error Check (1-shot) 64.2%
_Given the following information:_
_Information 0: [Information I0]_ _Information 1: [Information I1]_ _..._
_Step 0: [Step S0]_ _Step 1: [Step S1]_ _..._
_Check the correctness of the next step [Step i]_
_Please check for grounding errors, reasoning errors and calculation errors step by step. Please end your_
_response with your conclusion that starts with "Correct", "Wrong" or "Not Sure"._
We then add an examplar from Ling et al. (2023) (see Appendix B) to make a more powerful one-shot
error checker. However, results in Figure 6 and Table 2 show that even with a very detailed and
instructive example, direct error checking still performs worse than our regenerate and compare
approach, which supports our previous argument that LLMs are better at generation than checking.
6 CONCLUSIONS
In this paper, we have introduced SelfCheck, a general-purpose, zero-shot, step-by-step checking
scheme for LLMs. Unlike previous approaches, SelfCheck does not require any additional data
or external resources: it uses the LLM to identify errors in its own reasoning, leveraging a novel
regenerate-and-compare approach. By using the results of this checking to perform weighted voting
over different solutions, we find that SelfCheck is able to, in turn, increase final predictive accuracy.
-----
REFERENCES
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic,
Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: an open large language
model with state-of-the-art performance. 2023.
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh
Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based
formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the
_Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and_
_Short Papers), pp. 2357–2367, Minneapolis, Minnesota, June 2019. Association for Computational_
[Linguistics. doi: 10.18653/v1/N19-1245. URL https://aclanthology.org/N19-1245.](https://aclanthology.org/N19-1245)
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence:
Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Google. Palm 2 technical report. arXiv preprint arXiv:2303.08774, 2023.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert
with disentangled attention. In International Conference on Learning Representations, 2020.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In
_Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_
_(Round 2), 2021._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the
advance of making language models better reasoners. arXiv preprint arXiv:2206.02336, 2022.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su.
Deductive verification of chain-of-thought reasoning. arXiv preprint arXiv:2306.03872, 2023.
Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki,
and Chris Callison-Burch. Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379,
2023.
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars
Liden, Zhou Yu, Weizhu Chen, et al. Check your facts and try again: Improving large language
models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
-----
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_Neural Information Processing Systems, 35:24824–24837, 2022._
Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners
with self-verification. arXiv preprint arXiv:2212.09561, 2022.
-----
APPENDIX A A COMPLETE EXAMPLE OF SELFCHECK
To help readers get a better idea of how SelfCheck works, we show a complete version of the example
in Figure 1, including the exact inputs and outputs of different stages. Please note that we have
compiled all latex codes for readability, though the inputs and outputs to LLMs are in their original
form. The question is
Let S be the set of complex numbers z such that the real part of _z[1]_ [is equal to][ 1]6 [. This set]
forms a curve. Find the area of the region inside the curve.
We feed the following instruction to the LLM that serves as the generator:
Solve the following problem step by step. Please start each step with "Step :" and split
sentences in each step with "\n\n". Please finish you response with "So the answer is ...".
Problem: Let S be the set of complex numbers z such that the real part of _z[1]_ [is equal to][ 1]6 [.]
This set forms a curve. Find the area of the region inside the curve.
After getting the following solution, we start checking Step 5 as an example:
Solution:
(Step 0) Let z = x + yi be a complex number, where x and y are real numbers.
(Step 1) We are given that the real part of _z[1]_ [is equal to][ 1]6 [.]
(Step 2) Using the fact that the real part of a complex number a + bi is a, we have
Re 1z = Re _x+1yi_ = _x[2]+x_ _y[2][ .]_
(Step 3) Setting Re 1z = [1]6 [, we get] _x[2]+x_ _y[2][ =][ 1]6_ _[.]_
(Step 4) Cross-multiplying and rearranging, we get the equation 6x = x[2] + y[2].
(Step 5) Completing the square, we obtain _x −_ 2[3] 2 3 + y2 = 94 _[.]_
(Step 6) This is the equation of a circle with center 2 _[,][ 0]_ and radius [3]2 [.]
(Step 7) The area of the region inside the circle is [9]4 [π] [. So the answer is] [ 9]4[π] [.]
**Variable definition extraction** This optional stage extracts the definitions of variables that appear
in the reasoning chain. SelfCheck feeds the definitions to the regeneration stage to avoid LLM not
understanding the meaning of a variable. The input to the LLM is:
The following is the solution to a math question: Question: Let S be the set of complex
numbers z such that the real part of _z[1]_ [is equal to][ 1]6 _[.][ This set forms a curve. Find the area of]_
the region inside the curve.
Solution:
(Step 0) Let z = x + yi be a complex number, where x and y are real numbers.
...
(Step 7) The area of the region inside the circle is [9]4[π] [. So the answer is][ 9]4[π] [.]
Are there variables in the solution? If so, please list the definition of variable in
the form of "1. variable is defined as...".
The underlined parts are the type of question, the question itself and the steps in its solution,
respectively.
The output from the LLM is:
Yes. There are variables in the solution.
1. zzz is defined as a complex number of the form xxx + + + yi yi yi, where xxx and yyy are real numbers.
The bold part is then saved to form a part of the input in the regeneration stage.
**Target extraction** To get a brief and clear target of the current step, the input to the LLM is:
The following is a part of the solution to the problem: Let S be the set of complex numbers
_z such that the real part of_ _z[1]_ [is equal to][ 1]6 _[.][ This set forms a curve. Find the area of the]_
-----
region inside the curve.
(Step 0) Let z = x + yi be a complex number, where x and y are real numbers.
...
(Step 5) Completing the square, we obtain _x −_ 2[3] 2 + y2 = 94 _[.]_
What specific action does the step "Completing the square, we obtain _x −_ 2[3] 2 + y2 = 94 _[.]["]_
take? Please give a brief answer using a single sentence and do not copy the steps.
The underlined parts are the question and reasoning steps before the current one, including the current
one.
The output of the LLM is:
**The step completes the square to rewrite the equation in standard form of a circle.**
The whole sentence is saved and forms the most important part of the input in the regeneration stage.
**Information Collection** To get sentences in the question and previous steps in the solution that are
directly related to the current step, the input to the LLM is:
This is a math question: Question: Let S be the set of complex numbers z such that the
real part of _z[1]_ [is equal to][ 1]6 _[.][ This set forms a curve. Find the area of the region inside the curve.]_
The following is information extracted from the question:
Information 0: Let S be the set of complex numbers z such that the real part of _z[1]_ [is equal to]
1
6 _[.]_
Information 1: This set forms a curve.
Information 2: Find the area of the region inside the curve.
The following is the first a few steps in a solution to the problem:
Step 0: Let z = x + yi be a complex number, where x and y are real numbers.
...
Step 4: Cross-multiplying and rearranging, we get the equation 6x = x[2] + y[2].
Which previous steps or information does the next step "Completing the square, we obtain
_x −_ 2[3] 2 + y2 = 94 _[.][" directly follow from?]_
The underlined information (contents after Information X:) is simply a list of sentences in the question.
The underlined steps are reasoning steps before (not including) the current one in the solution.
The output from the LLM is:
The next step "Completing the square, we obtain _x −_ 2[3] 2 + y2 = 94 _[.][" directly follows]_
from Step 4.
Then we use regular expression to extract the information and step id from LLM’s output. Some steps
may only rely on information from the question while others may only use results from previous
steps.
**Step regeneration** In this stage, we instruct the LLM to achieve the extracted target with only
collected information with the following prompt:
We are in a process of solving a math problem.
Variables are defined as: 1. z is defined as a complex number of the form x + yi, where x
and y are real numbers.
The following are some previous steps: Step 0: Cross-multiplying and rearranging, we get
the equation 6x = x[2] + y[2].
-----
The target for next step is: The step completes the square to rewrite the equation in standard
form of a circle.
Please try to achieve the target with the information from previous steps.
The underlined parts are the type of question, variable definition (if any), collected information
from the question (if any), collected previous steps (if any) and the extracted target. Please notice
that the current step does not directly rely on any information from the question, we removed the
corresponding block from the input.
The output from the LLM is:
**To rewrite the equation in standard form of a circle, we need to complete the square**
**for both xxx and yyy.**
**Starting from the given equation: 666xxx = = = x x x[2][2][2]** +++ y y y[2][2][2].
**We add and subtract (6(6(6///2)2)2)[2][2][2]** = 9= 9= 9 on the right-hand side:
666xxx + 9 + 9 + 9 − − − 9 =9 =9 = x x x[2][2][2] +++ y y y[2][2][2] + 9+ 9+ 9 − − − 999.
**Simplifying: (((xxx − − −** 3)3)3)[2][2][2] +++ y y y[2][2][2] = 9= 9= 9. This is now in the standard form of a circle with
**center (3(3(3,,, 0) 0) 0) and radius 333.**
Then we can directly compare the regenerated step with the original one in the next stage.
**Result comparison** The input to the LLM is:
The following are 2 solutions to a math problem:
Solution 1: To rewrite the equation in standard form of a circle, we need to complete the
square for both x and y.
Starting from the given equation: 6x = x[2] + y[2].
We add and subtract (6/2)[2] = 9 on the right-hand side: 6x + 9 − 9 = x[2] + y[2] + 9 − 9.
Simplifying: (x − 3)[2] + y[2] = 9. This is now in the standard form of a circle with center
(3, 0) and radius 3.
Solution 2: Completing the square, we obtain _x −_ 2[3] 2 + y2 = 94 _[.]_
Compare the key points from both solutions step by step and then check whether Solution
1 "supports", "contradicts" or "is not directly related to" the conclusion in Solution 2. Pay
special attention to difference in numbers.
The underlined parts are simply the problem type, the regenerated step and the original step.
The output is:
In both solutions, the goal is to rewrite the given equation in standard form of a circle.
However, in Solution 1, the completion of the square results in the center of the circle being
(3, 0) and a radius of 3. In Solution 2, the completion of the square results in the center of
the circle being (3/2, 0) and a radius of 3/2.
Therefore, Solution 1 contradicts the conclusion in Solution 2 as the centers and radii of
the circles are different.
If the word ‘support’ or ‘contradict’ appears in the last line of the output, SelfCheck outputs a step
score 1 or -1. Otherwise, SelfCheck outputs 0 meaning that two steps are not related.
APPENDIX B EXAMPLAR FOR ONE-SHOT ERROR CHECKING
The following example from Ling et al. (2023) is used to perform one-shot error checking.
Here is some information:
“1. Alice gets 2 apples.
2. Alice gets twice as many oranges as bananas.
3. Original price per apple: $2
-----
Discount: 50%
Price per apple after discount: $2 * (1 - 0.50) = $2 * 0.50 = $1"
Based on the given information, here is a reasoning process:
"Calculate Alice’s cost of the 5 apples after a 50% discount.
Price per apple after discount: $1
Apples be eaten: 3
Oranges be eaten: 6
Total apples: 5
Total oranges: x
Total bananas: 2x
Alice’s total number of fruits: 5 + x + 2x = 5 + 4x
Alice’s total cost of 5 apples: $1 * (5 - 3) = $2”
Double-check the reasoning process, let’s analyze its correctness, and end with "yes" or "no".
Answer:
Let’s think step by step.
Grounding check
Purpose: Find references for numbers in the reasoning process sequentially
Rule: Any ungrounded number makes the check fails
"x" is a variable, we do not care about variables.
"y" is a variable, we do not care about variables.
"5" is a quantity from the given information that represents the number of apples.
"50%" is a quantity from the given information that represents the discount.
"$1" is a quantity from the given information that represents the price per apple after the
discount.
"3" is a quantity not from the given information, it is ungrounded.
"6" is a quantity not from the given information, it is ungrounded.
"$2" is a quantity from the given information that represents the original price per apple.
Result: Based on the check, there are ungrounded numbers, so the grounding check fails.
Reasoning check:
Purpose: Check whether the reasoning logic correctly answers the question
Rule: Any logic error makes the check fails
To calculate the total cost of apples after a 50% discount, we should multiply the number of
apples by the discounted price. But to answer the question, we do not need to calculate the
total number of fruit Alice gets.
Result: Based on the check, there are some logic mistakes, so the reasoning check fails.
Calculation check:
Purpose: Check whether the calculation process is consistent
Rule: Any inconsistent calculation makes the check fails
calculation1:
equation: $1 * (5 - 3), answer: $2
(5 - 3) = 2
$1 * 2 = $2 is consistent with the answer, so the calculation is correct.
calculation2:
equation: 5 + x + 2x, answer: 5 + 4x
x + 2x = 3x
5 + 3x is inconsistent with the answer, so the calculation is incorrect.
Result: Based on the check, the calculation process is inconsistent, so the calculation check
fails.
Check results: Ground check fails, Reasoning check fails, Calculation check fails.
Rule: Any failed check makes the reasoning incorrect.
So the answer is "no".
-----
| [
"Ning, Miao",
"Yee Whye, Teh",
"Tom, Rainforth"
] | 2023-10-05T00:00:00 | ICLR 2024 Poster | true | 66 | 2 | null | http://arxiv.org/abs/2308.00436 | https://arxiv.org/abs/2308.00436 | https://www.semanticscholar.org/paper/1e6102c981b9464c632ef0b00dbd11dfb0564e4e |
Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers | In theorem proving, the task of selecting useful premises from a large library to unlock the proof of a given conjecture is crucially important. This presents a challenge for all theorem provers, especially the ones based on language models, due to their relative inability to reason over huge volumes of premises in text form. This paper introduces Thor, a framework integrating language models and automated theorem provers to overcome this difficulty. In Thor, a class of methods called hammers that leverage the power of automated theorem provers are used for premise selection, while all other tasks are designated to language models. Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems neither language models nor automated theorem provers are able to solve on their own. Furthermore, with a significantly smaller computational budget, Thor can achieve a success rate on the MiniF2F dataset that is on par with the best existing methods. Thor can be instantiated for the majority of popular interactive theorem provers via a straightforward protocol we provide. | Thor is introduced, a framework integrating language models and automated theorem provers to overcome the difficulty of selecting useful premises from a large library to unlock the proof of a given conjecture. | ## Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers
**Albert Q. Jiang**
University of Cambridge
```
[email protected]
```
**Wenda Li**
University of Cambridge
```
[email protected]
```
**Szymon Tworkowski**
University of Warsaw
```
[email protected]
```
**Piotr Miło´s**
Polish Academy of Sciences
```
[email protected]
```
**Konrad Czechowski**
University of Warsaw
```
[email protected]
```
**Tomasz Odrzygó´zd´z**
IDEAS NCBR
```
[email protected]
```
**Yuhuai Wu**
Google Research & Stanford University
```
[email protected]
```
**Abstract**
**Mateja Jamnik**
University of Cambridge
```
[email protected]
```
In theorem proving, the task of selecting useful premises from a large library to
unlock the proof of a given conjecture is crucially important. This presents a
challenge for all theorem provers, especially the ones based on language models,
due to their relative inability to reason over huge volumes of premises in text
form. This paper introduces Thor, a framework integrating language models and
automated theorem provers to overcome this difficulty. In Thor, a class of methods
called hammers that leverage the power of automated theorem provers are used for
premise selection, while all other tasks are designated to language models. Thor
increases a language model’s success rate on the PISA dataset from 39% to 57%,
while solving 8.2% of problems neither language models nor automated theorem
provers are able to solve on their own. Furthermore, with a significantly smaller
computational budget, Thor can achieve a success rate on the MiniF2F dataset that
is on par with the best existing methods. Thor can be instantiated for the majority
of popular interactive theorem provers via a straightforward protocol we provide.
**1** **Introduction**
In theorem proving, premise selection is the task of identifying useful facts from a large library
that enable finding the proof of a given conjecture. It is essential for the discovery of many proofs,
and Automated Reasoning in Large Theories (ARLT) depends on having apt methods for premise
selection [Kühlwein et al., 2012, Sutcliffe et al., 2007]. A group of proof methods have been developed
inside interactive theorem provers (ITPs) to deal with this task. They use external automated theorem
provers (ATPs) to reach the remaining goals, inspect the proofs produced, and pick out the premises
involved in them. Such systems are called hammers [Blanchette et al., 2016]. Hammers are available
in many ITPs [Paulson, 2010, Kaliszyk and Urban, 2015, Gauthier and Kaliszyk, 2015, Czajka and
Kaliszyk, 2018] and are immensely popular within the theorem proving community.
Language models have had some successful applications in the area of formal theorem proving [Polu
and Sutskever, 2020, Han et al., 2021, Jiang et al., 2021, Polu et al., 2022]. However, we observe that
language-model-based reasoning systems are inept at premise selection. The difficulty of premise
selection for language models is that they cannot effectively reason over thousands of available facts
Preprint. Under review.
-----
and their definitions in plain text form. In subsection 2.2, we elaborate on the scale of the problems
language models need to deal with for premise selection and provide empirical evidence for this
difficulty. Seeing that hammers are very good at finding relevant facts, we propose in our framework
to offload the premise selection task to them from language models. The resulting system is Thor, a
framework that organically integrates language models and ATPs via the use of hammers.
The methodology of Thor is simple and can be deployed in any hammer-enabled ITP: we first use
the hammer method to attempt to prove every proof state in the training problems, and mark the
successful application steps. Then we train the language model on the training problems, predicting a
special token (e.g., <hammer>) if the hammer can be applied at the state. When doing evaluation, if
the language model emits the special token, we invoke the hammer method. This methodology incurs
very little extra computation compared to standard language model training while capitalising on the
potential of a hybrid neuro-symbolic model.
To demonstrate the usefulness of Thor, we instantiate it with a language-model-based reasoning
system for the ITP Isabelle and Sledgehammer [Paulson, 2010], Isabelle’s implementation of the
hammer method. We then investigate the performance of the instantiated Thor system on two datasets,
PISA [Jiang et al., 2021] and MiniF2F [Zheng et al., 2022]. On PISA we dramatically improve the
success rate of a language-model-based reasoning system from 39.0% to 57.0% and solve 8.2% of
problems that cannot be solved by either language models or Sledgehammer alone. On MiniF2F, Polu
et al. [2022] used expert iteration to improve on a language model and achieved the state-of-the-art
1-pass success rate of 29.6%. With much less computation, Thor increases this rate to 29.9%, slightly
exceeding the previous result. It is worth noting that Thor and expert iteration can be used in tandem.
In this paper, we demonstrate that finding suitable sub-systems for premise selection can benefit
the performance of the overall reasoning system. Given Thor’s strong performance, computational
efficiency, and applicability to many ITPs, we believe it should become a strong baseline as often as
possible when language models are used for theorem proving.
**Contributions**
1. We created Thor, a theorem proving framework which integrates language models and
automated theorem provers via using hammers.
2. We raised the state-of-the-art success rate of language-model-based reasoning systems on
PISA from 39.0% to 57.0%. Thor proved 8.2% theorems which cannot be proved by either
language models or Sledgehammer.
3. We improved the state-of-the-art success rate on MiniF2F from 29.6% to 29.9%, matching
the language models trained with expert iteration, but with far less computation.
**2** **Background**
**2.1** **Automated and Interactive Theorem Proving**
Mechanising theorem proving has been a grand challenge of artificial intelligence since the late
1950s [Gelernter, 1959]. A group of systems which we call automated theorem provers attempt to
use automated procedures to determine the validity of conjectures (e.g., the DPLL algorithm [Davis
[et al., 1962] for SAT problems [Tarski, 1969]). Popular examples of ATPs include E, SPASS, Z3,](https://wwwlehre.dhbw-stuttgart.de/~sschulz/E/E.html)
[CVC4, and Vampire. Although SAT is known to be NP-complete [Cook, 1971], modern ATPs can](https://cvc4.github.io)
often solve problems with millions of symbols [Ohrimenko et al., 2009] and are useful practically.
ATPs are often based on fragments of first-order logic, which limits the type of theorems they can
express. Hence, projects such as the formalisation of complicated mathematical results [Gonthier
et al., 2008, Avigad et al., 2007, Gonthier et al., 2013, Scholze, 2021] and operating system kernel
verification [Klein et al., 2009] are done in interactive theorem provers, often based on higher-order
logic or dependent type theory. ITPs and ATPs have very different objectives: ITPs aim at making it
easy to formalise a diverse set of problems in numerous mathematical domains in a sound manner;
while ATPs focus on improving the efficiency and performance on very well-defined problems like
[SAT solving. Prominent ITPs include Isabelle, Mizar, HOL Light, HOL4, Lean, and Coq. Theorem](https://www.cl.cam.ac.uk/research/hvg/Isabelle/)
proving in ITPs can be modelled as a sequential decision process: initially a theorem gets declared
and the proof state contains some goals; at each step, the user produces a proof step that
-----
applies to and transforms the proof state; when all the goals have been discharged, the theorem is
considered proven. Large libraries of mathematical knowledge such as the Archive of Formal Proofs[1]
and the Mizar Mathematical Library[2] have been built around these ITPs. Because of the size of these
mathematical libraries, to find useful premises in them is a difficult problem. In the next subsections
we illustrate how two different approaches deal with premise selection.
**2.2** **Language Models for Theorem Proving**
Language models that automate theorem proving mostly follow the approach of the GPT-f
model [Polu and Sutskever, 2020]: pre-trained causal language models are used to predict a proof
```
step that can be applied, given the current proof state and some optional context. Concretely,
```
a language model can take as input and output, two sequences of the following form:
```
INPUT: <SOS> <CTXT> $(context) <PRF_STT> $(proof state) <PRF_STP>
OUTPUT: $(proof step) <EOS>
```
At test time, the reasoning system receives the text representation of the current proof state,
samples a proof step from the language model, applies it to the ITP, and repeats until the proof
is finished or a computational budget has been reached. A best-first strategy is often used for proof
search: a queue of search nodes is maintained with the priority being the accumulated log likelihood
of the generated proof steps.
Language models treat all input and output information as text and they are usually limited to be a few
thousands of characters long. To do premise selection well, the language model has to either memorise
all the relevant premises during training, or be prompted with available premises in evaluation. It
is difficult to do the former because a mathematical corpus can have too many facts for a language
model to remember. For example, the Archive of Formal Proofs has more than 200,000 theorems,
plus the numerous definitions and their derivations to serve as premises. The latter is no easier
because there may be too many premises to fit into the input. For instance, if we use the textual
representation of 300 available premises (a usual number used for premise selection with symbolic
tools) and their definitions as the context in the input-output format above, the input length can
well exceed 10,000 characters and the limit of standard language models. We observe that empirically
1.9% of the steps involving premise selection generated by the language model manage to advance
the proof, while the number is 28.2% for steps having no premises. Hence, a good mechanism for
premise selection could bring crucial benefits.
**2.3** **Hammers**
Blanchette et al. [2016] define hammers as methods that “automate reasoning over large libraries
developed with formal proof assistants (ITPs)”. Consider, for example, Sledgehammer (designed
for Isabelle) which is the original and the most popular implementation of hammers. Figure 1
presents a proof of _√2 /∈_ Q in Isabelle. The beauty of using Sledgehammer with Isabelle is that
despite the complicated-looking proof, humans only need to sketch the proof in Figure 1a and let
Sledgehammer find all the necessary premises to complete every single proof step. The final accepted
proof is presented in Figure 1b. The Sledgehammer proof steps use the internal proof methods
```
metis, meson, smt, auto, simp, fastforce and blast. Conveniently, this tells us which
```
steps in the corpus are generated by Sledgehammer. Note that a human user might also use the proof
methods auto, simp, fastforce and blast as these do not contain additional premises. Only
the methods metis, meson, smt are exclusive to Sledgehammer.
We now describe how Sledgehammer performs premise selection: Sledgehammer makes it possible
to leverage the advancement of ATP research while using ITPs, and can thus be seen as a bridge
between the two [Paulson, 2010]. When invoked, Sledgehammer translates the current goal together
with hundreds of possibly relevant premises into a format (e.g., SMT-LIB, TPTP) that external
ATPs can understand [Meng and Paulson, 2008]. The ATPs are then executed to solve the current
goal. Note that Isabelle follows a kernel philosophy (i.e., only a handful of axioms and inference
rules are trusted), and external ATPs are used skeptically—should a proof be found by the ATPs,
Sledgehammer picks out the useful premises, and reconstructs the proof within the Isabelle kernel
[1https://www.isa-afp.org](https://www.isa-afp.org)
[2http://mizar.org/library/](http://mizar.org/library/)
-----
(b) The proof accepted by Isabelle. The steps containing
```
assume, obtain, have, show are from the original hu
```
man proof sketch. The steps containing `metis, smt,`
```
fastforce, blast, auto, fastforce are completed by
```
Sledgehammer.
(a) The proof sketch produced by the human
user. The sledgehammer command indicates that the human invokes the Sledgehammer method at that point.
Figure 1: A proof of
2 /∈ Q, adapted from the original by Li et al. [2021] with consent.
(e.g., using the primitive inference rules). Here, external ATPs serve as relevance filters of premises
rather than trusted oracles. Hammers implemented in other ITPs are largely similar.
**3** **Thor**
In this section we introduce Thor, a framework integrating language models and automated theorem
provers via the use of hammers. Thor is motivated by the difficulty for language models to do premise
selection and the excellent performance of hammers for it: we should be able to drastically improve
automation in theorem proving if we can take the best from both worlds.
Below we provide the protocol of adopting Thor for a hammer-enabled ITP. We first provide Thor’s
training data preprocessing procedure in Algorithm 1, and then look at a concrete example to
demonstrate its use.
**Algorithm 1 Thor’s training data preprocessing algorithm.**
**Require: Proof state s, hammer method h**
```
INPUT = s.input
```
**if h(s)→** `success then` _▷_ Hammer can be applied to the proof state
```
OUTPUT = <hammer> <EOS>
```
**else** _▷_ Hammer fails at the proof state
```
OUTPUT = s.output
```
**end if**
**return (INPUT, OUTPUT)**
Now consider the situation in the proof of _√2 /∈_ Q (Figure 1) after the step then have "even a":
without Thor, it should produce the following datapoint
```
INPUT: <SOS> <CTXT> $(context) <PRF_STT> $(proof state) <PRF_STP>
OUTPUT: by (smt (z3) even_power oddE) <EOS>
```
With Thor’s preprocessing, we apply the hammer method to the proof state and find out that it can be
done successfully. Hence, we keep the input the same and change the output to:
```
OUTPUT: <hammer> <EOS>
```
If the hammer method cannot be applied, we leave the datapoint unchanged. We iterate over every
datapoint in the training data and apply this preprocessing algorithm.
-----
We hypothesise that being exposed to training data in this format, the language model is capable of
learning a heuristic for when the hammer can be successfully invoked. At evaluation time, whenever
the language model outputs the sequence <hammer> <EOS>, instead of applying it directly to the
ITP, we call the hammer method. This effectively makes the hammer an invokable method for the
language model. This protocol is straightforward to implement for hammer-enabled ITPs.
The only extra cost of deploying Thor is in the data preprocessing step. Multiplying the hammer
time limit by the average number of problems submitted to the Archive of Formal Proofs in one year,
we estimate that 7400 CPU hours per year are needed to preprocess one of the largest proof corpora
available. This is a modest cost since the process only needs to be done once per dataset and the results
can be shared. Better still, for some ITPs, the hammer method leaves a trace, greatly reducing the time
needed to figure out which steps can be solved by hammers. For the ITP Coq, all steps containing
the keyword sauto are generated by CoqHammer [Czajka and Kaliszyk, 2018]. For Isabelle, all
steps containing the keywords metis, meson, smt are generated by Sledgehammer (described
in Section 2.3). With these traces, deploying Thor on ITPs like Coq or Isabelle incurs little extra
computational cost compared to training a standard language model.
**4** **Experiment**
Our experiments are intended to answer the following research questions:
1. Can Thor prove theorems that cannot be proved by language models or automated theorem
provers individually? Does Thor improve premise selection for language models?
2. Does explicitly learning how to select premises hurt the performance of language models?
3. How important are the context information and the diversity of sequence generation?
4. How does Thor compare with other methods at improving language models for theorem
proving?
To answer these questions, we create an instance of Thor for the ITP Isabelle. We choose Isabelle for
two reasons: (1) Isabelle’s Sledgehammer is one of the most mature hammer methods among major
ITPs, and may thus showcase Thor’s full potential; and (2) Isabelle’s Archive of Formal Proofs is one
of the world’s largest formal mathematical libraries, suitable for data-hungry methods like language
models. We make explicit the details of our experimental setup next.
**4.1** **Experimental Setup**
**Machine specification** For pre-training, fine-tuning, and evaluation, we use a TPUVM with 8 cores
[from Google Cloud Platform. The Isabelle process has access to up to 32 CPU cores. We estimate](https://cloud.google.com/tpu?hl=en)
that reproducing all the experiments in this paper requires a total of 1160 TPU hours.
**Language model architecture** We use a decoder-only transformer [Vaswani et al., 2017] language
model, adapting the setup, codebase, and hyperparameters from [Wang and Komatsuzaki, 2021]. The
language model has 700M non-embedding parameters, with 24 layers, 24 attention heads, a hidden
dimension of 1536, and a GPT-2 [Radford et al., 2019] tokenizer with a vocabulary size of 50400.
Rotary positional embeddings [Su et al., 2021] are used. The model is pre-trained on the GitHub +
arXiv subsets of The Pile [Gao et al., 2021], with a context length of 2048. We use a global batch size
of 32 sequences which amounts to 65536 tokens. For the first 3,000 steps, the learning rate linearly
increases from 0 to 0.0002, and then it follows a cosine schedule with a final value of 1.2 × 10[−][5]
for 197,000 steps. We use a weight decay rate of 0.05 and no dropout for pre-training. Pre-training
takes ≈ 150 TPU hours. For fine-tuning, we use the procedure described in Section 3 to prepare the
PISA dataset. We use the most recent proof step as the context in each datapoint. The same
learning rate scheduling strategy is used, with a peak learning rate of 3 × 10[−][4] after 10,000 steps and
a final learning rate of 3 × 10[−][5] after a further 90,000 steps. We use a dropout rate of 0.15 and a
weight decay rate of 0.1. The global batch size is 256 sequences, or 524, 288 tokens. We early-stop
fine-tuning and take the checkpoint at 11,000 steps for evaluation as the validation loss reaches a
minimum then. Fine-tuning takes ≈ 50 TPU hours.
-----
Table 1: Proof success rates on PISA/test
Method Success rate (%)
LISA [Jiang et al., 2021] 33.2
Sledgehammer 25.7
Language model 39.0
Language model ∪ Sledgehammer 48.8
Thor **57.0**
**Sledgehammer configuration** To set up Sledgehammer, we mostly follow the default Isabelle2021
configuration. An important default parameter is that the Sledgehammer timeout limit is 30s. Our
configuration uses the on-machine versions of the five default ATPs (E, SPASS, Vampire, Z3, and
CVC4) to prevent performance deviation caused by network issues.
**Proof search** To sample from the language model, we use temperature sampling with the temperature parameter T = 1.2. To search for the proof of a theorem, we use the best-first search strategy
described in [Polu and Sutskever, 2020]. The queue is ordered by the accumulated log likelihoods of
the generated proof steps, with a maximum length of 32. Each proof step has a timeout limit
of 10s. The search is terminated if and only if one of the following scenarios happens: (1) a valid
proof has been found for the theorem; (2) the language model is queried 300 times; (3) a wallclock
timeout of 500s has been reached; (4) the queue is empty but the theorem is not proved. Empirically,
it takes ≈ 60 TPU hours to evaluate 1, 000 problems.
Our language model setup is different from Language models of ISAbelle proofs [Jiang et al., 2021,
LISA] in three aspects: (1) our language model has 700M instead of 163M non-embedding parameters
(2) the most recent proof step is included in the language model prompt (3) a higher sampling
temperature (1.2 instead of 1.0) is used.
**4.2** **Datasets and Environment**
We use two datasets. The first is the PISA dataset [Jiang et al., 2021], which includes the Isabelle/HOL
repository[3] [under a BSD-style license and the Archive of Formal Proofs version 2021-10-22[4], whose](https://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2021-1/COPYRIGHT)
[various entries are under open-source licenses as described on its official page. PISA contains the core](https://www.isa-afp.org/about.html)
higher-order logic library of Isabelle, as well as a diverse library of proofs formalised with Isabelle,
mostly concerning mathematics or verification of software and hardware. The PISA dataset contains
2.49 million datapoints in total. The proof states have an average length of 369 characters and
the proof steps have an average length of 33 characters. All of the Isabelle/HOL theorems go
into the training set as they are considered foundational and might be used by all other repositories.
We make a 95%/1%/4% split of theorems from the AFP for the training/validation/test sets. We
randomly select 3,000 theorems from the test set (PISA/test) for the evaluation of model performance.
[The second is the Isabelle fraction of the MiniF2F dataset [Zheng et al., 2022] under an Apache license.](https://github.com/openai/miniF2F/blob/main/isabelle/LICENSE)
The dataset contains 488 high school mathematics competition problems split into a validation set
and a test set, each with 244 problems. These problems have been formalised in Lean, Metamath,
and Isabelle to provide a benchmark of the same problems in different ITP languages. This allows us
to contrast different approaches developed for different ITPs. Since we do not use the validation set
for model selection, we do not actually distinguish between the two sets. Hence, we mainly compare
with previous work on the test set as the final result.
[We use the codebase by Jiang et al. [2021], under a BSD 3-clause license, to interact with the Isabelle](https://github.com/albertqjiang/Portal-to-ISAbelle/blob/main/LICENSE)
server and prove theorems from both datasets.
-----
400
200
600
400
200
0
0 5 10 15
Language model
Thor
#Premises in proofs
(a) The number of premises in successful proofs
found by the language model and Thor.
0
0 5 10 15
Language model
Thor
#Premises in ground truth proofs
(b) The number of premises in ground truth proofs
for problems solved by the language model and Thor.
Figure 2: Comparison of the number of premises in problems the language model and Thor can solve.
**4.3** **Thor Against an Ensemble of a Language Model and Sledgehammer**
Because Thor has both a language model and Sledgehammer at its disposal, we wish to investigate
how it fares against a simple ensemble of the two. We set out to evaluate the performance of Thor,
as well as a language model of the same configuration, and Sledgehammer with a 120s timeout on
_PISA/test. It takes ≈_ 50 TPU hours to evaluate Thor for 1000 problems. The proof success rates
on PISA/test are presented in the second column of Table 1. We can see that the language model
alone and Sledgehammer alone can prove 39.0% and 25.7% of the problems respectively. When we
take the union of the problems they manage to solve individually, we get a 48.8% success rate. Thor
manages to prove 57.0% of the problems. This implies that for 8.2% of the problems, Thor uses both
the language model and Sledgehammer to complete the proofs, and it’s not possible to achieve this
with only the language model or only Sledgehammer. We perform 4 case studies on problems that
only Thor can solve in Appendix A.
Thor’s motivation is to solve the premise selection problem for language models. To confirm that Thor
helps premise selection, we collect the proofs generated by the language model and Thor respectively
and count the number of premises in them. The results are presented in Figure 2a: we can see that
for proofs requiring 0 or 1 premises, Thor and the language model perform similarly. But for proofs
requiring more premises, Thor performs much more robustly, finding several times more proofs than
the language model. We also count the number of premises in the ground truth proofs (written by
humans) for theorems the language model and Thor can prove. The results are presented in Figure 2b:
we see that whatever the number of premises the ground truth uses, Thor outperforms the language
model in finding proofs, and the more premises the ground truth proof has, the more obvious is the
effect. We conclude that Thor is indeed more capable of premise selection than language models.
**4.4** **The Effect of Learning How to Select Premises**
The procedure we described in Section 3 ensures that the language model learns when to do premise
selection, but not how to do it, by replacing the premise selection steps with <hammer>. Here we
investigate the effect of making the language model learn both when and how. An easy way to achieve
this is to create a variant of Thor: (i) at training time, use the original data; (ii) at evaluation time,
when the language model outputs a sequence containing any of the Sledgehammer keywords, invoke
Sledgehammer. This further simplifies data preparation and explicitly subjects the language model to
perform premise selection. To investigate the effect of this alternative approach, we evaluate a system
trained in this way on PISA/test and present its success rate in Table 2. We can see that it achieves a
success rate of 55.4% on PISA/test, 1.6% lower than the base version of Thor, which suggests that
explicitly learning how to do premise selection marginally decreases its success rate. This result is
expected: since finding how to do premise selection is entrusted to the hammer method, the language
model should focus on learning when to invoke the hammer for optimal performance. Making the
language model learn an irrelevant additional task only hurts Thor’s performance.
[3https://isabelle.in.tum.de/website-Isabelle2021/dist/library/HOL/index.html](https://isabelle.in.tum.de/website-Isabelle2021/dist/library/HOL/index.html)
[4https://www.isa-afp.org/release/afp-2021-10-22.tar.gz](https://www.isa-afp.org/release/afp-2021-10-22.tar.gz)
-----
Table 2: Proof success rates on PISA/test
Variants of Thor Success rate (%)
Base, sampling temperature T = 1.2 57.0
Learning how to select premises 55.4
No proof context 53.6
Sampling temperature T = 1.0 55.7
Table 3: Proof success rates on MiniF2F.
Method Valid (%) Test (%)
PACT [Han et al., 2021] 23.9 24.6
Expert iteration [Polu et al., 2022] **33.6** 29.6
Sledgehammer 9.9 10.4
Language model 25.0 24.2
Language model ∪ Sledgehammer 27.1 27.5
Thor 28.3 **29.9**
**4.5** **The Effect of the Proof Context**
Our language model setup differs from that of LISA [Jiang et al., 2021] in that we use the most
recent proof step as the context in the input data, as introduced in Section 3. This is based on
the intuition that the most recent proof step information is beneficial for the language model’s
reasoning ability. In this subsection we perform an ablation study to confirm the effect of this
```
context on Thor. Here a variant of Thor is trained without the context information and evaluated
```
on PISA/test. The results are in Table 2. We observe that this variant manages to prove 53.6% of
theorems on PISA/test, 3.4% fewer than the base version of Thor. The drop in success rate indicates
that the context information we use is crucial for the optimal performance of Thor.
**4.6** **The Effect of the Sequence Sampling Diversity**
Our language model setup differs from LISA [Jiang et al., 2021] also in the sampling temperature.
Previous works on language models for theorem proving often use a temperature T = 1.0 [Polu and
Sutskever, 2020, Jiang et al., 2021] for sampling output sequences, while we use T = 1.2. A higher
temperature in the sampling procedure means that the generated sequences are more diverse (having
a higher entropy). Here we perform an ablation study on the diversity of Thor-generated sequences.
We evaluate Thor with sampling temperature T = 1.0 on PISA/test and the success rate is in Table 2.
We can see that the success rate with sampling temperature T = 1.0 is 55.7%, 1.3% lower than with
_T = 1.2. This suggests a more diverse sampling strategy can improve Thor’s performance, and that_
the optimal diversity in language model samples varies for different systems.
**4.7** **Comparing Thor with Expert Iteration**
There exist other methods for improving language models for theorem proving like value function
training [Polu and Sutskever, 2020], proof artifact co-training [Han et al., 2021, PACT], and expert
iteration [Polu et al., 2022]. We wish to compare Thor with them. However, these methods operate in
ITPs other than Isabelle and are thus hard to compare with directly. Thankfully, Polu et al. [2022]
used expert iteration [Silver et al., 2017] to improve PACT [Han et al., 2021] and to achieve the
state-of-the-art result on MiniF2F, a dataset containing multiple ITP formalisations of the same
problems. Hence, we can fairly contrast expert iteration with Thor. We should emphasise that Thor
and expert iteration are not incompatible methods: one can use Thor together with expert iteration.
We start by evaluating Thor, a language model with the same configuration, and Sledgehammer on
MiniF2F. The results are presented in Table 3. We also include the success rates of the language
model that Polu et al. [2022] used (PACT), as well as the language model after expert iteration in the
same table. The success rates on the validation set are also included, but we use the rates on the test
-----
set as the final results, as the valid set can be used for model selection. We can see that the language
model is able to prove 24.2% of the problems on MiniF2F, similar to PACT’s 24.6%. Thor increases
the success rate of the language model by 5.7% to 29.9%, while expert iteration increases the success
rate of PACT by 5.0% to 29.6%. Hence, the improvement in proof success rate brought upon the
language model by Thor is comparable to that by expert iteration.
An important factor in choosing a suitable method is its cost. Expert iteration requires manually
creating a set of “curriculum” problems, evaluating the language model on them, and training the
language model on a growing training set for one epoch every iteration. We estimate that to perform
expert iteration at the same scale as Polu et al. [2022] for Isabelle, it would cost 100 human hours
to formalise 300 maths problems, and 500 TPU hours to evaluate and fine-tune the language model
for 8 expert iterations. Thor, on the other hand, incurs little extra computational cost compared with
training a standard language model. We conclude that while requiring a much smaller computational
budget, Thor can improve language models’ success rates to a similar degree as expert iteration.
**5** **Related Work**
Language models were first applied to automate theorem proving by Polu and Sutskever [2020].
Since then, there have been a few works [Han et al., 2021, Jiang et al., 2021, Polu et al., 2022]
aiming to enhance the ability of language-model-based reasoning systems, or to enable these systems
for interactive theorem provers that were not supported before. All of these works used the same
framework laid down by Polu and Sutskever [2020], namely to iteratively sample from a language
model and directly apply the output to the ITP. Thor, to the best of our knowledge, is the first system
to explicitly hybridise language models and symbolic reasoning tools (ATPs) for theorem proving.
Instead of relying on language models entirely, Thor uses hammers, a well-established tool, to solve
premise selection.
With the growing bodies of formal mathematical libraries, premise selection has become one of the
most crucial tasks of theorem proving. The hammer method is one of the many ways that premise
selection can be done. We have described how the Isabelle implementation of the hammer method
selects premises in Section 2. HOL(y)Hammer [Kaliszyk and Urban, 2015] and CoqHammer [Czajka
and Kaliszyk, 2018] implement the hammer method for HOL Light and Coq respectively, making it
possible for Thor to be instantiated for them. Apart from hammers, SInE [Hoder and Voronkov, 2011]
and SRASS [Sutcliffe and Puzis, 2007] are both symbolic methods that take on the task of premise
selection by ranking the available premises according to their relevance to the current conjecture,
measured by syntactic and semantic distances respectively. MaLARea [Urban, 2007] pioneered
having machine learning components in premise selection systems and its later version MaLARea
SG1 [Urban et al., 2008] combines machine learning and formal semantics for premise selection. A
few approaches [Irving et al., 2016, Wang et al., 2017, Kaliszyk et al., 2017] use deep learning in the
premise selection task. All these diverse methods may have quantitative or qualitative merits over the
hammer approach, and thus have the potential to be integrated as the premise selection component
for future versions of Thor.
**6** **Discussion**
In this paper we introduced a simple approach to overcome language models’ weakness in premise
selection for theorem proving: we created Thor, a framework that integrates language models and
automated theorem provers via the hammer proof method. We presented a straightforward protocol
for deploying Thor on any hammer-enabled ITP. The instance of Thor with Isabelle dramatically
increased the number of automatically proved theorems, suggesting that language models’ deficiency
at premise selection can be effectively compensated by utilising ATPs. Furthermore, approaches like
expert iteration [Polu et al., 2022] or proof artifact co-training [Han et al., 2021] have no contradictions
and can be easily incorporated with Thor. Compared with these methods, Thor has the additional
advantage of being computationally efficient.
One limitation of Thor is that it only admits automated theorem provers that directly generate valid
proof steps in the ITP via the use of the hammer. In Section 5, we pointed out that there are other
premise selection tools with approaches different from the hammer method that the current version of
Thor cannot use. Also, there exist methods which assist premise selection but do not directly generate
-----
the proof steps. An example of this is SErAPIS [Stathopoulos et al., 2020], which performs semantic
search over the Isabelle mathematical library with the help of Wikipedia. Thor cannot use this class of
methods either. We leave to future work the task of broadening the options for the premise selection
tool that Thor uses. Here we only tested Thor on the ITP Isabelle due to the computational costs of
experiments. Therefore another future direction is to instantiate Thor with other ITPs and see whether
improvements brought by Thor are as significant for other ITPs as we show them here for Isabelle.
Thor demonstrates how a difficult problem for language models can be solved by borrowing tools
from another research domain. We are encouraged by its success and think that more problems like
premise selection can be identified and solved similarly. With its strong performance, computational
efficiency, and convenient deployment, Thor gives scope to tool hybridisation, which shows promise
to be impactful in the field of automated reasoning, and artificial intelligence in general.
**References**
Jeremy Avigad, Kevin Donnelly, David Gray, and Paul Raff. A formally verified proof of the prime
number theorem. ACM Transactions on Computational Logic (TOCL), 9(1):2–es, 2007.
Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. Hammering
towards QED. J. Formaliz. Reason., 9(1):101–148, 2016. doi: 10.6092/issn.1972-5787/4593. URL
```
https://doi.org/10.6092/issn.1972-5787/4593.
```
Stephen A Cook. The complexity of theorem-proving procedures. In Proceedings of the third annual
_ACM symposium on Theory of computing, pages 151–158, 1971._
Lukasz Czajka and Cezary Kaliszyk. Hammer for coq: Automation for dependent type theory.
_[J. Autom. Reason., 61(1-4):423–453, 2018. doi: 10.1007/s10817-018-9458-4. URL https:](https://doi.org/10.1007/s10817-018-9458-4)_
```
//doi.org/10.1007/s10817-018-9458-4.
```
Martin Davis, George Logemann, and Donald Loveland. A machine program for theorem-proving.
_Communications of the ACM, 5(7):394–397, 1962._
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang,
Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An
[800gb dataset of diverse text for language modeling, 2021. URL https://arxiv.org/abs/](https://arxiv.org/abs/2101.00027)
```
2101.00027.
```
Thibault Gauthier and Cezary Kaliszyk. Premise selection and external provers for HOL4. In
Xavier Leroy and Alwen Tiu, editors, Proceedings of the 2015 Conference on Certified Programs
_and Proofs, CPP 2015, Mumbai, India, January 15-17, 2015, pages 49–57. ACM, 2015. doi:_
[10.1145/2676724.2693173. URL https://doi.org/10.1145/2676724.2693173.](https://doi.org/10.1145/2676724.2693173)
Herbert L Gelernter. Realization of a geometry theorem proving machine. In IFIP congress, pages
273–281, 1959.
Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot,
Stéphane Le Roux, Assia Mahboubi, Russell O’Connor, Sidi Ould Biha, et al. A machine-checked
proof of the odd order theorem. In International conference on interactive theorem proving, pages
163–179. Springer, 2013.
Georges Gonthier et al. Formal proof–the four-color theorem. Notices of the AMS, 55(11):1382–1393,
2008.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. CoRR, abs/2102.06203, 2021. URL
```
https://arxiv.org/abs/2102.06203.
```
Kryštof Hoder and Andrei Voronkov. Sine qua non for large theory reasoning. In International
_Conference on Automated Deduction, pages 299–314. Springer, 2011._
Geoffrey Irving, Christian Szegedy, Alexander A Alemi, Niklas Eén, François Chollet, and Josef
Urban. Deepmath-deep sequence models for premise selection. Advances in neural information
_processing systems, 29, 2016._
-----
Albert Q. Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. Lisa: Language models of isabelle
proofs. 6th Conference on Artificial Intelligence and Theorem Proving, 2021.
Cezary Kaliszyk and Josef Urban. Hol(y)hammer: Online ATP service for HOL light. Math. Comput.
_[Sci., 9(1):5–22, 2015. doi: 10.1007/s11786-014-0182-0. URL https://doi.org/10.1007/](https://doi.org/10.1007/s11786-014-0182-0)_
```
s11786-014-0182-0.
```
Cezary Kaliszyk, François Chollet, and Christian Szegedy. Holstep: A machine learning dataset for
higher-order logic theorem proving. arXiv preprint arXiv:1703.00426, 2017.
Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin,
Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, et al. sel4: Formal
verification of an os kernel. In Proceedings of the ACM SIGOPS 22nd symposium on Operating
_systems principles, pages 207–220, 2009._
Daniel Kühlwein, Twan van Laarhoven, Evgeni Tsivtsivadze, Josef Urban, and Tom Heskes. Overview
and evaluation of premise selection techniques for large theory mathematics. In Bernhard Gramlich,
Dale Miller, and Uli Sattler, editors, Automated Reasoning - 6th International Joint Conference,
_IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings, volume 7364 of Lecture Notes in_
_Computer Science, pages 378–392. Springer, 2012. doi: 10.1007/978-3-642-31365-3\_30. URL_
```
https://doi.org/10.1007/978-3-642-31365-3_30.
```
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Isarstep: a benchmark for high-level
mathematical reasoning. In 9th International Conference on Learning Representations, ICLR 2021,
_[Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/](https://openreview.net/forum?id=Pzj6fzU6wkj)_
```
forum?id=Pzj6fzU6wkj.
```
Jia Meng and Lawrence C. Paulson. Translating higher-order clauses to first-order clauses. J. Autom.
_[Reason., 40(1):35–60, 2008. doi: 10.1007/s10817-007-9085-y. URL https://doi.org/10.](https://doi.org/10.1007/s10817-007-9085-y)_
```
1007/s10817-007-9085-y.
```
Olga Ohrimenko, Peter J Stuckey, and Michael Codish. Propagation via lazy clause generation.
_Constraints, 14(3):357–391, 2009._
Lawrence C. Paulson. Three years of experience with sledgehammer, a practical link between
automatic and interactive theorem provers. In Renate A. Schmidt, Stephan Schulz, and Boris
Konev, editors, Proceedings of the 2nd Workshop on Practical Aspects of Automated Reasoning,
_PAAR-2010, Edinburgh, Scotland, UK, July 14, 2010, volume 9 of EPiC Series in Computing,_
[pages 1–10. EasyChair, 2010. doi: 10.29007/tnfd. URL https://doi.org/10.29007/tnfd.](https://doi.org/10.29007/tnfd)
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_[CoRR, abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393.](https://arxiv.org/abs/2009.03393)_
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. CoRR, abs/2202.01344, 2022.
[URL https://arxiv.org/abs/2202.01344.](https://arxiv.org/abs/2202.01344)
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language
models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Peter Scholze. Liquid tensor experiment. Experimental Mathematics, 0(0):1–6, 2021. doi: 10.1080/
[10586458.2021.1926016. URL https://doi.org/10.1080/10586458.2021.1926016.](https://doi.org/10.1080/10586458.2021.1926016)
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap,
Karen Simonyan, and Demis Hassabis. Mastering chess and shogi by self-play with a general
[reinforcement learning algorithm. CoRR, abs/1712.01815, 2017. URL http://arxiv.org/abs/](http://arxiv.org/abs/1712.01815)
```
1712.01815.
```
Yiannos Stathopoulos, Angeliki Koutsoukou-Argyraki, and LC Paulson. Serapis: A concept-oriented
search engine for the isabelle libraries based on natural language. In Online proceedings of the
_Isabelle Workshop, 2020._
-----
Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with
[rotary position embedding, 2021. URL https://arxiv.org/abs/2104.09864.](https://arxiv.org/abs/2104.09864)
Geoff Sutcliffe and Yury Puzis. SRASS - A semantic relevance axiom selection system. In Frank
Pfenning, editor, Automated Deduction - CADE-21, 21st International Conference on Automated
_Deduction, Bremen, Germany, July 17-20, 2007, Proceedings, volume 4603 of Lecture Notes in_
_Computer Science, pages 295–310. Springer, 2007. doi: 10.1007/978-3-540-73595-3\_20. URL_
```
https://doi.org/10.1007/978-3-540-73595-3_20.
```
Geoff Sutcliffe, Josef Urban, and Stephan Schulz, editors. Proceedings of the CADE-21 Workshop
_on Empirically Successful Automated Reasoning in Large Theories, Bremen, Germany, 17th_
_[July 2007, volume 257 of CEUR Workshop Proceedings, 2007. CEUR-WS.org. URL http:](http://ceur-ws.org/Vol-257)_
```
//ceur-ws.org/Vol-257.
```
Alfred Tarski. Truth and proof. Scientific American, 220(6):63–77, 1969.
Josef Urban. Malarea: a metasystem for automated reasoning in large theories. In ESARLT, 2007.
Josef Urban, Geoff Sutcliffe, Petr Pudlák, and Jirí Vyskocil. Malarea SG1- machine learner for
automated reasoning with semantic guidance. In Alessandro Armando, Peter Baumgartner, and
Gilles Dowek, editors, Automated Reasoning, 4th International Joint Conference, IJCAR 2008,
_Sydney, Australia, August 12-15, 2008, Proceedings, volume 5195 of Lecture Notes in Computer_
_[Science, pages 441–456. Springer, 2008. doi: 10.1007/978-3-540-71070-7\_37. URL https:](https://doi.org/10.1007/978-3-540-71070-7_37)_
```
//doi.org/10.1007/978-3-540-71070-7_37.
```
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
[Kaiser, and Illia Polosukhin. Attention is all you need, 2017. URL https://arxiv.org/abs/](https://arxiv.org/abs/1706.03762)
```
1706.03762.
```
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.
```
https://github.com/kingoflolz/mesh-transformer-jax, May 2021.
```
Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by deep
graph embedding. Advances in neural information processing systems, 30, 2017.
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. minif2f: a cross-system benchmark for
formal olympiad-level mathematics. In International Conference on Learning Representations,
[2022. URL https://openreview.net/forum?id=9ZPegFuFTFv.](https://openreview.net/forum?id=9ZPegFuFTFv)
-----
**A** **Appendix**
In this section, we present some lemmas solved by Thor only.
**Case 1. The lemma cols_upt_k _insert is from the QR Decomposition entry[5]** in the AFP.
**lemma cols_upt_k_insert:**
**fixes A::"’a^’n::{mod_type}^’m::{mod_type}"**
**assumes k: "(Suc k)<ncols A"**
**shows "cols_upt_k A (Suc k) = (insert (column**
```
(from_nat (Suc k)) A) (cols_upt_k A k))"
```
**unfolding cols_upt_k_def**
**apply (auto)**
**apply (metis Suc_lessD from_nat_mono’ from_nat_to_nat_id k**
```
less_Suc_eq_le less_le ncols_def to_nat_le)
```
**by (metis from_nat_mono’ k less_imp_triv**
```
less_or_eq_imp_le ncols_def not_less_eq order_trans)
```
Here, cols_upt_k A (Suc k) returns the set of columns in the matrix A up to the natural number
```
k+1, while ncols A counts the number of columns in the matrix A. In short, this lemma claims that
```
the set of columns (in a matrix A) up to column index k + 1 is equivalent to that of the same matrix
up to column index k inserted with the (k + 1)[th] column (of A). This will subject to the condition
that k + 1 is less than the number of columns in A. With Thor, the LM decided to unfold the goal
with the definition of cols_upt_k, which is followed by an auto tactic to simplify the proof state. All
remaining subgoals are then discharged by Sledgehammer.
**Case 2. The lemma size_del_max is from theWeight-Balanced Trees entry[6]** in the AFP.
**lemma size_del_max: "t ̸= Leaf =⇒** `size t = Suc(size(snd(del_max t)))"`
**apply(induction t rule: del_max.induct)**
**apply simp**
**apply (clarsimp split: prod.splits)**
**apply (smt (z3) size_rotateR size_wbt.simps(1))**
**by simp**
In this lemma, t is a weight-balanced tree, and the size function measures its size (as the name
suggests) and del_max deletes the maximum node from it. Essentially, this lemma claims that when a
weight-balanced its size will be reduced by one if we remove the largest node from it. For the proof,
Thor intelligently performs structural induction with the induction rule del_max.induct and then
simplifies the proof state a few times, which includes splitting products with the rule prod.splits.
Finally, Thor concludes the remaining goals with Sledgehammer.
**Case 3. The lemma t_list_of_B_log_bound is from the AFP entry named as Priority Queues Based**
_on Braun Trees.[7]_
**lemma t_list_of_B_log_bound:**
```
"braun t =⇒ t_list_of_B t ≤ 3 * (nlog2 (size t + 1) + 1) * size t"
```
**apply (induction t rule: measure_induct_rule[where f=size])**
**apply (case_tac x)**
**apply simp**
**using braun.simps(1) t_list_of_B_braun_simps(1) apply blast**
**by (metis acomplete_if_braun height_acomplete order_refl**
```
size1_size t_list_of_B_induct)
```
Here, size measures the size of a Braun tree; nlog2 stands for the function λx. log2(x) ;
_⌈_ _⌉_
```
t_list_of_B is another measure of a Braun tree. Basically, this lemma describes the relation```
ship between a normal tree size and a Braun-tree specific measure. The proof starts with an intelligent
structural induction, progresses with case analysis, and is concluded with Sledgehammer on each of
the remaining subgoals.
5QR_Decomposition/Gram_Schmidt.thy
6Weight_Balanced_Trees/Weight_Balanced_Trees.thy
7Priority_Queue_Braun/Sorting_Braun.thy
-----
**Case 4. The lemma inj_imp_Ker0 is from the AFP entry named as Matrices, Jordan Normal Forms,**
_and Spectral Radius Theory.[8]_
**lemma inj_imp_Ker0:**
**assumes "inj_on T (carrier V)"**
**shows "carrier (V.vs kerT) = {0V}"**
**apply (rule equalityI)**
**apply (rule subsetI)**
**apply (unfold ker_def, auto)**
**by (metis V.module.M.zero_closed assms f0_is_0 inj_on_contraD)**
Here, T is a linear map between two vector spaces. The lemma claims that if the T is injective on the
carrier set of the space V, the kernel of T has to be a singleton set with the zero in V. In this proof,
Thor naturally performs a sequence of introduction steps by applying the lemma equalityI and
`subsetI, before unfolds the definition of a kernel (i.e., ker_def` ) and uses auto to simplify the proof
state. The final remaining goal is closed with Sledgehammer.
8Jordan_Normal_Form/Missing_VectorSpace.thy
-----
| [
"Albert Q., Jiang",
"Wenda, Li",
"Yuhuai, Wu",
"Szymon, Tworkowski",
"Konrad, Czechowski",
"Tomasz, Odrzygóźdź",
"Piotr, Miłoś",
"Mateja, Jamnik"
] | 2022-05-22T00:00:00 | NeurIPS 2022 | true | 65 | 15 | [
"Isabelle"
] | http://arxiv.org/abs/2205.10893 | https://arxiv.org/abs/2205.10893 | https://www.semanticscholar.org/paper/c2d574f7c6a9e3bafe396ecb4ab639179d6fd92c |
Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem | The celebrated Seq2Seq technique and its numerous variants achieve excellent performance on many tasks such as neural machine translation, semantic parsing, and math word problem solving. However, these models either only consider input objects as sequences while ignoring the important structural information for encoding, or they simply treat output objects as sequence outputs instead of structural objects for decoding. In this paper, we present a novel Graph-to-Tree Neural Networks, namely Graph2Tree consisting of a graph encoder and a hierarchical tree decoder, that encodes an augmented graph-structured input and decodes a tree-structured output. In particular, we investigated our model for solving two problems, neural semantic parsing and math word problem. Our extensive experiments demonstrate that our Graph2Tree model outperforms or matches the performance of other state-of-the-art models on these tasks. | This paper presents a novel Graph-to-Tree Neural Networks, namely Graph2Tree consisting of a graph encoder and a hierarchical tree decoder, that encodes an augmented graph-structured input and decodes a tree- Structured output. | # Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem
**Shucheng Li[∗†], Lingfei Wu[∗][‡], Shiwei Feng[†], Fengyuan Xu[†], Fangli Xu[§], Sheng Zhong[†]**
_†National Key Lab for Novel Software Technology, Nanjing University, China_
_‡IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA_
_§Squirrel AI Learning, New Jersey, USA_
_†{shuchengli,fengyuan.xu,swfeng98,sheng.zhong}@smail.nju.edu.cn_
_‡[email protected], §[email protected]_
|SP|Text Input: what jobs are there for web developer who know ’c++’ ? Structured output: answer( A, ( job ( A ), title ( A, W ), const ( W, ’Web Developer’ ), language ( A, C ), const ( C, ’c++’ ) ) )|
|---|---|
|MWP|Text input: 0.5 of the cows are grazing grass . 0.25 of the cows are sleep- ing and 9 cows are drinking water from the pond . find the total number of cows . Structured output: ( ( 0.5 * x ) + ( 0.25 * x ) ) + 9.0 = x|
**Text Input:**
what jobs are there for web developer who know ’c++’ ?
SP
**Structured output:**
answer( A, ( job ( A ), title ( A, W ), const ( W, ’Web
Developer’ ), language ( A, C ), const ( C, ’c++’ ) ) )
**Text input:**
0.5 of the cows are grazing grass . 0.25 of the cows are sleeping and 9 cows are drinking water from the pond . find the
MWP total number of cows .
**Structured output:**
( ( 0.5 * x ) + ( 0.25 * x ) ) + 9.0 = x
Table 1: Examples of structured input and output of semantic parsing (SP) and math word problem (MWP).
For inputs, we consider parsing tree augmented sequences to get structural information. For outputs, they
are naturally a hierarchical structure with some structural meaning symbols like brackets.
math word problem, label sequence learning, and
supervised grammar learning, to name just a few.
As shown in Fig. 1, finding the parse tree of a
sentence involves a structural dependency among
the labels in the parse tree; generating a mathematical expression of a math word problem involves
a hierarchical dependency between math logical
operations and the numbers. Conventionally, there
have been efforts in generalizing kernel methods to
predict structured and inter-dependent variables in
a supervised learning setting (Tsochantaridis et al.,
2005; Altun et al., 2004; Joachims et al., 2009).
Recently, the celebrated Sequence-to-Sequence
technique (Seq2Seq) and its numerous variants
(Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015) achieve excellent performance in
neural machine translation. Encouraged by the success of Seq2Seq model, there is a surge of interests
in applying Seq2Seq models to cope with other
tasks such as developing neural semantic parser
(Dong and Lapata, 2016) or solving math word
problem (Ling et al., 2017). However, the two
significant challenges making a Seq2Seq model ineffective in these tasks are that, i) for the natural
**Abstract**
The celebrated Seq2Seq technique and its
numerous variants achieve excellent performance on many tasks such as neural machine
translation, semantic parsing, and math word
problem solving. However, these models either only consider input objects as sequences
while ignoring the important structural information for encoding, or they simply treat output objects as sequence outputs instead of
structural objects for decoding. In this paper, we present a novel Graph-to-Tree Neural
Networks, namely Graph2Tree consisting of a
graph encoder and a hierarchical tree decoder,
that encodes an augmented graph-structured
input and decodes a tree-structured output. In
particular, we investigated our model for solving two problems, neural semantic parsing and
math word problem. Our extensive experiments demonstrate that our Graph2Tree model
outperforms or matches the performance of
other state-of-the-art models on these tasks.
**1** **Introduction**
Learning general functional dependency between
arbitrary input and output spaces is one of the key
challenges in machine learning. While many efforts
in machine learning have mainly focused on designing flexible and powerful input representations for
solving classification or regression problems, many
applications require researchers to design novel
models that can deal with complex structured inputs and outputs, such as graphs, trees, sequences,
or sets. In this paper, we consider the general problem of learning a mapping between a graph input G ∈G and a tree output T ∈T, based on
a training sample of structured input-output pairs
(G1, T1), ..., (Gn, Tn) drawn from some
_∈G × T_
fixed but unknown probability distribution.
Such learning problems often arise in a variety
of applications, ranging from semantic parsing, to
_∗_ authors contributed equally to this research.
2841
-----
text description input, it often entails some hidden
syntactic structure information such as dependency,
constituency tree or even semantic structure information like AMR parsing tree; ii) for the meaningful representation output, it typically contains
abundant information in a structured object like a
parsing tree or a mathematical equation.
Inspired by these observations, in this work, we
propose a Graph-to-Tree neural networks, namely
Graph2Tree consisting of a graph encoder and a
hierarchical tree decoder, which leverages the structural information of both source graphs and target
trees. In particular, our Graph2Tree model learns
the mapping from a structured object such as a
graph to another structured object such as a tree. In
addition, we also observe that the structured object
translation typically follows a modular procedure,
which translates the individual sub-graph in the
source graph into the corresponding target one in
target tree output, and then compose them to form
the final target tree.
Therefore, we design a workflow to align with
this procedure: our graph encoder first learns from
an input graph that is constructed from the various
inputs such as combining both a word sequence and
the corresponding dependency or constituency tree,
and then our tree decoder generates the tree object
from the learned graph vector representations to explicitly capture the compositional structure of a tree.
In particular, we present a novel Graph2tree model
with a separated attention mechanism to jointly
learn a final hidden vector of the corresponding
graph nodes in order to align the generation process between a heterogeneous graph input and a
hierarchical tree output.
To demonstrate the effectiveness of our model,
we perform experiments on two important tasks –
Semantic Parsing and Math Word Problem. First,
we compare our approach against several neural
network approaches on the Semantic Parsing task.
Our experimental results show that our Graph2Tree
model could outperform or match the performance
of other state-of-the-art models on three standard
benchmark datasets. Second, we further compare
our approach with existing recently developed neural approaches on the math word problem and our
results clearly show that our Graph2Tree model
can achieve state-of-the-art performance compared
to other baselines that use many task-specific techniques. We believe our Graph2Tree model is a
solid attempt for learning structured input-output
translation.
**2** **Related Works**
**2.1** **Graph Neural Networks**
The graph representation learning recently attracted
a lot of attention and interest from both academia
and industry. One of the most important research
lines is the semantic embedding learning of graph
nodes or edges based upon the power of graph
neural networks (GNNs) (Li et al., 2016; Kipf and
Welling, 2017; Velickovic et al., 2017; Gilmer et al.,
2017; Hamilton et al., 2017).
Encouraged by the recent success in GNNs, various Sequence-to-Graph (Peng et al., 2018) or
Graph-to-Sequence models (Xu et al., 2018a,b,c;
Beck et al., 2018; Chen et al., 2020) have been
proposed to handle the structured inputs, structured
outputs or both of them, i.e. generating AMR graph
generation from the text sequence. More recently,
some researchers proposed the Tree-to-Tree (Chen
et al., 2018b), Graph-to-Tree (Yin et al., 2019) and
Graph-to-Graph (Guo et al., 2018) neural networks
for targeted application scenarios.
However, these works are designed exclusively
for specific downstream tasks like program translation or code edit. Compared to them, our proposed
Graph2Tree neural network with novel design of
graph encoder and tree decoder does not rely on
any specific downstream task assumption. Additionally, our Graph2Tree is the first generic neural
network translating graph inputs into tree outputs,
which may have numerous applications in practice.
**2.2** **Neural Semantic Parsing**
Semantic parsing is the task of translating natural language utterances into machine-interpretable
meaning representations like logical forms or SQL
queries. Recent years have witnessed a surge of interests in developing neural semantic parsers with
sequence-to-sequence models. These parsers have
achieved promising results (Jia and Liang, 2016;
Dong and Lapata, 2016; Ling et al., 2016). Due to
the fact that the meaning representations are usually structured objects (e.g. tree structures), many
efforts have been devoted to develop structureoriented decoders, including tree decoders (Dong
and Lapata, 2016; Alvarez-Melis and Jaakkola,
2017), grammar constrained decoders (Yin and
Neubig, 2017; Yin et al., 2018; Jie and Lu, 2018;
Dong and Lapata, 2018), action sequences for semantic graph generation (Chen et al., 2018a), and
2842
-----
modular decoders based on abstract syntax trees
(Rabinovich et al., 2017). However, those approaches could potentially be further improved because they only consider the word sequence information and ignore other rich syntactic information,
such as dependency or constituency tree, available
at the encoder side.
Researchers recently attempted to leverage of
the power of GNNs in various NLP tasks, including the neural machine translation (Bastings et al.,
2017; Beck et al., 2018), conversational machine
reading comprehension (Chen et al., 2019b), and
AMR-to-text (Song et al., 2018). Specifically in
the semantic parsing field, a general Graph2Seq
model (Xu et al., 2018b) is proposed to incorporate
these dependency and constituency trees with the
word sequence and then create a syntactic graph as
the encoding input. However, this approach simply
treats a logical form as a sequence, neglecting the
abundant information in a structured object like tree
in the decoder architecture. Therefore, we present
the Graph2Tree model to utilize the structure information in both structured inputs and outputs.
**2.3** **Math Word Problems**
The math word problem is the task of translating
the short paragraph (typically consisting with multiple short sentences) into succinct mathematical
equations. To solve a math word problem illustrated in Table 1, traditional approaches focus on
generating numeric answer expressions by mapping verbs in problems text to categories (Hosseini
et al., 2014) or by generating templates from problem texts (Kushman et al., 2014). However, these
approaches either need additional hand-crafted annotations for problem texts or are limited to a set
of predefined equation templates.
Inspired by the great success of Seq2Seq models in Neural Machine Translation, deep-learning
based methods are intensively explored by researchers in the equation generation (Wang et al.,
2017; Ling et al., 2017; Li et al., 2018, 2019; Zou
and Lu, 2019; Xie and Sun, 2019). However, different forms of equations can be formed to solve
the same math problem, which often makes models fail. To resolve the equation duplication issues,
various equation normalization methods are proposed in (Wang et al., 2018a, 2019) to generate a
unique expression tree with the cost of losing the
understanding of problem-solving steps in equation expressions. In contrast, we propose to use a
Graph2Tree model to solve this task without any
special mechanisms like equation normalization.
To the best of our knowledge, this is the first work
to use GNN to build a math word problem solver.
**3** **Problem Formulation and Structure**
**Object Construction**
**3.1** **Graph-to-Tree Translation Task**
In this work, we consider the problem of translating a graph input to a tree output. In particular, we consider two important tasks - Semantic Parsing and Math Word Problem. Formally,
we define both tasks as follows. The input side
contains a set of text sequences, denoted as S =
_{s1, s2, . . ., sn} ∈S where si is a text sequence_
consisting of a sequence of word embeddings
_si = {w1, w2, . . ., w|si|} ∈W, where W is a pre-_
trained word embedding space. We then construct
a heterogeneous graph input G = (V, E) ∈G,
where V = [V1 V2] contains all of the original
word nodes V1 ∈V1 as well as the relationship
nodes V2 ∈V2 from the relationships of a parsing
tree (i.e. dependency or constituency tree), and
_E ∈E denotes if the two nodes are connected or_
not. The aim is to translate a set of heterogeneous
graph inputs G = _g1, g2, . . ., gn_ into a set of tree
_{_ _}_
outputs T = {t1, t2, ...tn} ∈T where ti is a logic
form or math equation consisting of a sequence of
tree node token ti = {y1, y2, . . ., y|ti|} ∈Y.
**3.2** **Constructing Graph Inputs and Tree**
**Outputs**
To apply GNNs, the first step is to construct a graph
input by combining the word sequence with their
corresponding hidden structure information. How
to construct such graphs is critical to incorporate
the structured information and influences the final
performance. Similarly, how to construct the tree
outputs from logic form or math equations also
play an important role in the final performance
and model interpretability. In this section, we will
introduce two methods for graph construction and
one method for tree construction.
nsubj
nmod
expl
compound case
are there ada jobs outside austin
Dependency Feature Sentence Level Feature
Figure 1: Dependency tree augmented text graph
2843
-----
**Combining Word Sequence with Dependency**
**Parse Tree. The dependency parse tree not only**
represents various grammatical relationships between pairs of text words, but also is shown to have
an important role in transforming texts into logical forms (Reddy et al., 2016). Therefore, the first
method integrates two types of features by adding
dependency linkages between corresponding word
pairs in word sequence. Concretely, we transform
a dependency label into a node, which is linked
respectively with two word nodes with dependency
relationship. Figure 1 gives such an example of
constructed heterogeneous graph from a text.
|VBP EX FW NNS IN NN|Col2|Col3|
|---|---|---|
||||
ROOT
SQ
P
Layer 3 PP
Layer 2 ND NP NP
Layer 1 VBP EX FW NNS IN NN
are there ada jobs outside austin
Constituency Feature Sentence Level Feature
Figure 2: Constituency tree augmented text graph
**Combining Word Sequence with Constituency**
**Tree. The constituency tree contains the phrase**
structure information which is also critical to describe the word relationships and has shown to provide useful information for translation (Gu et al.¯,
2018). Since the leaf nodes in the constituency
tree are the word nodes in the text, this method
merges these nodes with the identical ones in the
bi-directional word sequence chain to create the
syntactic graph. Figure 2 shows an example of
constructed heterogeneous graph input.
|Graph Embedding S|ROOT 1 + 9.0 = x|Col3|
|---|---|---|
|||x|
**Graph** ROOT
**Embedding**
S1 + 9.0 = x
S2 + S2
0.5 - x 0.25 - x
Subtree Node Operator Node Operand Node
Start Decoding Parent Feeding Sibling Feeding
Figure 3: A sample tree output in our decoding process
from expression ”( ( 0.5 * x ) + ( 0.25 * x ) ) + 9.0 = x”
**Constructing Tree Outputs. To effectively learn**
the compositional nature of our structured outputs,
we need to firstly transform original outputs from
logic forms or math equations to tree structured
objects. Specifically, we follow the tree construction method in (Dong and Lapata, 2016), which
is a top-down manner to generate tree-structured
outputs. In original outputs containing structural
meaning symbols like brackets, we first extract subtree structures and replace these sub-tree structures
with sub-tree symbols. Then we grow branches
from the generated sub-tree symbols until all hierarchical structures in the original sequence are
processed. Figure 3 provides an example of constructed tree objects from mathematical expression.
**4** **Graph2Tree Neural Networks**
We aim to learn a mapping that translates a heterogeneous graph-structured input G and its corresponding tree-structured outputs T . We illustrate
the workflow of our proposed Graph2Tree model
for semantic parsing in Figure 4, and present each
component of the model as follows.
**4.1** **Graph Encoder**
To effectively learn graph representations from our
constructed heterogeneous text graph, we present a
novel bidirectional graph node embeddings method
- BiGraphSAGE. The proposed BiGraphSAGE extends the widely used GraphSAGE (Hamilton et al.,
2017) by learning forward and backward node embeddings of a graph G in an interleaved fashion.
In particular, consider a word node v 1
_∈V_
with pretrained word embedding wv like GloVe
(Pennington et al., 2014) as v’s initial attributes.
We then generate the contextualized node embeddings av for all nodes v ∈V1 using Bi-directional
Long Short Term Memory (BiLSTM) (Graves et al.,
2013). For a relationship node v 2, we initial_∈V_
ize av with randomized embeddings. These feature vectors are used as initial node embeddings
**h[0]v** [=][ a][v][. Then each node embedding learns its]
vector representation by aggregating information
from a node local neighborhood within K hops of
the graph.
**h[k]** (v) [=][ M][k] _u_ (1)
_N⊢_ _⊢[(][{][h][k][−]⊢_ [1][,][ ∀][u][ ∈N][⊢][(][v][)][}][)]
**h[k]** (v) [=][ M][k] _u_ (2)
_N⊣_ _⊣[(][{][h][k][−]⊣_ [1][,][ ∀][u][ ∈N][⊣][(][v][)][}][)]
where k ∈{1, ..., K} is the iteration index and N
is the neighborhood function of node v. M[k]
_⊢_ [and]
**M[k]**
_⊣_ [are the forward and backward aggregator func-]
tions. Node v’s forward (backward) representation
**h[k]v⊢** [(][h]v[k]⊣[) aggregates the information of nodes in]
(v) ( (v)).
_N⊢_ _N⊣_
2844
-----
|X .# . . .|. . .|X .# . . .|Col4|X .# . . .|. . .|
|---|---|---|---|---|---|
**DependencyInput : Parseare there ada jobs outside austin :** **Pooling** XX....#" **. . .** XX....#" XX....#" **. . .** XX....#" XX....#" **Tree DecoderLSTMLSTM** </s><n> **LSTMLSTMLSTM** </s>ANSada
{u'dep': u'compound', u'jobs', u'ada'}, X$ X$ X$ X$ X$ **LSTM** loc
{u'dep': u'nsubj', u'are', u'jobs'}, **Graph**
{u'dep': u'case', u'austin', u'outside'}, **FC Layer** **Embedding** **LSTM** \+
………… **LSTM**, **LSTM** </s>
**Construct Graph** XX.#" XX.#" XX.#" XX.#" **LSTM** <n> **LSTM** ANS
**Graph Encoder** .. .. .. .. **LSTM** jobs
. . . .
are explthere nsubjada compoundjobs nmodoutsidecase austin **Node EmbeddingBi-directional** XXXX....#"$$ **. . .** XXXX....#"$$ XXXX....#"$$ **. . .** XXXX....#"$$ **LSTMLSTMLSTM** language<n>, **LSTMLSTMLSTM** austinANS</s>
z" z+ z+,# z$ … 𝒔3
Hop 1Hop 2Hop 3 XXX.....#"$ [𝐡])['⊣] XXX.....#"$ [𝐡])['⊢]… **AttentionSeparate** 𝜶𝜷𝒕𝒕(𝒊)(𝒊) **Concat∥** 𝒅3 𝒔43 <n>LSTM **Sibling FeedingParent feedingStart decodingNon-terminalDecoder Unit**
Figure 4: Overall architecture of our Graph2Tree model. We use semantic parsing task as an example.
Conceptually, one can choose to keep these
node embeddings for each direction independently,
which ignores interactions between two intermediate node embeddings during the training. Therefore, we fuse two intermediate unidirectional node
embeddings at each hop as follows,
**h1 = h[k]** (v)[,][ h][2] [=][ h][k] (v) (3)
_N⊢_ _N⊣_
**h[k]N (v)** [=][ w][g] _[⊙]_ **[h][1]** [+ (1][ −] **[w][g][)][ ⊙]** **[h][2][,]** (4)
**wg = σ(W[⃗]** _z[[][h]1[;][ h]2[;][ h]1_ _[⊙]_ **[h]2[;][ h]1** _[−]_ **[h]2[])]** (5)
**4.2** **Tree Decoder**
We propose a new general tree decoder fully leveraging the outputs of our graph encoder, i.e. the bidirectional node embeddings and the graph embedding, and faithfully generating the tree-structured
targets like logic forms or math equations.
Inspired by the thinking paradigm of human beings, our tree decoder at high level uses a divideand-conquer strategy splitting the whole decoding
task into sub ones. Figure 3 illustrates an example
output of our tree decoder. In this example, we
firstly initialize the root tree node ROOT with the
graph embedding g, and then apply a sub-decoder
on the ROOT to generate a 1st-level coarse output
containing a sub-tree node S1. This S1 is further
decoded with the similar sub-decoder to derive the
2nd-level coarse output. This procedure is repeated
to generate the 3rd-level output in which there is
no sub-tree nodes. In this way, we get the whole
tree output in a top-down manner.
This whole procedure can be summarized as follows: 1) initialize the root tree node with the graph
embedding from our encoder and perform the first
level decoding with our LSTM based sub-decoder;
2) for each newly generated sub-tree node, a subdecoder is applied to derive the next level coarse
output; 3) repeat step 2 until there is no sub-tree
nodes in the last level of tree structure.
**4.2.1** **Sub-Decoder Design**
In each of our sub-decoder task, the conditional
probability of the generated word at step t is calculated as follows:
where ⊙ denotes component-wise multiplication,
_σ is a sigmoid function and wg is a gating vector._
The graph encoder learns node embeddings h[k]v
by repeating the following process K times:
**h[k]v** [=][ σ][(][W][k][ ·][ CONCAT][(][h]v[k][−][1], h[k] (v)[))] (6)
_N_
where W[k] denotes weight matrices, σ is a nonlinearity function, K is maximum number of hops.
The final bi-directional node embeddings zv is
chosen to concatenate the two unidirectional node
embeddings at the last hop,
**zv = CONCAT(h[K]v** _v_ (7)
_⊢[,][ h][K]⊣[)]_
**g = MAXPOOL(FC(z)).** (8)
After the bi-directional embeddings for all nodes
**z are computed, we then feed the obtained node**
embeddings into a fully-connected neural network
and apply the element-wise max-pooling operation
on all node embeddings to compute the graph-level
vector representation g, where other alternative
commutative operations such as mean or attention
based weighted sum can be used as well.
2845
-----
attention mechanism over the node representations
corresponding to the different node types:
exp(score(zv, st))
_αt(v) =_ exp([P][V]k=1[1] _[score][(][z][k][,][ s][t][))]_ _, ∀v ∈V1_ (11)
exp(score(zv, st))
_βt(v) =_ exp([P][V]k=1[2] _[score][(][z][k][,][ s][t][))]_ _, ∀v ∈V2_ (12)
where the score(·) function estimates the similarity
of zv and st. Then, we compute the context vectors
**cv1 and cv2, respectively.**
**cv1 =** _αt(v)zv, ∀v ∈V1_ (13)
**cv2 =** X _βt(v)zv, ∀v ∈V2_ (14)
X
We concatenate the context vector cv1, context
vector cv2 and decoder hidden state st to compute
the final attention hidden state at this time step as:
**s˜t = tanh(Wc** [cv1 ; cv2 ; st] + bc) (15)
_·_
where Wc and bc are learnable parameters. The
final context vector ˜st is further used for decoding
tree structured outputs. The output probability distribution over a vocabulary at the current time step
is calculated by:
_p(yt_ _y1, y2, . . ., yt_ 1, g) = softmax(Wv ˜st + bv) (16)
_|_ _−_
where Wv and bv are learnable parameters. Our
model is then jointly trained to maximize the conditional log-probability of the target tree given a
heterogeneous graph input g.
**5** **Experiments**
In this section, we evaluate the effectiveness and
generality of Graph2Tree model on two important
tasks – Semantic Parsing and Math Word Problem.
The code and data for our Graph2Tree model are
provided for research purpose [1].
**5.1** **Experiments for Semantic Parsing**
**Datasets. We evaluate our Graph2Tree on three**
totally-different benchmark datasets, JOBS (Zettlemoyer and Collins, 2005), GEO (Zettlemoyer and
Collins, 2005), and ATIS (Dahl et al., 1994), for
the semantic parsing task. The first one JOBS is
a set of 640 queries from a job listing database,
the second one GEO is a set of 880 queries on a
database of U.S. geography, and the last one ATIS
is a dataset of 5410 queries from a flight booking
system. We utilize the same train/dev/test split
standard as used in previous works. We adopt the
data preprocessing provided by (Dong and Lapata,
[1https://github.com/IBM/Graph2Tree](https://github.com/IBM/Graph2Tree)
_p(yt_ **y<t, x) = fpredict(st)** (9)
_|_
where x denotes vectors of all input words, yt is the
predicted output word at t, st is the decoder hidden
state at t, and fpredict is a non-linear function.
The key component of Eq. (9) is the computation of st. Conceptually, this value is calculated as
**st = fdecoder(yt** 1, st 1), where fdecoder is usually
_−_ _−_
a RNN unit. We propose two improvements on top
of it, parent feeding and sibling feeding, to feed
more information for decoding sub-tree nodes.
**Parent feeding. For a sub-task in our tree decod-**
ing process, we aim to expand the sub-tree node in
the parent layer. Therefore, it is reasonable to take
the sub-tree node embedding sti into consideration.
Therefore, we add the sub-tree node embedding
as part of our input at every time-step, in order to
capture the upper-layer information for decoding.
**Sibling feeding. Besides the information from**
parent nodes, if two sub-tree nodes share the same
parent node, then these two sub-tasks can also be related. Inspired by this observation, we employ the
sibling feeding mechanism to feed the preceding
sibling sentence embedding to the sub-task related
to its closet neighbor sub-tree node. For example,
imagine p1 is the parent node of c1, c2, and we feed
both embeddings of p1 and c1 when decoding c2.
Therefore, our sub-decoder calculates the decoder hidden state st as follows:
**st = fdecoder(yt** 1, st 1; stparent; stsibling) (10)
_−_ _−_
where stparent stands for sub-tree node embedding
from parent layer and stsibling is the sentence embedding of the closest preceding sibling. By fully
utilizing the information from parent nodes and
sibling nodes, our tree decoder can effectively generate target hierarchical outputs.
**4.3** **Separate Attention Mechanism to Locate**
**Source Sub-graph**
Various attention mechanisms have been proposed
(Bahdanau et al., 2014; Luong et al., 2015) to incorporate the hidden vectors of the inputs into account during the decoding processing. In particular, the context vector st depends on a set of bidirectional node representations of the source graph
(z1,...,z|V |) to which the decoder locates the source
sub-graph. Since our graph input is essentially
a heterogeneous graph with two different input
sources (word nodes with relationship nodes of
a parsing tree), we propose to employ a separated
2846
-----
2016). Natural language utterances are in lower
case and stemmed, and entity mentions are replaced
by numbered markers. For the graph construction,
we use the dependency parser and constituency
parser from CoreNLP (Manning et al., 2014).
**Settings. We use the Adam optimizer (Kingma**
and Ba, 2014) with a batch size of 20. For the
JOBS and GEO datasets, our hyper-parameters are
cross-validated on the training sets. For ATIS, we
tune them on the development set. The learning
rate is set to 0.001. In graph encoder, the BiRNN
we use is a one-layer BiLSTM with a hidden size
of 150, and the hop size in GNN is chosen from
_{2,3,4,5,6}. The decoder we employ is a one-layer_
LSTM with a hidden size of 300. The dropout rate
is chosen from {0.1,0.3,0.5}.
**Baselines. We compare our model against several**
state-of-the-art neural semantic parsers: i) Seq2Seq
model with a Copy mechanism (Jia and Liang,
2016); ii) Seq2Seq and Seq2Tree models (Dong
and Lapata, 2016); iii) Graph2Seq model (Xu et al.,
2018a). We report the exact-match accuracy for
each baseline on all three benchmarks.
|Methods|JOBS|GEO|ATIS|
|---|---|---|---|
|Jia et al.(2016)|-|85.0|76.3|
|Dong et al.(2016)-Seq2Seq|87.1|84.6|84.2|
|Dong et al.(2016)-Seq2Tree|90.0|87.1|84.6|
|Xu et al.(2018)-Graph2Seq2|88.6|85.7|83.3|
|Graph2Tree|92.9|88.9|84.6|
**Methods** **_JOBS_** **_GEO_** **_ATIS_**
Jia et al.(2016) - 85.0 76.3
Dong et al.(2016)-Seq2Seq 87.1 84.6 84.2
Dong et al.(2016)-Seq2Tree 90.0 87.1 **84.6**
Xu et al.(2018)-Graph2Seq[2] 88.6 85.7 83.3
**Graph2Tree** **92.9** **88.9** **84.6**
Table 2: Exact-match accuracy comparison on all three
benchmarks JOBS, GEO, and ATIS for SP task
|Methods|Translated logic form results|
|---|---|
|Reference str|job (ANS), language (ANS, ’delphi’), title (ANS, ’developer’), loc (ANS, ’san antonio’), platform (ANS, ’windows’)|
|Graph2tree|job (ANS), language (ANS, ’delphi’), title (ANS, ’developer’), loc (ANS, ’san antonio’), platform (ANS, ’windows’ )|
|Graph2seq|job (ANS), language (ANS, ’delphi’), title (ANS, ’developer’), platform (ANS, ’windows’)|
|Seq2seq|job (ANS), language (ANS, ’delphi’), title (ANS, ’developer’), loc (ANS, ’san antonio’)|
Table 3: Case study of SP input: “what jobs can a
_delphi developer find in san antonio on windows ?”_
**Results.** Table 2 shows that our proposed
Graph2Tree outperforms or achieves comparable
exact-match accuracy compared to other state-ofthe-art baselines, highlighting the effectiveness of
our proposed model by exploiting full utilization of
structural information in both inputs and outputs.
2We run our own implementation of Graph2Seq on thesedatasets using PyTorch.
**Case study. Next we analyze the different decod-**
ing results of all models for an example case in
Table 3. The challenge in semantic parsing is the
high-order neighborhood estimation of the noun
key word “jobs” to its attribute words “windows”
and “san antonio”. It is hard for the traditional sequence encoder to encode high-order neighborhood
(long-range dependency). For instance, there are
10 hops between the word “jobs” and “windows”
according to the sequential dependency, while there
are only two hops if we introduce the syntactic dependency information. Therefore, syntactic graph
with graph encoder is an effective way to learn
a high-quality representation for decoding. This
partially explains why our Graph2tree model outperforms Seq2Seq and Seq2Tree models.
|Methods|JOBS|GEO|
|---|---|---|
|Full model|92.9|88.9|
|w/o const tree|90.0|86.8|
|w/ original GraphSage|90.7|88.2|
|w/ only parent feeding|91.4|87.9|
|w/ only sibling feeding|89.2|84.3|
|w/o parent & sibling feeding|88.6|83.9|
|w/o separated attention w/ uniform attention|83.6 90.7|77.1 87.1|
|w/o bilstm|89.3|86.4|
**Methods** **JOBS** **GEO**
Full model 92.9 88.9
w/o const tree 90.0 86.8
w/ original GraphSage 90.7 88.2
w/ only parent feeding 91.4 87.9
w/ only sibling feeding 89.2 84.3
w/o parent & sibling feeding 88.6 83.9
w/o separated attention 83.6 77.1
w/ uniform attention 90.7 87.1
w/o bilstm 89.3 86.4
Table 4: Ablation study of Graph2Tree on the semantic parsing (JOBS and GEO). We employ exact match
accuracy as evaluation metric.
**Ablation study. Table 4 presents the ablation study**
on our Graph2Tree using a constituency tree based
graph (on SP datasets JOBS and GEO). This is
done with test sets (JOBS and GEO have no dev
set). Firstly, We observe that the syntactic information in constituency tree, which is helpful for
describing word relationships, is critical to our overall performance. And we found that our bidirectional GraphSAGE, encoding from both forward
and backward nodes according to edge direction,
is proved to enhance the final performance. Furthermore, parent feeding and sibling feeding mechanism, which can enrich both the paternal and fraternal information in decoding, also play important
roles in the whole model. In addition, designed for
different types of nodes in the input graph, the separate attention mechanism is proved useful in our
model. Last but not least, it is also necessary to use
Bi-LSTM in encoder to learn the contextualized
word embeddings from the word sequences.
2847
-----
|Methods|Col2|MATHQA|
|---|---|---|
|Seq2Prog||51.9|
|Seq2Prog+Cat||54.2|
|TP-N2F||55.95|
|Seq2seq||58.36|
|Seq2Tree||64.15|
|Graph2Seq||65.36|
|Graph2Tree|with constituency graph with dependency graph|69.65 65.66|
**Methods** **MATHQA**
Seq2Prog 51.9
Seq2Prog+Cat 54.2
TP-N2F 55.95
Seq2seq 58.36
Seq2Tree 64.15
Graph2Seq 65.36
with constituency graph **69.65**
**Graph2Tree**
with dependency graph 65.66
Table 6: Solution accuracy comparison on MATHQA
so far on this MAWPS benchmark. We have observed similar conclusions on a more challenging
and larger dataset – MATHQA. This highlights the
importance of having our Graph2Tree neural networks that can leverage the structured information
from both inputs and outputs for automatic solving
of math problems.
It is worth noting that our hierarchical tree decoder directly generates original mathematical expressions, which faithfully reflect reasoning steps
when building math equations. However, state-ofthe-art math word problem solvers like Group-Att
(Li et al., 2019) or T-RNN (Wang et al., 2019)
have achieved high performance by utilizing Equation Normalization (EN) proposed by (Wang et al.,
2019) to keep structures of output equations unified. This method can improve solution accuracy
because it reduces the difficulty of equation generation. On the other hand, the normalized equations
completely lose the semantic meaning of operands
and operators, making them difficult to reason rationales how answer math equations are built.
**Attention visualization. For better understanding**
of our separated attention, we give a visualization
sample from MAWPS. As shown in Figure 5(a),
we give an augmented graph input and equation
tree, where ⟨N⟩ is sub-tree node and 1, 2 are indexed markers for original numbers. Specifically,
Figure 5(b) and 5(c) illustrates alignments with
word nodes and compositional nodes in graph input respectively. For example, in Figure 5(c), the
equation part “2 * 1” is matched with “a bee has
2 legs” in the original natural language sentence
which is actually semantically connected with “NP”
and “VP” in the constituency tree.
**Ablation study. Similarly, we also perform the**
ablation study for math word problem (MAWAPS),
as shown in Table 7. This is done with dev set.
Attention mechanism, constituency structure and
other components in our model play significant
roles for Graph2tree to achieve high performance
in MWP solving, which is consistent with our ob
**5.2** **Experiments for Math Word Problems**
**Datasets. We here evaluate our Graph2Tree model**
on two benchmark datasets, MAWPS (KoncelKedziorski et al., 2016) and MATHQA (Amini
et al., 2019), for the Math Word Problems automatically solving task. The MAWPS dataset is a
Math Word Problem dataset in English and contains 2373 pairs after harvesting equations with single unknown variable. The other MATHQA dataset
is a recently proposed large-scale Math Word Problem dataset with 37k English pairs, where each
math expression is corresponding to an annotated
formula for better interpretability. This dataset is
more difficult for covering complex multivariate
problems.
**Baselines. We compare our Graph2Tree model**
against several state-of-the-art methods. We report
the solution accuracy for each baseline in test set.
On MAWPS, our baselines are: i) Retrieval, Classification, and Seq2Seq (Robaidek et al., 2018); ii)
Seq2Tree (Dong and Lapata, 2016); iii) Graph2Seq
(Xu et al., 2018a); iv) MathDQN (Wang et al.,
2018b); v) T-RNN (Wang et al., 2019); vi) GroupAtt (Li et al., 2019). On MATHQA, our baselines
are: i) Sequence-to-program (Amini et al., 2019);
ii) TP-N2F (Chen et al., 2019a); iii) Seq2Seq,
Seq2Tree and Graph2Seq.
|Methods|Col2|MAWPS|
|---|---|---|
|Oracle||84.8|
|Retrieval|Jaccard Cosine|45.6 38.8|
|Classification|BiLSTM Self-attention|62.8 60.4|
|Seq2seq|LSTM CNN|25.6 44.0|
|Seq2Tree||65.2|
|Graph2Seq||70.4|
|MathDQN||60.25|
|T-RNN|Full model W/o equantion normalization W/o self-attention|66.8 63.9 66.3|
|Group-Att||76.1|
|Graph2Tree|with constituency graph with dependency graph|78.8 76.8|
**Methods** **MAWPS**
Oracle 84.8
Jaccard 45.6
Retrieval
Cosine 38.8
BiLSTM 62.8
Classification
Self-attention 60.4
LSTM 25.6
Seq2seq
CNN 44.0
Seq2Tree 65.2
Graph2Seq 70.4
MathDQN 60.25
Full model 66.8
T-RNN W/o equantion normalization 63.9
W/o self-attention 66.3
Group-Att 76.1
with constituency graph **78.8**
**Graph2Tree**
with dependency graph 76.8
Table 5: Solution accuracy comparison on MAWPS
**Results. As shown in Table 5, our Graph2Tree**
model consistently outperforms other state-of-theart baselines by a large margin up to 10 points absolute accuracy except Group-Att baseline. To the
best of our knowledge, we make the first attempt to
employ the graph neural network for solving Math
Word Problems, and our Graph2Tree model with
constituency graph achieves the best performance
2848
-----
|S|NP VP|NP|SBA|
|---|---|---|---|
|X = <N>||||
|<E>||||
|2 * 1||||
<E>
WHADJP
(c) Attention for structure nodes
|a bee has X =|2|legs, how many legs do|1|bees|
|---|---|---|---|---|
|<N> <E>|||||
|2|||||
|1|||||
(b) Attention for word nodes
<E>
S
VP
NP NP SBAR
a bee has 2 legs ...... ?
x = N
2 - 1
(a) A graph-to-tree translation example
Figure 5: Effect visualization of our separated attentions on both word and structure nodes in a graph.
|Methods|MAWAPS|
|---|---|
|Full model|78.8|
|w/o const tree|75.6|
|w/ original GraphSage|76.4|
|w/ only parent feeding|75.6|
|w/ only sibling feeding|72.4|
|w/o parent & sibling feeding|67.6|
|w/o separated attention w/ uniform attention|67.6 71.6|
|w/o bilstm|72.8|
Table 7: Ablation study of Graph2Tree on the math
word problem (MAWAPS). We employ solution accuracy as evaluation metric. The Methods settings is
same as Table 4.
servation in the semantic parsing task. However, it
is worth noting that, according to the experiment,
the sibling mechanism is obviously more important to the MWP task than the semantic parsing
task, which is in line with our expectations. In the
MWP task, the result of decoding, math expressions, is relatively simple compared to semantic
parsing. And in math expressions, the order between leaf nodes (numbers), which directly affects
the correctness of expressions, is very important.
The sibling mechanism plays exactly such a role.
One potential interesting extension is that, if we can
connect leaf nodes in the input graph and employ
edge weights to dynamically represent the order
between the nodes, it may achieve a similar or even
better effect than the sibling mechanism.
**6** **Conclusion and Future Work**
We presented a novel Graph2Tree model consisting of a graph encoder and a hierarchical tree decoder, for learning the translation between structured inputs and structured outputs. Studies on two
tasks - Semantic Parsing and Math Word Problem
demonstrated our model consistently outperformed
or matched the performance of the state-of-the-art.
Our Graph2Tree model is generic and agnostic to
the downstream tasks and thus one of the future
works is to adapt it to the other NLP applications.
**References**
Yasemin Altun, Thomas Hofmann, and Alexander J
Smola. 2004. Gaussian process classification for
segmenting and annotating sequences. In ICML,
page 4. ACM.
David Alvarez-Melis and Tommi S Jaakkola. 2017.
Tree-structured decoding with doubly-recurrent neural networks. ICLR.
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. [MathQA: Towards interpretable](https://doi.org/10.18653/v1/N19-1245)
[math word problem solving with operation-based](https://doi.org/10.18653/v1/N19-1245)
[formalisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. [Neural machine translation by jointly](https://arxiv.org/abs/1409.0473)
[learning to align and translate.](https://arxiv.org/abs/1409.0473) _arXiv e-prints,_
abs/1409.0473.
Joost Bastings, Ivan Titov, Wilker Aziz, Diego
Marcheggiani, and Khalil Sima’an. 2017. [Graph](https://doi.org/10.18653/v1/D17-1209)
[convolutional encoders for syntax-aware neural ma-](https://doi.org/10.18653/v1/D17-1209)
[chine translation. In Proceedings of the 2017 Con-](https://doi.org/10.18653/v1/D17-1209)
_ference on Empirical Methods in Natural Language_
_Processing, pages 1957–1967, Copenhagen, Den-_
mark. Association for Computational Linguistics.
Daniel Beck, Gholamreza Haffari, and Trevor Cohn.
2018. [Graph-to-sequence learning using gated](https://doi.org/10.18653/v1/P18-1026)
[graph neural networks.](https://doi.org/10.18653/v1/P18-1026) In Proceedings of the
_56th Annual Meeting of the Association for Com-_
_putational Linguistics (Volume 1:_ _Long Papers),_
pages 273–283, Melbourne, Australia. Association
for Computational Linguistics.
2849
-----
[Bo Chen, Le Sun, and Xianpei Han. 2018a. Sequence-](https://doi.org/10.18653/v1/P18-1071)
[to-action: End-to-end semantic graph generation for](https://doi.org/10.18653/v1/P18-1071)
[semantic parsing.](https://doi.org/10.18653/v1/P18-1071) In Proceedings of the 56th An_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 766–_
777, Melbourne, Australia. Association for Computational Linguistics.
Kezhen Chen, Qiuyuan Huang, Hamid Palangi, Paul
Smolensky, Kenneth D Forbus, and Jianfeng Gao.
2019a. Natural-to formal-language generation using tensor product representations. arXiv preprint
_arXiv:1910.02339._
Xinyun Chen, Chang Liu, and Dawn Song. 2018b.
Tree-to-tree neural networks for program translation.
In NIPS, pages 2547–2557.
Yu Chen, Lingfei Wu, and Mohammed J Zaki. 2019b.
Graphflow: Exploiting conversation flow with graph
neural networks for conversational machine comprehension. arXiv preprint arXiv:1908.00059.
Yu Chen, Lingfei Wu, and Mohammed J Zaki. 2020.
Reinforcement learning based graph-to-sequence
model for natural question generation. ICLR.
Deborah A. Dahl, Madeleine Bates, Michael Brown,
William Fisher, Kate Hunicke-Smith, David Pallett,
Christine Pao, Alexander Rudnicky, and Elizabeth
Shriberg. 1994. [Expanding the scope of the atis](https://www.aclweb.org/anthology/H94-1010)
[task: The atis-3 corpus. In HUMAN LANGUAGE](https://www.aclweb.org/anthology/H94-1010)
_TECHNOLOGY: Proceedings of a Workshop held at_
_Plainsboro, New Jersey, March 8-11, 1994._
[Li Dong and Mirella Lapata. 2016. Language to logi-](https://doi.org/10.18653/v1/P16-1004)
[cal form with neural attention. In Proceedings of the](https://doi.org/10.18653/v1/P16-1004)
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
33–43, Berlin, Germany. Association for Computational Linguistics.
[Li Dong and Mirella Lapata. 2018. Coarse-to-fine de-](https://doi.org/10.18653/v1/P18-1068)
[coding for neural semantic parsing. In Proceedings](https://doi.org/10.18653/v1/P18-1068)
_of the 56th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 731–742, Melbourne, Australia. Association
for Computational Linguistics.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley,
Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In
_Proceedings of the 34th International Conference_
_on Machine Learning-Volume 70, pages 1263–1272._
JMLR. org.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey
Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE international
_conference on acoustics, speech and signal process-_
_ing, pages 6645–6649. IEEE._
Jetic G¯u, Hassan S. Shavarani, and Anoop Sarkar. 2018.
[Top-down tree structured decoding with syntactic](https://doi.org/10.18653/v1/D18-1037)
[connections for neural machine translation and pars-](https://doi.org/10.18653/v1/D18-1037)
[ing. In Proceedings of the 2018 Conference on Em-](https://doi.org/10.18653/v1/D18-1037)
_pirical Methods in Natural Language Processing,_
pages 401–413, Brussels, Belgium. Association for
Computational Linguistics.
Xiaojie Guo, Lingfei Wu, and Liang Zhao. 2018. Deep
graph translation. arXiv preprint arXiv:1805.09980.
Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017.
Inductive representation learning on large graphs. In
_Advances in Neural Information Processing Systems,_
pages 1024–1034.
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
[Oren Etzioni, and Nate Kushman. 2014. Learning](https://doi.org/10.3115/v1/D14-1058)
[to solve arithmetic word problems with verb catego-](https://doi.org/10.3115/v1/D14-1058)
[rization. In Proceedings of the 2014 Conference on](https://doi.org/10.3115/v1/D14-1058)
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533, Doha, Qatar. Association_
for Computational Linguistics.
[Robin Jia and Percy Liang. 2016. Data recombination](https://doi.org/10.18653/v1/P16-1002)
[for neural semantic parsing. In Proceedings of the](https://doi.org/10.18653/v1/P16-1002)
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
12–22, Berlin, Germany. Association for Computational Linguistics.
[Zhanming Jie and Wei Lu. 2018. Dependency-based](https://doi.org/10.18653/v1/D18-1265)
[hybrid trees for semantic parsing. In Proceedings of](https://doi.org/10.18653/v1/D18-1265)
_the 2018 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 2431–2441, Brus-_
sels, Belgium. Association for Computational Linguistics.
Thorsten Joachims, Thomas Hofmann, Yisong Yue,
and Chun-Nam Yu. 2009. Predicting structured objects with support vector machines. _Communica-_
_tions of the ACM, 52(11):97._
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
_arXiv:1412.6980._
Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional
networks. In International Conference on Learning
_Representations (ICLR)._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
[MAWPS: A math word problem repository. In Pro-](https://doi.org/10.18653/v1/N16-1136)
_ceedings of the 2016 Conference of the North Amer-_
_ican Chapter of the Association for Computational_
_Linguistics: Human Language Technologies, pages_
1152–1157, San Diego, California. Association for
Computational Linguistics.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. [Learning to automatically](https://doi.org/10.3115/v1/P14-1026)
[solve algebra word problems.](https://doi.org/10.3115/v1/P14-1026) In Proceedings of
_the 52nd Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 271–281, Baltimore, Maryland. Association
for Computational Linguistics.
2850
-----
Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu,
[and Tong Zhang. 2018. Multi-head attention with](https://doi.org/10.18653/v1/D18-1317)
[disagreement regularization. In Proceedings of the](https://doi.org/10.18653/v1/D18-1317)
_2018 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 2897–2903, Brus-_
sels, Belgium. Association for Computational Linguistics.
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
[Bing Tian Dai, and Dongxiang Zhang. 2019. Model-](https://doi.org/10.18653/v1/P19-1619)
[ing intra-relation in math word problems with differ-](https://doi.org/10.18653/v1/P19-1619)
[ent functional multi-head attentions. In Proceedings](https://doi.org/10.18653/v1/P19-1619)
_of the 57th Annual Meeting of the Association for_
_Computational Linguistics, pages 6162–6167, Flo-_
rence, Italy. Association for Computational Linguistics.
Yujia Li, Richard Zemel, Marc Brockschmidt, and
[Daniel Tarlow. 2016. Gated graph sequence neural](https://www.microsoft.com/en-us/research/publication/gated-graph-sequence-neural-networks/)
[networks. In Proceedings of ICLR’16.](https://www.microsoft.com/en-us/research/publication/gated-graph-sequence-neural-networks/)
Wang Ling, Phil Blunsom, Edward Grefenstette,
Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Fumin
[Wang, and Andrew Senior. 2016. Latent predictor](https://doi.org/10.18653/v1/P16-1057)
[networks for code generation. In Proceedings of the](https://doi.org/10.18653/v1/P16-1057)
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
599–609, Berlin, Germany. Association for Computational Linguistics.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancou-_
ver, Canada. Association for Computational Linguistics.
Thang Luong, Hieu Pham, and Christopher D. Man[ning. 2015. Effective approaches to attention-based](https://doi.org/10.18653/v1/D15-1166)
[neural machine translation. In Proceedings of the](https://doi.org/10.18653/v1/D15-1166)
_2015 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 1412–1421, Lis-_
bon, Portugal. Association for Computational Linguistics.
Christopher Manning, Mihai Surdeanu, John Bauer,
Jenny Finkel, Steven Bethard, and David McClosky.
[2014. The Stanford CoreNLP natural language pro-](https://doi.org/10.3115/v1/P14-5010)
[cessing toolkit.](https://doi.org/10.3115/v1/P14-5010) In Proceedings of 52nd Annual
_Meeting of the Association for Computational Lin-_
_guistics: System Demonstrations, pages 55–60, Bal-_
timore, Maryland. Association for Computational
Linguistics.
Xiaochang Peng, Daniel Gildea, and Giorgio Satta.
2018. Amr parsing with cache transition systems.
In Thirty-Second AAAI Conference on Artificial In_telligence._
Jeffrey Pennington, Richard Socher, and Christopher
[Manning. 2014. Glove: Global vectors for word rep-](https://doi.org/10.3115/v1/D14-1162)
[resentation. In Proceedings of the 2014 Conference](https://doi.org/10.3115/v1/D14-1162)
_on Empirical Methods in Natural Language Process-_
_ing (EMNLP), pages 1532–1543, Doha, Qatar. Asso-_
ciation for Computational Linguistics.
Maxim Rabinovich, Mitchell Stern, and Dan Klein.
[2017. Abstract syntax networks for code generation](https://doi.org/10.18653/v1/P17-1105)
[and semantic parsing. In Proceedings of the 55th An-](https://doi.org/10.18653/v1/P17-1105)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 1139–_
1149, Vancouver, Canada. Association for Computational Linguistics.
Siva Reddy, Oscar T¨ackstr¨om, Michael Collins, Tom
Kwiatkowski, Dipanjan Das, Mark Steedman, and
Mirella Lapata. 2016. [Transforming dependency](https://doi.org/10.1162/tacl_a_00088)
[structures to logical forms for semantic parsing.](https://doi.org/10.1162/tacl_a_00088)
_Transactions of the Association for Computational_
_Linguistics, 4:127–140._
Benjamin Robaidek, Rik Koncel-Kedziorski, and Hannaneh Hajishirzi. 2018. Data-driven methods for
solving algebra word problems. _arXiv preprint_
_arXiv:1804.10718._
Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel
[Gildea. 2018. A graph-to-sequence model for AMR-](https://doi.org/10.18653/v1/P18-1150)
[to-text generation. In Proceedings of the 56th An-](https://doi.org/10.18653/v1/P18-1150)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 1616–_
1626, Melbourne, Australia. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014.
Sequence to sequence learning with neural networks.
In NIPS, pages 3104–3112.
Ioannis Tsochantaridis, Thorsten Joachims, Thomas
Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. Journal of machine learning research,
6(Sep):1453–1484.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova,
Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2017. Graph attention networks. _ArXiv,_
abs/1710.10903.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
[and Xiaojiang Liu. 2018a. Translating a math word](https://doi.org/10.18653/v1/D18-1132)
[problem to a expression tree.](https://doi.org/10.18653/v1/D18-1132) In Proceedings of
_the 2018 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 1064–1069, Brus-_
sels, Belgium. Association for Computational Linguistics.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In AAAI.
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers
with recursive neural networks. In AAAI.
2851
-----
Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun
[Liu. 2017. Deep neural machine translation with lin-](https://doi.org/10.18653/v1/P17-1013)
[ear associative unit. In Proceedings of the 55th An-](https://doi.org/10.18653/v1/P17-1013)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 136–_
145, Vancouver, Canada. Association for Computational Linguistics.
[Zhipeng Xie and Shichao Sun. 2019. A goal-driven](https://doi.org/10.24963/ijcai.2019/736)
[tree-structured neural model for math word prob-](https://doi.org/10.24963/ijcai.2019/736)
[lems.](https://doi.org/10.24963/ijcai.2019/736) In Proceedings of the Twenty-Eighth In_ternational Joint Conference on Artificial Intelli-_
_gence, IJCAI-19, pages 5299–5305. International_
Joint Conferences on Artificial Intelligence Organization.
Kun Xu, Lingfei Wu, Zhiguo Wang, and Vadim
Sheinin. 2018a. Graph2seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823.
Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018b. Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. arXiv preprint
_arXiv:1808.07624._
Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei
Chen, and Vadim Sheinin. 2018c. Sql-to-text generation with graph-to-sequence model. arXiv preprint
_arXiv:1809.05255._
[Pengcheng Yin and Graham Neubig. 2017. A syntactic](https://doi.org/10.18653/v1/P17-1041)
[neural model for general-purpose code generation.](https://doi.org/10.18653/v1/P17-1041)
In Proceedings of the 55th Annual Meeting of the As_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 440–450, Vancouver, Canada._
Association for Computational Linguistics.
Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L. Gaunt.
2019. [Learning to represent edits.](https://openreview.net/forum?id=BJl6AjC5F7) In 7th Inter_national Conference on Learning Representations,_
_ICLR 2019, New Orleans, LA, USA, May 6-9, 2019._
Pengcheng Yin, Chunting Zhou, Junxian He, and Gra[ham Neubig. 2018. StructVAE: Tree-structured la-](https://doi.org/10.18653/v1/P18-1070)
[tent variable models for semi-supervised semantic](https://doi.org/10.18653/v1/P18-1070)
[parsing. In Proceedings of the 56th Annual Meet-](https://doi.org/10.18653/v1/P18-1070)
_ing of the Association for Computational Linguis-_
_tics (Volume 1: Long Papers), pages 754–765, Mel-_
bourne, Australia. Association for Computational
Linguistics.
Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured
classification with probabilistic categorial grammars.
In In Proceedings of the 21st Conference on Uncer_tainty in AI, pages 658–666._
[Yanyan Zou and Wei Lu. 2019. Text2Math: End-to-](https://doi.org/10.18653/v1/D19-1536)
[end parsing text into math expressions. In Proceed-](https://doi.org/10.18653/v1/D19-1536)
_ings of the 2019 Conference on Empirical Methods_
_in Natural Language Processing and the 9th Inter-_
_national Joint Conference on Natural Language Pro-_
_cessing (EMNLP-IJCNLP), pages 5327–5337, Hong_
2852
Kong, China. Association for Computational Linguistics.
-----
| [
"Shucheng, Li",
"Yang, Liu",
"Shiwei, Feng",
"Fangli, Xu",
"Lingfei, Wu",
"Fengyuan, Xu",
"Sheng, Zhong",
"Trevor, Cohn",
"Yulan, He"
] | 2020-11-01T00:00:00 | EMNLP 2020 Findings | false | 64 | 11 | null | https://aclanthology.org/2020.findings-emnlp.255 | https://arxiv.org/abs/2004.13781 | https://www.semanticscholar.org/paper/f1aef5403012d2a70344bc70d58d720aef85834c |
Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction | Solving math word problems requires deductive reasoning over the quantities in the text. Various recent research efforts mostly relied on sequence-to-sequence or sequence-to-tree models to generate mathematical expressions without explicitly performing relational reasoning between quantities in the given context. While empirically effective, such approaches typically do not provide explanations for the generated expressions. In this work, we view the task as a complex relation extraction problem, proposing a novel approach that presents explainable deductive reasoning steps to iteratively construct target expressions, where each step involves a primitive operation over two quantities defining their relation. Through extensive experiments on four benchmark datasets, we show that the proposed model significantly outperforms existing strong baselines. We further demonstrate that the deductive procedure not only presents more explainable steps but also enables us to make more accurate predictions on questions that require more complex reasoning. | This work views the task as a complex relation extraction problem, and proposes a novel approach that presents explainable deductive reasoning steps to iteratively construct target expressions, where each step involves a primitive operation over two quantities defining their relation. | ## Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction
**Zhanming Jie[♥♦], Jierui Li[♣♦]** and Wei Lu[♦]
♥ByteDance AI Lab, ♣University of Texas at Austin
♦StatNLP Research Group, Singapore University of Technology and Design
[email protected], [email protected], [email protected]
**Question: In a division sum, the remainder is 8**
_and the divisor is 6 times the quotient and is obt-_
_-ained by adding 3 to the thrice of the remainder._
_What is the dividend?_
**Answer: 129.5 Expr:** **8 × 3 + 3** × **8 × 3 + 3** ÷6 +8
Tree generation: 7 ops (( ) ( + ) )
Tree generation: 7 ops +
× 8
+ ÷
3 + 6
×
× 3
8 3
8 3
Our deductive procedure: 5 ops
**Abstract**
Solving math word problems requires deductive reasoning over the quantities in the text.
Various recent research efforts mostly relied
on sequence-to-sequence or sequence-to-tree
models to generate mathematical expressions
without explicitly performing relational reasoning between quantities in the given context. While empirically effective, such approaches typically do not provide explanations
for the generated expressions. In this work,
we view the task as a complex relation ex_traction problem, proposing a novel approach_
that presents explainable deductive reasoning
steps to iteratively construct target expressions,
where each step involves a primitive operation over two quantities defining their relation. Through extensive experiments on four
benchmark datasets, we show that the proposed model significantly outperforms existing strong baselines. We further demonstrate
that the deductive procedure not only presents
more explainable steps but also enables us to
make more accurate predictions on questions
that require more complex reasoning.[1]
8 × 3 = 24 24 + 3 = 27 27 ÷ 6 = 4.5
1 2 3
27 × 4.5 = 121.5 121.5 + 8 = 129.5
4
5
Figure 1: A MWP example taken from MathQA. Top:
tree generation. Bottom: deductive procedure.
form of a tree structure, which is adopted in recent research efforts (Xie and Sun, 2019; Zhang
et al., 2020; Patel et al., 2021; Wu et al., 2021).
Specifically, the output is an expression that can be
obtained from such a generated structure. We note
that, however, there are several limitations with
such a structure generation approach. First, such a
process typically involves a particular order when
generating the structure. In the example, given the
complexity of the problem, the decision of generating the addition (“+”) operation as the very
first step could be counter-intuitive and does not
provide adequate explanations that show the reasoning process when being presented to a human
learner. Furthermore, the resulting tree contains
identical sub-trees (“8 × 3 + 3”) as highlighted in
blue dashed boxes. Unless a certain specifically
designed mechanism is introduced for reusing the
already generated intermediate expression, the approach would need to repeat the same effort in its
process for generating the same sub-expression.
**1** **Introduction**
Math word problem (MWP) solving (Bobrow,
1964) is a task of answering a mathematical question that is described in natural language. Solving
MWP requires logical reasoning over the quantities
presented in the context (Mukherjee and Garain,
2008) to compute the numerical answer. Various
recent research efforts regarded the problem as a
generation problem – typically, such models focus
on generating the complete target mathematical expression, often represented in the form of a linear
sequence or a tree structure (Xie and Sun, 2019).
Figure 1 (top) depicts a typical approach that
attempts to generate the target expression in the
[1Our code and data are released at https://github.](https://github.com/allanj/Deductive-MWP)
[com/allanj/Deductive-MWP.](https://github.com/allanj/Deductive-MWP)
-----
Solving math problems generally requires deductive reasoning, which is also regarded as one of the
important abilities in children’s cognitive development (Piaget, 1952). In this work, we propose a
novel approach that explicitly presents deductive
reasoning steps. We make a key observation that
MWP solving fundamentally can be viewed as a
_complex relation extraction problem – the task of_
identifying the complex relations among the quantities that appear in the given problem text. Each
primitive arithmetic operation (such as addition,
_subtraction) essentially defines a different type_
of relation. Drawing on the success of some recent models for relation extraction in the literature
(Zhong and Chen, 2021), our proposed approach
involves a process that repeatedly performs relation
extraction between two chosen quantities (including newly generated quantities).
As shown in Figure 1, our approach directly
extracts the relation (“multiplication”, or “×”) between 8 and 3, which come from the contexts “re_mainder is 8” and “thrice of the remainder”. In_
addition, it allows us to reuse the results from the
intermediate expression in the fourth step. This
process naturally yields a deductive reasoning procedure that iteratively derives new knowledge from
existing ones. Designing such a complex relation
extraction system presents several practical challenges. For example, some quantities may be irrelevant to the question while some others may need
to be used multiple times. The model also needs
to learn how to properly handle the new quantities that emerge from the intermediate expressions.
Learning how to effectively search for the optimal
sequence of operations (relations) and when to stop
the deductive process is also important.
In this work, we tackle the above challenges and
make the following major contributions:
- We formulate MWP solving as a complex relation extraction task, where we aim to repeatedly
identify the basic relations between different
quantities. To the best of our knowledge, this
is the first effort that successfully tackles MWP
solving from such a new perspective.
- Our model is able to automatically produce
explainable steps that lead to the final answer,
presenting a deductive reasoning process.
- Our experimental results on four standard
datasets across two languages show that our
model significantly outperforms existing strong
baselines. We further show that the model per
forms better on problems with more complex
equations than previous approaches.
**2** **Related Work**
Early efforts focused on solving MWP using probabilistic models with handcrafted features (Liguda
and Pfeiffer, 2012). Kushman et al. (2014) and
Roy and Roth (2018) designed templates to find the
alignments between the declarative language and
equations. Most recent works solve the problem by
using sequence or tree generation models. Wang
et al. (2017) proposed the Math23k dataset and presented a sequence-to-sequence (seq2seq) approach
to generate the mathematical expression (Chiang
and Chen, 2019). Other approaches improve the
seq2seq model with reinforcement learning (Huang
et al., 2018), template-based methods (Wang et al.,
2019), and group attention mechanism (Li et al.,
2019). Xie and Sun (2019) proposed a goal-driven
tree-structured (GTS) model to generate the expression tree. This sequence-to-tree approach significantly improved the performance over the traditional seq2seq approaches. Some follow-up works
incorporated external knowledge such as syntactic
dependency (Shen and Jin, 2020; Lin et al., 2021)
or commonsense knowledge (Wu et al., 2020). Cao
et al. (2021) modeled the equations as a directed
acyclic graph to obtain the expression. Zhang et al.
(2020) and Li et al. (2020) adopted a graph-to-tree
approach to model the quantity relations using the
graph convolutional networks (GCN) (Kipf and
Welling, 2017). Applying pre-trained language
models such as BERT (Devlin et al., 2019) was
shown to significantly benefit the tree expression
generation (Lan et al., 2021; Tan et al., 2021; Liang
et al., 2021; Li et al., 2021; Shen et al., 2021).
Different from the tree-based generation models,
our work is related to deductive systems (Shieber
et al., 1995; Nederhof, 2003) where we aim to obtain step-by-step expressions. Recent efforts have
also been working towards this direction. Ling
et al. (2017) constructed a dataset to provide explanations for expressions at each step. Amini et al.
(2019) created the MathQA dataset annotated with
step-by-step operations. The annotations present
the expression at each intermediate step during
problem-solving. Our deductive process (Figure 1)
attempts to automatically obtain the expression in
an incremental, step-by-step manner.
Our approach is also related to relation extraction
(RE) (Zelenko et al., 2003), a fundamental task in
-----
**input:** _q in Q[(][0][)]_
**axiom: 0 ∶** _q1, ⋯, q_ _Q_ 0
( )
_op_ _t ∶_ _q1, ⋯, q_ ∣ _t−1∣_
_qi_ −→ _qj:_ ⟨ _Q(_ )⟩ _t_
_t +⟨ 1 ∶_ _q1, ⋯∣_ _, q_ _Q∣⟩t−1_ _q_ _Q_ _t_ ∶= ei,j,op[⟩]
Figure 2: Our deductive system.∣ ( )∣ t is the current step.∣ ( )∣ ( )
⟨ ∣
⋅ denotes the quantity list.
⟨ ⟩
the field of information extraction that is focused on
identifying the relationships between a pair of entities. Recently, Zhong and Chen (2021) designed
a simple and effective approach to directly model
the relations on the span pair representations. In
this work, we treat the operation between a pair of
quantities as the relation at each step in our deductive reasoning process. Traditional methods (Liang
et al., 2018) applied rule-based approaches to extract the mathematical relations.
MWP solving is typically regarded as one of
the system 2 tasks (Kahneman, 2011; Bengio et al.,
2021), and our current approach to this problem is
related to neural symbolic reasoning (Besold et al.,
2017). We design differentiable modules (Andreas
et al., 2016; Gupta et al., 2020) in our model (§3.2)
to perform reasoning among the quantities.
**3** **Approach**
The math word problem solving task can be defined as follows. Given a problem description
_S =_ _w1, w2, ⋯, wn_ that consists of a list of n
words and QS = _q1, q2, ⋯, qm_, a list of m quantities that appear in {, our task is to solve the prob-}
_S_
lem and return the numerical answer. Ideally, the { }
answer shall be computed through a mathematical
reasoning process over a series of primitive mathematical operations (Amini et al., 2019) as shown in
Figure 1. Such operations may include “+” (addi_tion), “−” (subtraction), “×” (multiplication), “÷”_
(division), and “∗∗” (exponentiation).[2]
In our view, each of the primitive mathematical operations above can essentially be used for
describing a specific relation between quantities.
Fundamentally, solving a math word problem is
a problem of complex relation extraction, which
requires us to repeatedly identify the relations between quantities (including those appearing in the
text and those intermediate ones created by relations). The overall solving procedure requires in
2While we consider binary operators, extending our approach to support unary or ternary operators is possible (§4.3).
voking a relation classification module at each step,
yielding a deductive reasoning process.
In practice, some questions cannot be answered
without relying on certain predefined constants
(such as π and 1) that may not have appeared in the
given problem description. We therefore also consider a set of constants = _c1, c2, ⋯, c_ . Such
_C_ _C_
constants are also regarded as quantities (i.e., they
∣ ∣
would be regarded as _qm {+1, qm+2, . . ., q}m+_ )
_C_
which may play useful roles when forming the fi∣ ∣
nal answer expression. { }
**3.1** **A Deductive System**
As shown in Figure 1, applying the mathematical
relation (e.g., “+”) between two quantities yields
an intermediate expression e. In general, at step
_t, the resulting expression e[(][t][)]_ (after evaluation)
becomes a newly created quantity that is added
to the list of candidate quantities and is ready for
participating in the remaining deductive reasoning
process from step t + 1 onward. This process can
be mathematically denoted as follows:
- Initialization:
= ∪
_Q[(][0][)]_ _QS_ _C_
- At step t:
_ei,j,op(t)_ == _qi_ −op→ _q∪j_ _qeii,j,op, qt_ _j ∈[}]_ _Q[(][t][−][1][)]_
_Q[(][t][)]_ _Qt[(][t][−][1][)]_ ( )
_q_ _Q_ _t_ ∶= _ei,j,op_ {
_t∣_ ( )∣ ( )
where ei,j,op [represents the expression after apply-]
ing the relation( ) _op to the ordered pair_ _qi, qj_ . Following the standard deduction systems (Shieber
et al., 1995; Nederhof, 2003), the reasoning pro- ( )
cess can be formulated in Figure 2. We start with
an axiom with the list of quantities in . The
_op_ _Q[(][0][)]_
inference rule is qi −→ _qj as described above to_
obtain the expression as a new quantity at step t.
**3.2** **Model Components**
**Reasoner** Figure 3 shows the deductive reasoning procedure in our model for an example that
involves 3 quantities. We first convert the quantities (e.g., 2, 088) into a general quantity token
“<quant>”. We next adopt a pre-trained language
model such as BERT (Devlin et al., 2019) or
Roberta (Cui et al., 2019; Liu et al., 2019) to obtain
the quantity representation q for each quantity q.
-----
**Rationalizer** **Mechanism**
Multi-head Self-Attention Attention _Q =_ **_qi, e_** _, K =_ **_qi, e_** _, V =_ **_qi, e_**
GRU cell GRU_Cell input = qi, previous hidden = e
( [ ] [ ] [ ])
( )
Table 1: The mechanism in different rationalizers.
**_qi_** Rationalizer **_q[′]i_**
**_e_** Expression
Figure 4: Rationalizing quantity representation.
where sq ⋅ and se ⋅ are the scores assigned to the
quantity and the expression, respectively, and wq
and we are the corresponding learnable parameters.( ) ( )
Our goal is to find the optimal expression sequence
_e[(][1][)], e[(][2][)], ⋯, e[(][T]_ [)] that enables us to compute the
final numerical answer, where T is the total number
of steps required for this deductive process.[ ]
**Terminator** Our model also has a mechanism
that decides whether the deductive procedure is
ready to terminate at any given time. We introduce
a binary label τ, where 1 means the procedure
stops here, and 0 otherwise. The final score of
the expression e at time step t can be calculated as:
_If_ _a_ _machine can make 2,088 gears_ _in_ **_8 hours,_**
_q1_ _q2_
_how many_ _gears_ _it_ _make_ _in_ **_9_** _hours?_
_q3_
_t = 1_
_t = 2_
[
**_q1_** **_q2_** **_q3_**
1
FFNop=“×” **_e1,2,×_**
**_q1, q2, q1 ◦_** **_q2_** FFNop=“÷” **_e1,2,÷_**
[ ]
**_q[′]1_** **_q[′]2_** **_q[′]3_** **_q4_**
2
**_q[′]3[,][ q]4[,][ q][′]3_** [◦] **_[q]4_** FFNop=“×” **_e3,4,×_**
Figure 3: Model architecture for the deductive reasoner.
We show the inference procedure to obtain the expression “q1 ÷ q2 × q3” for the example question.
Given the quantity representations, we consider all
the possible quantity pairs, _qi, qj_ . Similar to Lee
et al. (2017), we can obtain the representation of
each pair by concatenating the two quantity repre- ( )
sentations and the element-wise product between
them. As shown in Figure 3, we apply a non-linear
feed-forward network (FFN) on top of the pair representation to get the representation of the newly
created expression. The above procedure can be
mathematically written as:
**_ei,j,op = FFNop_** **_qi, qj, qi ◦_** **_qj_** _, i ≤_ _j_ (1)
where ei,j,op is the representation of the intermedi
([ ])
ate expression e and op is the operation (e.g., “+”,
“−”) applied to the ordered pair (qi, qj). FFNop is
an operation-specific network that gives the expression representation under the particular operation
_op. Note that we have the constraint i ≤_ _j. As a_
result we also consider the “reverse operation” for
division and subtraction (Roy and Roth, 2015).
As shown in Figure 3, the expression e1,2,÷ will
be regarded as a new quantity with representation
**_q4 at t = 1. In general, we can assign a score_**
to a single reasoning step that yields the expres_t_
sion ei,j,op [from][ q][i] [and][ q][j] [with operation][ op][. Such]
a score can be calculated by summing over the( )
scores defined over the representations of the two
quantities and the score defined over the expression:
_t_
_s_ _ei,j,op[)][ =][ s][q][(][q]i[)][ +][ s][q][(][q]j[)][ +][ s][e][(][e][i,j,op][)]_ (2)
( )
where we have:
(
_sq_ **_qi_** = wq ⋅ FFN **_qi_**
(3)
_se_ **_ei,j,op_** = we ⋅ **_ei,j,op_**
( ) ( )
( )
_t_ _t_
_S_ _ei,j,op[, τ]_ [)][ =][ s][(][e]i,j,op[)] [+] **[w][τ]** [⋅] [FFN][(][e][i,j,op][)][ (4)]
( ) ( )
where wτ is the parameter vector for scoring the τ .
(
**Rationalizer** Once we obtain a new intermediate expression at step t, it is crucial to update the
representations for the existing quantities. We call
this step rationalization because it could potentially
give us the rationale that explains an outcome (Lei
et al., 2016). As shown in Figure 4, the intermediate expression e serves as the rationale that explains
how the quantity changes from q to q[′]. Without
this step, there is a potential shortcoming for the
model. That is, because if the quantity representations do not get updated as we continue the deductive reasoning process, those expressions that were
initially highly ranked (say, at the first step) would
always be preferred over those lowly ranked ones
throughout the process.[3] We rationalize the quantity representation using the current intermediate
expression e[(][t][)], so that the quantity is aware of the
generated expressions when its representation gets
updated. This procedure can be mathematically
formulated as follows:
**_q[′]i_** [=][ Rationalizer][(][q]i[,][ e][(][t][)][)][ ∀] [1][ ≤] _[i][ ≤]_ [∣][Q][∣] [(5)]
3See the supplementary material for more details on this.
-----
**Avg.**
**Dataset** #Train #Valid #Test **#Const.** **Lang.**
**Sent Len**
MAWPS 01,589 0,199 0,199 30.3 17 English
Math23k 21,162 1,000 1,000 26.6 02 Chinese
MathQA† 16,191 2,411 1,605 39.6 24 English
SVAMP 03,138 - 1,000 34.7 17 English
Table 2: Dataset statistics. †: we follow Tan et al.
(2021) to do preprocessing and obtain the subset.
Two well-known techniques we can adopt as rationalizers are multi-head self-attention (Vaswani
et al., 2017) and a gated recurrent unit (GRU) (Cho
et al., 2014) cell, which allow us to update the quantity representation, given the intermediate expression representation. Table 1 shows the mechanism
in two different rationalizers. For the first approach,
we essentially construct a sentence with two token
representations – quantity qi and the previous expression e – to perform self-attention. In the second
approach, we use qi as the input state and e as the
previous hidden state in a GRU cell.
**3.3** **Training and Inference**
Similar to training sequence-to-sequence models (Luong et al., 2015), we adopt the teacherforcing strategy (Williams and Zipser, 1989) to
guide the model with gold expressions during training. The loss[4] can be written as:
_T_
_t_
**_θ_** = max **_θ_** _ei,j,op[, τ]_ [)]]
_L_ ∑t=1 _i,j,op_ ∈H[(][t][)],τ _S_ ( ) (6)
_t_
( ) − (θ( _ei[∗],j)[∗],op[∗][, τ][ ∗][[))][ +](_ _[ λ][∣∣][θ][∣∣][2]_
_S_
( )
where θ includes all parameters in the deductive
(
reasoner and H[(][t][)] contains all the possible choices
of quantity pairs and relations available at time step
_t. λ is the hyperparameter for the L2 regularization_
term. The set H[(][t][)] grows as new expressions are
constructed and become new quantities during the
deductive reasoning process. The overall loss is
computed by summing over the loss at each time
step (assuming totally T steps).
During inference, we set a maximum time step
_Tmax and find the best expression e[∗]_ that has the
highest score at each time step. Once we see τ = 1
is chosen, we stop constructing new expressions
4Actually, one might have noticed that this loss comes with
a trivial solution at θ = 0. In practice, however, our model
and training process would prevent us from reaching such
a degenerate solution with proper initialization (Goodfellow
et al., 2016). This is similar to the training of a structured
perceptron (Collins, 2002), where a similar situation is also
involved.
60 MAWPS 60 Math23k
|MAWPS|Col2|
|---|---|
|||
|||
Math23k
1 2 3 4 5 ≥ 6 1 2 3 4 5 ≥ 6
40 MathQA 40 SVAMP
|MathQA|Col2|
|---|---|
|||
|||
|||
SVAMP
1 2 3 4 5 ≥ 6 1 2 3 4 5 ≥ 6
Figure 5: Percentage of questions with different operation count.
and terminate the process. The overall expression
(formed by the resulting expression sequence) will
be used for computing the final numerical answer.
**Declarative Constraints** Our model repeatedly
relies on existing quantities to construct new quantities, which results in a structure showing the deductive reasoning process. One advantage of such an
approach is that it allows certain declarative knowledge to be conveniently incorporated. For example,
as we can see in Equation 6, the default approach
considers all the possible combinations among the
quantities during the maximization step. We can
easily impose constraints to avoid considering certain combinations. In practice, we found in certain
datasets such as SVAMP, there does not exist any
expression that involve operations applied to the
same quantity (such as 9 + 9 or 9 × 9, where 9 is
from the same quantity in the text). Besides, we
also observe that the intermediate results would not
be negative. We can simply exclude such cases
in the maximization process, effectively reducing
the search space during both training and inference.
We show that adding such declarative constraints
can help improve the performance.
**4** **Experiments**
**Datasets** We conduct experiments on four
datasets across two different languages:
MAWPS (Koncel-Kedziorski et al., 2016),
Math23k (Wang et al., 2017), MathQA (Amini
et al., 2019), and SVAMP (Patel et al., 2021).
The dataset statistics can be found in Table 2.
For MathQA[5], we follow Tan et al. (2021)[6] to
5The original MathQA (Amini et al., 2019) dataset contains a certain number of instances that have annotated equations which cannot lead to the correct numerical answer.
6Our dataset size is not exactly the same as Tan et al. (2021)
as they included some instances that are wrongly annotated.
We only kept the part that has correct annotations. We con
-----
**Model** **Val Acc.**
GroupAttn (Li et al., 2019) 76.1
Transformer (Vaswani et al., 2017) 85.6
BERT-BERT (Lan et al., 2021) 86.9
Roberta-Roberta (Lan et al., 2021) 88.4
GTS (Xie and Sun, 2019) 82.6
Graph2Tree (Zhang et al., 2020) 85.6
Roberta-GTS (Patel et al., 2021) 88.5
Roberta-Graph2Tree (Patel et al., 2021) 88.7
**Val Acc.**
**Model**
**Test** **5-fold**
GroupAttn (Li et al., 2019) 69.5 66.9
mBERT-LSTM (Tan et al., 2021) 75.1 -
BERT-BERT (Lan et al., 2021) - 76.6
Roberta-Roberta (Lan et al., 2021) - 76.9
GTS (Xie and Sun, 2019) 75.6 74.3
KA-S2T† (Wu et al., 2020) 76.3 -
MultiE&D (Shen and Jin, 2020) 78.4 76.9
Graph2Tree (Zhang et al., 2020) 77.4 75.5
NeuralSymbolic (Qin et al., 2021) - 75.7
NUMS2T† (Wu et al., 2021) 78.1 -
HMS (Lin et al., 2021) 76.1 -
BERT-Tree (Li et al., 2021) 82.4 -
BERT-DEDUCTREASONER 91.2 (± 0.16)
ROBERTA-DEDUCTREASONER **92.0 (± 0.20)**
MBERT-DEDUCTREASONER 91.6 (± 0.13)
XLM-R-DEDUCTREASONER 91.6 (± 0.11)
BERT-DEDUCTREASONER 84.5 (± 0.16) 82.6 (± 0.17)
URS ROBERTA-DEDUCTREASONER **85.1 (± 0.24) 83.0 (± 0.23)**
O MBERT-DEDUCTREASONER 84.3 (± 0.19) 82.5 (± 0.33)
XLM-R-DEDUCTREASONER 84.0 (± 0.22) 82.0 (± 0.12)
Table 4: Results on Math23k. †: they used their own
splits (so their results may not be directly comparable).
Table 3: 5-fold cross-validation results on MAWPS.
adapt the dataset to filter out some questions
that are unsolvable. We consider the operations
“addition”, “subtraction”, “multiplication”, and
“division” for MAWPS and SVAMP, and an extra
“exponentiation” for MathQA and Math23k.
The number of operations involved in each question can be one of the indicators to help us gauge
the difficulty of a dataset. Figure 5 shows the percentage distribution of the number of operations
involved in each question. The MathQA dataset
generally contains larger portions of questions that
involve more operations, while 97% of the questions in MAWPS can be answered with only one
or two operations. More than 60% of the instances
in MathQA have three or more operations, which
likely makes their problems harder to solve. Furthermore, MathQA (Amini et al., 2019) contains
GRE questions in many domains including physics,
geometry, probability, etc., while Math23k questions are from primary school. Different from other
datasets, SVAMP (Patel et al., 2021)[7] is a challenging set that is manually created to evaluate a
model’s robustness. They applied variations over
the instances sampled from MAWPS. Such variations could be: adding extra quantities, swapping
the positions between noun phrases, etc.
**Baselines** The baseline approaches can be
broadly categorized into sequence-to-sequence
(S2S), sequence-to-tree (S2T) and graph-to-tree
(G2T) models. GroupAttn (Li et al., 2019) designed several types of attention mechanisms such
as question or quantity related attentions in the
seq2seq model. Tan et al. (2021) uses multilingual
firmed such information with the authors of Tan et al. (2021),
and make our version of this dataset publicly available.
7There is no test split for this dataset. We strictly follow
the experiment setting in Patel et al. (2021).
BERT with an LSTM decoder (mBERT-LSTM).
Lan et al. (2021) presented two seq2seq models that
use BERT/Roberta as both encoder and decoder,
namely, **BERT-BERT and Roberta-Roberta.**
Sequence-to-tree models mainly use a tree-based
decoder with GRU (GTS) (Xie and Sun, 2019) or
BERT as the encoder (BERT-Tree) (Liang et al.,
2021; Li et al., 2021). NUMS2T (Wu et al., 2020)
and NeuralSymbolic (Qin et al., 2021) solver incorporate external knowledge in the S2T architectures. Graph2Tree (Zhang et al., 2020) models
the quantity relations using GCN.
**Training Details** We adopt BERT (Devlin et al.,
2019) and Roberta (Liu et al., 2019) for the English
datasets. Chinese BERT and Chinese Roberta (Cui
et al., 2019) are used for Math23k. We use the
GRU cell as the rationalizer. We also conduct
experiments with multilingual BERT and XLMRoberta (Conneau et al., 2020). The pre-trained
models are initialized from HuggingFace’s Transformers (Wolf et al., 2020). We optimize the loss
with the Adam optimizer (Kingma and Ba, 2014;
Loshchilov and Hutter, 2019). We use a learning
rate of 2e-5 and a batch size of 30. The regularization coefficient λ is set to 0.01. We run our models
with 5 random seeds and report the average results
(with standard deviation). Following most previous works, we mainly report the value accuracy
(percentage) in our experiments. In other words,
a prediction is considered correct if the predicted
expression leads to the same value as the gold expression. Following previous practice (Zhang et al.,
2020; Tan et al., 2021; Patel et al., 2021), we report
-----
**Model** **Val Acc.**
Graph2Tree (Zhang et al., 2020) 69.5
BERT-Tree (Li et al., 2021) 73.8
mBERT+LSTM (Tan et al., 2021) 77.1
**Model** **Val Acc.**
GroupAttn (Li et al., 2019) 21.5
BERT-BERT (Lan et al., 2021) 24.8
Roberta-Roberta (Lan et al., 2021) 30.3
GTS[∗] (Xie and Sun, 2019) 30.8
Graph2Tree (Zhang et al., 2020) 36.5
BERT-Tree (Li et al., 2021) 32.4
Roberta-GTS (Patel et al., 2021) 41.0
Roberta-Graph2Tree (Patel et al., 2021) 43.8
BERT-DEDUCTREASONER 78.5 (± 0.07)
ROBERTA-DEDUCTREASONER **78.6 (± 0.09)**
MBERT-DEDUCTREASONER 78.2 (± 0.21)
XLM-R-DEDUCTREASONER 78.2 (± 0.11)
BERT-DEDUCTREASONER 35.3 (± 0.04)
+ constraints 42.3 (± 0.09)
ROBERTA-DEDUCTREASONER 45.0 (± 0.10)
+ constraints **47.3 (± 0.20)**
MBERT-DEDUCTREASONER 36.1 (± 0.07)
+ constraints 41.3 (± 0.08)
XLM-R-DEDUCTREASONER 38.1 (± 0.08)
+ constraints 44.6 (± 0.15)
Table 5: Test accuracy comparison on MathQA.
5-fold cross-validation results on both MAWPS[8]
and Math23k, and also report the test set performance for Math23k, MathQA and SVAMP.
**Additional Experiments (experiments conducted after ACL conference)**
All experiments are incorporated with constraints
ROBERTA-DEDUCTREASONER† 48.9
DEBERTA-BASE-DEDUCTREASONER 55.6
DEBERTA-V3-LARGE-DEDUCTREASONER 62.0
DEBERTA-V2XX-LARGE-DEDUCTREASONER 63.6
Table 6: Test accuracy comparison on SVAMP. †: the
number is different because we also allow that the pair
of quantities in an expression can be the same quantity
for all additional experiments.
**4.1** **Results**
**MAWPS and Math23k** We first discuss the results on MAWPS and Math23k, two datasets that
are commonly used in previous research. Table
3 and 4 show the main results of the proposed
models with different pre-trained language models. We compare with previous works that have
reported results on these datasets. Among all the
encoders for our model DEDUCTREASONER, the
Roberta encoder achieves the best performance. In
addition, DEDUCTREASONER significantly outperforms all the baselines regardless of the choice of
encoder. The performance on the best S2S model
(Roberta-Roberta) is on par with the best S2T
model (Roberta-Graph2Tree) on MAWPS. Overall, the accuracy of Roberta-based DEDUCTREA
SONER is more than 3 points higher than RobertaGraph2Tree (p < 0.001)[9] on MAWPS, and more
than 2 points higher than BERT-Tree (p < 0.005)
on Math23k. The comparisons show that our deductive reasoner is robust across different languages
and datasets of different sizes. We noted that different approaches compare against each other though
the experiments are conducted in different settings
on the Math23k dataset. We present the detailed
comparsion in Appendix C.
**MathQA and SVAMP** As mentioned before,
MathQA and SVAMP are more challenging – the
former consists of more complex questions and the
latter consists of specifically designed challenging
questions. Table 5 and 6 show the performance
comparisons. We are able to outperform the best
baseline mBERT-LSTM[10] by 1.5 points in accuracy
8All previous efforts combine training/dev/test sets and
perform 5-fold cross validation, which we follow.
9We conduct bootstrapping t-test to compare the results.
10We ran the their code on our adapted MathQA dataset.
on MathQA. Different from other three datasets,
the performance between different language models shows larger gaps on SVAMP. As we can see
from baselines and our models, the choice of encoder appear to be important for solving questions
in SVAMP – the results on using Roberta as the
encoder are particularly striking. Our best variant
ROBERTA-DEDUCTREASONER achieves an accuracy score of 47.3 and is able to outperfrom the
best baseline (Roberta-Graph2Tree) by 3.5 points
(p < 0.01). By incorporating the constraints from
our prior knowledge (as discussed in §3.3), we observe significant improvements for all variants – up
to 7.0 points for our BERT-DEDUCTREASONER.
Overall, these results show that our model is
more robust as compared to previous approaches
on such challenging datasets.
**Fine-grained Analysis** We further perform finegrained performance analysis based on questions with different numbers of operations. Table 7 shows the accuracy scores for questions that involve different numbers of operations. It also shows the equation accuracy on
all datasets[11]. We compared our ROBERTA
11Equ Acc: we regard an equation as correct if and only
if it matches with the reference equation (up to reordering of
sub-expressions due to commutative operations, namley “+”
and “×”).
-----
**MAWPS** **Math23k**
**Rationalizer**
**Equ Acc.** **Val Acc.** **Equ Acc.** **Val Acc.**
NONE 88.4 91.8 71.5 77.8
Self-Attention 88.3 91.7 77.5 84.8
GRU unit 88.6 92.0 79.0 85.1
Table 9: Performance comparison on different rationalizer using the Roberta-base model.
**Question: Xiaoli and Xiaoqiang typed a manuscript together. Their**
_typing speed ratio was 5:3. Xiaoli typed 1,400 more words than_
_Xiaoqiang. How many words are there in this manuscript?_
**Gold Expr:** 5÷ 5+31400−3÷ 5+3 **Answer: 5600**
**Gold deduction:**
( ) ( )
**MAWPS** **Math23k** **MathQA** **SVAMP**
#Operation
Baseline OURS Baseline OURS Baseline OURS Baseline OURS
1 88.2 **92.7** 91.3 **93.6** **77.3** **77.4** **51.9** **52.0**
2 91.3 **91.6** 89.3 **92.0** 81.3 **83.5** 17.8 **32.1**
3 - - 74.5 **77.0** 81.9 **83.4** - -
4 - - 59.1 **60.3** 79.3 **81.7** - -
>=5 - - 56.5 **69.2** **71.5** **71.4** - -
**Overall Performance**
Equ Acc. 80.8 88.6 71.2 79.0 74.0 74.0 40.9 45.0
Val Acc. 88.7 92.0 82.4 85.1 77.1 78.6 43.8 47.3
Table 7: Acc. under different number of operations.
**MAWPS** **Math23k** **MathQA** **SVAMP**
Unused 6.5% 8.2% 20.7% 44.5%
Accuracy (unused = 0) 93.6 87.1 81.4 63.6
Accuracy (unused ≥ 1) 100.0† 62.1 67.4 27.0
Table 8: Value accuracy with respect to the number of
unused quantities. The second row shows the percentage of instances that have unused quantities. †: may
not be representative as there are only 3 instances.
DEDUCTREASONER with the best performing
baselines in Table 3 (Roberta-Graph2Tree), 4
(BERT-Tree), 5 (mBERT+LSTM) and 6 (RobertaGraph2Tree). On MAWPS and Math23k,
our ROBERTA-DEDUCTREASONER model consistently yields higher results than baselines. On
MathQA, our model also performs better on questions that involve 2, 3, and 4 operations. For the
other more challenging dataset SVAMP, our model
has comparable performance with the baseline on
1-step questions, but achieves significantly better
results (+14.3 points) on questions that involve 2
steps. Such comparisons on MathQA and SVAMP
show that our model has a robust reasoning capability on more complex questions.
We observe that all models (including ours and
existing models) are achieving much lower accuracy scores on SVAMP, as compared to other
datasets. We further investigate the reason for this.
Patel et al. (2021) added irrelevant information
such as extra quantities in the question to confuse
the models. We quantify the effect by counting
the percentage of instances which have quantities
unused in the equations. As we can see in Table 8,
SVAMP has the largest proportion (i.e., 44.5%) of
instances whose gold equations do not fully utilize
all the quantities in the problem text. The performance also significantly drops on those questions
with more than one unused quantity on all datasets.
The analysis suggests that our model still suffer
from extra irrelevant information in the question
and the performance is severely affected when such
irrelevant information appears more frequently.
**Predicted deduction:**
5 + 3 = 8 5 ÷ 8 = 0.625 3 ÷ 8 = 0.375
1 2 3
0.625 − 0.375 = 0.25 1400 ÷ 0.25 = 5600
4 5
5 − 3 = 2 1400 ÷ 2 = 700 5 + 3 = 8 700 × 8 = 5600
1 2 3 4
Figure 6: Deductive steps by our reasoner.
**Effect of Rationalizer** Table 9 shows the performance comparison with different rationalizers. As
described in §3.2, the rationalizer is used to update
the quantity representations at each step, so as to
better “prepare them” for the subsequent reasoning
process given the new context. We believe this step
is crucial for achieving good performance, especially for complex MWP solving. As shown in Table 9, the performance drops by 7.3 points in value
accuracy for Math23k without rationalization, confirming the importance of rationalization in solving
more complex problems that involve more steps.
As most of the questions in MAWPS involve only
1-step questions, the significance of using rationalizer is not fully revealed on this dataset.
It can be seen that using self-attention achieves
worse performance than the GRU unit. We believe the lower performance by using multi-head
attention as rationalizer may be attributed to two
reasons. First, GRU comes with sophisticated internal gating mechanisms, which may allow richer
representations for the quantities. Second, attention, often interpreted as a mechanism for measuring similarities (Katharopoulos et al., 2020), may
be inherently biased when being used for updating quantity representations. This is because when
measuring the similarity between quantities and a
specific expression (Figure 4), those quantities that
have just participated in the construction of the expression may receive a higher degree of similarity.
-----
**Question: There are 255 apple trees in the orchard. Planting another**
**_35 pear trees makes the number exactly the same as the apple trees. If_**
_every 20 pear trees are planted in a row, how many rows can be planted_
_in total?_
**Gold Expr:** 255 − 35 ÷ 20 **Answer: 11**
**Predicted Expr:** 255 + 35 ÷ 20 **Predicted: 14.5**
**Deductive Scores: (** )
( )
255 + 35 = 290 Prob.: 0.068 > 255 − 35 = 220 Prob.: 0.062
**Perturbed Question: There are 255 apple trees in the orchard. The**
**_number of pear trees is 35 fewer than the apple trees. If every 20 pear_**
_trees are planted in a row, how many rows can be planted in total?_
255 + 35 = 290 Prob.: 0.061 < 255 − 35 = 220 Prob.: 0.067
Figure 7: Question perturbation in deductive reasoning.
**4.2** **Case Studies**
**Explainability of Output** Figure 6 presents an
example prediction from Math23k. In this question, the gold deductive process first obtains the
speed difference by “5 ÷ 5 + 3 − 3 ÷ 5 + 3 ”
and the final answer is 1400 divided by this difference. On the other hand, the predicted deductive ( ) ( )
process offers a slightly different understanding in
speed difference. Assuming speed can be measured
by some abstract “units”, the predicted deductive
process first performs subtraction between 5 and 3,
which gives us “2 units” of speed difference. Next,
we can obtain the number of words associated with
each speed unit (1400÷2). Finally, we can arrive at
the total number of words by multiplying the number of words per unit (700) and the total number
of units (8).[12] Through such an example we can
see that our deductive reasoner is able to produce
explainable steps to understand the answers.
**Question Perturbation** The model predictions
also give us guidance to understand the errors. Figure 7 shows how we can perturb a question given
the error prediction (taken from Math23k). As we
can see, the first step is incorrectly predicted with
the “+” relation between 255 and 35. Because the
first step involves the two quantities in the first two
sentences, where we can locate the possible cause
for the error. The gold step has a probability of
0.062 which is somewhat lower than the incorrect
prediction. We believe that the second sentence
(marked in red) may convey semantics that can be
challenging for the model to digest, resulting in the
incorrect prediction. Thus, we perturb the second
sentence to make it semantically more straightforward (marked below in blue). The probability for
the sub-expression 225 − 35 becomes higher after
the purtubation, leading to a correct prediction (the
12Interestingly, when we presented this question to 3 human
solvers, 2 of them used the first approach and 1 of them arrived
at the second approach.
“−” relation). Such an analysis demonstrates the
strong interpretability of our deductive reasoner,
and highlights the important connection between
math word problem solving and reading comprehension, a topic that has been studied in educational
psychology (Vilenius-Tuohimaa et al., 2008).
**4.3** **Practical Issues**
We discuss some practical issues with the current
model in this section. Similar to most previous research efforts (Li et al., 2019; Xie and Sun, 2019),
our work needs to maintain a list of constants (e.g.,
1 and π) as additional candidate quantities. However, a large number of quantities could lead to a
large search space of expressions (i.e., H). In practice, we could select some top-scoring quantities
and build expressions on top of them (Lee et al.,
2018). Another assumption of our model, as shown
in Figure 3, is that only binary operators are considered. Actually, extending it to support unary or
ternary operators can be straightforward. Handling
unary operators would require the introduction of
some unary rules, and a ternary operator can be
defined as a composition of two binary operators.
Our current model performs the greedy search
in the training and inference process, which could
be improved with a beam search process. One challenge with designing the beam search algorithm is
that the search space H[(][t][)] is expanding at each step
_t (Equation 6). We empirically found the model_
tends to favor outputs that involve fewer reasoning
steps. In fact, better understanding the behavior
and effect of beam search in seq2seq models remains an active research topic (Cohen and Beck,
2019; Koehn and Knowles, 2017; Hokamp and Liu,
2017), and we believe how to perform effective
beam search in our setup could be an interesting
research question that is worth exploring further.
**5** **Conclusion and Future Work**
We provide a new perspective to the task of MWP
solving and argue that it can be fundamentally regarded as a complex relation extraction problem.
Based on this observation, and motivated by the
deductive reasoning process, we propose an end-toend deductive reasoner to obtain the answer expression in a step-by-step manner. At each step, our
model performs iterative mathematical relation extraction between quantities. Thorough experiments
on four standard datasets demonstrate that our deductive reasoner is robust and able to yield new
-----
state-of-the-art performance. The model achieves
particularly better performance for complex questions that involve a larger number of operations. It
offers us the flexibility in interpreting the results,
thanks to the deductive nature of our model.
Future directions that we would like to explore
include how to effectively incorporate commonsense knowledge into the deductive reasoning process, and how to facilitate counterfactual reasoning (Richards and Sanderson, 1999).
**Acknowledgements**
We would like to thank the anonymous reviewers
and our ARR action editor for their constructive
comments, and Hang Li for helpful discussions
and comments on this work. This work was done
when Jierui Li was working as a research assistant at SUTD, and when Wei Lu was serving as a
consultant at ByteDance AI Lab.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. Mathqa: Towards interpretable math](https://aclanthology.org/N19-1245.pdf)
[word problem solving with operation-based for-](https://aclanthology.org/N19-1245.pdf)
[malisms. In Proceedings of NAACL.](https://aclanthology.org/N19-1245.pdf)
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and
[Dan Klein. 2016. Neural module networks. In Pro-](https://openaccess.thecvf.com/content_cvpr_2016/papers/Andreas_Neural_Module_Networks_CVPR_2016_paper.pdf)
_ceedings of CVPR._
Yoshua Bengio, Yann Lecun, and Geoffrey Hinton.
[2021. Deep learning for ai. Communications of the](https://doi.org/10.1145/3448250)
_ACM, 64(7):58–65._
Tarek R Besold, Artur d’Avila Garcez, Sebastian Bader,
Howard Bowman, Pedro Domingos, Pascal Hitzler,
Kai-Uwe Kühnberger, Luis C Lamb, Daniel Lowd,
[Priscila Machado Vieira Lima, et al. 2017. Neural-](https://arxiv.org/pdf/1711.03902.pdf)
[symbolic learning and reasoning: A survey and in-](https://arxiv.org/pdf/1711.03902.pdf)
[terpretation. arXiv preprint arXiv:1711.03902.](https://arxiv.org/pdf/1711.03902.pdf)
[Daniel G Bobrow. 1964. Natural language input for a](https://dspace.mit.edu/bitstream/handle/1721.1/6903/AITR-219.pdf?sequence=2)
[computer problem solving system.](https://dspace.mit.edu/bitstream/handle/1721.1/6903/AITR-219.pdf?sequence=2)
Yixuan Cao, Feng Hong, Hongwei Li, and Ping Luo.
[2021. A bottom-up dag structure extraction model](https://ojs.aaai.org/index.php/AAAI/article/view/16075/15882)
[for math word problems. In Proceedings of AAAI.](https://ojs.aaai.org/index.php/AAAI/article/view/16075/15882)
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-aligned [equation](https://doi.org/10.18653/v1/N19-1272) generation for
[solving and reasoning math word problems.](https://doi.org/10.18653/v1/N19-1272) In
_Proceedings of NAACL._
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bah[danau, and Yoshua Bengio. 2014. On the properties](https://aclanthology.org/W14-4012.pdf)
[of neural machine translation: Encoder–decoder ap-](https://aclanthology.org/W14-4012.pdf)
[proaches. In Proceedings of SSST-8, Eighth Work-](https://aclanthology.org/W14-4012.pdf)
_shop on Syntax, Semantics and Structure in Statisti-_
_cal Translation._
[Eldan Cohen and Christopher Beck. 2019. Empirical](http://proceedings.mlr.press/v97/cohen19a/cohen19a.pdf)
[analysis of beam search performance degradation in](http://proceedings.mlr.press/v97/cohen19a/cohen19a.pdf)
[neural sequence models. In Proceedings of ICML.](http://proceedings.mlr.press/v97/cohen19a/cohen19a.pdf)
[Michael Collins. 2002. Discriminative training meth-](https://aclanthology.org/W02-1001.pdf)
[ods for hidden markov models: Theory and experi-](https://aclanthology.org/W02-1001.pdf)
[ments with perceptron algorithms. In Proceedings](https://aclanthology.org/W02-1001.pdf)
_of EMNLP._
Alexis Conneau, Kartikay Khandelwal, Naman Goyal
Vishrav Chaudhary Guillaume Wenzek, Francisco
Guzmán, Edouard Grave Myle Ott Luke Zettle[moyer, and Veselin Stoyanov. 2020. Unsupervised](https://aclanthology.org/2020.acl-main.747.pdf)
[cross-lingual representation learning at scale.](https://aclanthology.org/2020.acl-main.747.pdf) In
_Proceedings of ACL._
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin,
Ziqing Yang, Shijin Wang, and Guoping Hu. 2019.
[Pre-training with whole word masking for chinese](https://arxiv.org/pdf/1906.08101.pdf)
[bert. arXiv preprint arXiv:1906.08101.](https://arxiv.org/pdf/1906.08101.pdf)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. Bert: Pre-training of deep](https://aclanthology.org/N19-1423.pdf)
[bidirectional transformers for language understand-](https://aclanthology.org/N19-1423.pdf)
[ing. In Proceedings of NAACL.](https://aclanthology.org/N19-1423.pdf)
Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
[2016. Deep learning. MIT press.](http://www.deeplearningbook.org)
Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, and
[Matt Gardner. 2020. Neural module networks for](https://openreview.net/pdf?id=SygWvAVFPr)
[reasoning over text. In Proceedings of ICLR.](https://openreview.net/pdf?id=SygWvAVFPr)
Chris Hokamp and Qun Liu. 2017. [Lexically con-](https://doi.org/http://dx.doi.org/10.18653/v1/P17-1141)
[strained decoding for sequence generation using grid](https://doi.org/http://dx.doi.org/10.18653/v1/P17-1141)
[beam search. In Proceedings of ACL.](https://doi.org/http://dx.doi.org/10.18653/v1/P17-1141)
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
[2018. Neural math word problem solver with rein-](https://aclanthology.org/C18-1018.pdf)
[forcement learning. In Proceedings of COLING.](https://aclanthology.org/C18-1018.pdf)
Daniel Kahneman. 2011. _Thinking, fast and slow._
Macmillan.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pap[pas, and François Fleuret. 2020. Transformers are](https://arxiv.org/pdf/2006.16236.pdf)
[rnns: Fast autoregressive transformers with linear at-](https://arxiv.org/pdf/2006.16236.pdf)
[tention. In Proceedings of ICML.](https://arxiv.org/pdf/2006.16236.pdf)
[Diederik P Kingma and Jimmy Ba. 2014. Adam: A](https://arxiv.org/pdf/1412.6980.pdf?source=post_page---------------------------)
[method for stochastic optimization. arXiv preprint](https://arxiv.org/pdf/1412.6980.pdf?source=post_page---------------------------)
_arXiv:1412.6980._
Thomas N Kipf and Max Welling. 2017. [Semi-](https://aclanthology.org/2020.emnlp-demos.6/)
[supervised classification with graph convolutional](https://aclanthology.org/2020.emnlp-demos.6/)
[networks. In Proceedings of ICLR.](https://aclanthology.org/2020.emnlp-demos.6/)
[Philipp Koehn and Rebecca Knowles. 2017. Six chal-](https://aclanthology.org/W17-32.pdf#page=40)
[lenges for neural machine translation. In Proceed-](https://aclanthology.org/W17-32.pdf#page=40)
_ings of the First Workshop on Neural Machine Trans-_
_lation, ACL._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. Mawps:](https://aclanthology.org/N16-1136.pdf)
[A math word problem repository. In Proceedings of](https://aclanthology.org/N16-1136.pdf)
_NAACL._
-----
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. [Learning to automatically](https://aclanthology.org/P14-1026.pdf)
[solve algebra word problems.](https://aclanthology.org/P14-1026.pdf) In Proceedings of
_ACL._
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan,
Bing Tian Dai, Yan Wang, Dongxiang Zhang, and
[Ee-Peng Lim. 2021. Mwptoolkit: An open-source](https://arxiv.org/pdf/2109.00799.pdf)
[framework for deep learning-based math word prob-](https://arxiv.org/pdf/2109.00799.pdf)
[lem solvers. arXiv preprint arXiv:2109.00799.](https://arxiv.org/pdf/2109.00799.pdf)
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle[moyer. 2017. End-to-end neural coreference resolu-](https://aclanthology.org/D17-1018.pdf)
[tion. In Proceedings of EMNLP.](https://aclanthology.org/D17-1018.pdf)
Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018.
[Higher-order coreference resolution with coarse-to-](https://aclanthology.org/N18-2108.pdf)
[fine inference. In Proceedings of NAACL.](https://aclanthology.org/N18-2108.pdf)
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016.
[Rationalizing neural predictions. In Proceedings of](https://aclanthology.org/D16-1011.pdf)
_EMNLP._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
[Bing Tian Dai, and Dongxiang Zhang. 2019. Model-](https://www.aclweb.org/anthology/P19-1619.pdf)
[ing intra-relation in math word problems with differ-](https://www.aclweb.org/anthology/P19-1619.pdf)
[ent functional multi-head attentions. In Proceedings](https://www.aclweb.org/anthology/P19-1619.pdf)
_of the ACL._
Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu,
[Fengyuan Xu, and Sheng Zhong. 2020. Graph-to-](https://aclanthology.org/2020.findings-emnlp.255.pdf)
[tree neural networks for learning structured input-](https://aclanthology.org/2020.findings-emnlp.255.pdf)
[output translation with applications to semantic pars-](https://aclanthology.org/2020.findings-emnlp.255.pdf)
[ing and math word problem. In Proceedings of Find-](https://aclanthology.org/2020.findings-emnlp.255.pdf)
_ings of EMNLP._
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou,
[Chao Li, Hongzhi Liu, and Yunbo Cao. 2021. Seek-](https://arxiv.org/pdf/2110.08464.pdf)
[ing patterns, not just memorizing procedures: Con-](https://arxiv.org/pdf/2110.08464.pdf)
[trastive learning for solving math word problems.](https://arxiv.org/pdf/2110.08464.pdf)
_arXiv preprint arXiv:2110.08464._
Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin,
[and Keh-Yih Su. 2018. A meaning-based statistical](https://aclanthology.org/N18-1060.pdf)
[english math word problem solver. In Proceedings](https://aclanthology.org/N18-1060.pdf)
_of NAACL._
Zhenwen Liang, Jipeng Zhang, Jie Shao, and Xiangliang Zhang. 2021. [Mwp-bert: A strong base-](https://arxiv.org/abs/2107.13435)
[line for math word problems.](https://arxiv.org/abs/2107.13435) _arXiv preprint_
_arXiv:2107.13435._
[Christian Liguda and Thies Pfeiffer. 2012. Modeling](https://link.springer.com/content/pdf/10.1007/978-3-642-31178-9_29.pdf)
[math word problems with augmented semantic net-](https://link.springer.com/content/pdf/10.1007/978-3-642-31178-9_29.pdf)
[works. In International Conference on Application](https://link.springer.com/content/pdf/10.1007/978-3-642-31178-9_29.pdf)
_of Natural Language to Information Systems._
Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen,
[Qi Liu, Hao Wang, and Shijin Wang. 2021. Hms:](https://www.aaai.org/AAAI21Papers/AAAI-4958.LinX.pdf)
[A hierarchical solver with dependency-enhanced un-](https://www.aaai.org/AAAI21Papers/AAAI-4958.LinX.pdf)
[derstanding for math word problem. In Proceedings](https://www.aaai.org/AAAI21Papers/AAAI-4958.LinX.pdf)
_of AAAI._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://aclanthology.org/P17-1015.pdf)
[tion: Learning to solve and explain algebraic word](https://aclanthology.org/P17-1015.pdf)
[problems. In Proceedings of ACL.](https://aclanthology.org/P17-1015.pdf)
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
[Roberta: A robustly optimized bert pretraining ap-](https://arxiv.org/pdf/1907.11692.pdf)
[proach. arXiv preprint arXiv:1907.11692.](https://arxiv.org/pdf/1907.11692.pdf)
Ilya Loshchilov and Frank Hutter. 2019. [Decoupled](https://arxiv.org/pdf/1711.05101.pdf)
[weight decay regularization.](https://arxiv.org/pdf/1711.05101.pdf) In Proceedings of
_ICLR._
Minh-Thang Luong, Hieu Pham, and Christopher D
[Manning. 2015. Effective approaches to attention-](https://aclanthology.org/D15-1166.pdf)
[based neural machine translation. In Proceedings of](https://aclanthology.org/D15-1166.pdf)
_EMNLP._
[Anirban Mukherjee and Utpal Garain. 2008. A review](https://link.springer.com/article/10.1007/s10462-009-9110-0)
[of methods for automatic understanding of natural](https://link.springer.com/article/10.1007/s10462-009-9110-0)
[language mathematical problems. Artificial Intelli-](https://link.springer.com/article/10.1007/s10462-009-9110-0)
_gence Review, 29(2):93–122._
[Mark-Jan Nederhof. 2003. Weighted deductive parsing](https://www.aclweb.org/anthology/J03-1006.pdf)
[and knuth’s algorithm. Computational Linguistics,](https://www.aclweb.org/anthology/J03-1006.pdf)
29(1):135–143.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://aclanthology.org/2021.naacl-main.168)
[math word problems? In Proceedings of NAACL.](https://aclanthology.org/2021.naacl-main.168)
Jean Piaget. 1952. _[Child’s Conception of Number.](https://doi.org/https://doi.org/10.4324/9781315006222)_
Routledge.
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng
[Tang, and Liang Lin. 2021. Neural-symbolic solver](https://aclanthology.org/2021.acl-long.456.pdf)
[for math word problems with auxiliary tasks. In Pro-](https://aclanthology.org/2021.acl-long.456.pdf)
_ceedings of ACL-IJCNLP._
Cassandra A Richards and Jennifer A Sanderson. 1999.
[The role of imagination in facilitating deductive](https://doi.org/https://doi.org/10.1016/s0010-0277(99)00037-2)
[reasoning in 2-, 3-and 4-year-olds.](https://doi.org/https://doi.org/10.1016/s0010-0277(99)00037-2) _Cognition,_
72(2):B1–B9.
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://aclanthology.org/D15-1202.pdf)
[metic word problems. In Proceedings of EMNLP.](https://aclanthology.org/D15-1202.pdf)
[Subhro Roy and Dan Roth. 2018. Mapping to declara-](https://aclanthology.org/Q18-1012.pdf)
[tive knowledge for word problem solving. Transac-](https://aclanthology.org/Q18-1012.pdf)
_tions of the Association for Computational Linguis-_
_tics, 6:159–172._
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin
[Jiang, Ming Zhang, and Qun Liu. 2021. Generate &](https://aclanthology.org/2021.findings-emnlp.195.pdf)
[rank: A multi-task framework for math word prob-](https://aclanthology.org/2021.findings-emnlp.195.pdf)
[lems. In Proceedings of Findings of EMNLP.](https://aclanthology.org/2021.findings-emnlp.195.pdf)
[Yibin Shen and Cheqing Jin. 2020. Solving math word](https://aclanthology.org/2020.coling-main.262)
[problems with multi-encoders and multi-decoders.](https://aclanthology.org/2020.coling-main.262)
In Proceedings of COLING.
Stuart M Shieber, Yves Schabes, and Fernando CN
[Pereira. 1995. Principles and implementation of de-](http://www.eecs.harvard.edu/~shieber/Biblio/Papers/infer.pdf)
[ductive parsing. The Journal of logic programming,](http://www.eecs.harvard.edu/~shieber/Biblio/Papers/infer.pdf)
24(1-2):3–36.
Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing
[Jiang. 2021. Investigating math word problems us-](https://arxiv.org/abs/2105.08928)
[ing pretrained multilingual language models. arXiv](https://arxiv.org/abs/2105.08928)
_preprint arXiv:2105.08928._
-----
quantity representations, especially for questions
that require more than 3 operations to solve (i.e.,
_t ≥_ 3). Because if the quantity representations
do not get updated as we continue the deductive
reasoning process, those expressions that were initially highly ranked (say, at the first step) would
always be preferred over those lowly ranked ones
throughout the process.
We provide an example here to illustrate the scenario. Suppose our target expression is 1 + 2 ∗
3 + 4, the first step is to predict:
( )
( ) _e[(][1][)]_ = 1 + 2 (7)
In order to obtain the correct intermediate expression 1 + 2 as e[(][1][)], the model has to give the highest
score to this expression. Note that, the score of
the expression e1,2,+ also has to be larger than the
score of e3,4,+.
1 1
_s_ _e1,2,+[)][ >][ s][(][e]3,4,+[)]_ (8)
( ) ( )
However, in order to reach the final target expres
(
sion, in the next step, the model needs to construct
the intermediate expression 3 + 4. Without the rationalizer, the representations for the quantities are
unchanged, so we would have:
2 1 1 2
_s_ _e1,2,+[)][ =][ s][(][e]1,2,+[)][ >][ s][(][e]3,4,+[)][ =][ s][(][e]3,4,+[)]_
( ) ( ) ( ) ( ) (9)
From here we could see that the model would(
not be able to produce the intermediate expression
3 + 4 in the second step (but would still prefer to
generate another 1 + 2). With the rationalizer in
place, the above two equations in Equation 9 in
general may not hold, which effectively prevents
such an issue from happening.
**B** **Additional Implementation Details**
We implement our model with PyTorch and run
all experiments using Tesla V100 GPU. The feedforward network in our model is simply linear transformation followed by the ReLU activation. We
also apply layer normalization and dropout in the
feed-forward network. The hidden size in the feedforward network is 768, which the is same as the
hidden size used in BERT/Roberta.
**C** **Detailed Comparison on Math23k**
**Dataset**
We try to find the experiment details of previous
work on the Math23k dataset. Table 10 shows their
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
[Kaiser, and Illia Polosukhin. 2017. Attention is all](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf)
[you need. In Proceedings of NeurIPS.](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf)
Piia Maria Vilenius-Tuohimaa, Kaisa Aunola, and Jari[Erik Nurmi. 2008. The association between mathe-](https://doi.org/http://dx.doi.org/10.1080/01443410701708228)
[matical word problems and reading comprehension.](https://doi.org/http://dx.doi.org/10.1080/01443410701708228)
_Educational Psychology, 28(4):409–426._
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
[2019. Template-based math word problem solvers](https://ojs.aaai.org/index.php/AAAI/article/view/4697/4575)
[with recursive neural networks. In Proceedings of](https://ojs.aaai.org/index.php/AAAI/article/view/4697/4575)
_AAAI._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://aclanthology.org/D17-1088.pdf)
_ceedings of EMNLP._
[Ronald J Williams and David Zipser. 1989. A learn-](http://leech.cybernoid.gr/files/text/publications/A%20Learning%20Algorithm%20for%20Continually%20Running%20Fully%20Recurrent%20Neural%20Networks%20-%2010.1.1.52.9724.pdf)
[ing algorithm for continually running fully recurrent](http://leech.cybernoid.gr/files/text/publications/A%20Learning%20Algorithm%20for%20Continually%20Running%20Fully%20Recurrent%20Neural%20Networks%20-%2010.1.1.52.9724.pdf)
[neural networks.](http://leech.cybernoid.gr/files/text/publications/A%20Learning%20Algorithm%20for%20Continually%20Running%20Fully%20Recurrent%20Neural%20Networks%20-%2010.1.1.52.9724.pdf) _Neural computation, 1(2):270–_
280.
Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam
Shleifer, et al. 2020. [Transformers: State-of-the-](https://aclanthology.org/2020.emnlp-demos.6/)
[art natural language processing. In Proceedings of](https://aclanthology.org/2020.emnlp-demos.6/)
_EMNLP: System Demonstrations._
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuan-Jing
[Huang. 2020. A knowledge-aware sequence-to-tree](https://aclanthology.org/2020.emnlp-main.579.pdf)
[network for math word problem solving. In Proceed-](https://aclanthology.org/2020.emnlp-main.579.pdf)
_ings of EMNLP._
Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuan-Jing
Huang. 2021. [Math word problem solving with](https://aclanthology.org/2021.acl-long.455.pdf)
[explicit numerical values. In Proceedings of ACL-](https://aclanthology.org/2021.acl-long.455.pdf)
_IJCNLP._
[Zhipeng Xie and Shichao Sun. 2019. A goal-driven](https://www.ijcai.org/proceedings/2019/0736.pdf)
[tree-structured neural model for math word prob-](https://www.ijcai.org/proceedings/2019/0736.pdf)
[lems. In Proceedings of IJCAI.](https://www.ijcai.org/proceedings/2019/0736.pdf)
Dmitry Zelenko, Chinatsu Aone, and Anthony
[Richardella. 2003. Kernel methods for relation ex-](https://www.jmlr.org/papers/volume3/zelenko03a/zelenko03a.pdf)
[traction.](https://www.jmlr.org/papers/volume3/zelenko03a/zelenko03a.pdf) _Journal of machine learning research,_
3(Feb).
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
[Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-to-](https://aclanthology.org/2020.acl-main.362.pdf)
[tree learning for solving math word problems. In](https://aclanthology.org/2020.acl-main.362.pdf)
_Proceedings of ACL._
[Zexuan Zhong and Danqi Chen. 2021. A frustratingly](https://doi.org/10.18653/v1/2021.naacl-main.5)
[easy approach for entity and relation extraction. In](https://doi.org/10.18653/v1/2021.naacl-main.5)
_Proceedings of NAACL._
**A** **Importance of Rationalizer**
We further elaborate on the importance of the rationalizer module in this section. As mentioned
in §3.2, it is crucial for us to properly update the
-----
**“Split Variant”**
**Model** **Beam** **Train/Val/Test** **Train/Test** **5-fold** **Customized** **Remark**
**Size** **21162/1000/1000** **22162/1000** **CV** **Split**
Group Attention (Li et al., 2019) 5 - 69.5 66.9 -
GTS (Xie and Sun, 2019) 5 - - 74.3 -
KA-S2T (Wu et al., 2020) 5 - - - 76.3 They randomly split into 80%/20%
MultiE&D (Shen and Jin, 2020) 5 78.4 (unknown)† 76.9
Graph2Tree (Zhang et al., 2020) 5 77.4 75.5 -
NeuralSymbolic (Qin et al., 2021) 1 - - 75.7 -
NUM2ST (Wu et al., 2021) 5 - - - 78.1 They randomly split into train/val/test
HMS (Lin et al., 2021) 1 76.1 - - -
BERT-Tree (Li et al., 2021) 3 82.4 - - -
mBART-Large (Shen et al., 2021) 10 85.4 (unknown)† † 84.3 -
ROBERTA-DEDUCTREASONER 1 84.3 (± 0.34) 86.0 (± 0.26) 83 (± 0.36) -
ROBERTA-LARGE-DEDUCTREASONER 1 85.8 (± 0.42) 87.1 (± 0.21) - -
Table 10: Detailed comparison of different approaches on the Math23k dataset. † indicates the paper only mentioned the results are evaluated on test set but not mentioned if they used validation set for experiment.
performance with respect to different data splits,
different beam sizes, etc.
- Group Attention (Li et al., 2019): According
to their paper in Table 2, they use the train/test
split.
- GTS (Xie and Sun, 2019): They only report the
five-fold cross-validation performance.
- KA-S2T (Wu et al., 2020): According to §3.1
in their paper, they use their customized split
though they still directly compared with the
GTS approach.
- MultiE&D (Shen and Jin, 2020): They did not
mention that if they have validation set. But
they mentioned that the results are evaluated on
the test set. Thus, we marked it “unknown”.
- Graph2Tree (Zhang et al., 2020): They also
mentioned that the results are evaluated on test
set. But the validation set could be used in
their implementation based on the observation
in their codebase.
- NeuralSymbolic (Qin et al., 2021): They used
greedy search (§4.2.2) for generation and experimented with 5-fold cross validation in the
experiments (§4.3).
- NUM2ST (Wu et al., 2021): They used customized splits according to §3.1 in their paper.
- HMS (Lin et al., 2021): The beam size is
not mentioned in the paper. But we found
the default value in the open-source codebase
is 1. They did not mentioned the exact split
either but simply mentioning “follow previ_ous work”._ We assume they are using the
train/validation/test split as we found that the
codebase contains the validation set.
- BERT-Tree (Li et al., 2021): They use
train/validation/test split as described in the paper.
- mBART-Large (Shen et al., 2021): They did
not mentioned the split either but mentioning
that the results are evaluated on the test set.
We present our performance for all the settings
in Table 10. To compare with the recent work using
mBart-Large (Shen et al., 2021), we also report the
performance with a Roberta-Large encoder for our
DEDUCTREASONER on the test set.
-----
| [
"Zhanming, Jie",
"Wei, Lu",
"Jierui, Li"
] | 2022-09-15T00:00:00 | ACL 2022 Long Papers | true | 64 | 11 | null | http://arxiv.org/abs/2203.10316 | https://arxiv.org/abs/2203.10316 | https://www.semanticscholar.org/paper/a376e6b118a10a8cb8a2920f6b83a70f087579f5 |
TheoremQA: A Theorem-driven Question Answering Dataset | The recent LLMs like GPT-4 and PaLM-2 have made tremendous progress in solving fundamental math problems like GSM8K by achieving over 90% accuracy. However, their capabilities to solve more challenging math problems which require domain-specific knowledge (i.e. theorem) have yet to be investigated. In this paper, we introduce TheoremQA, the first theorem-driven question-answering dataset designed to evaluate AI models’ capabilities to apply theorems to solve challenging science problems. TheoremQA is curated by domain experts containing 800 high-quality questions covering 350 theorems from Math, Physics, EE&CS, and Finance. We evaluate a wide spectrum of 16 large language and code models with different prompting strategies like Chain-of-Thoughts and Program-of-Thoughts. We found that GPT-4’s capabilities to solve these problems are unparalleled, achieving an accuracy of 51% with Program-of-Thoughts Prompting. All the existing open-sourced models are below 15%, barely surpassing the random-guess baseline. Given the diversity and broad coverage of TheoremQA, we believe it can be used as a better benchmark to evaluate LLMs’ capabilities to solve challenging science problems. | This paper introduces TheoremQA, the first theorem-driven question-answering dataset designed to evaluate AI models' capabilities to apply theorems to solve challenging science problems and finds that GPT-4's capabilities to solve these problems are unparalleled, achieving an accuracy of 51% with Program-of-Thoughts Prompting. | ## TheoremQA: A Theorem-driven Question Answering Dataset
♠Wenhu Chen∗, [♥]Ming Yin, [♠]Max Ku, [♦]Pan Lu, [♦]Yixin Wan,
♠Xueguang Ma, ♥Jianyu Xu, ♥Xinyi Wang, ♦Tony Xia
University of Waterloo, Canada[♠]
University of California, Santa Barbara, United States[♥]
University of California, Los Angeles, United States[♦]
**Abstract**
The recent LLMs like GPT-4 and PaLM-2 have
made tremendous progress in solving fundamental math problems like GSM8K by achieving over 90% accuracy. However, their capabilities to solve more challenging math problems which require domain-specific knowledge
(i.e. theorem) have yet to be investigated. In
this paper, we introduce TheoremQA, the first
theorem-driven question-answering dataset designed to evaluate AI models’ capabilities to
apply theorems to solve challenging science
problems. TheoremQA is curated by domain
experts containing 800 high-quality questions
covering 350 theorems[1] from Math, Physics,
EE&CS, and Finance. We evaluate a wide spectrum of 16 large language and code models with
different prompting strategies like Chain-ofThoughts and Program-of-Thoughts. We found
that GPT-4’s capabilities to solve these problems are unparalleled, achieving an accuracy
of 51% with Program-of-Thoughts Prompting.
All the existing open-sourced models are below
15%, barely surpassing the random-guess baseline. Given the diversity and broad coverage of
TheoremQA, we believe it can be used as a better benchmark to evaluate LLMs’ capabilities
to solve challenging science problems.
a narrow subject. On the other hand, these datasets
do not involve much domain-specific knowledge,
aka theorem. Due to these two deficiencies, we believe that these datasets are not ideal to benchmark
the existing powerful LLMs (Brown et al., 2020;
Tamkin et al., 2022; Chen et al., 2021b; Chowdhery et al., 2022; Hoffmann et al., 2022; Taylor
et al., 2022) due to their simplicity. In fact, on the
popular GSM8K dataset (Cobbe et al., 2021), GPT4 (OpenAI, 2023) and PaLM-2 (Google, 2023)
both already achieved 92% accuracy. Similarly,
we tested GPT-4 (OpenAI, 2023) on the subsets of
several other listed datasets in Table 1 and observed
90+% accuracy in most cases. The only exception
is MATH (Hendrycks et al., 2021) containing highschool math competition problems with SoTA performance around 50% (Zheng et al., 2023). However, MATH (Hendrycks et al., 2021) is focused on
math skills rather than theorem.
In this paper, we propose the first theoremdriven QA dataset built on university-level theorems across Math, Physics, EE&CS, and Finance.
The whole collection process takes two steps: (1)
we first enumerate roughly 400 theorems in different subfields like algebra, number theory, graph
theory, information theory, etc, (2) we ask domain
experts to search for questions regarding these theorems from different sources like Internet and Textbooks. The domain experts will adjust these questions to ensure the answers follow the desired format for the ease of automatic evaluation. Through
the careful construction process, we collected 800
high-quality question-theorem-answer triples as
our final release version.
We evaluate a wide spectrum of instructionfinetuned language and code models including
GPT (Brown et al., 2020), Claude (Bai et al.,
2022), LLaMA (Touvron et al., 2023), Pythia (Biderman et al., 2023), CodeGen (Nijkamp et al.,
2022), GLM (Zeng et al., 2022), StarCoder (Li
et al., 2023), and CodeT5+ (Wang et al., 2023)
**1** **Introduction**
A long-standing goal of AI systems is to help human beings solve challenging problems, especially
more domain-specific problems. To benchmark
the progress towards this goal, researchers propose to evaluate AI systems’ performance on different math word problem (WMP) datasets. In
recent years, there has been a plethora of WMP
datasets (Lu et al., 2023c), which we include in Table 1. Most of these datasets are meant for fundamental questions aimed at Grade 1-12 students on
∗ Authors ordered by contribution. Corresponding author
email: [email protected]
1e.g. Taylor’s theorem, Lagrange’s theorem, Huffman
coding, Quantum Theorem, Elasticity Theorem, etc
-----
Figure 1: The overview of TheoremQA and the prompting strategies adopted.
Dataset Domain Level Source Theorem
DRAW (Upadhyay and Chang, 2015) Algebra Elementary School Generated -
MAWPS (Koncel-Kedziorski et al., 2016) Arithmetic Elementary School Generated -
DRAW1K (Upadhyay and Chang, 2017) Algebra Elementary School Generated -
ASDiv (Miao et al., 2020) Arithm/Algebra Elementary School Internet -
SVAMP (Patel et al., 2021a) Arithm/Algebra Elementary School ASDiv -
Math23K (Wang et al., 2017) Algebra Elementary School Internet -
TabMWP (Lu et al., 2023b) Arithm/Algebra Elem./Middle School Textbooks NO
GSM8K (Cobbe et al., 2021) Arithm/Algebra Middle School Annotated NO
GEOS (Seo et al., 2015) Geometry Middle School SAT NO
Geometry3K (Lu et al., 2021a) Geometry Middle/High School Textbooks NO
GeoQA (Chen et al., 2021a) Geometry Middle/High School Exam NO
UniGeo (Chen et al., 2022a) Geometry Middle/High School Textbooks NO
ScienceQA (Lu et al., 2022) Science Middle/High School Textbooks NO
MATH (Hendrycks et al., 2021) Math High School Competition YES
AQuA (Ling et al., 2017) Arithm/Algebra University GMAT/GRE NO
MathQA (Amini et al., 2019) Arithm/Algebra University AQuA NO
MathQA-Python (Austin et al., 2021) Arithm/Algebra University AQuA NO
FinQA (Chen et al., 2021c) Finance University CrowdSource NO
TAT-QA (Zhu et al., 2021) Finance University CrowdSource NO
TheoremQA (Ours) STEM University Internet+Expert 350+
Table 1: List of existing Math and STEM QA datasets.
on our dataset. We adopt two prompting methods:
Chain-of-Thoughts (CoT) (Wei et al., 2022b) and
Program-of-Thoughs (PoT) (Chen et al., 2022b) to
prompt the large language models. We also investigate how to infuse the theorem into the thought
process of LLMs and how to present the multimodal inputs to the LLMs.
In the course of our experiments, several notable observations were made. First, GPT-4 (OpenAI, 2023) significantly outperformed all existing
models, reaching an accuracy level of 51% when
combined with Program-of-Thoughts prompting.
Trailing behind GPT-4, the second most effective
model was ChatGPT, achieving an accuracy of 35%
through the same prompting method. Additionally,
our human evaluation determined that half of GPT4’s errors are caused by minor mistakes like calculation errors, rounding errors, etc. We believe these
errors could be easily rectified with a more deliberate prompting strategy or human intervention. This
suggests that there is still significant headroom for
GPT-4 to achieve with more deliberate prompting
strategies. Secondly, we found that all open-source,
instruction-tuned language and code models scored
below 15% in accuracy, barely exceeding the random guess baseline of 10%. Our human evaluation reveals that open-source models like Alpaca
are making errors mainly due to their ignorance
of the theorem, where 90% of the errors are not
rectifiable. This stark gap between GPT and opensource models suggests that further enhancement
strategies, such as science-focused pre-training or
fine-tuning, should be considered to narrow the performance disparity. Thirdly, we explored the potential to do theorem-augmented generation. However,
the simple strategy of concatenation did not yield
a significant improvement. We conjecture that a
more complex integration strategy may be needed
to achieve more gains. Lastly, we examined the performance of various multi-modal instruction-tuned
models on the multimodal subset of the TheoremQA
dataset. Surprisingly, these models did not demonstrate significant performance gains over their textonly counterparts. This is mainly due to the unnaturalness of the image, which consists of lots
of diagrams and text. Such images are not well
-----
captured by existing visual encoder models.
To sum up, our contributions are three folds:
- We propose the first theorem-driven questionanswering dataset to understand LLMs’ capabilities to apply science theorems.
- We comprehensively evaluate a wide spectrum
of 16 LLMs on TheoremQA.
- We perform different analyses in the theorem
integration and multimodal understanding aspects to provide detailed insights.
**2** **Related Work**
**2.1** **Math Word Problems**
Mathematical reasoning skills are crucial for
general-purpose intelligent systems, garnering significant interest from the research community. In
the past, studies have explored the ability of NLP
models to solve arithmetic and algebraic problems (Hosseini et al., 2014; Koncel-Kedziorski
et al., 2015; Roy and Roth, 2015; Ling et al., 2017).
More recently, researchers have introduced increasingly challenging datasets (Saxton et al., 2019;
Miao et al., 2020; Amini et al., 2019; Hendrycks
et al., 2021; Lu et al., 2021b; Patel et al., 2021b)
aimed at enhancing difficulty, diversity, and adversarial robustness. LiLA (Mishra et al., 2022) proposes to assemble a vast collection of mathematical
datasets into a single, unified dataset. LiLA also annotates Python programs as target outputs for solving mathematical problems. However, the existing
datasets were mostly focused on grade school simple mathematics. To further investigate the LLMs’
capabilities to assist humans to solve challenging
math problems, we propose TheoremQA as the first
benchmark to enable research in this direction.
**2.2** **Large Language Models**
In recent years, there has been a surge of research
and development in the area of large language
models (LLMs) that have significantly advanced
the field of natural language processing. GPT3 (Brown et al., 2020) demonstrated a strong capability to perform few-shot predictions, where
the model is given a description of the task in
natural language with few examples. By using
human-feedback reinforcement learning, InstructGPT (Ouyang et al., 2022) has shown its unprecedented capabilities to follow human instructions.
Scaling model size, data, and computing are crucial to enable this learning ability. Later, Rae
et al. (2021); Chowdhery et al. (2022); Zhang et al.
(2022); Touvron et al. (2023); Chen et al. (2021b)
have proposed to train different types of LLMs with
different training recipes. The capability to follow
few-shot exemplars to solve unseen tasks is not
existent on smaller LMs, but only emerges as the
model scales up (Wei et al., 2022a). More recently,
GPT-4 (OpenAI, 2023) shows tremendous progress
on lots of complex reasoning tasks spanning mathematics, coding, vision, medicine, law, psychology,
and more. Bubeck et al. (2023) shows that GPT-4
is already demonstrating more general intelligence
than previous AI models. To further validate GPT4’s capability to solve challenging reasoning tasks,
we propose TheoremQA as the new benchmark to
further understand LLMs’ upper limit.
**2.3** **Reasoning with Large Language Model**
To better unleash large language models’ capabilities to solve complex reasoning tasks. Chain-ofThought Prompting (Wei et al., 2022b; Kojima
et al., 2022; Wang et al., 2022) was proposed,
which aims at prompting the large language models
to generate the ‘thought process’ before outputting
the answer. Later on, several other works (Drozdov
et al., 2022; Zhou et al., 2022; Nye et al., 2021)
also propose different approaches to utilize LLMs
to solve reasoning tasks by allowing intermediate
steps. Our method can be seen as an extension to
CoT by leveraging an extra step of symbolic execution. Another line of work (Gao et al., 2022;
Chen et al., 2022b) was proposed to adopt Python
programs as the demonstration for the ‘thought
process’ to solve different reasoning tasks.
**3** **Dataset**
Our dataset collection pipeline contains two steps:
**Theorem Enumeration** Our aim was to encompass a wide range of theorems. To this end, we began by prompting Large Language Models (LLMs),
specifically GPT-4 (OpenAI, 2023), to enumerate
popular subfields in Mathematics, Physics, Finance,
and Electrical Engineering & Computer Science.
The covered subfields are listed in Figure 4. Subsequently, we prompted GPT-4 to propose plausible university-level theorems relevant to these subfields. For instance, within the ’Calculus’ subfield,
GPT-4 might suggest the ’Intermediate Value Theorem’, ’Rolle’s Theorem’, and so on. After gath
-----
ering an extensive list of theorems, we assembled
a team of domain experts (holders of Masters and
PhDs in Statistics, Electrical Engineering, Computer Science, and Finance) to refine the theorem
inventory and supplement any omitted theorems.
Ultimately, we collected approximately 400 theorems, encapsulating a diverse range of topics within
these fields. We then delegated these theorems to
nine domain experts, instructing them to locate
question/answer pairs from varied sources. During
the annotation process, a small number of theorems
were discarded due to their evaluation complexity.
**Question** **Annotation** Our problems were
sourced from websites, books, or devised by the
experts themselves. One challenge we encountered
was the potential for questions found online to
have been included in the training data. To mitigate
this ’data contamination’ issue, we encouraged
domain experts to modify these questions. Another
challenge arose from questions with answers in
symbolic form, matrix form, figure form, etc.
These presented significant obstacles for our
automatic evaluation. To overcome this, we
instructed domain experts to alter the question so
the answer would be limited to the following forms:
(1) integer, (2) float, (3) list of integers/floats, (4)
boolean, and (5) multiple-choice options. For
instance, if the original question concerned a
matrix, we would revise it to ask about the trace of
the answer matrix. This modification significantly
streamlined the evaluation process. An example of
this can be found in Figure 2.
**Dataset Statistics** Finally, we collected a total
of 800 questions over 354 theorems. Specifically,
there are 199 Math theorems, 52 Physics theorems,
55 Finance theorems, and 48 CS&EE theorems.
There are 442 Math questions, 146 CS&EE questions, 131 physics questions, and 81 Finance questions. We show the answer-type distribution in Figure 3. To further enhance the multimodality aspect
of TheoremQA, we also include 51 questions with
image input (diagrams), where the model needs to
understand the visual input to answer questions.
The majority of the questions in TheoremQA have
float and integer as the answers, which is more
realistic than the existing multi-choice datasets like
ScienceQA (Lu et al., 2022) or AQuA QA (Ling
et al., 2017). Therefore, the models are unlikely to
take shortcuts to achieve high accuracy.
Figure 2: Examples from TheoremQA. The first question
requires the usage of Stoke’s theorem to transform the
double integral into a line integral. The second question
requires knowing the properties of Wiener’s process.
**Human-Level Performance** To provide a rough
but informative estimate of human-level performance. We randomly select 20 questions and assign these questions to the 4 Math&CS undergraduate students (average GPA) who have taken the
required courses regarding these questions. The
participants are given 24 hours with internet access
to solve these questions. The four undergraduate
students achieve 12/20, 15/20, 18/20, and 19/20
scores on these randomly sampled questions. From
this experiment, we are more confident that an
expert-level performance should be 100%.
**4** **Method**
Our method for addressing these demanding questions in the TheoremQA is comprised of several distinct modules, as outlined in Figure 1:
**Prompting** We utilize two established prompting
strategies:
- Chain-of-Thought Prompting (Wei et al.,
-----
**Multimodal Input** A small portion of our data
(50 instances) includes images, such as diagrams,
as supplemental input, particularly in geometry
questions. Since current LLMs don’t support such
multimodal inputs, we propose a solution: to employ captions like Chameleon (Lu et al., 2023a).
These captions describe the image and are then appended to the LLMs’ output as an additional signal.
**5** **Experiments**
**5.1** **Model Descriptions**
In our experiments, we mainly investigate the following models:
- GPT3/3.5/ChatGPT/GPT4: These are
instruction-tuned models from OpenAI[2].
- PaLM-2: This is the instruction-tuned model
from Google (Google, 2023).
- Claude-v1/Claude-instant: These are
instruction-tuned models from AnthropicAI[3].
- Alpaca-13B: This model is based on the
LLaMA (Touvron et al., 2023). Alapaca is
instruction-tuned by the 52K data generated
from GPT-4.
- Vicuna-13B: This model is based on the
LLaMA (Touvron et al., 2023). Vicuna is
instruction-tuned by the 100K ShareGPT data
generated by different GPT-based models.
- OpenAssistant-12B: This model is based on
Pythia (Biderman et al., 2023). The model is
instruction-tuned by OpenAssistant data[4].
- MOSS-instruct-16B: This model is based on
CodeGen (Nijkamp et al., 2022), which is further instruction-tuned with instruction following dataset distilled from GPT.[5].
- StarChat-16B: This model is based on StarCoder (Li et al., 2023). StartChat is being
instruction-tuned on OpenAssistant data[6] and
ShareGPT data.
2https://openai.com/
3https://www.anthropic.com/index/introducing-claude
4https://open-assistant.io/
5https://txsun1997.github.io/blogs/moss.html
6https://open-assistant.io/
float
option
list
bool
integer
47%
2%
9%
27%
15%
Figure 3: Answer type distribution in TheoremQA.
2022b): This strategy prompts the language
model to initially generate a step-by-step
thought process, eventually leading to the final
answer.
- Program-of-Thought Prompting (Chen et al.,
2022b; Gao et al., 2022): This strategy
prompts the language model to progressively
generate a program. The final answer is then
derived by executing this program.
By delegating computational tasks to an external executor, the problem-solving process is considerably
enhanced in its reliability. This improvement results in remarkable advancements in existing math
datasets being reported in (Chen et al., 2022b).
**Answer Extraction** We observed that parsing
the output from Large Language Models (LLMs)
can be challenging due to two main issues: (1) The
answer is often embedded within a sentence, making it difficult to extract using regular expressions,
and (2) The answer may not be normalized, such as
’pi / 3’ or ’2*10 - e’, which complicates comparison
with the ground truth. To tackle these problems, we
initially employ ChatGPT to identify the answer
span within the model’s output, then forward this
string to WolframAlpha (Inc.) for normalization
into a float, integer, or list.
**Theorem Augmentation** We explored the potential of enhancing large language models with
retrieved theorem descriptions to assess their effect
on performance. One approach is to retrieve descriptions of the given theorems from the Internet
to supplement the LLMs’ output. Another experiment involved prompting GPT-4 to generate text
descriptions of the theorem, which are then used as
an additional augmentation signal.
-----
Figure 4: Subfields of TheoremQA under Math, Physics, Engineering, and Finance.
- InstructCodeT5+: This model is based on
CodeT5+ (Wang et al., 2023). InstructCodeT5+ is further instruction-tuned on Code
Alpaca data[7] to follow instructions.
**5.2** **Main Results**
We demonstrate our main results on Table 2. We
will summarize different findings in the following:
**Closed-source Models** For GPT-3 (text-davinci002) and GPT-3.5 model, since these two models
are not Chat-based models, we need to demonstrate
one example ensure to help them generate outputs
of the desired format. With CoT prompting, GPT-3
(text-davinci-002) and GPT-3.5 models are only
achieving 16.6% and 22.8% accuracy. By adopting
the program as the intermediate reasoning form,
both models can gain reasonable improvements.
For Claude-v1, we found that it is matching the
performance of GPT-3.5. ChatGPT outperforms
GPT-3.5 and Claude-v1 significantly by 8%, which
indicates ChatGPT’s capabilities to perform complex numerical reasoning. GPT-4 is the strongest
model being evaluated, which beats all the rest
models by a huge margin. With Chain-of-Thoughts
prompting, GPT-4 can outperform ChatGPT by
13%. With Program-of-Thoughts prompting, GPT4 can outperform ChatGPT by 16%. Though some
other models have shown to match GPT-4 on simple tasks, GPT-4’s capability to solve challenging
tasks seems unparalleled.
7https://github.com/sahil280114/codealpaca
**Open-source Models** For the open-source models, we found that their performance is much behind. To better understand their accuracy, we also
provide the random-guess baseline of 10%. We
test both prompting strategies, however, their results consistently lie in the range of 10-14%. The
results indicate that these open-source LMs are still
struggling with more complex mathematical reasoning tasks in TheoremQA. Given that ChatGPT of
a similar size is able to achieve much higher performance, we believe the parameter size is not the
only cause. There is still a significant amount of effort during pre-training or supervised fine-tuning to
instill enough scientific knowledge into the models’
parameters to close the gap.
**Program of Thoughts Analysis** From Table 2,
we observe that PoT brings consistent improvement
over CoT on GPT-* models. Different GPT-* models can normally yield a gain of 5-8% accuracy. In
contrast, Claude-v1 and StarChat are almost obtaining the same accuracy. To better analyze where the
gains are coming from, we plot Figure 5 to understand how many of generated Python programs are
actually ‘executable’. As can be seen, both StarChat and CodeT5+ are having trouble generating
‘runnable’ programs with only 40% programs being
executable. Claude-v1 is able to increase the validity of the generated programs to 60%. In contrast,
GPT3.5 and ChatGPT can further increase the ratio
to around 80%. GPT-4 is extremely accurate in
generating programs, where 92% of the generated
programs are runnable. Such a high executable ratio explains why the gain brought to GPT-* model
-----
0.92
GPT3 GPT3.5ChatGPTGPT4 ClaudeStarChatCodeT5+
|0.82 0.78 0.72|Col2|Col3|Col4|Col5|Col6|0.6 0.4 0.36|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
Figure 5: Ratio of Executable Python Program of different models with PoT prompting.
Figure 6: An example of Multimodal question.
9.3
is much higher than Claude-v1 and StarChat.
**5.3** **Additional Result**
AlpacaVisualGLMLLaVAClaude GPT-3ChatGPTGPT4
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|7.5|Col10|
|---|---|---|---|---|---|---|---|---|---|
|5.6 3.7 3.7 3.7 1.8||||||||7.5||
|||||||||||
Figure 7: Accuracy on the Multimodal Question Subset
**Theorem Augmentation** We also investigate
whether feeding theorem as an additional text condition would help the model better solve the problem. Specifically, we ask GPT-4 to generate a
paragraph to describe the theorem, which we postprocessed to ensure correctness. We feed the theorem in the prompt to different language models
to see the performance change and plot our findings in Table 3. For all the evaluated scenarios, we
found that the improvement is limited to within 1%.
Unlike the Text or KB knowledge, theorem knowledge is more abstract and symbolic, simply concatenating the theorem definition is not enough. We
believe a more sophisticated augmentation scheme
is needed to truly help the model understand and
apply the theorems to solve problems.
**Multimodal Questions** Our aim was to assess
how effectively the current method could tackle
multimodal questions (those with image inputs) in
the TheoremQA dataset. An example is illustrated
in Figure 6, where an image is converted into ’captions’ by BLIP (Li et al., 2022). We graphed the
results from over 50 multimodal question subsets
in Figure 7. Notably, this subset posed substantial
challenges; none of the models were able to achieve
an accuracy rate of 10%. This is primarily due to
information loss during the captioning process.
In light of this, we conducted further evaluations on two multimodal instruction-tuned models,
LLaVA-13B (Liu et al., 2023) and VisualGLM6B (Zeng et al., 2022)[8]. These models utilize a
visual encoder (either CLIP (Radford et al., 2021)
or BLIP (Li et al., 2022)) to encode image input,
which is then integrated with language models for
multimodal conversation. However, these models
demonstrated performance similar to their text-only
8https://github.com/THUDM/VisualGLM-6B
equivalent, Alpaca, with the addition of a visual
encoder not significantly enhancing the results. We
hypothesize that the current visual encoding modules may not be suited for representing these diagrammatic images, resulting in these less than ideal
outcomes. We believe these multimodal questions
remain a challenge for the research community,
and we eagerly anticipate further advancements in
addressing these multimodal scientific questions.
**Error Analysis** We conduct detailed error analysis on 200 erroneous cases from different models
to analyze their error distribution. Specifically, we
pick GPT4, ChatGPT and Alpaca to understand
their error sources. We include the following error types: (E1) the model does not even know this
theorem, (E2) the model does know the theorem,
but uses the wrong formula or algorithm, (E3) the
model knows the theorem and the formula, the error is only caused by minor calculation mistakes.
The severity of the error decrease as the error number increases. We plot our findings in Figure 8,
where the bar indicate the percentage of specific
error types. We can observe that almost half of the
errors made by GPT4 are non-critical with caused
by minor calculation mistakes. This error analysis
suggests that there is a still a significant headroom
for GPT4 to improve with more deliberate prompting strategies or human intervention to mitigate
these minor errors. In contrast, Alpaca’s errors are
mainly caused by not knowing the theorem at all.
-----
|Model|Integer Float Option List Bool|Math CS&EE Physics Finance|All|
|---|---|---|---|
|Random Guess|0 0 38.9 0 65.5|10.0 24.7 0 4.9|10.5|
|---|---|---|---|
Chain of Thoughts (CoT)
|GPT-3 GPT-3.5 ChatGPT GPT-4 PaLM-2 Claude-v1 Cluade-instant|11.6 11.7 27.8 6.8 46.6 13.0 14.3 50.0 13.7 69.8 32.4 22.3 50.0 20.5 55.2 40.3 36.7 66.7 37.0 74.6 26.4 22.8 61.1 23.3 71.6 18.1 19.4 27.8 15.1 61.2 19.9 16.7 44.4 17.8 53.4|15.8 34.2 2.3 12.3 22.6 36.3 7.6 23.5 31.0 41.1 16.8 28.4 43.9 50.6 30.5 51.4 31.0 47.3 19.8 27.2 21.7 42.5 13.7 28.4 21.5 36.3 14.5 27.2|16.6 22.8 30.2 43.8 31.8 24.9 23.6|
|---|---|---|---|
|Alpaca (13B) Vicuna (13B) OpenAssistant (12B) MOSS (16B) StarChat (16B)|11.1 6.9 27.8 2.7 45.7 8.8 6.9 16.7 2.7 45.7 8.3 5.0 22.2 1.4 37.9 8.8 5.4 24.2 2.4 44.2 7.9 4.9 22.3 1.9 44.1|12.9 27.4 3.8 9.9 12.2 24.0 3.1 12.3 10.2 25.0 0 4.9 11.3 28.4 1.6 8.9 10.7 23.5 0.6 6.8|13.5 12.9 10.7 12.2 11.6|
|---|---|---|---|
Program of Thoughts (PoT)
|GPT-3 GPT-3.5 ChatGPT GPT-4 Claude-v1|17.1 15.9 22.2 9.6 49.1 23.6 19.9 50.0 21.9 61.2 31.0 35.0 38.9 21.9 54.3 44.4 50.7 66.7 39.7 78.4 17.1 21.8 33.3 6.9 62.5|23.3 25.4 8.4 17.3 26.7 41.1 14.5 30.9 35.7 35.6 26.7 49.4 52.0 51.4 45.8 66.7 23.1 37.5 17.1 28.4|20.6 27.8 35.6 52.4 25.9|
|---|---|---|---|
|StarChat (16B) InstructCodeT5+ (16B)|7.7 6.1 0.0 3.0 43.5 8.9 6.3 0.0 6.9 45.2|13.6 17.6 5.1 5.1 13.8 17.9 4.2 5.1|11.3 11.6|
|---|---|---|---|
Table 2: Results for CoT and PoT prompting on TheoremQA. We report the accuracy over different fine-grained
question types and scientific fields.
Model Method Theorem All GPT4 ChatGPT Alpaca
|Model Method Theorem All|Col2|
|---|---|
|ChatGPT ChatGPT|CoT - 30.2 CoT + 30.8|
|Col1|Col2|Col3|
|---|---|---|
||||
|0.46|Col2|Col3|
|---|---|---|
||||
||||
||||
|ChatGPT CoT - 30.2 ChatGPT CoT + 30.8|Col2|
|---|---|
|Claude-v1 CoT - 24.9 Claude-v1 CoT + 25.4||
|ChatGPT ChatGPT|PoT - 35.6 PoT + 35.8|
|0.36 0.28|Col2|Col3|Col4|0.46 0.38 0.240.24 0.15 0.1|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
GPT4 ChatGPT Alpaca
E1 E2 E3
Figure 8: Error Analysis for GPT4, ChatGPT and Alpaca. Error severity: E1 > E2 > E3
ample, the computation requires ‘for loop’ to iteratively compute delta values for Riemann Sum.
We found that such problems are also more natural
for programs to solve. Through these examples,
we can see GPT-4’s unprecedented capabilities to
solve these difficult math problems even without
any demonstration or hints.
We also show some examples in Figure 9 to compare the results of CoT and PoT prompting. We
can see that the PoT can significantly shorten the
output sequence length. By leveraging the additional tool, PoT is able to significantly lower the
task difficulty.
|Alpaca-13B Alpaca-13B|CoT - 13.5 CoT + 14.2|
|---|---|
Table 3: Results for CoT and PoT prompting with additional theorem conditions.
**Case Study** We list a few successful and failed
examples generated by GPT-4 in Figure 9 to
do a side-by-side comparison between chainof-thoughts prompting and program-of-thoughts
prompting. In the first example, the question is
regarding ‘orthogonal projection theorem’. As can
be seen, Chain-of-Thoughts prompting requires a
very long paragraph to generate the results. We
prompted GPT-4 a few times with the same input and the results seem unstable. Sometimes the
model will make tiny computation mistakes in the
middle to derive the wrong answer. In contrast,
the program solution is brief and concise, which
leads to rather stable outputs. For the second ex
-----
Figure 9: Case Study of GPT-4 generation with both prompting strategies.
**6** **Conclusion**
In this paper, we propose the first theorem-driven
science question-answering dataset and evaluate
different LLMs on it. Though GPT-4 can achieve
strong performance on our new dataset, the existing
open-source LLMs are still struggling to achieve
reasonable performance. We conjecture it is essen
tial to leverage more science-related pre-training
or fine-tuning to close the gap. On the hand, we
found that the multimodal science questions are
still extremely challenging for the existing visual
LLMs. We believe more specialized visual encoding models are needed to better represent diagrams
in these science questions.
-----
**Limitations**
In this work, we explore the possibilities to utilize
different large language models to solve challenging theorem-driven questions. There are still some
limitations: (1) our answer extraction is still not
perfect. There are some cases where our answer
extractor is not able to locate the answer. Thus the
final accuracy is still an approximate lower bound.
(2) in our dataset collection, we specifically avoid
the hard-to-evaluate cases where the answer is a
formula, figure, or a matrix. Our choice of the
questions can be biased in terms of evaluating the
overall ability. (3) in the multimodal questions in
TheoremQA, we have investigated different existing models but none of them succeed in achieving
reasonable performance.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math
word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten
Bosma, Henryk Michalewski, David Dohan, Ellen
Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021.
Program synthesis with large language models. arXiv
_preprint arXiv:2108.07732._
Yuntao Bai, Saurav Kadavath, Sandipan Kundu,
Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini,
Cameron McKinnon, et al. 2022. Constitutional
ai: Harmlessness from ai feedback. arXiv preprint
_arXiv:2212.08073._
Stella Biderman, Hailey Schoelkopf, Quentin Anthony,
Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai
Prashanth, Edward Raff, et al. 2023. Pythia: A suite
for analyzing large language models across training
and scaling. arXiv preprint arXiv:2304.01373.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint
_arXiv:2303.12712._
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin,
[Chongyu Chen, and Xiaodan Liang. 2022a. UniGeo:](https://aclanthology.org/2022.emnlp-main.218)
[Unifying geometry logical reasoning via reformu-](https://aclanthology.org/2022.emnlp-main.218)
[lating mathematical expression. In Proceedings of](https://aclanthology.org/2022.emnlp-main.218)
_the 2022 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 3313–3323, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang,
Lingbo Liu, Eric Xing, and Liang Lin. 2021a. Geoqa:
A geometric question answering benchmark towards
multimodal numerical reasoning. In Findings of
_the Association for Computational Linguistics: ACL-_
_IJCNLP 2021, pages 513–523._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde de Oliveira Pinto, Jared Kaplan,
Harri Edwards, Yuri Burda, Nicholas Joseph, Greg
Brockman, et al. 2021b. Evaluating large language models trained on code. _arXiv preprint_
_arXiv:2107.03374._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022b. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena
Shah, Iana Borova, Dylan Langdon, Reema Moussa,
Matt Beane, Ting-Hao Huang, Bryan R Routledge,
et al. 2021c. Finqa: A dataset of numerical reasoning
over financial data. In Proceedings of the 2021 Con_ference on Empirical Methods in Natural Language_
_Processing, pages 3697–3711._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Andrew Drozdov, Nathanael Schärli, Ekin Akyürek,
Nathan Scales, Xinying Song, Xinyun Chen, Olivier
Bousquet, and Denny Zhou. 2022. Compositional
semantic parsing with large language models. arXiv
_preprint arXiv:2209.15003._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language
models. arXiv preprint arXiv:2211.10435.
-----
Google. 2023. Palm 2 technical report.
_https://ai.google/static/documents/palm2techreport.pdf._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. Conference
_on Neural Information Processing Systems._
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks,
Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv
_preprint arXiv:2203.15556._
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
Etzioni, and Nate Kushman. 2014. Learning to solve
arithmetic word problems with verb categorization.
In EMNLP, pages 523–533.
[Wolfram Research, Inc. Mathematica, Version 13.2.](https://www.wolfram.com/mathematica)
Champaign, IL, 2022.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances
_in Neural Information Processing Systems._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In Proceedings of
_the 2016 conference of the north american chapter of_
_the association for computational linguistics: human_
_language technologies, pages 1152–1157._
Junnan Li, Dongxu Li, Caiming Xiong, and Steven
Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding
and generation. In International Conference on Ma_chine Learning, pages 12888–12900. PMLR._
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas
Muennighoff, Denis Kocetkov, Chenghao Mou, Marc
Marone, Christopher Akiki, Jia Li, Jenny Chim, et al.
2023. Starcoder: may the source be with you! arXiv
_preprint arXiv:2305.06161._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167._
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae
Lee. 2023. Visual instruction tuning. arXiv preprint
_arXiv:2304.08485._
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
Huang, Xiaodan Liang, and Song-Chun Zhu. 2021a.
[Inter-GPS: Interpretable geometry problem solving](https://doi.org/10.18653/v1/2021.acl-long.528)
[with formal language and symbolic reasoning. In](https://doi.org/10.18653/v1/2021.acl-long.528)
_Proceedings of the 59th Annual Meeting of the Asso-_
_ciation for Computational Linguistics and the 11th_
_International Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pages 6774–_
6786, Online. Association for Computational Linguistics.
Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. 2022. Learn to explain:
Multimodal reasoning via thought chains for science
question answering. Advances in Neural Information
_Processing Systems, 35:2507–2521._
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, KaiWei Chang, Ying Nian Wu, Song-Chun Zhu, and
Jianfeng Gao. 2023a. Chameleon: Plug-and-play
compositional reasoning with large language models.
_arXiv preprint arXiv:2304.09842._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu,
Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark,
and Ashwin Kalyan. 2023b. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In International Conference on
_Learning Representations (ICLR)._
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao,
Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun
Zhu. 2021b. Iconqa: A new benchmark for abstract
diagram understanding and visual language reasoning. In The 35th Conference on Neural Information
_Processing Systems (NeurIPS) Track on Datasets and_
_Benchmarks._
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and KaiWei Chang. 2023c. A survey of deep learning for
mathematical reasoning. In The 61st Annual Meet_ing of the Association for Computational Linguistics_
_(ACL)._
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing
english math word problem solvers. In Proceedings
_of the 58th Annual Meeting of the Association for_
_Computational Linguistics, pages 975–984._
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard
Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark,
and Ashwin Kalyan. 2022. Lila: A unified benchmark for mathematical reasoning. In Proceedings
_of the 2022 Conference on Empirical Methods in_
_Natural Language Processing (EMNLP)._
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan
Wang, Yingbo Zhou, Silvio Savarese, and Caiming
Xiong. 2022. Codegen: An open large language
model for code with multi-turn program synthesis.
_arXiv preprint arXiv:2203.13474._
-----
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language
models. In Deep Learning for Code Workshop.
OpenAI. 2023. Gpt-4 technical report. arXiv preprint
_arXiv:2303.08774._
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _Advances in Neural_
_Information Processing Systems, 35:27730–27744._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021a. Are nlp models really able to solve simple
math word problems? In Proceedings of the 2021
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021b. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,
et al. 2021. Learning transferable visual models from
natural language supervision. In International confer_ence on machine learning, pages 8748–8763. PMLR._
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie
Millican, Jordan Hoffmann, Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models:
Methods, analysis & insights from training gopher.
_arXiv preprint arXiv:2112.11446._
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752._
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. arXiv preprint
_arXiv:1904.01557._
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
[Etzioni, and Clint Malcolm. 2015. Solving geome-](https://doi.org/10.18653/v1/D15-1171)
[try problems: Combining text and diagram interpre-](https://doi.org/10.18653/v1/D15-1171)
[tation. In Proceedings of the 2015 Conference on](https://doi.org/10.18653/v1/D15-1171)
_Empirical Methods in Natural Language Processing,_
pages 1466–1476, Lisbon, Portugal. Association for
Computational Linguistics.
Alex Tamkin, Kunal Handa, Avash Shrestha, and Noah
Goodman. 2022. Task ambiguity in humans and
language models. arXiv preprint arXiv:2212.10711.
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas
Scialom, Anthony Hartshorn, Elvis Saravia, Andrew
Poulton, Viktor Kerkez, and Robert Stojnic. 2022.
Galactica: A large language model for science. arXiv
_preprint arXiv:2211.09085._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro,
Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Shyam Upadhyay and Ming-Wei Chang. 2015. Draw:
A challenging and diverse algebra word problem set.
Technical report, Citeseer.
[Shyam Upadhyay and Ming-Wei Chang. 2017. An-](https://aclanthology.org/E17-1047)
[notating derivations: A new evaluation strategy and](https://aclanthology.org/E17-1047)
[dataset for algebra word problems. In Proceedings](https://aclanthology.org/E17-1047)
_of the 15th Conference of the European Chapter of_
_the Association for Computational Linguistics: Vol-_
_ume 1, Long Papers, pages 494–504, Valencia, Spain._
Association for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 conference on empirical meth-_
_ods in natural language processing, pages 845–854._
Yue Wang, Hung Le, Akhilesh Deepak Gotmare,
Nghi DQ Bui, Junnan Li, and Steven CH Hoi. 2023.
Codet5+: Open code large language models for
code understanding and generation. arXiv preprint
_arXiv:2305.07922._
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022a. Emergent abilities of large language models.
_Transactions on Machine Learning Research._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou,
et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in
_Neural Information Processing Systems._
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b:
An open bilingual pre-trained model. arXiv preprint
_arXiv:2210.02414._
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
_arXiv preprint arXiv:2205.01068._
-----
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
Li, and Yu Li. 2023. Progressive-hint prompting
improves reasoning in large language models. arXiv
_preprint arXiv:2304.09797._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao
Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and
Tat-Seng Chua. 2021. Tat-qa: A question answering
benchmark on a hybrid of tabular and textual content
in finance. In Proceedings of the 59th Annual Meet_ing of the Association for Computational Linguistics_
_and the 11th International Joint Conference on Natu-_
_ral Language Processing (Volume 1: Long Papers),_
pages 3277–3287.
-----
| [
"Pan, Lu",
"Xueguang, Ma",
"Ming, Yin",
"Tony, Xia",
"Xinyi, Wang",
"Max, Ku",
"Wenhu, Chen",
"Yixin, Wan",
"Jianyu, Xu"
] | 2023-12-05T00:00:00 | EMNLP 2023 Main | true | 64 | 9 | null | http://arxiv.org/abs/2305.12524 | https://arxiv.org/abs/2305.12524 | https://www.semanticscholar.org/paper/a52dd1e900200e0733eea927edc7d6c27aeba187 |
A Knowledge-Aware Sequence-to-Tree Network for Math Word Problem Solving | With the advancements in natural language processing tasks, math word problem solving has received increasing attention. Previous methods have achieved promising results but ignore background common-sense knowledge not directly provided by the problem. In addition, during generation, they focus on local features while neglecting global information. To incorporate external knowledge and global expression information, we propose a novel knowledge-aware sequence-to-tree (KA-S2T) network in which the entities in the problem sequences and their categories are modeled as an entity graph. Based on this entity graph, a graph attention network is used to capture knowledge-aware problem representations. Further, we use a tree-structured decoder with a state aggregation mechanism to capture the long-distance dependency and global expression information. Experimental results on the Math23K dataset revealed that the KA-S2T model can achieve better performance than previously reported best results. | Experimental results revealed that the KA-S2T model can achieve better performance than previously reported best results and use a tree-structured decoder with a state aggregation mechanism to capture the long-distance dependency and global expression information. | # A Knowledge-Aware Sequence-to-Tree Network for Math Word Problem Solving
**Qinzhuo Wu, Qi Zhang[∗], Jinlan Fu, Xuanjing Huang**
Shanghai Key Laboratory of Intelligent Information Processing,
School of Computer Science, Fudan University, Shanghai, China
(qzwu17,qz,fujl16,xjhuang)@fudan.edu.cn
With the advancements in natural language
processing tasks, math word problem solving
has received increasing attention. Previous
methods have achieved promising results but
ignore background common-sense knowledge
not directly provided by the problem. In
addition, during generation, they focus on
local features while neglecting global information. To incorporate external knowledge and
global expression information, we propose a
novel knowledge-aware sequence-to-tree (KAS2T) network in which the entities in the
problem sequences and their categories are
modeled as an entity graph. Based on this
entity graph, a graph attention network is used
to capture knowledge-aware problem representations. Further, we use a tree-structured
decoder with a state aggregation mechanism
to capture the long-distance dependency and
global expression information. Experimental
results on the Math23K dataset revealed that
the KA-S2T model can achieve better performance than previously reported best results.
**1** **Introduction**
Math word problem solving has attracted increasing attention, and many math word problem solving
systems have been developed. Early statistical learning methods (Feigenbaum et al., 1963;
Fletcher, 1985; Bakman, 2007; Roy and Roth,
2016) extracted templates or features from problems and generated corresponding math expressions based on these templates or features. These
methods require a large number of manually formulated features or can only be applied to small
application problems in small areas. In recent years,
many methods (Wang et al., 2017, 2018b; Xie and
Sun, 2019) have been developed that apply neural
networks to analyze math word problems, with
_∗_ Corresponding author.
|/|-|50|*|+|2|3|6|4|
|---|---|---|---|---|---|---|---|---|
|2|3|
|---|---|
**Problem: Alan bought 2 green apples, 3 red apples, and 4**
oranges for a total of $50. Each apple weighed 0.4 kg and is
worth $6. Each orange weighs half as much as an apple.
How much does each orange cost?
**Knowledge:**
food fruit color
apple orange fruit apple orange green red
**Expression tree: Expression sequence:**
/ 1 step
8 step
- 4
/ - 50 - [+][ 2] 3 6 4
50 - 0.4 apple weight
+ ? or
2 3 6 apple price
Figure 1: An example of a math word problem.
With external knowledge, a model can capture the
relationships between the entities in the problem. With
the global information of a generated expression tree, a
model can capture information between long-distance
nodes.
promising results. These methods use end-to-end
models to directly generate the corresponding math
expressions from the problem text.
Although previous methods have achieved
promising results, several problems remain that
need to be addressed: 1) Background knowledge
and common sense should be incorporated. For
example, as shown in Figure 1, both apples and
oranges are fruit. Humans are naturally aware of
this common-sense information, but it is difficult
for the model to learn this information from
the problem text alone. 2) When generating
expressions, sequence-to-sequence (Seq2Seq)
methods tend to focus on local features and ignore
global information. For example, the root node
“/” of the expression tree in Figure 1 is directly
adjacent to its right child “4”, but they are eight
steps apart in the pre-order expression sequence.
Xie and Sun (2019) proposed a sequence-to-tree
(Seq2Tree) method for generating an expression
7137
-----
tree in pre-order based on the parent node and the
left sibling tree of each node. However, global
information is still not being considered in the
generated expression tree.
To overcome these problems, we propose a novel
knowledge-aware sequence-to-tree (KA-S2T)
method for exploring how to better utilize external
knowledge and capture the global expression
information. The proposed model connects
related entities and categories based on external
knowledge bases to capture common-sense
information and obtain better interaction between
words. In addition, we designed a tree-structured
decoder to capture the long-distance dependency
and global expression information. KA-S2T
updates all nodes in the generated expression at
each time step, whereby the node state is updated
by a recursive aggregation of its neighboring nodes.
Through multiple iterations of aggregation, the
model can use global information associated with
the generated expression to generate the next node
and thereby achieve better predictions.
The main contributions of this paper can be
summarized as follows:
_• We incorporate common-sense knowledge_
from external knowledge bases into math
word problem solving tasks.
_• We propose a tree-structured decoder for_
generating math expressions. To incorporate
global information associated with generated
partial expressions, we recursively aggregate
the neighbors of each node in the expression
at each time step.
_• We conducted experiments on the Math23k_
dataset to verify the effectiveness of our
KA-S2T model, and the results show that
our model achieved better performance than
previous methods.
**2** **Models**
In this section, we define the problem and present
our proposed KA-S2T model. As shown in Figure
2, we first use a bidirectional long short-term
memory (LSTM) network to encode the math word
problem sequences (Section 2.2). Then, we construct an entity graph based on external knowledge
to model the relationships between different entities
and categories in the problem (Section 2.3), and
use a two-layer graph attention network (GAT) to
obtain knowledge-aware problem representations
(Section 2.4). Finally, we used a tree-structured
decoder with a state aggregation mechanism to
generate pre-order traversal math expression trees
(Section 2.5).
**2.1** **Problem Definition**
Consider the input sequence of a math word
problem X = (x1, x2, . . ., xn). Our goal is to
train a model that can generate its math expression
Y = (y1, y2, . . ., yn′). The task is to estimate
a probability distribution in whichn′ **P(Y|X) =**
_t=1_ **[P][(][y][t][|][y][<t][,][ X)][. Here, words generated in the]**
math expression Y are either drawn from the input
Q
math word problem X, or a vocabulary V. Y, the
pre-order sequence of a math expression tree, is
executed to produce the answer to the problem X.
**2.2** **Bidirectional LSTM Encoder**
The encoder transforms the words in math word
problems into vector representations. We used a
bidirectional LSTM (BiLSTM) (Hochreiter and
Schmidhuber, 1997) network to encode each word
_xi into a vector representation h[seq]i_ :
**h[seq]i** = BiLSTM(e(xi), h[seq]i−1[)][ ∈] [R][n][×][2][d][,] (1)
where n and d are the size of the input sequence
X and the BiLSTM hidden state, respectively.
**e(xi) is the word embedding for word xi in the**
problem. _−h−[seq]i→_ and _←h[seq]i−−_ are the BiLSTM hidden
states generated by reading X in the forward and
backward order, respectively. We define the final
vector representation h[seq]i as the concatenation
of the forward and backward hidden states, i.e.,
**h[seq]i** = [−h−[seq]i→ : _←h[seq]i−−]._
**2.3** **Constructing Entity Graphs with**
**External Knowledge**
Each math word problem corresponds to an entity
graph G = (N, A), where N is a node list and A
is the adjacent matrix of these nodes. The graphs
are retrieved from external knowledge bases, with
the words in the math word problem as nodes. If
multiple words in X belong to the same category c
in the knowledge base, we set category c as a node
in the graph G and connect these words with their
categories. For example, both “green” and “red”
belong to the category “color”.
To incorporate knowledge about phrases, if
several phrases in X are combined with words
belonging to the same category and the same words
7138
-----
|know 2|h3know|
|---|---|
X1 X2 Xn
Embedding
…
LSTM LSTM … LSTM
Problem representations h1seq hseq2 hseqn Graph Attention Network cmknow
Graph Attention Network GAT GAT GAT h1know hknow2 hknow3 …… hknowi hknowi+1 hknowi+2 …… hknown−1 hknown
Knowledge graph vectors h1know hknow2 hknown h1seq hseq2 hseq3 hseqi hseqi+1 hseqi+2 hseqn−1 hseqn
Concatenate
Knowledge-aware problem representations h1ka hka2 hkan c1know c2know c3know
Tree-structured decoder h[ka] h[num] h[ka] h[num]
Timestep n Timestep n+1
State aggregate mechanism
s1 s1 r1
s2 st s2 st
Aggregate Aggregate
s3 st−1 s3 st−1 st,l r2 rt
Partial tree states rt Partial tree states rt,l
r3 rt−1 rt,l
ct ct,l
Expression tree y1 y1
y2 yt y2 yt
y3 yt−1 y3 yt−1 yt,l
Figure 2: Main structure of our proposed KA-S2T model. At the top side of this figure shows an encoder consisting
of a bidirectional LSTM network and a knowledge-aware graph attention network (see Section 2.2 and Section 2.4
for more details). The red line hnum indicates the representation of numbers in the problem, which are identified
as {N 1, N 2, N 3, . . .} according to their positions in the problem. Instead of generating these numbers directly
from the output vocabulary, KA-S2T generates position identifiers that copy the numbers from the problem. The
bottom of the figure shows a tree-structured decoder. At each time step, this decoder generates a current node state
based on its parent node. Then, the decoder uses a state aggregation mechanism to obtain the context state rt for
each node in the partial expression tree, and generates context vector ct based on encoder’s hidden states. See
Section 2.5 for more details.
color+apple has a node list N = _x1, x2, . . ., xn, c1, . . ., cm_
_{_ _}_
**2.4** **Knowledge-aware Problem**
color+apple
N1 green apples, N2 red apples and N3 oranges
color fruit food
Figure 3: An example of an entity graph
in order, then we build a phrase category c[′] for
these phrases and set c[′] as a node. As shown
in Figure 3, “green apples” and “red apples” are
combined in the same category words “green, red”
and by the same word “apple”. We build a phrase
category “color+apple” for these two phrases. Then
we connect this phrase category c[′] to the first and
last words of its related phrase.
With n words from the problem and m categories from the knowledge base, an entity graph
For an entity graph, we initialize category c with
the average of the vector representations of words
adjacent to c. For example, for the category
_c1 “color” adjacent to the word x2 “green” and_
the word x6 “red”, we initialize c1 as c1 =
avg(h[seq]2 _, h[seq]6_ ). In this way, for the nodes in
this entity graph, we have node initial vectors
**h[seq][′]** = **h[seq]1** _, . . ., h[seq]n_ _[,][ c]1[, . . .,][ c]m[}][. Then, we]_
_{_
use a two-layer GAT (Velickoviˇ c et al.´, 2017) to
obtain the hidden vectors h[know][′] of these nodes.
The GAT functions are given as follows:
**h[know]i** _[′]_ = _σ(_ _αijWkh[seq]j_ _[′]),_ (2)
_k=1||,...,K_
Aij=1
X
7139
-----
The decoder updates its state as follows:
**st,l = σ(Wleft[st : ct : rt : (e(yt)]),**
(5)
**st,r = σ(Wright[st : ct : rt : (e(yt)]),**
where Wleft and Wright are the weight matrices
and σ is a sigmoid function. e(yt) is the embedding
of t-th generated word yt. st,l and st,r is the left
child state and right child state of st, respectively.
For the root node y1, it initializes s1 with the max
pooling of h[ka]. ct is the context vector for the
hidden states of the encoder. rt is the context state
for the partial expression generated at previous time
steps.
Finally, the tree-structured decoder generates
a word from vocabulary V or copies a number
from the math word problem X with the following
distributions:
**Pgen[(][y]t[)=softmax(W]g[[][s]t** [:] **[c]t** [:] **[r]t[])]**
**Pcopy[(][y]t[)=softmax(W]p[[][s]t** [:] **[c]t** [:] **[r]t** [:] **[h][num][])]**
exp(LRelu(ws[T][[W][h][h][seq]i _[′]_ Whh[seq]j _[′]]))_
_αij =_ _||_
exp(LRelu(ws[T][[W][h][h][seq]i _[′]_ Whh[seq]j _[′]]))_
Aij=1 _||_
P
where ws[T] [,][ W][h][,][ W][k] [are trainable weight vector and]
matrices. || is concatenation operation and LRelu
is a LeakyRelu activation function (Xu et al., 2015).
_K is the number of heads in GAT and Aij = 1_
means there are edge between the i-th and j-th
node.
To represent n words in problem X, we simply
select the first n vectors of h[know][′] _∈_ **R[(][n][+][m][)][×][2][d]**
as the knowledge graph vectors.
**h[know]** = h[know][′][0 : n]. (3)
After concatenating the word vector h[seq]
and the knowledge graph vectors h[know], the
knowledge-aware problem representation h[ka] is
obtained and fed to the tree-structured decoder.
**h[ka]** = [h[seq] : h[know]]. (4)
If there are nnum numbers in X, we may want
to copy them directly from problem X rather than
generating them from the vocabulary. We extract
**h[num]** _∈_ R[n][num][×][2][d] from h[ka] based on the position
of these numbers. h[num]i is the representation of
the i-th number in the problem. We use h[num] to
compute the distribution score of these numbers in
Equation 6.
**2.5** **Tree-structured Decoder**
In the KA-S2T tree-structured decoder, we generate pre-order expressions Y from top to bottom. At
time step t, if the yt we generate is an operator, this
means it is an internal node and its left child yt,l
and right child yt,r must still be generated. If yt is
a number, it is a leaf node. Once the children of
all the internal nodes are generated, then the output
expression sequence Y can be transformed into a
complete executable expression tree.
The tree-structured decoder has three roles: 1)
it attentively reads the knowledge-aware problem
representation to obtain a context vector, and uses
this vector to update the decoder’s state; 2) it
recursively aggregates the neighbors of each node
in the generated partial expression to capture global
information; and 3) it adaptively chooses a word
from the vocabulary or copies a number from the
math word problem for generation.
_βt =_ _σ(Wz[st :_ **ct :** **rt :** **h[num]]),**
(6)
(1 − _βt)Pgen[(][y]t[)]_
_βt Pcopy[(][y]t[)][,]_
**P(yt** _y<t, X) =_
_|_
where Wg, Wp are weight matrices. βt ∈ [0, 1] is
a gate value to determine whether generate a word
from vocabulary or copy a number from math word
problem. y<t represents the words generated at earlier timesteps. The final distribution P(yt _y<t, X)_
_|_
is a concatenation of generate distribution Pgen(yt)
and copy distribution Pcopy(yt).
**Attention. We use attention mechanism (Bah-**
danau et al., 2014) to compute the context vector
**ct. Given the decoder state st and the expression**
context state rt, it first attends on the encoder’s
problem representation h[ka] to obtain ct, which is
defined as below:
exp(Wetanh(Wmh[ka]i [+W][s][s][t] [+W][r][r][t][))]
_αti =_ _n_
exp(Wetanh(Wmh[ka]j [+W][s][s][t] [+W][r][r][t][))]
_j=1_
P
_αtih[ka]i_ _[,]_
_i=1_
X
**ct =**
(7)
where We, Wm and Ws are the weight matrices. αti denotes the attention distribution on the
knowledge-aware problem representations h[ka].
**State Aggregation Mechanism: To incorpo-**
rate the global information associated with the
7140
-----
previously generated expression tree, the node state
is recursively aggregated with its neighbor nodes in
the expression tree at each time step. At time step t,
all the generated nodes (rt)[0] = **s1,s2,. . ., st** are
_{_ _}_
aggregated with a two-layer graph convolutional
network (GCN) (Kipf and Welling, 2016). The
aggregation functions are as follows:
n
Dii = j=1 [A]ij[exp][,]
(8)
(rt)[γ][+1]X= σ(D[−][1]A[exp](rt)[γ]Wr),
originally associated with an expression and answer. All problems in this dataset are described in
Chinese and can be solved by a linear expression
that contains only one unknown variable. We
randomly split the dataset into a training set (80%)
and testing set (20%).
Furthermore, we replaced all of the numbers in
the problems with position tokens (e.g., N1, N2,
_N3) in the preprocessing stage. After the model_
generated the expression, we replaced the position
tokens in the expression with the numbers from
the original problem, and executed this replaced
expression to produce the answer.
We used Cilin (Mei, 1985) and Hownet (Dong
et al., 2006) as our External Knowledge Source.
Cilin is a Chinese synonym dictionary, where each
word belongs to several different word groups.
Hownet is a knowledge graph of Chinese words and
concepts, where each word is labeled by several
semantic units. We used these word groups and
semantic units as our categories, and we set the
max length of phrases in the phrase category to 3.
We obtained 8,883 word-category pairs and 10,864
phrase category pairs.
**3.2** **Implementation Details**
Our code was implemented with Pytorch [1]. We select the 4,000 words that appeared most frequently
in the training data as the input vocabulary, and
replaced the rest of the words in the problems with
a token UNK. We set the dimension of hidden
vectors d = 256. Both GCN and GAT have two
layers. The number of heads K in GAT is 8.
Model optimization was performed using an Adam
optimizer (Kingma and Ba, 2014) with the learning
rate set to 0.001. For the hyper-parameter setting,
we set the dropout (Srivastava et al., 2014) rate to
0.5 and the batch size to 64. During training, it took
80 epochs to train the model. During decoding, the
beam size was set to 5.
**3.3** **Baselines**
To evaluate the performance of the proposed
method, we compare it with the following baselines:
_• DNS (Wang et al., 2017): This method has a_
two-layer GRU (Chung et al., 2014) encoder
and a two-layer LSTM decoder. In addition,
it uses a retrieval model to detect the problem
that is most similar to the query problem from
1https://pytorch.org/
where Wr is weight matrix. A[exp] is the adjacent
matrix of the generated partial expression. If yi
is the child or parent of yj or i = j, A[exp]ij = 1. D
means the number of adjacent nodes of each node
plus 1. We use D[−][1]A to normalize A. In this study,
after two hops of GCN computation, we obtained
the final context state rt for each node in the partial
expression.
**2.6** **Training**
Given the training data D = (X, Y), the objective
function is to minimize the negative log likelihood:
ND
∆(D, θ) = (9)
_−_ i=1 [log][ P][(Y][|][X)][.]
During training, for each question–answer pairX
(X, Y), we used the pre-order traversal of Y as
the ground truth. The conditional probability is
**P(Y|X):**
_n[′]_
[ P(yt _y<t, X)_
_|_
_t=1_
Y
log P(Y|X) =
(10)
+P(yt,l _y<t, X)+P(yt,r_ _y<t, X)]_
_|_ _|_
Here, P(yt,l _y<t, X) is an additional regular term_
_|_
about the child loss. At time step t, we only
used the left child state st,l and right child state
**st,r to calculate the respective distribution scores**
**P(yt,l** _y<t, X) and P(yt,r_ _y<t, X), as shown in_
_|_ _|_
Equation 6. We expect that the distribution of
node yt is close to the ground truth, and that the
distributions of its left and right children yt,l, yt,r
are also close to the ground truth.
**3** **Experiment**
**3.1** **Dataset**
We evaluated the proposed method on a large-scale
dataset called Math23K, which was gathered by
Wang et al. (2017) and contains 23,161 elementary
school math word problems. Each problem was
7141
-----
|Models|Accuracy|
|---|---|
|DNS (Wang et al., 2017) DNS+Retrieval (Wang et al., 2017)|58.1% 64.7%|
|Bi-LSTM (Wang et al., 2018a) ConvS2S (Wang et al., 2018a) Transformer (Wang et al., 2018a) Ensemble (Wang et al., 2018a)|66.7% 64.2% 62.3% 68.4%|
|RecursiveNN (Wang et al., 2019)|68.7%|
|Tree-Decoder (Liu et al., 2019)|69.0%|
|GTS (Xie and Sun, 2019)|74.3%|
|KA-S2T (Our)|76.3%|
Table 1: Answer accuracy of our model and other stateof-the-art models on Math23K dataset.
the training set, and uses its expression as a
template for the query problem. It combines
the retrieval model with the DNS model to
form a hybrid model “DNS+Retrieval”.
_• Ensemble (Wang et al., 2018a): This ensem-_
ble model combines three types of Seq2Seq
models: a bidirectional LSTM network (Wu
et al., 2016), a convolutional Seq2Seq model
(Gehring et al., 2017), and a transformer
(Vaswani et al., 2017).
_• RecursiveNN (Wang et al., 2019): A recur-_
sive neural network model first predicts the
tree structure template using a Seq2Seq model,
and then infers the expression based on the
features extracted by a bidirectional LSTM
and self-attention mechanism.
_• Tree-Decoder (Liu et al., 2019): A Seq2Tree_
generative model with an auxiliary stack and
a tree-structured decoder that generates the
abstract syntax tree of the equation in a topdown manner. We call this method “Tree**Decoder”.**
_• GTS (Xie and Sun, 2019): A tree structured_
neural model that generates an expression
tree in a goal-driven manner based on the
parent node and the left sibling tree of each
node. It uses top-down goal decomposition
and bottom-up subtree embedding to directly
predict the expression tree.
**3.4** **Result Analysis**
To assess the overall performance of our KA-S2T
model, we compared it with the performances
Models Accuracy
GTS (Xie and Sun, 2019) 74.3%
KA-S2T w/o knowledge 75.5%
KA-S2T w/o multiple category 75.7%
KA-S2T w/o phrase category 76.0%
**KA-S2T** 76.3%
Table 2: Ablation study on reducing the amount of
external knowledge incorporated into the model. “w/o
phrase category” indicates the removal of knowledge
about phrase categories from the KA-S2T model. “w/o
multiple category” indicates that each entity in the
knowledge base is connected to only one category that
is most relevant to it.
of other state-of-the-art models on the Math23K
dataset. Table 1 shows the accuracy of the results
obtained by executing the generated expressions
of these models, from which we can conclude the
following:
1) The tree-structured decoder can improve the
performance of most baselines. For example,
the Seq2Tree baseline Tree-Decoder and GTS
performed better than the best-performing Seq2Seq
baseline RecursiveNN. This result demonstrates the
effect of the tree-structured decoder.
2) The deep-learning models DNS and Ensemble
did not perform as well as the RecursiveNN with
attention mechanism. Tree-Decoder and GTS both
have an attention structure, which proves that an
attention mechanism can effectively capture the
key features of a problem.
3) GTS performed the best of all the baselines,
even better than the Tree-Decoder, which also has a
Seq2Tree structure. The reason for this may be that
GTS directly uses the states of the parent node and
left sibling node to generate the current node. TreeDecoder still sequentially generates expressions
based on the last node state, and takes the sibling
node and parent node states as additional features.
4) Finally, compared with GTS, the accuracy
of KA-S2T was 2.0% better. We attribute the
superior performance of KA-S2T to two properties:
KA-S2T incorporates external knowledge, which
can better capture the relationship between words.
KA-S2T recursively aggregates the neighbors of
each node in the partial expression, and thus better
captures the global information associated with the
currently generated expression tree.
**3.5** **Ablation Study**
**Effect of external knowledge: Table 2 shows the**
7142
-----
Models Accuracy
90
80
50
40
RecursiveNN (Wang et al., 2019) 68.7%
GTS (Xie and Sun, 2019) 74.3%
KA-S2T w/o child loss & state agg 72.9%
KA-S2T w/o state agg 73.5%
KA-S2T w/o child loss 75.2%
**KA-S2T** 75.5%
Table 3: Ablation study of different decoder structures.
“w/o state agg” indicates that the model did not use
the context state rt to incorporate global expression
information at each time step. “w/o child loss”
indicates that the loss function did not use the
additional regular terms defined in Equation 10 to
introduce child loss. For a fair comparison, no external
knowledge was used in KA-S2T and its variants.
70
60
30
20
50
40
10
30
|Col1|Col2|Col3|Col4|Col5|Col6|KA-S2T GTS [Xie and Sun, 2019] Tree-Decoder [Liu et al., 2019]|
|---|---|---|---|---|---|---|
|||||RecursiveNN [Wang et al., 2019] Proportion||RecursiveNN [Wang et al., 2019] Proportion|
||||||||
||||||||
||||||||
KA-S2T
GTS [Xie and Sun, 2019]
Tree-Decoder [Liu et al., 2019]
RecursiveNN [Wang et al., 2019]
Proportion
<=3 5 7 9 >=11
Equation length
Figure 4: Model performance on expression trees of
different lengths
cilitates the capture of long-distance dependencies.
2) The “KA-S2T w/o child loss” model outperformed the best-performing Seq2Tree baseline
GTS, but the “KA-S2T w/o child loss & state agg”
model did not perform as well as GTS. Compared
with the basic Seq2Tree model, GTS uses bottomup subtree embedding to introduce left sibling tree
information. The state aggregation mechanism
achieves further improvements, not only focusing
on the left sibling subtree, but also incorporating
the global information associated with the entire
generated partial expression. This result proves the
effectiveness of the state aggregation mechanism.
3) The KA-S2T model with child loss is better
than the KS-S2T model without child loss, which
proves that the child loss function obtains better
performance.
**Model performance on expression trees of**
**different lengths:** We compared the KA-S2T
model with the other three best-performing state-ofthe-art methods to investigate the performances of
the models that have expression trees of different
lengths. As shown in Figure 4, KA-S2T outperformed the other three state-of-the-art methods
with respect to expression trees of different lengths,
especially when the length of the expression tree
was between 5 and 9. One possible explanation for
this is that because the expression tree is complex,
to achieve better performance, the model needs
external knowledge and the global information
associated with the expression. However, when
the expression is too complex, the probability of
the model producing correct results is too low,
so the performance gap between the different
models is not as obvious. These results further
demonstrates the beneficial effect of incorporating
results of ablation experiments conducted to reduce
the amount of external knowledge incorporated into
the model. We have the following observations:
1) Without external knowledge, the KA-S2T’s
answer accuracy would be reduced to 75.5%. However, “KA-S2T w/o knowledge” still outperforms
the best-performing baseline GTS, which further
verifies the effectiveness of our tree-structured
decoder. We will analyze the effectiveness of these
tree-structured decoder in the following section.
2) The use of multiple categories and phrase
categories can improve accuracy by 0.2% and
0.5%, respectively. Their combination can provide
further improvement in model performance. These
results show that the external knowledge of the
relationship between entities and categories enables
the model to capture common-sense information
and obtain better interaction between words.
**Effect of KA-S2T tree-structured decoder:**
We designed several ablation experiments to measure the effect of our KA-S2T tree-structured
decoder, the results of which are shown in Table
3. For a fair comparison, we used no external
knowledge in these KA-S2T and variant models.
From the table, we can see that:
1) The “KA-S2T w/o child loss & state agg”
model, which can be regarded as a basic Seq2Tree
model, achieved better accuracy than the bestperforming Seq2Seq baseline RecursiveNN. The
main difference between this model and RecursiveNN is that this model generates the current
node state based on its parent node, and RecursiveNN generates the current node state based on
the last node. This finding once again confirms the
effectiveness of the Seq2Tree structure because fa
7143
-----
**Problem 1: The library purchased N1 different types of books. Among them, there are N2 literary**
books and N3 encyclopedias. The number of science books is N4 more than N5 times the number
of literary books. How many books are there in total?
**GTS: + + * N2 N5 N4 N2** **KA-S2T: + + * N2 N5 N4 + N2 N3**
**Problem 2: A school spent N1 to buy N2 basketballs and N3 footballs. The price of each basketball**
is N4. How much does each basketball cost more than each football?
**GTS: - N4 / - N1 * N2 N4 N2** **KA-S2T: - N4 / - N1 * N2 N4 N3**
Table 4: Two examples of expressions generated by KA-S2T compared with GTS.
external knowledge and using state aggregation
mechanism.
**3.6** **Case Study**
Table 4 shows two examples generated by our KAS2T model for comparison with GTS (Xie and Sun,
2019).
In Problem 1, without external knowledge, GTS
does not know that encyclopedias are books much
like literary books and scientific books, and therefore it generates incorrect results. By incorporating
external knowledge, KA-S2T is able to obtain the
relationship between these three types of books.
In Problem 2, there is a long distance between
“N3” and the first two nodes [-, N4] of the expression tree. GTS does not realize that the current subexpression tree indicates the price of each football
and generates “N2” based on the nearest node “N4”.
Our proposed method can capture long-distance
features and therefore generate correct results.
**4** **Related Work**
Solving math word problems has long been a
challenging task (Fletcher, 1985; Bakman, 2007;
Roy and Roth, 2016) and has attracted the attention
of many researchers. Some methods on math word
problem solving incorporate extra features by manually crafting fine-grained templates or defining
math concepts. Huang et al. (2017) formulated finegrained templates and aligned numbers in math
word problems to those candidate templates. Roy
and Roth (2018) developed declarative rules to
transform math concepts into expressions. These
methods require manually formulated features and
may be difficult to apply to math word problems
in different domains. Recently, many studies have
used deep learning methods to incorporate external
knowledge from the knowledge base into many
NLP tasks, such as dialogue systems (Zhong et al.,
2019) and reading comprehension (Wang and Jiang,
2019; Qiu et al., 2019). These methods extend
knowledge triples into natural language sequences
or build multi-hop inference graphs based on
relationships in the knowledge base, and have
achieved promising results. In this paper, we model
the entities in the problem and their categories as
entity graphs and use graph attention to generate
knowledge-aware problem representations.
Seq2Seq neural networks (Sutskever et al., 2014)
have achieved promising results on math word
problem solving. For instance, Wang et al. (2017)
used a Seq2Seq model to generate math expressions. Wang et al. (2018b) incorporated reinforcement learning into the model to construct a math
expression step by step. Zou and Lu (2019) used a
data-driven approach to semantically parsing text
into math expressions. Recently, tree-structured
decoder was used to further improve the seq2seq
framework. Xie and Sun (2019) propose a seq2tree
model to generate expression tree in a goal-driven
manner based on the parent node and left sibling
tree of each node. Liu et al. (2019) propose a
tree-structured decoding method with an auxiliary
stack that generates the abstract syntax tree of the
equation in a top-down manner. In this paper, we
generated the pre-order math expression tree based
on parent node state of each node and recursively
aggregated neighbors of each node in the partial
expression tree to incorporate global information.
**5** **Conclusion**
In this study, we proposed a novel knowledgeaware sequence-to-tree model that can automatically solve math word problems. We used an entity
graph to incorporate common sense knowledge
from external knowledge bases into the proposed
model. In addition, we proposed a tree-structured
decoder with a state aggregation mechanism for
generating math expressions. Our experimental
results confirmed that our KA-S2T model outperformed other state-of-the-art models.
7144
-----
**Acknowledgments**
The authors wish to thank the anonymous reviewers
for their helpful comments. This work was partially
funded by China National Key R&D Program
(No.2018YFB1005104, 2018YFC0831105,
2017YFB1002104), National Natural Science
Foundation of China (No.61751201, 61976056,
61532011), Shanghai Municipal Science and
Technology Major Project (No.2018SHZDZX01),
Science and Technology Commission of
Shanghai Municipality Grant (No.18DZ1201000,
17JC1420200).
**References**
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
Bengio. 2014. Neural machine translation by jointly
learning to align and translate. _arXiv preprint_
_arXiv:1409.0473._
Yefim Bakman. 2007. Robust understanding of
word problems with extraneous information. arXiv
_preprint math/0701393._
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho,
and Yoshua Bengio. 2014. Empirical evaluation
of gated recurrent neural networks on sequence
modeling. arXiv preprint arXiv:1412.3555.
Zhendong Dong, Qiang Dong, and Changling Hao.
2006. Hownet and the computation of meaning.
Edward A Feigenbaum, Julian Feldman, et al. 1963.
_Computers and thought, volume 7._ McGraw-Hill
New York.
Charles R Fletcher. 1985. Understanding and solving
arithmetic word problems: A computer simulation.
_Behavior Research Methods, Instruments, &_
_Computers, 17(5):565–571._
Jonas Gehring, Michael Auli, David Grangier, Denis
Yarats, and Yann N Dauphin. 2017. Convolutional
sequence to sequence learning. In ICML, pages
1243–1252. JMLR. org.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997.
Long short-term memory. _Neural computation,_
9(8):1735–1780.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian
Yin. 2017. Learning fine-grained expressions to
solve math word problems. In EMNLP, pages 805–
814.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
_arXiv:1412.6980._
Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional
networks. arXiv preprint arXiv:1609.02907.
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke
Kawahara. 2019. Tree-structured decoding for
solving math word problems. In EMNLP-IJCNLP,
pages 2370–2379.
Jiaju Mei. 1985. _Tongyi ci cilin._ Shangai cishu
chubanshe.
Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen
Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, and
Jun Zhao. 2019. Machine reading comprehension
using structural knowledge graph-aware network. In
_EMNLP-IJCNLP, pages 5898–5903._
Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. _arXiv preprint_
_arXiv:1608.01413._
Subhro Roy and Dan Roth. 2018. Mapping to
declarative knowledge for word problem solving.
_Transactions of the Association for Computational_
_Linguistics, 6:159–172._
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky,
Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
Dropout: a simple way to prevent neural networks
from overfitting. The Journal of Machine Learning
_Research, 15(1):1929–1958._
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014.
Sequence to sequence learning with neural networks.
In Advances in neural information processing
_systems, pages 3104–3112._
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is
all you need. In Advances in Neural Information
_Processing Systems, pages 5998–6008._
Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova,
Adriana Romero, Pietro Lio, and Yoshua Bengio.
2017. Graph attention networks. _arXiv preprint_
_arXiv:1710.10903._
Chao Wang and Hui Jiang. 2019. Explicit utilization
of general knowledge in machine reading comprehension. In ACL, pages 2263–2272.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math word
problem to a expression tree. In EMNLP, pages
1064–1069.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b.
Mathdqn: Solving arithmetic word problems via
deep reinforcement learning. In AAAI.
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, and Heng Tao Shen. 2019. Templatebased math word problem solvers with recursive
neural networks. In AAAI.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In
_EMNLP, pages 845–854._
7145
-----
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V
Le, Mohammad Norouzi, Wolfgang Macherey,
Maxim Krikun, Yuan Cao, Qin Gao, Klaus
Macherey, et al. 2016. Google’s neural machine
translation system: Bridging the gap between
human and machine translation. _arXiv preprint_
_arXiv:1609.08144._
Zhipeng Xie and Shichao Sun. 2019. A goaldriven tree-structured neural model for math word
problems. In IJCAI, pages 5299–5305. AAAI Press.
Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li.
2015. Empirical evaluation of rectified activations in convolutional network. _arXiv preprint_
_arXiv:1505.00853._
Peixiang Zhong, Di Wang, and Chunyan Miao.
2019. Knowledge-enriched transformer for emotion
detection in textual conversations. In EMNLP_IJCNLP, pages 165–176._
Yanyan Zou and Wei Lu. 2019. Text2math: End-toend parsing text into math expressions. In EMNLP_IJCNLP, pages 5330–5340._
7146
-----
| [
"Qinzhuo, Wu",
"Yang, Liu",
"Jinlan, Fu",
"Qi, Zhang",
"Bonnie, Webber",
"Xuanjing, Huang",
"Trevor, Cohn",
"Yulan, He"
] | 2020-11-01T00:00:00 | EMNLP 2020 Main | true | 62 | 10 | null | https://aclanthology.org/2020.emnlp-main.579 | null | https://www.semanticscholar.org/paper/24ed85ad966823868c1694a19385d01c6ad71008 |
Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks | We introduce Goat, a fine-tuned LLaMA model that significantly outperforms GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated dataset, Goat achieves state-of-the-art performance on BIG-bench arithmetic sub-task. In particular, the zero-shot Goat-7B matches or even surpasses the accuracy achieved by the few-shot PaLM-540B. Surprisingly, Goat can achieve near-perfect accuracy on large-number addition and subtraction through supervised fine-tuning only, which is almost impossible with previous pretrained language models, such as Bloom, OPT, GPT-NeoX, etc. We attribute Goat's exceptional performance to LLaMA's consistent tokenization of numbers. To tackle more challenging tasks like large-number multiplication and division, we propose an approach that classifies tasks based on their learnability, and subsequently decomposes unlearnable tasks, such as multi-digit multiplication and division, into a series of learnable tasks by leveraging basic arithmetic principles. We thoroughly examine the performance of our model, offering a comprehensive evaluation of the effectiveness of our proposed decomposition steps. Additionally, Goat-7B can be easily trained using LoRA on a 24GB VRAM GPU, facilitating reproducibility for other researchers. We release our model, dataset, and the Python script for dataset generation. | Goat, a fine-tuned LLaMA model that significantly outperforms GPT-4 on a range of arithmetic tasks, is introduced and an approach that classifies tasks based on their learnability, and subsequently decomposes unlearnable tasks into a series of learnable tasks by leveraging basic arithmetic principles is proposed. | ## Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks
**Tiedong Liu**
National University of Singapore
[email protected]
**Bryan Kian Hsiang Low**
National University of Singapore
[email protected]
**Goat-7B**
**3978640188 + 42886272 =**
3978640188 + 42886272 = 4021526460
**4523646 minus 67453156**
4523646 − 67453156 = −62929510
**Calculate 397 x 4429**
397 × 4429
= 4429 × (300 + 90 + 7)
= 4429 × 300 + 4429 × 90 + 4429 × 7
= 1328700 + 398610 + 31003
= 1727310 + 31003
= 1758313
**What is 8914/64?**
8914 − 64 × 100 = 8914 − 6400 = 2514
2514 − 64 × 30 = 2514 − 1920 = 594
594 − 64 × 9 = 594 − 576 = 18
Therefore, 8914 ÷ 64 = 139 R 18
Figure 1: Example of Goat’s response on addition, subtraction, multiplication, and division tasks. Prompts are
marked in bold, followed by Goat-7B’s response.
arithmetic tasks, particularly multiplication and division of large numbers, currently remains far from
optimal, with accuracy levels trending toward zero.
In this paper, we present Goat, a fine-tuned language model that is GOod at Arithmetic Tasks.
Goat achieves state-of-the-art performance in elementary arithmetic, including addition, subtraction,
multiplication, and division of integers. We adopt
an end-to-end supervised instruction-finetuning
paradigm on LLaMA (Touvron et al., 2023), leveraging a synthetically generated dataset containing
around 1 million samples. Unlike previous research
on arithmetic computation (Lee and Kim, 2023;
**Abstract**
We introduce Goat, a fine-tuned LLaMA model
that significantly outperforms GPT-4 on a range
of arithmetic tasks. Fine-tuned on a synthetically generated dataset, Goat achieves state-ofthe-art performance on BIG-bench arithmetic
sub-task. In particular, the zero-shot Goat7B matches or even surpasses the accuracy
achieved by the few-shot PaLM-540B. Surprisingly, Goat can achieve near-perfect accuracy on large-number addition and subtraction through supervised fine-tuning only, which
is almost impossible with previous pretrained
language models, such as Bloom, OPT, GPTNeoX, etc. We attribute Goat’s exceptional
performance to LLaMA’s consistent tokenization of numbers. To tackle more challenging
tasks like large-number multiplication and division, we propose an approach that classifies
tasks based on their learnability, and subsequently decomposes unlearnable tasks, such
as multi-digit multiplication and division, into
a series of learnable tasks by leveraging basic
arithmetic principles. We thoroughly examine the performance of our model, offering a
comprehensive evaluation of the effectiveness
of our proposed decomposition steps. Additionally, Goat-7B can be easily trained using
LoRA on a 24GB VRAM GPU, facilitating reproducibility for other researchers. We release
our model, dataset, and the Python script for
dataset generation.[1]
**1** **Introduction**
Large language models (LLMs) have shown remarkable proficiency across a wide range of natural language processing (NLP) tasks (Brown
et al., 2020; Chowdhery et al., 2022; Thoppilan
et al., 2022). Notably, GPT-4 (OpenAI, 2023)
has achieved state-of-the-art performances in such
tasks. However, it is surprising that such powerful language models still struggle with elementary
arithmetic tasks. The performance of GPT-4 in
[1https://github.com/liutiedong/goat.](https://github.com/liutiedong/goat)
-----
Nogueira et al., 2021; Nye et al., 2021; Qian et al.,
2022; Zhou et al., 2022b), our study demonstrates
that through supervised fine-tuning alone and without applying any special techniques, our model
is capable of generating direct answers for largenumber addition and subtraction with near-perfect
accuracy in a zero-shot setting. We attribute this exceptional arithmetic ability to LLaMA’s consistent
tokenization of numbers and show that this is almost impossible to achieve for previous LLMs such
as Bloom (Scao et al., 2022), OPT (Zhang et al.,
2022), GPT-NeoX (Black et al., 2022), Pythia (Biderman et al., 2023), etc.
However, the model encounters significant difficulties when generating direct answers for arithmetic tasks like large-number multiplication and division. To overcome this challenge, we propose an
approach that categorizes various arithmetic tasks
into learnable and unlearnable tasks, subsequently
decomposing the unlearnable tasks, such as multidigit multiplication and division, into a series of
learnable tasks by leveraging basic arithmetic principles. Our approach ensures that the intermediate
supervision which facilitates the model’s learning
is also easily understandable and interpretable by
humans. We fine-tune our model to generate the
proposed CoT before generating the final answer,
similar to sketchpad (Nye et al., 2021). Our method
outperforms GPT-4’s long multiplication and long
division methods by a large margin. We assess
the performance of our model using BIG-bench
(Srivastava et al., 2022) arithmetic sub-task, and
provide a comprehensive evaluation of the effectiveness of our proposed method. Our findings
suggest that the model can learn the pattern and
generalize to unseen data instead of purely memorizing the computation. Additionally, Goat-7B
can be conveniently trained using Low-Rank Adaptation (LoRA) (Hu et al., 2021) technique on a
24GB VRAM GPU, making it easily reproducible
for other researchers.
To summarize, our contributions include:
- Our model achieves state-of-the-art performance on various elementary arithmetic tasks,
including addition, subtraction, multiplication,
and division of positive integers (Section 4).
We show that an open-sourced model finetuned on a synthetically generated dataset has
the potential to achieve even higher accuracy
on arithmetic tasks compared to GPT-4.
- To the best of our knowledge, we are the first
to demonstrate the feasibility that supervised
fine-tuning alone can enable LLMs to generate direct answers for certain elementary arithmetic tasks, such as large-number addition
and subtraction, without applying any special
techniques (Section 3.3). Previously effective chain-of-thought (CoT) methods, such
as those used for addition in sketchpad (Nye
et al., 2021) and LM Tutor (Qian et al., 2022),
are no longer necessary. The impressive performance is mainly attributed to LLaMA’s
consistent tokenization of numbers.
- To solve large-number multiplication and division, we propose a novel decomposition
method based on the learnability of the task,
leveraging basic arithmetic principles to ensure human interpretability (Section 3.4).
- We systematically investigate the proposed
decomposition method and demonstrate its
effectiveness (Section 5). We conduct thorough experiments on the decomposition steps
in a fully synthetic environment by mitigating many hard-to-control aspects of natural
language. Our experimental setup offers an
ideal platform to study the impact of CoT and
intermediate supervision.
- Our end-to-end instruction tuning pipeline can
be easily integrated into existing instructiontuned language models (Chiang et al., 2023;
Taori et al., 2023) and potentially enhance
their mathematical reasoning for math word
problems. We release the model, dataset, and
script for generating the dataset.
**2** **Related Work**
**2.1** **Instruction Tuning**
Instruction tuning (Chung et al., 2022; Ouyang
et al., 2022; Sanh et al., 2021) is a technique used
to align pretrained language models with human instructions. It enables targeted customization of
LLMs to specific tasks, enhancing their ability
to generate more accurate and contextually relevant responses and improving the zero-shot performance. The dataset used for instruction tuning can
be human-written (Ouyang et al., 2022), machinegenerated (Peng et al., 2023; Taori et al., 2023;
Wang et al., 2022), or collected from web (Geng
et al., 2023). Recently, there has been extensive
research on fine-tuning LLaMA (Touvron et al.,
-----
2023) for various downstream tasks using instruction tuning (Chiang et al., 2023; Geng et al., 2023;
Taori et al., 2023; Xu et al., 2023; Yunxiang et al.,
2023). Creating high-quality instruction tuning
datasets can be expensive and time-consuming. In
this study, we utilize a simple Python program to
generate input-output pairs for arithmetic tasks.
**2.2** **Arithmetic Reasoning**
Arithmetic reasoning has been a topic of interest in
NLP research for many years (Lu et al., 2022). Recently, the use of pretrained models (Brown et al.,
2020; OpenAI, 2023) has shown great capabilities
in solving math word problems. Particularly, chain
_of thought (CoT) (Kojima et al., 2022; Wei et al.,_
2022; Zhou et al., 2022a) provides the model with
the intermediate steps to derive the final answer.
However, studies have shown that LLMs struggle
with basic arithmetic computation and often make
arithmetic mistakes, even though the reasoning process is correct (Cobbe et al., 2021; Gao et al., 2022;
Schick et al., 2023). Consequently, one key challenge of arithmetic reasoning, aside from mapping
natural language to arithmetic expressions, is how
to compute the generated arithmetic expressions
with high accuracy.
**2.3** **Arithmetic Computation**
Recent studies have explored using external tools
to evaluate arithmetic expressions. Toolformer
(Schick et al., 2023) and GSM8K (Cobbe et al.,
2021) invoke an external calculator to compute the
generated arithmetic expression. PoT (Chen et al.,
2022) and PAL (Gao et al., 2022) generate programs that can be executed to produce the final
answer. While arithmetic can be solved using calculators or programs easily, the ability to perform
arithmetic computation is a remarkable trait of human intelligence, and we anticipate LLMs should
possess this ability as well.
Previous studies have evaluated the arithmetic
abilities of LLMs. Nogueira et al. (2021) have
evaluated addition and subtraction tasks. Muffo
et al. (2022) have further examined 2-digit multiplication. Yuan et al. (2023) have tested different
types of arithmetic operations. CoT seems to be
a promising solution for arithmetic computation
as well. Similar to humans, autoregressive language model may rely on intermediate supervision
to generate the final answer. Scratchpad (Nye et al.,
2021) finetunes the language models to produce
CoT before generating an answer, and has demon
strated effectiveness on 8-digit addition. However,
we show that previously effective CoT methods,
such as those used for addition in sketchpad (Nye
et al., 2021) and LM Tutor (Qian et al., 2022), are
no longer necessary for certain arithmetic tasks
like addition. By leveraging simple supervised finetuning alone, our model can perform addition and
subtraction with sufficiently high accuracy. For
challenging tasks like large-number multiplication
and division, previous studies (Muffo et al., 2022;
Lee and Kim, 2023) either fail to compute or are
inefficient. Furthermore, our model is trained endto-end such that it can follow human instructions.
**3** **Method**
**3.1** **Language Model**
LLaMA (Touvron et al., 2023) is a collection of
open-source pretrained language models trained on
trillions of tokens using publicly available datasets,
and achieves state-of-the-art performance on many
benchmarks.
Previous studies (Kim et al., 2021; Nogueira
et al., 2021) have shown that tokenization is important for LLM’s arithmetic ability. Many commonlyused subword tokenization techniques today are
not ideal to represent numbers. However, LLaMA
splits each digit into an individual token (Yuan
et al., 2023), thereby ensuring consistent tokeniza_tion of numbers, as shown in Appendix B._
The selection of language models is crucial to
our work. We believe the remarkable arithmetic
ability demonstrated in this work is mainly attributed to LLaMA’s consistent tokenization of
numbers. We experimentally verify that other
LLMs, such as Bloom, OPT, GPT-NeoX, and
Pythia, finetuned on the same arithmetic dataset,
cannot match LLaMA’s arithmetic ability.
**3.2** **Learnability of Arithmetic Tasks**
Wies et al. (2022) have provided a theoretical analysis on the use of intermediate supervision for solving composite tasks. Specifically, they have shown
that for any family of tasks which on the one hand,
are unlearnable, and on the other hand, can be decomposed into a polynomial number of simple subtasks, unlearnable composite problems can become
learnable by using intermediate supervision or step_by-step CoT._
Building upon their analysis, we first experimentally categorize learnable and unlearnable tasks. In
the context of arithmetic computation, learnable
-----
|Col1|Task Input Output|
|---|---|
|Learnable|Copying 59265395 59265395 Split 4536 4000 + 500 + 30 + 6 Comparison 8116449, 97863 8116449 > 97863 Ordering 3568, 9591, 8061 3568, 8061, 9591 Addition 1270769 + 264985867430 264987138199 Subtraction 40920 6173772696 6173731776 − − Multiplication nD 1D 591714761929184 4 2366859047716736 × × Division nD 1D 339229815457 4 84807453864 R 1 ÷ ÷|
|Unlearnable|Multiplication nD mD 6983387 16919 118151924653 × × Division nD mD 64729486 472 137138 R 350 ÷ ÷|
Table 1: Summary and examples of learnable and unlearnable arithmetic tasks. For example, nD ÷ 1D means
_n-digit by 1-digit division, where n ≥_ 1. Unlearnable tasks are mainly multi-digit multiplication and division where
_n, m > 1. There are some special cases mentioned in Appendix E._
_tasks generally refer to those for which the model_
can be successfully trained to generate direct answers, achieving sufficiently high accuracy within a
predefined number of training epochs. Conversely,
_unlearnable tasks are those that the model strug-_
gles to learn and generate direct answers correctly
even with extensive training. While the exact reason behind the varying learnability of tasks is not
yet fully understood and requires further investigation, we hypothesize that it is associated with the
complexity of the underlying pattern and the size
of working memory required for completing the
task (Bubeck et al., 2023).
We experimentally examine the learnability of
these tasks by fine-tuning the model specifically for
each task in a simplified synthetic environment (Table 7). Our recognized learnable and unlearnable
tasks are listed in Table 1.
The categorization of tasks also aligns with human perception. With practice, humans can mentally calculate the addition and subtraction of two
large numbers, writing down the final numerical
answer directly from the left (most significant figure) to the right (least significant figure) without
the need for sketchpad. However, mentally solving
large-number multiplication and division is undeniably a challenging task.
We also observe that our classification of tasks
is consistent with the performance of GPT-4. In
particular, GPT-4 excels in generating direct answers for large-number addition and subtraction.
However, its accuracy significantly drops when it
comes to multi-digit multiplication and division
tasks. Our observation aligns with the claim made
by Bubeck et al. (2023) that GPT-4 has a short
working memory and performs poorly on composite arithmetic tasks. This is particularly evident in
the case of multiplication, which involves multiple
steps of addition. The inability of powerful models like GPT-4 to directly solve unlearnable tasks
may suggest that generating direct answers for such
tasks is extremely challenging, even with extensive
training.
It is noteworthy that a task that is learnable for
LLaMA may not necessarily be learnable for other
LLMs, which is validated in our experiments in
Section 5.3. Furthermore, not all tasks classified as
unlearnable are entirely impossible for the model
to learn. For instance, 2-digit by 2-digit multiplication is considered an unlearnable task in our
case. However, the model can still learn to generate
the direct answer by overfitting to the training set,
which contains an exhaustive enumeration of all
possible 2-digit multiplication. Nevertheless, the
process takes nearly 10 epochs to achieve around
90% accuracy. In contrast, by inserting our proposed CoT before the final answer, the model can
achieve comparable accuracy in 2-digit multiplication with only 1 epoch of training. These findings
align with the claim (Wies et al., 2022) that the
presence of intermediate supervision facilitates the
learning process.
**3.3** **Addition and Subtraction**
Addition and subtraction tasks are learnable, as
with supervised fine-tuning alone, the model exhibits a remarkable ability to accurately generate
direct numerical answers. The model successfully
captures the underlying patterns of the arithmetic
operations. This is evident from the model’s near
-----
perfect accuracy on the unseen test set, despite
being trained on a very limited subset of the data.
It is worth mentioning that addition and subtraction operations do not require the use of CoT. This
contrasts with previous studies that have employed
CoT for addition and subtraction tasks (Lee and
Kim, 2023; Nye et al., 2021; Qian et al., 2022).
**3.4** **Multiplication**
We experimentally verify that n-digit by 1-digit
multiplication is learnable. In contrast, multi-digit
multiplication poses significant challenges for the
model, suggesting it to be an unlearnable task. To
overcome this issue, we adopt a similar strategy
used in sketchpad (Nye et al., 2021), which finetunes the LLMs to generate CoT before generating the answer. Specifically, we propose a CoT
that decomposes the multi-digit multiplication into
a series of 5 learnable sub-tasks: (1) extraction:
extract the arithmetic expression from the natural
language instruction, (2) split: split the smaller
number of the two into place values, (3) expan**sion: expand the sum based on the distributive**
property, (4) product: compute each product simultaneously, and (5) adding term by term: add
the first two terms and copy the rest, and the final
sum is obtained.
Consider the example in Fig. 1. Firstly, the arithmetic expression 397 × 4429 is extracted from the
instruction, which can be considered as a “copying”
task. Secondly, 397×4429 = 4429×(300+90+7)
involves two learnable tasks. The larger number of the two is placed in front and then the
smaller one is split, which is similar to “ordering” and “split” learnable tasks. The ordering
ensures that there are fewer summation terms in
the next step, thereby reducing the CoT length.
Thirdly, the sum is expanded using distributive law:
4429 × (300 + 90 + 7) = 4429 × 300 + 4429 ×
90 + 4429 × 7, which is similar to “copying” task.
Next, 4429 × 300 + 4429 × 90 + 4429 × 7 =
1328700 + 398610 + 31003 where the products
are computed at once by applying “multiplication
_n-digit by 1-digit” with zeros copied at the end of_
each product. Finally, we take the sum of the first
two terms at each step, and copy the rest terms,
leveraging “addition” and “copying”. Hence, a
composite unlearnable task is broken down into
simpler tasks that are all learnable.
**3.5** **Division**
Similarly, we observe that n-digit by 1-digit division is learnable. However, multi-digit division
is unlearnable. We design a novel CoT leveraging a modified slow division method based on the
following recurrence equation
_Rj −_ _D × (qn−(j+1) × 10[j]) = Rj+1_
where Rj is the j-th partial remainder of the division, qn (j+1) is the digit of the quotient in position
_−_
_n −_ (j + 1) numbered from least significant 0 to
most significant n − 1, n is the number of digits
in the quotient, and D is the divisor. Specifically,
the main idea is to subtract multiples of the divisor
from the dividend until the remainder is less than
the divisor.
Here is a detailed breakdown of the CoT used in
Fig. 1. Consider the first iteration (first equation).
The first step 8914−64×100 requires the model to
copy the dividend and the divisor, and subsequently
generate a number qn−(j+1) × 10[j] such that the
product of qn−(j+1) × 10[j] and the divisor D is less
than or equal to the partial remainder Rj. This inherently involves two learnable tasks: “n-digit by 1digit multiplication” and “comparison”. We experimentally show that this composite task is learnable.
The second step 8914 − 64 × 100 = 8914 − 6400
mainly involves a “copying” task and an “n-digit
by 1-digit multiplication” task. The third step
8914 − 6400 = 2514 leverages “subtraction”. The
process iterates until the leftover is less than the
divisor, which implies the model has to implicitly
learn comparison. Finally, the model generates the
quotient by combining all qn−(j+1)’s in previous
iterations, which can be considered as the inverse
of the “split” task, and finally copies the remainder
if it is not zero.
A summary of prompts and expected output for
various tasks are shown in Table 2.
**3.6** **Settings**
In this paper, we consider the addition and subtraction of two positive integers with each containing
up to 16 digits. It is worth noting that the result of
subtraction can be negative. To limit the maximum
generated sequence length, we consider the multiplication of two positive integers whose product
falls within 12 digits, and the division of two positive integers resulting in a quotient within 6 digits
where the dividend is less than 12 digits. Since
we focus on arithmetic tasks of integers, we aim
-----
**Task** **Learnable** **Prompt** **CoT** **Target**
**ADD** ✓ 1463456 + 2107 ✗ 1463456 + 2107 = 1465563
**SUB** ✓ 2348233 minus 483579? ✗ 2348233 - 483579 = 1864654
**MUL**
_nD × 1D_ ✓ 593295 times 7 ✗ 593295 * 7 = 4153065
_nD × mD_ ✗ Calculate 24 x 79 ✓ 24 * 79 = 24 * (70 + 9) = 24 * 70 + \
24 * 9 = 1680 + 216 = 1896
**DIV**
_nD ÷ 1D_ ✓ Please tell 3651803/7 ✗ 3651803 / 7 = 521686 R 1
_nD ÷ mD_ ✗ What is 2546/38? ✓ 2546 - 38 * 60 = 2546 - 2280 = 266
266 - 38 * 7 = 266 - 266 = 0
Therefore, 2546 / 38 = 67
Table 2: Examples of prompts and targets for fine-tuning LLaMA. “\nAnswer: ” is appended at the end of each
prompt. It should be noted that there are a few special cases when CoT is not required (see Appendix E).
to obtain the least positive remainder in the case
when it is not divisible.
In Section 5.2, we present an analysis showcasing the limited extrapolation capabilities of finetuned LLMs. Consequently, input data that falls
outside the distribution of the training data is unlikely to yield reasonable answers. Our method
potentially applies to numbers with more digits,
though the training cost will increase correspondingly.
**3.7** **Dataset**
We generate the dataset synthetically using a
Python script. The dataset consists of around 1 million question-answer pairs. The answer contains
the proposed CoT as well as the final numerical output. The numbers are randomly generated, hence
ensuring a very low probability of instances being
duplicated, although small numbers may be sampled multiple times. We sample from log space to
ensure the numbers are equally likely to be sampled
from different orders of magnitude, which is similar to the sampling method used by Lee and Kim
(2023). The details of the dataset are presented in
Appendix F.
**3.8** **Fine-tuning**
To enable the model to solve arithmetic problems
based on instructions and facilitate natural language question answering, we generate hundreds
of instruction templates using ChatGPT (Table 6).
During the instruction tuning process, we randomly
select a template for each arithmetic input from the
training set, and fine-tune LLaMA-7B similar to
the method used in Alpaca (Taori et al., 2023). We
apply various techniques to enhance the model’s
adaptability to diverse question formats, such as
randomly removing spaces between numbers and
symbols in the arithmetic expression, replacing “*”
with “x” or “times”, etc.
Goat-7B can be easily fine-tuned using LoRA on
a 24GB VRAM GPU. In particular, the fine-tuning
process for a specific arithmetic sub-task, such as
8-digit addition using 100K instances, takes only
approximately 1.5 hours on an A10 GPU to achieve
near-perfect accuracy. The training hyperparameters are listed in Appendix A.
**4** **Experiments**
We evaluate our model using BIG-bench arithmetic
dataset (Srivastava et al., 2022), as well as our extra
selected tasks. The results are shown in Table 3.
Notably, in a zero-shot setting, Goat-7B achieves
comparable or even higher accuracy on BIG-bench
compared to the few-shot PaLM-540B.
**4.1** **Metric**
We first compute the accuracy based on the standard exact string match (Appendix C). We observe
that GPT-4’s accuracy under exact string match
is almost identically zero on tasks involving large
numbers. However, in many cases where the final answer is incorrect, the majority of digits in
the generated answer align with the target number,
with only a few digits being incorrect. Inspired
by recent study on the emergent abilities of LLMs
(Schaeffer et al., 2023), we include a digit match
metric that can reflect the per-token error rate of
the output, as each digit is uniquely represented by
a token in LLaMA.
-----
|Task|BIG-bench Extra Tasks|Col3|
|---|---|---|
|ADD|1D 2D 3D 4D 5D|8D+8D 16D+8D 16D+16D|
|GPT-4 Goat-7B|100/100 100/100 99.6/99.9 98.8/99.6 94.1/98.5 100/100 100/100 99.4/99.8 98.3/99.5 98.1/99.4|92.1/98.3 9.4/70.4 94.1/99.5 97.8/99.4 97.1/99.6 97.6/99.7|
|SUB|1D 2D 3D 4D 5D|8D 8D 16D 8D 16D 16D − − −|
|GPT-4 Goat-7B|100/100 100/100 99.2/99.6 98.9/99.6 92.4/98.1 100/100 100/100 99.7/99.9 98.6/99.6 98.4/99.5|70.5/91.5 10.6/68.8 59.6/88.2 96.8/99.3 95.8/99.2 96.3/99.3|
|MUL|1D 2D 3D 4D 5D|1D 16D 4D 8D 6D 6D × × ×|
|GPT-4 Goat-7B|100/100 99.4/99.8 30.3/83.0 5.3/61.8 0.0/47.9 100/100 100/100 97.8/99.4 96.9/99.2 96.7/99.3|61.5/92.3 0.0/45.9 0.0/49.8 99.7/99.9 88.1/97.8 96.8/99.5|
|DIV|1D 2D 3D 4D 5D|16D 1D 6D 3D 12D 6D ÷ ÷ ÷|
|GPT-4 Goat-7B|100/100 100/100 94.5/96.3 90.9/92.1 53.4/73.2 100/100 100/100 99.5/99.7 99.0/99.5 96.5/98.1|54.0/84.3 6.4/48.6 0.0/29.5 99.0/99.7 94.1/96.1 89.3/93.5|
Table 3: The result of GPT-4 and Goat-7B on BIG-bench Arithmetic sub-task and extra selected arithmetic tasks,
using metrics Exact String Match/Digit Match (Appendix C), shown in percentage. We test GPT-4 and Goat with
exactly the same questions and prompts. We evaluate GPT-4 using the API version on May 10th. For Big-bench
tasks, nD refers the n-digit by n-digit operation, except for division where nD means n-digit by m-digit where
_m ≤_ _n. BIG-bench only includes division operation without remainder, whereas in extra tasks we include the cases_
where the remainder is not zero and ask GPT-4 to output the answer in "quotient R remainder" format. It should be
noted that we exclude the BIG-bench test data from our training dataset as much as possible, although the overlap is
unavoidable for operations involving small numbers.
**4.2** **Comparison**
Comparing the performance of Goat and GPT-4
for large-number multiplication and division may
seem unfair, as GPT-4 generates direct answers
while Goat relies on CoT. Hence, we also evaluate GPT-4’s performance with CoT by appending
“Solve it step by step” at the end of each prompt. By
default, GPT-4 uses long multiplication and long
division methods. However, we observe that generating CoT only leads to marginal improvement
in accuracy. In some cases, the intermediate steps
from long multiplication and division are incorrect,
but surprisingly the final answer is correct. This
implies that GPT-4 does not effectively take advantage of intermediate supervision from CoT to
improve the final output. We identify the following
3 common errors from GPT-4’s solution, which results in incorrect final answers: (1) the alignment of
corresponding digits, (2) copying of numbers, and
(3) the intermediate result from n-digit by 1-digit
multiplication.
Additionally, we observe that GPT-4 performs
reasonably well on 8D +8D and 16D +16D tasks,
but fails on most 16D + 8D tasks, though intuitively 16D + 8D should be relatively easier than
16D+16D. While the exact reason for this remains
unclear, one possible factor could be GPT-4’s inconsistent number tokenization (Table 5), which
makes it difficult to align the corresponding digits
of two numbers.
**5** **Analysis**
**5.1** **Ablation study**
full CoT no split no expansion no adding term by term no CoT
1.00
0.75
0.50
0.25
0.00
50000 100000 150000 200000
Figure 2: Accuracy (exact string match) against the
number of samples seen during the training of 4D × 4D
task. Evaluated on the same randomly generated unseen
test set using training checkpoints.
Here we want to study the usefulness and effectiveness of each intermediate decomposition step.
Specifically, for multiplication (Fig. 2), we com
-----
reported in (Kim et al., 2021), highlighting a limitation of our fine-tuned model and underscoring
the significance of training data distribution.
exact string match digit match
1.00
0.75
0.50
0.25
0.00
16 17 18 19 20 21
No. of Digits
Figure 4: Accuracy against the number of digits for the
addition task. The model is trained up to 16D+16D, and
tested on 17D+17D onward.
**5.3** **Comparison with Other LLMs**
We conduct comprehensive experiments on a variety of LLMs, including Bloom, OPT, GPT-J, GPTNeoX, and Pythia. These models are fine-tuned
using the identical dataset as that for Goat, maintaining consistency in the training hyperparameters.
Our experiment shows that they all struggle with
arithmetic tasks. Even for tasks that are considered
learnable for LLaMA, such as multi-digit addition,
the loss during fine-tuning is significantly higher
than that of LLaMA. The observation underscores
the claim made in (Nogueira et al., 2021) that tokenization is a crucial factor in the performance of
arithmetic tasks.
**5.4** **Few-shot Prompting with GPT-4**
GPT-4 demonstrates powerful in-context learning
abilities. We further examine the effectiveness of
our proposed decomposition method for solving
large-number multiplication and division by using
few-shot prompting with GPT-4 (see Appendix H).
We observe that our decomposition method allows
GPT-4 to generate correct answers more frequently
than using its default long multiplication and division methods. This further supports the effectiveness and validity of our approach. Examples of the
prompt and output are shown in Appendix H.
**6** **Limitations**
Humans are capable of performing multiplication
and division on arbitrarily large numbers, providing
sufficient time and space for calculations. In contrast, LLMs often suffer from extrapolation prob
full CoT no product no CoT
1.00
0.75
0.50
0.25
0.00
20000 40000 60000 80000 100000
Figure 3: Accuracy (exact string match) against the
number of samples seen during the training of 6D ÷ 3D
task. Evaluated on the same randomly generated unseen
test set using training checkpoints.
pare the accuracy of 4-digit by 4-digit multiplication by removing one particular step in the CoT,
including split, expansion, adding term by term
(referring to G), as well as no CoT. For division
(Fig. 3), we compare the accuracy of 6-digit by
3-digit division after removing the middle step that
computes the product (referring to G), as well as no
CoT. To minimize the impact caused by natural language, we conduct an ablation study in a simplified
synthetic environment (Table 7).
The multiplication results suggest that the
“adding term by term” step plays a crucial role in
obtaining the final answer. In contrast, the “split”
and “expand” steps have minimal impact, and can
potentially be omitted for generating more concise
CoT. This can be attributed to the nature of these
two intermediate steps, which primarily involve
simple and learnable tasks like copying and comparison. Nevertheless, we still retain these steps to
ensure human interpretability.
The accuracy of exact string match without CoT
remains consistently at zero for both 4D × 4D
multiplication and 6D ÷ 3D division. This further
showcases the validity of our approach, as breaking down complex arithmetic tasks into a series
of learnable tasks can indeed facilitate the training
process for LLMs.
**5.2** **Extrapolation**
Extrapolation refers to the ability of the model to
predict data that lies out-of-distribution (OOD) of
training data. We test addition for numbers larger
than those in the training data distribution. The results reveal that the model has limited extrapolation
capabilities. There is a gradual drop in accuracy,
as the test set deviates further from the training
set. This observation is consistent with the result
-----
lems. The models are unlikely to generate reasonable answers if the input deviates significantly from
the distribution of training data. To enhance the
human interpretability of intermediate supervision,
we use the straightforward CoT that follows simple
basic arithmetic rules. However, this design may
not be the most efficient way to facilitate the final
answer generation. There are potentially more suitable multiplication and division algorithms for the
model to learn. Besides, our research only focuses
on elementary arithmetic operations involving integers. Nevertheless, we anticipate that our method
could be applicable to decimal computation as well.
**7** **Conclusion**
In summary, we demonstrate the feasibility that
supervised fine-tuning alone can enable LLMs to
perform certain basic arithmetic operations with
high accuracy. With our proposed CoT, our model
achieves state-of-the-art performance on various
elementary arithmetic tasks. Our research offers an
excellent platform for investigating the mechanism
of working memory and the influence of intermediate supervision on text generation. Our method can
be easily integrated with other instruction-tuned
LLMs and has the potential to further enhance
arithmetic reasoning abilities in solving math word
problems.
**References**
Stella Biderman, Hailey Schoelkopf, Quentin Anthony,
Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai
Prashanth, Edward Raff, et al. 2023. Pythia: A suite
for analyzing large language models across training
and scaling. arXiv preprint arXiv:2304.01373.
Sidney Black, Stella Biderman, Eric Hallahan, Quentin
Anthony, Leo Gao, Laurence Golding, Horace
He, Connor Leahy, Kyle McDonell, Jason Phang,
Michael Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and
[Samuel Weinbach. 2022. GPT-NeoX-20B: An open-](https://doi.org/10.18653/v1/2022.bigscience-1.9)
[source autoregressive language model. In Proceed-](https://doi.org/10.18653/v1/2022.bigscience-1.9)
_ings of BigScience Episode #5 – Workshop on Chal-_
_lenges & Perspectives in Creating Large Language_
_Models, pages 95–136, virtual+Dublin. Association_
for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg,
Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro,
[and Yi Zhang. 2023. Sparks of artificial general in-](https://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4/)
[telligence: Early experiments with gpt-4.](https://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4/)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng,
Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan
Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al.
2023. Vicuna: An open-source chatbot impressing
gpt-4 with 90%* chatgpt quality.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models.
_arXiv preprint arXiv:2210.11416._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language
models. arXiv preprint arXiv:2211.10435.
Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song.
2023. Koala: A dialogue model for academic research. Blog post, April, 1.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan
Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint_
_arXiv:2106.09685._
Jeonghwan Kim, Giwon Hong, Kyung-min Kim, Junmo
[Kang, and Sung-Hyon Myaeng. 2021. Have you](https://doi.org/10.18653/v1/2021.emnlp-main.563)
seen that number? [investigating extrapolation in](https://doi.org/10.18653/v1/2021.emnlp-main.563)
[question answering models. In Proceedings of the](https://doi.org/10.18653/v1/2021.emnlp-main.563)
_2021 Conference on Empirical Methods in Natural_
_Language Processing, pages 7031–7037, Online and_
Punta Cana, Dominican Republic. Association for
Computational Linguistics.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. arXiv preprint
_arXiv:2205.11916._
-----
[Soochan Lee and Gunhee Kim. 2023. Recursion of](https://openreview.net/forum?id=PTUcygUoxuc)
[thought: Divide and conquer reasoning with language](https://openreview.net/forum?id=PTUcygUoxuc)
[models.](https://openreview.net/forum?id=PTUcygUoxuc)
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
Kai-Wei Chang. 2022. A survey of deep learning for mathematical reasoning. _arXiv preprint_
_arXiv:2212.10535._
Matteo Muffo, Aldo Cocco, and Enrico Bertino. 2022.
[Evaluating transformer language models on arith-](https://aclanthology.org/2022.lrec-1.30)
[metic operations using number decomposition. In](https://aclanthology.org/2022.lrec-1.30)
_Proceedings of the Thirteenth Language Resources_
_and Evaluation Conference, pages 291–297, Mar-_
seille, France. European Language Resources Association.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin.
2021. Investigating the limitations of transformers with simple arithmetic tasks. _arXiv preprint_
_arXiv:2102.13019._
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language
models. arXiv preprint arXiv:2112.00114.
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _Advances in Neural_
_Information Processing Systems, 35:27730–27744._
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with
gpt-4. arXiv preprint arXiv:2304.03277.
Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and
Xifeng Yan. 2022. Limitations of language models
in arithmetic and symbolic induction. arXiv preprint
_arXiv:2208.05051._
Victor Sanh, Albert Webson, Colin Raffel, Stephen H
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Teven Le Scao, Arun
Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint
_arXiv:2110.08207._
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman
Castagné, Alexandra Sasha Luccioni, François Yvon,
Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model.
_arXiv preprint arXiv:2211.05100._
Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo.
2023. Are emergent abilities of large language models a mirage? arXiv preprint arXiv:2304.15004.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. 2023. Toolformer:
Language models can teach themselves to use tools.
_arXiv preprint arXiv:2302.04761._
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta,
Adrià Garriga-Alonso, et al. 2022. Beyond the
imitation game: Quantifying and extrapolating the
capabilities of language models. _arXiv preprint_
_arXiv:2206.04615._
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann
Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
and Tatsunori B Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. GitHub repos_itory._
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam
Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng,
Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al.
2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro,
Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language model with self generated instructions. arXiv
_preprint arXiv:2212.10560._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large
language models. arXiv preprint arXiv:2201.11903.
Noam Wies, Yoav Levine, and Amnon Shashua. 2022.
Sub-task decomposition enables learning in sequence
to sequence tasks. arXiv preprint arXiv:2204.02892.
Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley.
2023. Baize: An open-source chat model with
parameter-efficient tuning on self-chat data. arXiv
_preprint arXiv:2304.01196._
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang,
and Songfang Huang. 2023. How well do large language models perform in arithmetic tasks? _arXiv_
_preprint arXiv:2304.02015._
Li Yunxiang, Li Zihan, Zhang Kai, Dan Ruilong, and
Zhang You. 2023. Chatdoctor: A medical chat model
fine-tuned on llama model using medical domain
knowledge. arXiv preprint arXiv:2303.14070.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
-----
**A** **Hyperparameters**
Hyperparameter Value
batch size 128
learning rate 0.0003
lora r 64
lora alpha 64
lora target module q, v, k, o
lora dropout 0.05
epoch 1
Table 4: Hyperparameters for fine-tuning LLaMA-7B.
**B** **Tokenization**
Nogueira et al. (2021) demonstrate that models
with inconsistent tokenization of numbers barely
learn the addition of 2-digit numbers, and it completely fails to learn the addition of larger numbers.
Specifically, it has an accuracy of zero for 5 digits
or more. They attribute this failure to the lack of
systematic tokenization of individual digits. For
instance, “123” might be tokenized as “12” and
“3”, while “234” might be tokenized as “2” and
“34”. Consequently, the model is required to learn
that the embedding of a token may represent either
a single digit or two digits and so on. Hence, it
might be challenging for the model to learn to map
an embedding to a number when the number of
digits it represents changes irregularly. In Table 5,
we compare number tokenization across different
LLMs.
**C** **Metric**
Exact string match is defined as 1 if the output
string exactly matches the target string, and 0 otherwise. Then we take the average of exact string
match for each task. Char error rate (CER) is defined as the percentage of characters that were
incorrectly predicted. We compute CER using
Python torchmetrics package. Then we define digit
match accuracy as 1 − _cer. We include this metric_
because, for difficult tasks, the exact string match
could be identically zero, making it hard to evaluate the performance. In many cases, both GPT-4
and Goat may have very few incorrect digits in the
middle of the generated answer, and the number of
digits in the generated answer generally matches
the target number.
Opt: Open pre-trained transformer language models.
_arXiv preprint arXiv:2205.01068._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022a.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron
Courville, Behnam Neyshabur, and Hanie Sedghi.
2022b. Teaching algorithmic reasoning via incontext learning. arXiv preprint arXiv:2211.09066.
-----
**Model** **Number** **Tokenization**
LLaMA 74815 [1, 29871, 29955, 29946, 29947, 29896, 29945]
7481 [1, 29871, 29955, 29946, 29947, 29896]
748 [1, 29871, 29955, 29946, 29947]
74 [1, 29871, 29955, 29946]
7 [1, 29871, 29955]
GPT-4 74815 [20338, 868]
7481 [20338, 16]
748 [20338]
74 [5728]
7 [22]
Bloom 74815 [88241, 2057]
7481 [88241, 20]
748 [88241]
74 [8771]
7 [26]
OPT 74815 [2, 39373, 996]
7481 [2, 406, 34490]
748 [2, 39373]
74 [2, 5243]
7 [2, 406]
Pythia 74815 [24, 2385, 1010]
GPT-NeoX-20B 7481 [24, 34474]
MPT-7B 748 [24, 2385]
74 [3566]
7 [24]
GPT-J 74815 [48246, 1314]
GPT-Neo 7481 [22, 40271]
748 [48246]
74 [4524]
7 [22]
ChatGLM 74815 [5, 25, 16, 23, 9, 15, 130001, 130004]
7481 [5, 25, 16, 23, 9, 130001, 130004]
748 [5, 25, 16, 23, 130001, 130004]
74 [5, 25, 16, 130001, 130004]
7 [5, 25, 130001, 130004]
Table 5: Comparison of number tokenization of various LLMs. It should be noted that ChatGLM also splits each
digit into an individual token. Evaluating ChatGLM’s arithmetic abilities will be left as future work.
-----
**Index** **Template**
1 {arithmetic} =
2 What is {arithmetic}?
3 Compute {arithmetic}
4 Solve {arithmetic}
5 Determine {arithmetic}
6 Find {arithmetic}
7 What is the result of {arithmetic}?
8 Please help me calculate {arithmetic}.
9 Solve the following problem: {arithmetic}
10 I am looking for the value of {arithmetic}. Can you help?
11 What is the numerical value of {arithmetic}?
12 Help me obtain {arithmetic}
13 Show me the result of {arithmetic}?
14 Kindly calculate {arithmetic} for me.
15 Determine the value for {arithmetic}.
16 Can you please compute {arithmetic}?
17 Find the numerical value of {arithmetic}?
18 I would appreciate it if you could assist me in calculating {arithmetic}.
19 Please work out {arithmetic}.
20 What is the answer to {arithmetic}?
. . . . . .
Table 6: Example templates to fine-tune arithmetic tasks with natural language instructions, generated by ChatGPT.
During training, {arithmetic} is replaced by the randomly generated arithmetic expression, like 3425 ∗ 5823.
**D** **Simplified Synthetic Environment**
We use the simplified synthetic environment to
study the effectiveness of various CoT, by avoiding
many hard-to-control aspects of natural languages.
The difference between this and Goat is that we use
a more structured prompt without any instruction
template and a straightforward completion of the
task. This enables easy comparison between the
model’s performance on different tasks, allowing
us to examine the learnability of various sub-tasks
and explore the effectiveness of the proposed CoT.
The input and output examples for the simplified
synthetic environment are shown in Table 7.
**E** **Special Cases**
In general, multi-digit multiplication and division
are considered unlearnable, and we use the decomposition method to solve them. However, some
special cases within multi-digit multiplication and
division are learnable, and in these cases, we omit
CoT and generate the direct answer:
- For multiplication, one of the two numbers
contains only one non-zero digit, such as
857483 × 400 = 342993200. This type of
task is similar to learnable n-digit by 1-digit
multiplication, with the zeros being copied at
the end of the product.
- The dividend is equal to the divisor. In that
case, the quotient is identically one. For example, 358 ÷ 358 = 1.
- The dividend is less than the divisor. In
that case, the quotient is zero and the remainder equals the dividend. For example,
423 ÷ 968 = 0 R 423.
**F** **Dataset**
In general, it is difficult to determine the optimal
proportion for each task. The number and composition of data samples also depend on the problem
settings (see Section 3.6). We empirically find that
_n-digit by 1-digit multiplication and division may_
be easier than other tasks, as it requires fewer samples to reach the same level of accuracy as other
tasks during task-specific fine-tuning in the simplified synthetic environment. It is noteworthy that
the data samples are all randomly generated, so the
probability of the occurrence of duplicated samples
is very low for large numbers. Therefore, the train
-----
**Task** **CoT** **Prompt** **Target**
**Addition** ✗ 1463456 + 2107 1465563
**Subtraction** ✗ 2348233 - 483579 1864654
**Multiplication**
_nd × 1d_ ✗ 593295 * 7 4153065
_nd × md_ ✓ 24 * 79 24 * (70 + 9)
= 24 * 70 + 24 * 9 = 1680 + 216 = 1896
**Division**
_nd ÷ 1d_ ✗ 3651803 / 7 521686 R 1
_nd ÷ md_ ✓ 2551 / 38 2546 - 38 * 60 = 2546 - 2280 = 266
266 - 38 * 7 = 266 - 266 = 0
Therefore, 2551 / 38 = 67
Table 7: Examples of input and output for training and testing in the simplified synthetic environment, which is
used for testing the learnability of sub-tasks and ablation studies. Specifically, “+”, “-”, “*”, and “\” are used for
addition, subtraction, multiplication, and division, respectively. Space is inserted between numbers and symbols.
The input and output are formatted to mitigate the influence of natural language.
Division n/m the ablation study as it is crucial for multi-digit
Division n/m
14.0% Addition
23.5%
Division n/1
7.5%
Multiplication nxm
Subtraction
23.9%
23.5%
Multiplication nx1
7.5%
Figure 5: Composition of tasks in the dataset. = 4429 × 300 + 4429 × 90 + 4429 × 7
ing loss can reflect the test accuracy on unseen the
test set, if the dataset is only trained for one epoch.
Since the synthetic dataset can be generated very
easily, we first create a dataset that contains a sufficient number of data samples for training and then
observe the training loss and apply early stopping.
We observe that the training loss does not show any
significant decrease after training on about one million samples. It should be noted that convergence
also depends on other hyper-parameters such as
batch size and learning rate. Hence, it is recommended to use a dataset larger than what is necessary and terminate the training process when the
training loss no longer decreases.
**G** **Ablation Study**
We name the steps (shown in the box below) as
(1) extraction, (2) split, (3) expansion, (4) product,
and (5, 6, ...) adding term by term. The ablation
study is performed by removing one particular step
while keeping other steps unchanged. We exclude
the (1) “extraction” and (4) “product” steps from
**Multiplication**
**Calculate 397 x 4429 \nAnswer:**
397 × 4429 (1)
= 4429 × (300 + 90 + 7) (2)
= 4429 × 300 + 4429 × 90 + 4429 × 7
(3)
= 1328700 + 398610 + 31003 (4)
= 1727310 + 31003 (5)
= 1758313 (6)
For division, the ablation study is performed by
removing the middle step (bold) that computes the
product for all iterations, while keeping other steps
unchanged.
**Division**
**What is 8914/64? \nAnswer:**
8914 − 64 × 100 = 8914 − **6400 =2514**
2514 − 64 × 30 = 2514 − **1920 =594**
594 − 64 × 9 = 594 − **576 =18**
Therefore, 8914 ÷ 64 = 139 R 18
-----
**H** **Few-shot Prompting with GPT-4**
Here are two examples showing 3-shot prompting
using GPT-4.
**3-Shot prompting for division**
**Prompt:**
Here are 3 examples of question-solution
pairs for solving arithmetic division prob-
lem. Solve the last question using the same
format given in the solutions.
“question”: “440331 / 22 = ”,
“solution”: “440331 - 22 * 20000 = 440331
- 440000 = 331\n331 - 22 * 10 = 331 -
220 = 111\n111 - 22 * 5 = 111 - 110 =
1\nTherefore, 440331 / 22 = 20015 R 1”
“question”: “115247727 / 970051 = ”,
“solution”: “115247727 - 970051
- 100 = 115247727 - 97005100 =
18242627\n18242627 - 970051 * 10 =
18242627 - 9700510 = 8542117\n8542117
- 970051 * 8 = 8542117 - 7760408 =
781709\nTherefore, 115247727 / 970051 =
118 R 781709”
“question”: “4144442 / 234443 = ”,
“solution”: “4144442 - 234443 * 10 =
4144442 - 2344430 = 1800012\n1800012
- 234443 * 7 = 1800012 - 1641101 =
158911\nTherefore, 4144442 / 234443 = 17
R 158911”
“question”: “262122 / 591 = ”
**GPT-4:**
To solve the division problem of 262122
divided by 591 in the same format, we’ll
subtract multiples of 591 from 262122 until
we reach a number less than 591, then that’s
our remainder.
So,
“solution”: “262122 - 591 * 400 = 262122 -
236400 = 25722\n25722 - 591 * 40 = 25722
- 23640 = 2082\n2082 - 591 * 3 = 2082 -
1773 = 309\nTherefore, 262122 / 591 = 443
R 309”
**3-Shot prompting for multiplication**
**Prompt:**
Here are 3 examples of question-solution
pairs for solving arithmetic multiplication
problems. Solve the last question using the
same format given in the solutions.
“question”: “235658 * 91204 = ”,
“solution”: “235658 * 91204 = 235658
- (90000 + 1000 + 200 + 4) = 235658 *
90000 + 235658 * 1000 + 235658 * 200 +
235658 * 4 = 21209220000 + 235658000
+ 47131600 + 942632 = 21444878000
+ 47131600 + 942632 = 21492009600 +
942632 = 21492952232”
“question”: “59909636 * 6070 = ”,
“solution”: “59909636 * 6070 = 59909636 *
(6000 + 70) = 59909636 * 6000 + 59909636
- 70 = 359457816000 + 4193674520 =
363651490520”
“question”: “7661 * 9461 = ”,
“solution”: “7661 * 9461 = 9461 * (7000
+ 600 + 60 + 1) = 9461 * 7000 + 9461 *
600 + 9461 * 60 + 9461 * 1 = 66227000
+ 5676600 + 567660 + 9461 = 71903600
+ 567660 + 9461 = 72471260 + 9461 =
72480721”
“question”: “58061 * 3569 = ”,
**GPT-4:**
To solve the multiplication problem of
58061 multiplied by 3569 in the same for-
mat, we’ll distribute 58061 across the sum
of the digits of 3569 multiplied by their re-
spective places.
So,
“solution”: “58061 * 3569 = 58061 *
(3000 + 500 + 60 + 9) = 58061 * 3000
+ 58061 * 500 + 58061 * 60 + 58061 *
9 = 174183000 + 29030500 + 3483660 +
522549 = 203213500 + 3483660 + 522549
= 206697160 + 522549 = 207219709”
-----
| [
"Tiedong, Liu",
"Bryan Kian Hsiang, Low"
] | 2023-05-23T00:00:00 | null | false | 62 | 0 | null | http://arxiv.org/abs/2305.14201 | https://arxiv.org/abs/2305.14201 | https://www.semanticscholar.org/paper/8c7846c9805834dbe2fb0c8f48253b8d65b79d6a |
LAMBADA: Backward Chaining for Automated Reasoning in Natural Language | Remarkable progress has been made on automated reasoning with natural text, by using Large Language Models (LLMs) and methods such as Chain-of-Thought prompting and Selection-Inference. These techniques search for proofs in the forward direction from axioms to the conclusion, which suffers from a combinatorial explosion of the search space, and thus high failure rates for problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e. from intended conclusion to supporting axioms) is significantly more efficient at proof-finding. Importing this intuition into the LM setting, we develop a Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules, that are simply implemented by few-shot prompted LLM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets, particularly when deep and accurate proof chains are required. | A Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules, that are simply implemented by few-shot prompted LLM inference, and achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets. | # LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
**Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran**
Google Research
{mehrankazemi, njkim, bhatiad, xxujasime, ramachandrand}@google.com
**Abstract** 1. Rough and cold that is what they
Remarkable progress has been made on automated reasoning with natural text, by using
Language Models (LMs) and methods such
as Chain-of-Thought and Selection-Inference.
These techniques search for proofs in the forward direction from axioms to the conclusion,
which suffers from a combinatorial explosion
of the search space, and thus high failure rates
for problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e. from the intended conclusion to supporting axioms) is significantly
more efficient at proof-finding. Importing this
intuition into the LM setting, we develop a
_Backward Chaining algorithm, called LAM-_
BADA, that decomposes reasoning into four
sub-modules. These sub-modules are simply
implemented by few-shot prompted LM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on two challenging
logical reasoning datasets, particularly when
deep and accurate proof chains are required.
**1** **Introduction**
Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has
been a fundamental goal for AI since its early
days (McCarthy, 1959; Hewitt, 1969). Furthermore, logical reasoning, especially reasoning with
unstructured, natural text is an important building block for automated knowledge discovery and
holds the key for future advances across various scientific domains. While in recent years tremendous
progress has been made towards natural language
understanding thanks to pretrained language models (LMs) (Brown et al., 2020; Chowdhery et al.,
2022, i.a.,), the performance of these models for
logical reasoning still lags behind (Rae et al., 2021;
Creswell et al., 2023; Valmeekam et al., 2022) compared to the advancements in other areas such as
reading comprehension and question-answering.
**Facts:** Eric is nice
1. Rough and cold that is what they
say about Blue Bob.
2. Eric, who is relatively young, is also Rule Selection
pretty big and tends to be cold. Rule6
3. Fred is green and cold too. Eric is big Decompose Eric is young
4. For being so cold, it's good Harry
can remain nice. Fact Check Eric is red Cache
**Rules:** Rule Selection
1. Rough, cold people are blue. Rule3
2. Big, kind folks are green ones. Eric is big Decompose Eric is cold
3. If a person is big, rough, and cold,
they are also red. Cache Eric is rough Cache
4. Most round and cold people are Rule Selection
often rough. Rule4 Rule5
5. Cold, young people are also certain
to be rough people. Decompose Decompose
6. An individual who is big, red and young is also a nice individual. Eric is round Eric is cold Eric is cold Eric is young
**Goal:** Rule Selection Pruned Fact Check Fact Check
- Eric is nice?
**Label**
- Proved
Figure 1: The search trace of LAMBADA on an example from the ParaRules subset of ProofWriter (the Sign
_Agreement and failed Fact Check modules are omitted_
for brevity).
While many problems benefit from LM scaling,
scaling has been observed to provide limited benefit
for solving complex reasoning problems. For example, Creswell et al. (2023) observed that for the
Gopher family of LMs (Rae et al., 2021), the benefit of scaling for logic-based tasks is significantly
worse than for other language tasks. Moreover,
while finetuning initially seemed to enable logical
reasoning in LMs (Clark et al., 2021; Tafjord et al.,
2021), further exploration revealed that finetuned
LMs mostly exploit spurious correlations (e.g., the
correlation between the number of rules and the
label) as opposed to learning to reason (Zhang
et al., 2022b; Schlegel et al., 2022; Liu et al., 2023).
Recently, prompting strategies such as Chain-ofThought (Wei et al., 2022) and Scratchpad (Nye
et al., 2022) have contributed to improving performance of LMs on reasoning tasks, although they
have been also shown to struggle with proof planning for more complex logical reasoning problems
(Saparov and He, 2023).
One solution to the aforementioned problems is
to integrate the strength and reliability of classical
AI models in logical reasoning with LMs (Garcez
and Lamb, 2020; Marcus, 2020). In the literature,
6547
-----
**2** **Related Work**
The deep learning based models that have been
developed to solve text-based (logical) reasoning
tasks can be categorized as follows (see Huang and
Chang 2022 for a recent survey of the literature).
**Pretraining on Relevant Tasks: Pretraining an**
LM on corpora relevant to the target reasoning task
can lead to improvements (Hendrycks et al., 2021;
Shen et al., 2021). Pretraining is, however, costly
especially for larger LMs.
**Implicit Reasoning: These approaches finetune**
LMs to produce the label directly given the input
(Clark et al., 2021; Betz et al., 2021; Saeed et al.,
2021; Han et al., 2022); reasoning is expected to
happen implicitly in the parameters of the LM. It
has been shown that finetuning LMs on logical
reasoning tasks makes them learn spurious correlations (Zhang et al., 2022b; Schlegel et al., 2022),
and is not robust to multi-hop reasoning (Kassner et al., 2020). Besides, finetuning large LMs
is costly especially when the dataset is large, and
may introduce distributional shocks to the model
(Kazemi et al., 2023). In this paper, we focus on
models that only take in-context examples as supervision.
**Explicit Reasoning: Generating the interme-**
diate reasoning steps such as the chain of reasoning (Wei et al., 2022; Nye et al., 2022; Dalvi
et al., 2021; Zelikman et al., 2022; Zhang et al.,
2022a) has shown substantial improvement for
many reasoning tasks (Suzgun et al., 2022). Such
chains have been explored both in the forward and
the backward directions, e.g., using multiple constrained LMs for logical reasoning (Zhang et al.,
2022a). Gontier et al. (2020) investigated how
transformer models perform when trained to perform forward or backward chaining, and drew conclusions about their internal reasoning strategies.
We compare against a popular recent prompting
strategy, namely Chain-of-Thought (CoT) (Wei
et al., 2022), from this category.
**Verifiers: To improve CoT, some works train a**
verifier using chain-level labels. The verifier takes
a reasoning chain produced by the model as input
and judges the quality of the chain (Cobbe et al.,
2021; Shen et al., 2021; Jhamtani and Clark, 2020;
Zelikman et al., 2022). Using this verifier, one can
then generate multiple reasoning chains (e.g., by
running the algorithm multiple times with different decoding temperatures) and use the best chain
according to the verifier. Since LAMBADA also
there are two major approaches to logical reasoning
(Poole and Mackworth, 2010):
1. Forward Chaining (FC) where one starts from
the facts and rules (“theory”), and iterates between making new inferences and adding them to
the theory until the goal statement can be proved
or disproved,
2. Backward Chaining (BC) where one starts from
the goal and uses the rules to recursively decompose it into sub-goals until the sub-goals can be
proved or disproved based on the theory.
Previous approaches to reasoning with LMs mostly
incorporate elements of FC into LMs (Tafjord et al.,
2021; Creswell et al., 2023). FC requires selecting a subset of facts and rules from the entire set,
which might be difficult for an LM as it requires a
combinatorial search over a large space. Moreover,
deciding when to halt and declare failure to prove
is challenging in FC, as also noted by Creswell
et al. (2023), sometimes requiring specialized modules trained on intermediate labels (Creswell and
Shanahan, 2022). Indeed, the classical automated
reasoning literature is heavily weighted towards
BC or goal-directed strategies for proof-finding.
In this paper, we show experimentally that BC
is better suited for text-based deductive logical reasoning, as it does not require a combinatorial search
for subset selection and there are more natural halting criteria for it. We develop a hybrid LAnguage
**Model augmented BAckwarD chAining technique**
(LAMBADA), where BC drives the high-level proof
planning, and the LM performs the textual understanding and individual reasoning steps. We conduct experiments with challenging datasets for LM
reasoning containing examples expressed in naturalistic text. The datasets contain proof chains of
up to 5 hops in depth, and examples where the goal
can neither be proved nor disproved from the provided theory. We show that LAMBADA achieves
substantially higher deductive accuracy, and is considerably more likely to generate valid reasoning
chains compared to other techniques which find correct conclusions with spurious proof traces, while
also being more query efficient than other LMbased modular reasoning approaches. Our results
strongly indicate that future work on reasoning
with LMs should incorporate backward chaining or
goal-directed planning strategies.
6548
-----
generates proofs, verifiers are also applicable to
our algorithm. In this paper, we assume not having
access to chain-level labels, and leave experiments
with verifiers as future work.
**Length generalization:** A number of approaches specifically look into whether LMs can
generalize from examples requiring shorter reasoning chains (shown to them either as demonstration
or as finetuning data) to examples requiring longer
chains (Anil et al., 2022; Tafjord et al., 2021). With
our model, length generalization comes for free
because the model learns the building blocks of
solving the problem that are applied as many times
as needed to solve the problem.
**Modular Reasoning: These approaches break**
the problem into smaller modules and use separate LMs to solve each module (Zhou et al., 2022;
Khot et al., 2023; Sprague et al., 2022; Zhou et al.,
2023; Dua et al., 2022; Wang et al., 2022; Schlag
et al., 2023). LM-based approaches to logical reasoning typically makes use of a single LM module;
for example, in Tafjord et al. (2021), a single LM
module iteratively and exhaustively infers all conclusions based on the facts and rules, and then the
goal statement is compared against the final set of
conclusions to confirm if it can be proved from
the theory. Since exhaustively deriving all conclusions is computationally expensive, Creswell et al.
(2023) consider a more scalable approach where
the conclusions that are derived are informed by the
goal; they iteratively apply two LLM modules one
selecting a subset of the facts and rules informed
by the goal and the other making new inferences
based on the selected facts and rules and adding
it back to the theory. In this paper, we compare
against the second approach.
**Natural Language Inference (NLI): Logical**
reasoning can also be understood as identifying
whether a logical entailment relation holds between two propositions (premise and hypothesis;
the premise is the theory and the hypothesis is the
statement to be proved). In this sense, NLI models
are also relevant, although inferences under NLI
typically adopt a more relaxed notion of entailment
rather than purely logical (Dagan et al., 2013; Bowman et al., 2015; Williams et al., 2018).
**3** **LAMBADA: Language Model**
**Augmented Backward Chaining**
We focus on performing automated reasoning over
_facts, i.e., natural language assertions such as_
“Nice people are red”, that are coherent but
not necessarily grounded in reality. A rule is a natural language statement that is either of the form,
or can be rewritten in the form, “If P then Q”;
e.g., “Rough, cold people are blue” can be
rewritten as “If a person is rough and cold,
then they are blue”. P is called the antecedent
and Q is called the consequent of the rule. A the_ory_ consists of facts = _f1, f2, . . ., fn_ and
_C_ _F_ _{_ _}_
rules = _r1, r2, . . ., rm_ . We let represent a
_R_ _{_ _}_ _G_
_goal that we would like to prove or disprove based_
on the theory. An example theory with fictional
characters and rules is demonstrated in Figure 1.
Based on the theory, one should prove or disprove
the goal “Eric is nice”.
**3.1** **Backward Chaining**
Backward chaining (BC) is a strategy for reasoning
that starts from the goal and recursively breaks the
goal into sub-goals based on the rules that can be
applied to it, until the sub-goals can be proved or
disproved based on the facts or no more rules can
be applied to break down the sub-goal further.
Figure 1 shows an example of BC applied to
a theory to prove a goal. Initially, BC verifies if
the goal can be proved or disproved based on the
facts (this step is omitted from the figure). Since
none of the facts directly prove or disprove the goal,
BC next selects a rule that can be applied to break
down the goal into sub-goals. Whether or not a
rule applies to a goal is determined by an operation
called unification in logic; Rule6 has the same consequent as the goal so the operation can be applied,
but the other rules have different consequents and
it cannot be applied. Using Rule6, the goal can be
broken down into three sub-goals that should be
proved for the goal to be proved. BC then makes
recursive calls to prove each sub-goal. The algorithm continues until either a halting criterion is
reached (e.g., reaching a certain depth in search),
or a sub-goal can no longer be broken down (e.g.,
the left sub-tree under “Eric is rough”), or all
sub-goals are proved (e.g., the right sub-tree under
“Eric is rough”).
The outcome of BC for a goal is either PROVED,
DISPROVED, or UNKNOWN; e.g., its output for the
goal in Figure 1 is PROVED, for “Fred is not
green?” is DISPROVED (because it contradicts
Fact3), and for “Fred is round?” is UNKNOWN
(because the theory does not entail or contradict it).
6549
-----
**Algorithm 1 LAMBADA**
**Input: Theory C = (F, R), Goal G, Max-Depth**
D
1: factCheckResult = FactCheck(G, F)
2: if factCheckResult ̸= UNKNOWN then
3: **return factCheckResult**
4: if D == 0 then
5: **return UNKNOWN**
6: Rs = RuleSelection(G, R)
7: for r ∈ Rerank(Rs) do
8: **G = GoalDecomposition(r, G)**
9: **if ProveSubgoals(C, G, D) then**
10: **if SignAgreement(r, G) then**
11: **return PROVED**
12: **else**
13: **return DISPROVED**
14: return UNKNOWN
**3.2** **LM Modules in LAMBADA**
To enable applying BC for text-based reasoning,
we introduce four LM-based modules: Fact Check,
_Rule Selection, Goal Decomposition, and Sign_
_Agreement, each implemented by showing relevant_
in-context demonstrations to a pretrained LM (see
Appendix D.3 for details). We describe these modules and then proceed to the full algorithm.
**3.2.1** **Fact Check**
Given a set of facts F from the theory and a goal
_G, the Fact Check module verifies if there exists a_
fact f ∈F such that f entails G (in which case the
goal is proved) or f entails the negation of G (in
which case the goal is disproved). If no such fact
can be found, then the truth of G remains unknown.
We implement Fact Check with two submodules: the first sub-module selects a fact from
the set of facts that is most relevant to the goal, and
the second sub-module verifies if the goal can be
proved or disproved based on that fact.[1] Since the
first sub-module may fail to identify the best fact
on the first try, if the truth of the goal remained
unknown after one try, the selected fact can be removed and the sub-modules can be called again.
This process can be repeated multiple times. In our
experiments, we call the two sub-modules twice.
**3.2.2** **Rule Selection**
Given a set of rules R from the theory and a goal
_G, the Rule Selection module identifies the rules_
_r ∈R such that the consequent of r unifies with G._
These rules are then used for decomposing the goal
into sub-goals. If no such rule can be identified,
then the truth of G remains unknown.
As we did for Fact Check, we implement Rule Se_lection with two sub-modules: the first sub-module_
identifies the consequent of each rule (independent
of the goal), and the second sub-module takes the
rule consequents and the goal as input and identifies which one unifies with the goal. Note that
due to the recursive nature of BC, the Rule Selec_tion module may be invoked multiple times during_
the proof of a goal. Since identifying the consequent of each rule is independent of the goal, this
sub-module only needs to be called once.
1Note that we select only one fact because the goals
and sub-goals in the datasets we work with can be
proved/disproved using single facts; The two modules can
be adapted to selected multiple facts if this is not the case.
**3.2.3** **Goal Decomposition**
Given a rule r and a goal G such that the consequent
of r unifies with G, the Goal Decomposition module identifies the sub-goals that need to be proved
in order for G to be proved or disproved. The subgoals are identified based on the antecedent of r.
**3.2.4** **Sign Agreement**
In the case where we succeed in proving the antecedent of r, whether the goal is proved or disproved depends on whether the sign of the goal
agrees or disagrees with the sign of the consequent
of r. For instance, in Figure 1, for the goal “Eric
is nice.”, since the sign of the goal agrees with
the sign of the consequent of Rule6 and the antecedent of the rule is proved, we conclude that the
goal is proved. However, if Rule6 was “[...] is
not going to be a nice individual.”, then
the sign of the goal would disagree with the sign
of the consequent and so we would conclude that
the goal is disproved. This motivates the fourth
module, Sign Agreement, described below.
Given a rule r and a goal G, the Sign Agreement
module verifies if the sign of the consequent of r
agrees or disagrees with the sign of the goal or not.
**3.3** **The LAMBADA Algorithm**
Algorithm 1 provides a high-level description of
how the four LM modules described earlier can
be integrated with BC to enable text-based logical
reasoning (the function calls corresponding to LM
modules are color-coded).
LAMBADA can be understood as a depth-first
6550
-----
**Algorithm 2 ProveSubgoals**
**Input: Theory C = (F, R), Sub-Goals G, Max-**
Depth D
1: for G in G do
2: result = LAMBADA(C, G, D-1)
3: **if result ̸= PROVED then**
4: **return False # Assuming conjunction**
5: return True
search algorithm over the facts and the rules. It
takes as input a theory C = (F, R), a goal G, and
a depth D that defines a halting criterion for the
algorithm based on the maximum allowed depth
for the search. The search depth is a natural halting
criterion corresponding to the maximum number of
reasoning hops required for answering questions.
Initially, the algorithm uses the Fact Check module to check if G can be proved or disproved using
the facts. If this is the case, then the algorithm stops
and returns the result (PROVED or DISPROVED).
If G cannot be proved or disproved, then the
algorithm checks the depth D: if D = 0, then the
algorithm stops and returns UNKNOWN indicating
that G could not be proved or disproved. Otherwise,
the algorithm proceeds with applying rules.
The Rule Selection module is used to identify the
rules Rs from R whose consequent unifies with G.
Once the set Rs is identified, if LAMBADA can start
with the rules that have a higher chance of succeeding at (dis)proving the goal, it can save computations and be less error-prone. Therefore, we include
a Rerank function in LAMBADA. Based on the intuition that shorter rules are likely to have fewer
sub-goals (hence a higher chance of success), we
start the search from shorter rules and proceed to
longer rules if the shorter ones fail. We leave more
sophisticated ranking strategies as future work.
For each selected rule, the algorithm uses the
_Goal Decomposition module to decompose G into_
a set of sub-goals G that need to be proved and
checks whether those sub-goals can be proved by
making recursive calls to the algorithm (with reduced depth). If the sub-goals can be proved, then
the algorithm uses the Sign Agreement module
to check whether the sign of the rule consequent
agrees or disagrees with the sign of G. If it does,
then the algorithm returns PROVED and otherwise
DISPROVED. If there is no rule for which the subgoals can be proved, then UNKNOWN is returned.
During a proof, LAMBADA may be called multiple times with the same theory and goal; in Ap
pendix A we explain how cycles and redundant
computations can be avoided using a cache.
**4** **Experimental Setup**
We describe our baselines and datasets here, and
provide further implementation details in Appendix D. Unless stated otherwise, all experiments
are based on the PaLM 540B model (Chowdhery
et al., 2022).
**4.1** **Baselines**
We compare against the following two baselines.
**Chain-of-Thought (CoT) (Wei et al., 2022) is**
a popular neural approach based on demonstrating chains of inference to the LM within the
in-context prompt. In addition to the few-shot
demonstrations in <INPUT>/<LABEL> format in typical in-context learning settings, in CoT, an intermediate explanation for the label is also provided (<INPUT>/<EXPLANATION>/<LABEL>). In
our work, the explanation corresponds to the proof.
**Selection-Inference (SI) (Creswell et al., 2023)**
is a strong modular reasoning approach based on
forward chaining. SI contains two modules: (1) se_lection, which, guided by the goal, selects a subset_
of the facts and rules from which new conclusions
can be derived toward proving the goal, and (2)
_inference, which takes the selected facts and rules_
and derives a new conclusion. The two modules
are called iteratively, each time producing a single
conclusion that is added back to the theory before
the next iteration. The iterations continue until a
halting criterion is met (a fixed number of steps in
Creswell et al. 2023).
**4.2** **Datasets**
We experiment with challenging deductive logical
reasoning datasets outlined below.
**ProofWriter (Tafjord et al., 2021) is a com-**
monly used synthetic dataset for testing logical reasoning when facts and rules are expressed in naturalistic text. It contains two subsets: an open-world
assumption (OWA) subset and a closed-world assumption (CWA) subset. In this paper, we use the
OWA subset. Each example is a (theory, goal) pair
and the label is one of {PROVED, DISPROVED,
UNKNOWN} where UNKNOWN indicates that the
goal can neither be proved nor disproved. The
dataset has five parts, each part requiring 0, ≤ 1,
_≤_ 2, ≤ 3 and ≤ 5 hops of reasoning, respectively.
We report two sets of results on this dataset: (1)
6551
-----
1.0
0.8
Accuracy0.60.4
0.2
0.0
1.0
0.8
0.6
0.4
0.2
0.0
1.0
0.8
Accuracy0.60.4
0.2
0.0
0.8
0.6
0.4
0.2
0.0
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
0.98
0.97 0.9 0.87
0.82
0.77 0.72
0.590.61 0.580.56 0.540.51 0.50.46
Majority Class
CoT
SI
Lambada
(a) ProofWriter-PUD
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
0.99 1.0
0.98 0.92 0.93 0.91 0.87
0.77 0.8 0.78 0.81
0.69 0.65
0.59 0.54
Majority Class
CoT
SI
Lambada
(b) ProofWriter-PD
0.8
0.6
0.4
0.2
0.0
Depth-1 Depth-3 Depth-5
0.97 0.98 0.99 0.96
0.9
0.76
0.7
0.52
0.45
Majority Class
CoT
SI
Lambada
(c) PrOntoQA
CoT Lambada
CoT Lambada
0.86
0.6
0.94
0.28
(d) ParaRules
(e) ProofWr. (d5)
Figure 2: Prediction accuracy results on (a) ProofWriter-PUD (b) ProofWriter-PD, (c) PrOntoQA, and (d)
ParaRules datasets. (e) The proof accuracy of CoT and LAMBADA on ProofWriter (Depth-5) for a set of randomly sampled examples for which the models correctly predicted if the goal can be proved or disproved.
with examples labeled UNKNOWN removed (for
compatibility with previous work), and (2) with all
three labels. Note that intermediate proof chains
from ProofWriter are not used by our models in
making predictions. For both cases, due to the cost
of inference, we used the first 1000 examples in the
test set. Hereafter, we refer to these two subsets as
_ProofWriter-PD and ProofWriter-PUD._
**PrOntoQA (Saparov and He, 2023) is a syn-**
thetic dataset created to analyze the capacity of
LM-based approaches for logical reasoning. Compared to ProofWriter, PrOntoQA has lower natural
language diversity and less l fact/rule variations
(e.g., no conjunctions). However, the search traces
typically contain multiple paths with only one of
them leading to the proof, thus enabling testing the
proof planning of different models. This dataset
has multiple versions; we use the fictional charac_ters version, which is one of the hardest versions_
according to Saparov and He (2023). Similarly
to ProofWriter, each version of PrOntoQA is divided into different parts depending on the depth
of reasoning chains required (1, 3, and 5 hops).
**ParaRules (Tafjord et al., 2021) is a version**
of ProofWriter where the synthetically generated
sentences in the theory are rewritten by crowdworkers to increase diversity and naturalness of the text.
This lets us move beyond evaluating reasoning with
templatic expressions, which is a key limitation of
the other datasets. Each fact in ParaRules may be
a combination of several sub-facts (see Fig. 1 for
an example). The examples require proof depths
of up to 5 and the label can be PROVED, DIS
PROVED, or UNKNOWN. We found some minor
quality issues in ParaRules; we manually verified
and fixed the first 500 examples of the test set (see
Appendix D.2) and used this set for evaluation.
**5** **Results**
We now describe the results and compare LAM
BADA and the baselines in detail.
**5.1** **Label Prediction Accuracy**
The results are reported in Figure 2, (a)–(d).[2] LAM
BADA significantly outperforms the baselines, especially on ProofWriter-PUD which contains UN
KNOWN labels (44% relative improvement compared to CoT and 56% compared to SI on Depth-5),
the higher depths of PrOntoQA (37% relative improvement compared to CoT and 113% compared
to SI on Depth-5), and the ParaRules dataset (43%
relative improvement compared to CoT). These
results overall show the merit of LAMBADA for
logical reasoning. We highlight that the reasoning capacity of LAMBADA robustly generalizes to
more naturalistic expressions, as demonstrated by
the high accuracy on ParaRules, which is exactly
2Due to the low performance of SI on ProofWriter and
PrOntoQA and its high number of LM calls (see Figure 7), we
only compared LAMBADA against CoT for ParaRules.
6552
-----
**5.3** **Forward vs. Backward Chaining**
As previously explained, SI is based on forward
chaining and its selection module requires a combinatorial search to find the right subset of facts
and rules (see Appendix C), and the search space
becomes progressively larger in each iteration of
the algorithm as new inferences are added to the
theory. To verify whether the increase in the search
space makes forward chaining progressively harder,
we measured the success rate of the k-th inference
of SI for different values of k on Depth-5 of PrOntoQA (see Appendix B.3 for details). From the
results in Figure 3, we can see that the success
rate indeed decreases in the later inferences of the
model, where the size of the input theory is larger
and therefore a larger space needs to be searched to
find the right combination of facts and rules. Note
that none of the components in LAMBADA require
selecting a subset, hence no combinatorial search
is required (see Appendix C for more details).
SI also suffers from inferring redundant facts.
Figure 4 reports the number of unique inferences from SI for the examples in ProofWriterPD (Depth-5) where SI incorrectly predicted UN
KNOWN (i.e., examples where a proof exists but
SI failed to find it). The result shows that SI inferences contained no redundant facts only 29% of the
time; in 7% of the cases, all 5 inferred facts were
identical, and in another 10%, only two unique inferences were made. This shows that SI, and maybe
more generally forward-chaining approaches, suffer from redundant inference.
SI also over-predicts DISPROVED in the binary
case and UNKNOWN in the three-way classification
case (see Appendix B.4), performing even worse
than the majority class for Depth-5 of PrOntoQA
which has more PROVED labels than DISPROVED.
These results, together with Figure 2, show that
backward chaining (which is the backbone of reasoning in LAMBADA) is a better choice compared
to forward chaining (the backbone in SI).
0.53
1st 2nd 3rd 4th 5th
0.47
0.34
0.31 0.31
k-th inference
0.50
0.45
0.40
0.35
0.30
Figure 3: The success rate of the k-th inference of SI
on PrOntoQA (Depth-5) for different values of k. As
_k increases, the size of the input theory becomes larger_
and the success rate decreases.
the desired outcome of combining the strengths of
an LM and a symbolic reasoning algorithm.
The results in Figure 2(a) reveal a shortcoming
of the CoT approach in dealing with UNKNOWN
labels. That is, unlike the examples for which the
label is PROVED or DISPROVED, there is no natural
chain of thought for the examples whose labels are
UNKNOWN. Nevertheless, the performance of CoT
is competitive for the ProofWriter-PD dataset, and
the accuracy does not diminish substantially with
increasing depth. We investigate the reason for this
behaviour of CoT in the next section.
**5.2** **Proof Accuracy**
To understand the reason behind the high accuracy
of CoT on higher depths of ProofWriter-PD, we
randomly selected 50 examples from Depth-5 of
the dataset where CoT predicted the label correctly,
and manually verified if the proof chain is correct or
not. For comparison, we also manually verified the
proofs generated by LAMBADA following a similar
procedure. The results are reported in Figure 2(e).
While LAMBADA mostly produces correct
chains, CoT produces correct chains only for 28%
of the examples. We find that hallucination is the
main source of error (48% of the examples; see Appendix B.2 for other prominent failure modes). The
hallucinated facts and rules mostly resulted in shortcuts to the correct answer. This hints at the possibility of spurious correlations in ProofWriter-PD
that can be exploited by CoT (see Appendix B.2,
Figure 10 for examples). This result is consistent
with previous work showing that when LMs are
asked to solve logical reasoning end-to-end, they
rely on spurious correlations (Zhang et al., 2022b).
Note that for modular approaches like SI and LAM
BADA, the intermediate modules are impervious to
the spurious correlations between the input and the
label and do not suffer from this issue.
7% 1 unique inference
2 unique inferences
3 unique inferences
29% 4 unique inferences
5 unique inferences
20%
10%
7%
34%
29%
Figure 4: Number of unique inferences generated by
SI for Depth-5 of ProofWriter-PUD when selection and
inference modules are called five times.
6553
-----
0.8
0.6
1.0
0.8
0.6
0.4
0.2
0.0
0.25
0.20
0.15
0.10
0.05
0.00
0.4
0.2
0.0
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
Forward CoT
Backward CoT
(a)
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
Forward CoT
Backward CoT
(b)
(c)
Figure 5: Prediction accuracy results on (a) ProofWriter-PUD and (b) ProofWriter-PD with forward and backward
CoT. (c) compares the proof accuracy of forward and backward CoT on ProofWriter (Depth-5) for a set of randomly
sampled examples for which the models correctly predicted the proof label.
that future work can use the traces of our model to
finetune (smaller) language models (e.g., Zelikman
et al. 2022), or use the traces as training data in future language models to improve their performance
with CoT prompting.
Taking the label and proof accuracy results together, there is also a potential that backward CoT
models are more heavily relying on spurious correlations for the PD case where backward CoT
outperformed CoT, as backward CoT achieves a
similar label accuracy as forward CoT but with a
much lower proof accuracy.
**5.5** **Qualitative Analysis**
1.0
0.8
0.6
0.4
0.2
0.0
Fact Check
0.94 0.99 0.94 0.98
0.81 0.93 0.97
0.75
0.68 0.75
0.45
0.35 0.31 0.33 0.32 PaLM 8B
PaLM 62B
PaLM 540B
1 trial
Fact Check
2 trials
Rule
Selection
Goal
Decomposition
Sign
Agreement
Figure 6: ProofWriter (val) performance of modules
in LAMBADA in isolation, for different LM sizes.
**5.4** **Does Backward CoT Suffice?**
Our results may raise the question of whether it is
enough to directly incorporate the steps of backward chaining into CoT prompts, or if modularity
(as in LAMBADA) is also needed. To answer this
question, we experiment with a backward version
of CoT where the proofs are written in the backward direction from the goal to the premises. The
label accuracies are presented in Figure 5(a)–(b) for
ProofWriter-PUD and ProofWriter-PD, and their
proof accuracy on ProofWriter-PD (Depth-5) in
Figure 5(c). The label accuracy of forward and
backward CoT are comparable, but forward CoT
leads to better performance on PUD and backward
CoT leads to better performance on PD. For proof
accuracy, however, we see a clear difference between the two versions where backward CoT produces substantially lower quality proofs compared
to forward chaining. This result is consistent with
the observations of Gontier et al. (2020) for finetuned LMs.
The above results show that a modular formulation (as in LAMBADA) is key to successful logical
reasoning and simply providing CoT in the backward direction does not suffice. We note, however,
In Figure 1, we show the search trace created by
LAMBADA for an example from ParaRules, where
the answer was predicted correctly. From the figure,
one can see how backward chaining helps LAM
BADA effectively search and create the reasoning
chain and how the LM helps fact checking, rule
selection, goal decomposition, and sign agreement
checking. In Appendix B.1, we include an example
that has a much larger search trace.
**5.6** **Individual Module Analysis**
To understand which components in LAMBADA
are responsible for the failure cases, we computed
the individual accuracy of the four modules described in Section 3. For this purpose, we created
four datasets from the validation set of ProofWriter,
each measuring only the performance of one module in isolation (see Appendix D.1 for details).
Based on the results of the PaLM 540B model in
Figure 6, Rule Selection is the lowest performing
module followed by Goal Decomposition. It is possible that the Rule Selection module (partially) fails
for some examples but LAMBADA still arrives at
6554
-----
1.0
0.8
0.6
0.4
0.2
0.0
219.34
10[2]
10[1]
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
SI
Lambada 99.45
57.22
27.79 27.30
18.53
12.77
10.14
7.26
2.98
Dataset Depth
Figure 7: Comparing LAMBADA and SI w.r.t. the average number of inference calls they make per example
for different subsets of the ProofWriter-PUD dataset.
Original Test Set
Novel Token Test Set
Novel Template Test Set
Figure 8: The performance of LAMBADA on
ProofWriter-PUD for the original, novel token, and
novel template test sets.
PUD. LAMBADA requires much fewer calls compared to SI, especially at higher depths: for Depth1, LAMBADA requires 3.8x fewer calls whereas for
Depth-5 it requires 11.8x fewer calls.
the correct conclusion and proof (e.g., if in Figure 1
the third call to Rule Selection only returned Rule5).
For Fact Check, when we allow the model to only
select one fact, the accuracy is 0.94 but when we
allow the model to select two facts, the accuracy
is near perfect. The Sign Agreement module also
shows near-perfect accuracy.
**5.7** **The Role of Scale**
**5.9** **Lexical Robustness**
To analyze the lexical sensitivity of LAMBADA, we
modified the test set of ProofWriter-PUD by replacing various lexical items (names, adjectives, and
verbs) with novel tokens and the rule templates with
novel ones. We then compared the performance of
LAMBADA on the original and the modified test
sets using the same few-shot examples. The details of the modifications are in Appendix B.5. As
can be seen in Figure 8, the performance of LAM
BADA remains almost unchanged, demonstrating
robustness to lexical and templatic variations.
We repeat the experiment from Section 5.6 with
PaLM 62B and 8B to examine the effect of LM
scale on LAMBADA. According to the results in
Figure 6, when we use PaLM 62B, the performance
of the Goal Decomposition and Sign Agreement
modules remain comparable, but the performance
for the Fact Check and Rule Selection modules
drop substantially. Unlike the first two modules,
the second two rely on a one-to-many comparison
between the goal and each of the facts/rules which
may require a larger model capacity. Moreover,
we observe that in PaLM 8B, the accuracy for all
components drops significantly, in some cases becoming close to random prediction.
We argue that the extent to which the higherlevel reasoning algorithm breaks the problem into
sub-problems should be dependent on the scale
and power of the base LMs. If smaller LMs are
used, then one may need finer-grained problem decomposition (e.g., further decomposing the one-tomany comparisons in the selection module). And
as LMs become larger and stronger in the future,
one could rely on them to solve problems with a
coarser-grained decomposition of the problem.
**5.8** **Number of Inference Calls**
**6** **Conclusion and Future Directions**
We developed LAMBADA, an algorithm for deductive logical reasoning with natural language that
combines the capacity of LMs to handle naturalistic text input with the backward chaining algorithm for robust symbolic reasoning. We showed
that LAMBADA achieves significant improvements
over competitive approaches on challenging benchmarks, both in terms of label accuracy (predicting
if a statement can be proved or disproved based
on a theory) and proof accuracy. Importantly, this
improvement was also observed in a dataset that expresses the theory in more naturalistic expressions,
clearly illustrating the benefit of combining an LM
with reasoning modules. We also demonstrated the
query efficiency and lexical robustness of LAM
BADA. Although in this paper we only experiment
with formal reasoning problems and datasets, we
believe our key insight on the efficacy of backward,
goal-directed reasoning with LMs has broader implications and can be adapted to other NLP tasks
where multi-step inference is required.
Another advantage of LAMBADA is its efficiency
compared to other approaches that require multiple
LM inference calls per example such as SI. In Figure 7, we compare the average number of LM calls
per example, for different depths of ProofWriter
6555
-----
**Limitations**
We identify some limitations and risks with our
current work that can be addressed in future work.
- The current work is mainly applicable to logical entailment problems, where one needs to
solve a classification problem of whether a goal
can be proved, disproved, or neither proved nor
disproved based on a theory. Future work can
extend LAMBADA to non-classification cases,
e.g., where one needs to apply logical reasoning
to answer questions such as “What color is
Fiona?”.
- The current work assumes all the rules are given
as input and the rule set is small enough to be
included in the prompt. Future work can extend
LAMBADA to the cases where not all the rules
are provided as input and part of the knowledge
has to come from the LM itself, as well as the
case where not all the rules can be included in
the prompt due to the limitation in the prompt
length.
- The current work is limited to deductive reasoning with the modus ponens rule; future work
can expand the applicability of LAMBADA on
datasets with other types of rules such as proof
by contradiction, disjunction elimination, etc.
- The calls made to the LM modules in LAMBADA
are dependent on the value from the previous call.
That is, we need to wait for the results from one
call before we decide what the next call must be.
Since making batch calls to the LMs is typically
easier and faster, future work can find ways to
implement LAMBADA with batch LM calls.
- While we showed that LAMBADA is more efficient than SI in terms of the number of inference
calls it makes to the LM, it still requires many
more calls to the LM compared to approaches
such as CoT, hence increasing the required computation and cost.
**References**
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor
Lewkowycz, Vedant Misra, Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, and Behnam
[Neyshabur. 2022. Exploring length generalization](https://proceedings.neurips.cc/paper_files/paper/2022/file/fb7451e43f9c1c35b774bcfad7a5714b-Paper-Conference.pdf)
[in large language models. In Advances in Neural](https://proceedings.neurips.cc/paper_files/paper/2022/file/fb7451e43f9c1c35b774bcfad7a5714b-Paper-Conference.pdf)
_Information Processing Systems, volume 35, pages_
38546–38556. Curran Associates, Inc.
Gregor Betz, Christian Voigt, and Kyle Richardson.
2021. [Critical thinking for language models.](https://aclanthology.org/2021.iwcs-1.7) In
_Proceedings of the 14th International Conference_
_on Computational Semantics (IWCS), pages 63–75,_
Groningen, The Netherlands (online). Association
for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts,
[and Christopher D. Manning. 2015. A large anno-](https://doi.org/10.18653/v1/D15-1075)
[tated corpus for learning natural language inference.](https://doi.org/10.18653/v1/D15-1075)
In Proceedings of the 2015 Conference on Empiri_cal Methods in Natural Language Processing, pages_
632–642, Lisbon, Portugal. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry,
Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon
Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu,
Clemens Winter, Chris Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners. In](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
_Advances in Neural Information Processing Systems,_
volume 33, pages 1877–1901. Curran Associates,
Inc.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng
Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier
Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan,
Hyeontaek Lim, Barret Zoph, Alexander Spiridonov,
Ryan Sepassi, David Dohan, Shivani Agrawal, Mark
Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz,
Erica Moreira, Rewon Child, Oleksandr Polozov,
Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta,
Jason Wei, Kathy Meier-Hellstern, Douglas Eck,
Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
[PaLM: Scaling language modeling with pathways.](https://arxiv.org/abs/2204.02311)
_arXiv:2204.02311._
Peter Clark, Oyvind Tafjord, and Kyle Richardson.
2021. [Transformers as soft reasoners over lan-](https://dl.acm.org/doi/abs/10.5555/3491440.3491977)
[guage.](https://dl.acm.org/doi/abs/10.5555/3491440.3491977) In Proceedings of the Twenty-Ninth Inter_national Joint Conference on Artificial Intelligence,_
IJCAI’20.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse,
and John Schulman. 2021. [Training verifiers to](https://arxiv.org/abs/2110.14168)
[solve math word problems. arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
Antonia Creswell and Murray Shanahan. 2022.
[Faithful reasoning using large language models.](https://arxiv.org/abs/2208.14271)
_arXiv:2208.14271._
6556
-----
Antonia Creswell, Murray Shanahan, and Irina Higgins. 2023. [Selection-inference: Exploiting large](https://openreview.net/forum?id=3Pf3Wg6o-A4)
[language models for interpretable logical reasoning.](https://openreview.net/forum?id=3Pf3Wg6o-A4)
In The Eleventh International Conference on Learn_ing Representations._
Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures
_on Human Language Technologies, 6(4):1–220._
Bhavana Dalvi, Peter Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah Smith, Leighanna Pipatanangkura,
[and Peter Clark. 2021. Explaining answers with en-](https://doi.org/10.18653/v1/2021.emnlp-main.585)
[tailment trees. In Proceedings of the 2021 Confer-](https://doi.org/10.18653/v1/2021.emnlp-main.585)
_ence on Empirical Methods in Natural Language_
_Processing, pages 7358–7370, Online and Punta_
Cana, Dominican Republic. Association for Computational Linguistics.
Dheeru Dua, Shivanshu Gupta, Sameer Singh, and
[Matt Gardner. 2022. Successive prompting for de-](https://aclanthology.org/2022.emnlp-main.81)
[composing complex questions.](https://aclanthology.org/2022.emnlp-main.81) In Proceedings of
_the 2022 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 1251–1265, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
[Artur d’Avila Garcez and Luis C Lamb. 2020. Neu-](https://arxiv.org/abs/2012.05876)
[rosymbolic ai: the 3rd wave. arXiv:2012.05876.](https://arxiv.org/abs/2012.05876)
Nicolas Gontier, Koustuv Sinha, Siva Reddy, and Chris
[Pal. 2020. Measuring systematic generalization in](https://proceedings.neurips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-Paper.pdf)
[neural proof generation with transformers. In Ad-](https://proceedings.neurips.cc/paper_files/paper/2020/file/fc84ad56f9f547eb89c72b9bac209312-Paper.pdf)
_vances in Neural Information Processing Systems,_
volume 33, pages 22231–22242. Curran Associates,
Inc.
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting
Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al.
[2022. FOLIO: Natural language reasoning with first-](https://arxiv.org/abs/2209.00840)
[order logic. arXiv:2209.00840.](https://arxiv.org/abs/2209.00840)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. [Measuring mathematical](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper-round2.pdf)
[problem solving with the math dataset. In Proceed-](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper-round2.pdf)
_ings of the Neural Information Processing Systems_
_Track on Datasets and Benchmarks, volume 1. Cur-_
ran.
Carl Hewitt. 1969. Planner: A language for proving
theorems in robots. In Proceedings of the 1st Inter_national Joint Conference on Artificial Intelligence,_
IJCAI’69, page 295–301, San Francisco, CA, USA.
Morgan Kaufmann Publishers Inc.
[Jie Huang and Kevin Chen-Chuan Chang. 2022. To-](https://arxiv.org/abs/2212.10403)
[wards reasoning in large language models: A survey.](https://arxiv.org/abs/2212.10403)
_arXiv:2212.10403._
[Harsh Jhamtani and Peter Clark. 2020. Learning to ex-](https://doi.org/10.18653/v1/2020.emnlp-main.10)
[plain: Datasets and models for identifying valid rea-](https://doi.org/10.18653/v1/2020.emnlp-main.10)
[soning chains in multihop question-answering. In](https://doi.org/10.18653/v1/2020.emnlp-main.10)
_Proceedings of the 2020 Conference on Empirical_
_Methods in Natural Language Processing (EMNLP),_
pages 137–150, Online. Association for Computational Linguistics.
Nora Kassner, Benno Krojer, and Hinrich Schütze.
2020. [Are pretrained language models symbolic](https://doi.org/10.18653/v1/2020.conll-1.45)
[reasoners over knowledge?](https://doi.org/10.18653/v1/2020.conll-1.45) In Proceedings of
_the 24th Conference on Computational Natural Lan-_
_guage Learning, pages 552–564, Online. Associa-_
tion for Computational Linguistics.
Mehran Kazemi, Sid Mittal, and Deepak Ramachandran. 2023. [Understanding finetuning for fac-](https://arxiv.org/abs/2301.11293)
[tual knowledge extraction from language models.](https://arxiv.org/abs/2301.11293)
_arXiv:2301.11293._
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao
Fu, Kyle Richardson, Peter Clark, and Ashish Sab[harwal. 2023. Decomposed prompting: A modular](https://openreview.net/forum?id=_nGgzQjzaRy)
[approach for solving complex tasks. In The Eleventh](https://openreview.net/forum?id=_nGgzQjzaRy)
_International Conference on Learning Representa-_
_tions._
Bingbin Liu, Jordan T. Ash, Surbhi Goel, Akshay Kr[ishnamurthy, and Cyril Zhang. 2023. Transformers](https://openreview.net/forum?id=De4FYqjFueZ)
[learn shortcuts to automata. In The Eleventh Inter-](https://openreview.net/forum?id=De4FYqjFueZ)
_national Conference on Learning Representations._
Gary Marcus. 2020. [The next decade in AI:](https://arxiv.org/abs/2002.06177)
[four steps towards robust artificial intelligence.](https://arxiv.org/abs/2002.06177)
_arXiv:2002.06177._
[John McCarthy. 1959. Programs with common sense.](http://www-formal.stanford.edu/jmc/mcc59.html)
In Proceedings of the Teddington Conference on the
_Mechanization of Thought Processes, pages 75–91,_
London. Her Majesty’s Stationary Office.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, Charles Sutton, and Augustus Odena.
2022. [Show your work: Scratchpads for interme-](https://openreview.net/forum?id=HBlx2idbkbq)
[diate computation with language models. In Deep](https://openreview.net/forum?id=HBlx2idbkbq)
_Learning for Code Workshop._
David L Poole and Alan K Mackworth. 2010. Artificial
_Intelligence: foundations of computational agents._
Cambridge University Press.
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie
Millican, Jordan Hoffmann, Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George
van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang,
Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich
Elsen, Siddhant Jayakumar, Elena Buchatskaya,
David Budden, Esme Sutherland, Karen Simonyan,
Michela Paganini, Laurent Sifre, Lena Martens,
Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato,
Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste
6557
-----
Lespiau, Maria Tsimpoukelli, Nikolai Grigorev,
Doug Fritz, Thibault Sottiaux, Mantas Pajarskas,
Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi,
Vladimir Mikulik, Igor Babuschkin, Aidan Clark,
Diego de Las Casas, Aurelia Guy, Chris Jones,
James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac,
Ed Lockhart, Simon Osindero, Laura Rimell, Chris
Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray
Kavukcuoglu, and Geoffrey Irving. 2021. [Scal-](https://arxiv.org/abs/2112.11446)
[ing language models: Methods, analysis & insights](https://arxiv.org/abs/2112.11446)
[from training Gopher. arXiv:2112.11446.](https://arxiv.org/abs/2112.11446)
Mohammed Saeed, Naser Ahmadi, Preslav Nakov, and
[Paolo Papotti. 2021. RuleBERT: Teaching soft rules](https://doi.org/10.18653/v1/2021.emnlp-main.110)
[to pre-trained language models. In Proceedings of](https://doi.org/10.18653/v1/2021.emnlp-main.110)
_the 2021 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 1460–1476, Online_
and Punta Cana, Dominican Republic. Association
for Computational Linguistics.
[Abulhair Saparov and He He. 2023. Language models](https://openreview.net/forum?id=qFVVBzXxR2V)
[are greedy reasoners: A systematic formal analysis](https://openreview.net/forum?id=qFVVBzXxR2V)
[of chain-of-thought. In The Eleventh International](https://openreview.net/forum?id=qFVVBzXxR2V)
_Conference on Learning Representations._
Imanol Schlag, Sainbayar Sukhbaatar, Asli Celikyilmaz, Wen-tau Yih, Jason Weston, Jürgen Schmidhuber, and Xian Li. 2023. Large language model
programs. arXiv preprint arXiv:2305.05364.
Viktor Schlegel, Kamen Pavlov, and Ian Pratt[Hartmann. 2022. Can transformers reason in frag-](https://aclanthology.org/2022.emnlp-main.768)
[ments of natural language?](https://aclanthology.org/2022.emnlp-main.768) In Proceedings of the
_2022 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 11184–11199, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin
[Jiang, Ming Zhang, and Qun Liu. 2021. Generate &](https://doi.org/10.18653/v1/2021.findings-emnlp.195)
[rank: A multi-task framework for math word prob-](https://doi.org/10.18653/v1/2021.findings-emnlp.195)
[lems. In Findings of the Association for Computa-](https://doi.org/10.18653/v1/2021.findings-emnlp.195)
_tional Linguistics: EMNLP 2021, pages 2269–2279,_
Punta Cana, Dominican Republic. Association for
Computational Linguistics.
Zayne Sprague, Kaj Bostrom, Swarat Chaudhuri, and
Greg Durrett. 2022. [Natural language deduction](https://aclanthology.org/2022.emnlp-main.564)
[with incomplete information.](https://aclanthology.org/2022.emnlp-main.564) In Proceedings of
_the 2022 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 8230–8258, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V Le, Ed H Chi,
Denny Zhou, et al. 2022. Challenging big-bench
tasks and whether chain-of-thought can solve them.
_arXiv:2210.09261._
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021.
[ProofWriter: Generating implications, proofs, and](https://doi.org/10.18653/v1/2021.findings-acl.317)
[abductive statements over natural language. In Find-](https://doi.org/10.18653/v1/2021.findings-acl.317)
_ings of the Association for Computational Linguis-_
_tics: ACL-IJCNLP 2021, pages 3621–3634, Online._
Association for Computational Linguistics.
Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan,
[and Subbarao Kambhampati. 2022. Large language](https://openreview.net/forum?id=wUU-7XTL5XO)
[models still can’t plan (a benchmark for LLMs on](https://openreview.net/forum?id=wUU-7XTL5XO)
[planning and reasoning about change). In NeurIPS](https://openreview.net/forum?id=wUU-7XTL5XO)
_2022 Foundation Models for Decision Making Work-_
_shop._
Boshi Wang, Xiang Deng, and Huan Sun. 2022. [It-](https://aclanthology.org/2022.emnlp-main.174)
[eratively prompt pre-trained language models for](https://aclanthology.org/2022.emnlp-main.174)
[chain of thought. In Proceedings of the 2022 Con-](https://aclanthology.org/2022.emnlp-main.174)
_ference on Empirical Methods in Natural Language_
_Processing, pages 2714–2730, Abu Dhabi, United_
Arab Emirates. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc V. Le,
[and Denny Zhou. 2022. Chain-of-thought prompt-](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[ing elicits reasoning in large language models. In](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
_Advances in Neural Information Processing Systems,_
volume 35, pages 24824–24837. Curran Associates,
Inc.
Adina Williams, Nikita Nangia, and Samuel Bowman.
[2018. A broad-coverage challenge corpus for sen-](http://aclweb.org/anthology/N18-1101)
[tence understanding through inference. In the North](http://aclweb.org/anthology/N18-1101)
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
_Volume 1 (Long Papers), pages 1112–1122. Associ-_
ation for Computational Linguistics.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Good[man. 2022. STaR: Bootstrapping reasoning with rea-](https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-Conference.pdf)
[soning. In Advances in Neural Information Process-](https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-Conference.pdf)
_ing Systems, volume 35, pages 15476–15488. Cur-_
ran Associates, Inc.
Hanlin Zhang, Ziyang Li, Jiani Huang, Mayur Naik,
and Eric Xing. 2022a. [Improved logical reason-](https://openreview.net/forum?id=8lNy3QCaxHX)
[ing of language models via differentiable symbolic](https://openreview.net/forum?id=8lNy3QCaxHX)
[programming. In First Workshop on Pre-training:](https://openreview.net/forum?id=8lNy3QCaxHX)
_Perspectives, Pitfalls, and Paths Forward at ICML_
_2022._
Honghua Zhang, Liunian Harold Li, Tao Meng, KaiWei Chang, and Guy Van den Broeck. 2022b.
[On the paradox of learning to reason from data.](https://arxiv.org/abs/2205.11502)
_arXiv:2205.11502._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/forum?id=WZH7099tgfM)
[plex reasoning in large language models.](https://openreview.net/forum?id=WZH7099tgfM) In The
_Eleventh International Conference on Learning Rep-_
_resentations._
Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron
Courville, Behnam Neyshabur, and Hanie Sedghi.
[2022. Teaching algorithmic reasoning via in-context](https://arxiv.org/abs/2211.09066)
[learning. arXiv:2211.09066.](https://arxiv.org/abs/2211.09066)
6558
-----
Dave is not green.
Fact Check
Fail
**Facts:** Rule6 Rule3
1. Anne is cold. Decompose Rule Selection Decompose
2. Anne is kind. Dave is nice. Dave is kind. Dave is blue. Dave is cold.
3. Charlie is nice. Fact Check Stop: Other Fact Check Already
4. Dave is white. Fail branch failed Fail Proved
5. Dave is young.
6. Fiona is blue. Rule Selection Rule Selection
7. Fiona is white. Rule2 Rule5
**Rules:** Decompose Decompose
1. If Dave is green and Dave is white Dave is green. Dave is cold.
then Dave is blue. Fact Check Fact Check
2.3. If something is green then it is nice. If something is blue and cold then it Decompose Rule6 Rule SelectionFail Rule3 Decompose Rule SelectionFail
4. is green. If something is white and young Dave is nice. Dave is kind. Dave is blue. Dave is cold. DecomposeRule8
then it is kind.
5. If something is cold then it is blue. Stop: Cycle Stop: Other Fact Check Stop: Other Dave is kind. Dave is young.
6. All nice, kind things are green. Detected branch failed Fail branch failed Fact Check Fact Check
7. All kind, cold things are white. Rule Selection Fail
8. If something is kind and young then Rule5 Rule Selection Already
it is cold. Decompose Rule4 Proved
**Goal:** Dave is cold.
Decompose
- Dave is not green. Fact Check
Dave is white. Dave is young.
Fail
Stop: Max Fact Check Fact Check
Depth Reached
Figure 9: The search trace of LAMBADA on an example from ProofWriter with depth=5 where the answer was
predicted correctly. The sign agreement module has been omitted for brevity. The modules color-coded with blue
represent the calls where the module retrieved the value from the cache instead of calling the LM.
**A** **Caching and Avoiding Loops for**
**LAMBADA**
Since LAMBADA is a recursive algorithm, during
the proof of an example Algorithm 1 may be called
with the same goal multiple times. For instance,
consider the goal “Eric is nice” for the theory in
Figure 1. Applying Rule6 breaks the goal into three
sub-goals. The first one is “Eric is big” which
is proved using the Fact Check module. For the
second sub-goal, Rule3 is used to compose it into
three sub-goals the first of which we have proved
before. Since we have already proved this sub-goal,
we can save a Fact Check call if we cache previous
results.
Note that the result of a call to LAMBADA can be
different depending on the input max depth. For example, the algorithm may return UNKNOWN when
called for the theory and goal in Figure 1 with max
depth 0, and return PROVED when called with max
depth 3. Specifically, if we can prove/disprove a
goal at depth d, we can conclude that it can be
proved/disproved at depths ≥ _d as well and we can_
get the value from the cache. Moreover, if the algorithm returns UNKNOWN for a goal at depth d, we
can conclude that it will also return UNKNOWN at
depths < d. Therefore, if the algorithm is called for
a theory and goal at depth d, we also check other
depths to see if we have the results for other depths
that apply to this case. Besides having a cache for
the entire algorithm that avoids redundant compu
tations when the truth of a goal has been previously
computed for a theory, each individual module can
also have its own cache as it is possible that the
module is called for the same theory and goal. We
show one such example in Figure 9 (to be discussed
in Section B).
LAMBADA may sometimes run into loops.
For example, to prove a (sub-)goal “Fiona is
round?”, after recursively identifying rules that
unify with it and decomposing it into sub-goals,
the algorithm may arrive at a point where it needs
to prove the “Fiona is round?” sub-goal, which
is equivalent to the initial goal. To avoid such loops,
for each path in the proof trace, we keep track of
the (sub-)goals that are to be proved and stop further exploring that branch of the search trace when
a loop is identified.
Note that in Algorithm 1, for clarity of the algorithm we did not include the caching and loop
avoidance operations. Also note that caching and
loop avoidance mainly help with reducing the number of inference calls.
**B** **Additional Results and Analyses**
In this section, we provide some more in-depth
qualitative and quantitative analysis of the results
from our model and the baselines.
**B.1** **Qualitative Analysis**
In Figure 9, we provide the search trace of LAM
BADA for an example in ProofWriter (Depth-5) for
6559
-----
**Facts: The bald eagle is green. The bald eagle is young. The bald eagle sees the dog. The bear likes the dog. The bear needs the cow. The**
cow needs the dog. The cow sees the dog. The dog is blue. The dog is green. The dog is young. The dog needs the bear. The dog needs the
cow.
**Rules:If someone sees the bald eagle and they are nice then the bald eagle needs the bear. If someone is nice and young then they need**
the dog. If someone likes the cow and the cow needs the dog then the cow is kind. If someone is young and blue then they like the bear. If
someone is blue and they like the bear then the bear likes the cow. If someone is green and they need the bear then they need the dog. If
someone sees the bear then they are nice. If someone is kind then they see the bear.
Hallucinated Fact **Question: The bear likes the cow.**
**CoT Inference: Since the facts state that the bear likes the cow so the statement the bear likes the cow is true.**
**Facts: [...] The bear sees the mouse. The lion chases the squirrel. The lion is blue. The mouse is big. [...]**
**Rules: If someone likes the mouse and they are blue then they are red. If someone is blue then they see the mouse. If the lion sees the**
squirrel and the lion is blue then the lion chases the mouse. If someone chases the bear then they see the squirrel. If someone sees the bear
then the bear likes the squirrel. If someone is young and they see the bear then they chase the mouse. If someone sees the mouse then they
chase the bear. If someone is blue and they chase the mouse then they are young.
**Question: The bear does not see the squirrel.**
**CoT Inference: Since the facts state that the bear sees the mouse and the rules state that if someone sees the mouse then they chase the**
Hallucinated Rule bear so we can add to the facts that the bear chases the mouse. Since the facts state that the bear chases the mouse and the rules state that
if someone chases the mouse then they see the squirrel so we can infer that the statement the bear does not see the squirrel is false.
**Facts: Anne is round. Anne is young. Charlie is green. Charlie is round. Charlie is young. Erin is big. Erin is green. Erin is round. Erin is**
young. Harry is nice. Harry is white.
**Rules: All young, white things are round. If something is nice and green then it is white. Cold things are round. Young, green things are nice.**
If something is big and green then it is nice. White, nice things are young. All green things are cold. White, round things are green. All cold,
round things are big.
**Question: Charlie is not nice.**
Conjunction Error **CoT Inference: Since the facts state that Charlie is green and the rules state that if something is green and nice then it is white so we can**
infer that Charlie is white. [...] so we can infer that Charlie is nice and the statement Charlie is not nice is false.
**Facts: Anne is blue. Anne is nice. Anne is quiet. Anne is rough. Anne is white. Bob is big. Charlie is rough. Erin is big. Erin is nice. Erin is**
young.
**Rules: Quiet, nice things are white. If something is rough then it is quiet. If Bob is white then Bob is young. If Anne is big and Anne is blue**
then Anne is rough. If Bob is rough and Bob is quiet then Bob is nice. Big things are rough. If Erin is nice and Erin is young then Erin is big.
**Question: Erin is not nice.**
**Inference: Since the facts state that Erin is nice [...] Since the facts state that Erin is rough and the rules state that if something is rough then**
Wrong derivation
it is quiet so we can infer that the statement Erin is not nice is false.
Figure 10: Examples of wrong CoT proof chains from four different categories. The erroneous part is marked in
red.
which LAMBADA correctly predicted that the goal
is disproved based on the theory. We deliberately
selected an example with a large search trace to
demonstrate the various aspects of LAMBADA.
LAMBADA starts by calling the Fact Check module on the goal which fails to prove or disprove
it. So Rule Selection is called which identifies two
rules that can be applied: Rule3 and Rule6. Since
Rule6 is shorter, the reranker ranks it higher; LAM
BADA starts with this rule and calls the Goal De_composition module which breaks the goal into_
two sub-goals: “Dave is nice.” and “Dave
is kind.”. Starting with the first sub-goal, Face
_Check fails on it so Rule Selection is called which_
selects Rule2 and Goal Decomposition decomposes
the sub-goal into “Dave is green.”.
Note that if the cycle checking was smart enough
to understand that this sub-goal is the negation
of the root goal, we could stop further searching
this branch. However, we currently only do cycle matching for exact matches so the algorithm
continues the search trace.
_Fact Check fails again so Rule Selection is called_
which selects Rule3 and Rule6 again, and since
Rule6 is shorter the algorithm continues with that
rule. Goal Decomposition breaks the sub-goal into
“Dave is nice.” and “Dave is kind.”. Considering the first sub-goal, the algorithm identifies
a cycle and stops the search. The second sub-goal
is also ignored as there is a conjunction between
the sub-goals.
The algorithm then continues with calling Goal
_Decomposition for Rule3 which breaks the sub-_
goal into “Dave is blue.” and “Dave is cold.”.
Starting with the first sub-goal, since Fact Check
fails the algorithm calls Rule Selection which selects Rule5 and Goal Decomposition breaks the
sub-goal into “Dave is cold.”. Face Check fails
on this sub-goal and since the maximum depth is
reached, the algorithm stops expanding this branch.
Moreover, the branch for “Dave is cold.” is no
longer pursued because there was a conjunction
between the sub-goals and one of them failed.
Moving on to the right branch in Figure 9, the
algorithm calls the Goal Decomposition module
for the goal and Rule3. Since we have previously
computed it, the sub-goals “Dave is blue.” and
“Dave is cold.” are returned from the cache.
_Fact Check is called on “Dave is blue.” and_
since it has been computed before, the result (fail
6560
-----
ure) is retrieved from the cache. The Rule Selection
module is called, where the result (Rule5) is again
retrieved from the cache. Goal Decomposition is
then called and the sub-goal “Dave is cold.” is
retrieved from the cache. Fact Check fails again
(retrieved from the cache), Rule Selection selects
Rule8 and Goal Decomposition produces two subgoals: “Dave is kind.” and “Dave is young.”.
For “Dave is kind.”, Fact Check fails, Rule Se_lection selects Rule4 and Goal Decomposition pro-_
duces two sub-goals: “Dave is white.” and
“Dave is young.”. For both of these sub-goals,
_Fact Check succeeds in proving them. The algo-_
rithm then also checks “Dave is young.” for the
right branch, but since this sub-goal has already
been proved, it just gets the result from the cache.
The algorithm then checks “Dave is cold.” for
the rightmost branch, but since this sub-goal has
already been proved, it just gets the result from the
cache.
The model also calls the Sign Agreement module
for rules on the right branch (not shown in the
Figure) and finds out that the sign of the rules and
the sub-goals agree for all cases, except for the very
first rule selected (Rule3) so it correctly concludes
that the goal is disproved.
**B.2** **Further Analysis of CoT**
In Figure 2(e), we observed that CoT mostly produces wrong proof chains even when the predicted
label is correct. Through manually analyzing 50
examples for which CoT predicted the correct label, we identified three dominant reasons for the
chains being wrong: 1- hallucinating rules or facts,
2- not understanding conjunction, and 3- making
invalid derivations. In Figure 10, we show failure
examples from each category. Notice that, e.g., in
the example with a hallucinated rule, CoT relies
on a rule “if someone chases the mouse then
they see the squirrel” which not only does
not appear in the provided set of rules, but cannot
even be derived with a combination of the rules.
The high label accuracy of CoT and its low proof
accuracy on ProofWriter-PD hint at the possibility
of spurious biases that can be exploited by CoT. For
example, we found that in 9.2% of the examples
which require 1+ reasoning hops, the consequent
of one of the rules in the theory is the same as
the goal to be proved, and for 98.9% of these examples the label is PROVED. In several of these
examples, CoT simply concluded that the goal can
be proved in 0 hops based on a hallucinated fact.
Moreover, the existence of the word “not” in the
goal is highly predictive of the label: goals having “not” are mostly DISPROVED and goals not
having “not” are mostly PROVED. The PUD case
solves the latter issue to a large extent as the label for a good portion of the examples with or
without “not” in UNKNOWN. The spurious correlations also explain the fluctuations in the CoT
performance across different depths, as the performance depends on how much those correlations
appear in the few-shot demonstrations.
We reiterate that for SI and LAMBADA, such
spurious correlations between the input and the
label cannot be exploited because the intermediate
modules are impervious to the correlations between
the input and the label.
**B.3** **Forward Chaining Becomes**
**Progressively More Difficult**
Algorithms such as SI that are based on forward
chaining require a combinatorial search of the theory to find the right subset of facts and rules in each
step of the reasoning. The search space becomes
progressively larger as the algorithm makes new
inferences and those inferences are added back to
the theory. For example, if the initial size of the
theory (i.e. the number of facts plus the number of
rules) is |C|, when making the k-th inference the
size of the theory is |C| + k − 1.
Conceptually, as the model produces more inferences, the distance to the goal (in terms of the
number of hops remaining between the goal and
the facts) should reduce and so the later inferences
should be more accurate. However, we hypothesize that the increase in the size of the theory (and
hence the size of the search space) may result in
lower success rates in the later inferences of the SI
model. To verify this experimentally, we further
analyzed the results of SI on depth-5 of PrOntoQA
as follows. We extracted the subset of examples
where the label was PROVED but SI failed to find
a proof (these are examples where at least one of
the inferences is not on the proof chain). Then, as
a proxy for measuring the responsibility of the k-th
inference of the model for the failure, we measured
the percentage of times the k-th inference was on
the proof chain (the proof chain for each test example is provided as part of the dataset). Notice
that it is possible that, e.g., the first inference is not
on the proof chain, but the rest of the inferences
6561
-----
Chain Of Thought
proved 84 23 169
disproved 14 82 180
True label
unknown 53 64 331
proved disproved unknown
Predicted label
(a) ProofWriter-PUD (Depth-5)
Selection Inference
proved 34 0 242
disproved 4 15 257
True label
unknown 19 15 413
proved disproved unknown
Predicted label
(b) ProofWriter-PUD (Depth-5)
LAMBADA
proved 163 2 111
disproved 0 149 127
True label
unknown 14 24 410
proved disproved unknown
Predicted label
(c) ProofWriter-PUD (Depth-5)
Chain Of Thought
proved 169 36
True label
disproved 67 110
proved disproved
Predicted label
(d) PrOntoQA (Depth-5)
Selection Inference
proved 6 209
True label
disproved 9 176
proved disproved
Predicted label
LAMBADA
proved 203 12
True label
disproved 2 183
proved disproved
Predicted label
(e) PrOntoQA (Depth-5)
(f) PrOntoQA (Depth-5)
Chain Of Thought
proved 112 3 10
disproved 8 95 22
True label
unknown 87 72 91
proved disproved unknown
Predicted label
(g) ParaRules
LAMBADA
proved 222 8 20
disproved 6 224 20
True label
unknown 40 44 416
proved disproved unknown
Predicted label
(h) ParaRules
Figure 11: Confusion matrices.
are. The results are reported in Figure 3 in the main
text. The results show that the chance of producing
inferences that are on the proof chain progressively
decreases in the later inferences of the model where
the size of the input theory (and hence the search
space) is larger.
**B.4** **Confusion Matrices**
prediction is PROVED than DISPROVED. We believe this is because DISPROVED cases typically
involve negation that makes the reasoning more
complex. However, there are several examples
for which the label is PROVED or DISPROVED,
whereas the model predicts UNKNOWN.
CoT and SI also show similar behaviour as LAM
BADA on ProofWriter-PUD but with a larger bias
toward prediction UNKNOWN. Moreover, SI shows
a large tendency toward predicting DISPROVED for
PrOntoQA.
**B.5** **Lexical Sensitivity Analysis**
We reported the overall model accuracies in the
main text. Here, we report finer-grained confusion
matrices that help better understand the biases of
the model. Figure 11 reports the confusion matrices
for our datasets. According to the results, we observe that whenever LAMBADA predicts PROVED
or DISPROVED, the prediction is mostly correct.
The accuracy is slightly more on cases where the
To analyze the lexical sensitivity of LAMBADA,
we created a new test for ProofWriter-PUD which
contains tokens that do not appear in demonstra
6562
-----
1.0
0.8
0.6
0.4
0.2
0.0
1.0
0.8
0.6
0.4
0.2
0.0
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
Original Test Set
Novel Token Test Set (v1)
Novel Token Test Set (v2)
(a)
Depth-0 Depth-1 Depth-2 Depth-3 Depth-5
Original Test Set
Novel Template Test Set (v1)
Novel Template Test Set (v2)
(b)
Figure 12: The performance of LAMBADA on ProofWriter-PUD for (a) the original and the novel token test sets,
(b) the original and the novel template test sets. The results show that LAMBADA is robust to lexical and template
modifications.
tion examples. Specifically, we manually created
a pool of entity names, animal names, adjectives,
and verbs (all of them previously not appearing in
the ProofWriter dataset) and then made the following modifications for each example: 1- identified
all entity names and mapped each entity name to
a randomly selected name from the pool, 2- identified all animals and mapped each of them to a
randomly selected animal from the pool, 3- identified all adjectives and mapped each of them to
a randomly selected adjective from the pool, and
4- identified all verbs and mapped each of them
(except the to be verbs) to a randomly selected verb
from the pool. As an example, dog may be mapped
to bison in one example and to camel in another.
Then, using the same few-shot examples as before,
we tested the performance of LAMBADA on this
modified test set and compared the results to the
original test set.
We also analyzed the sensitivity to the templates used for the rules. Toward this goal, we
identified the templates used for the rules in the
ProofWriter dataset and replaced each template
with another template (previously not appearing in
the ProofWriter dataset). For example, we changed
the template “[X] things are [Y]” to “It is a
truth that [X] things are always [Y] as
well”. Then, using the same few-shot examples as
before, we tested the performance of LAMBADA
on this modified test set and compared the results
to the original test set.
We repeated the aforementioned experiments
twice for each analysis each time using a different set of tokens/templates. The results in Figure 8
in the main text demonstrate the average accuracy
across two runs. The results for individual runs are
presented in Figure 12(a), (b) for the two analyses
respectively. According to the results, while we
observe some variations in the total accuracy (for
some depths the performance goes slightly down
and for some depths goes slightly up), the performance stays in the same ballpark, showing the robustness of LAMBADA. Moreover, comparing the
results on the modified test set with those of the
baselines reported in the main text, we observe that
even on this modified test set, LAMBADA performs
significantly better than the baselines tested on the
original test set.
**C** **Combinatorial Search Issue in**
**Forward Chaining**
Consider a simple fictional theory with the following facts:
[Anne is cold., Anne is nice and pink., Anne
is kind., Anne is green., Anne is big and
young., Anne is rough., Anne is round.]
the following rules:
[Cold, red people are white., Nice, blue
people are white., Kind, green people are
white., Cold, round people are white., Big,
green people are white.]
and the goal “Anne is white.”. An approach
based on forward chaining requires selecting a subset of the facts and rules from the theory from
which this goal can be proved. Specifically, it needs
to select “Anne is cold.”, “Anne is round.”,
and Cold, round people are white. from the
theory. Such a selection requires a combinatorial
search where different combinations of facts and
rules should be tested to see which one can lead to
proving the goal. An LM may fail to search this
space effectively in a single inference call.
SI uses an approximation to reduce the search
space: it first makes an inference call to an LM to
6563
-----
select one fact/rule, then it makes another inference
call to select the next fact/rule based on the first
one, and continues to make inference calls until
a halting criterion is met. This approximation reduces the search space from a combinatorial space
to a linear space. Since the facts/rules are not selected jointly, however, the chances of selecting the
wrong combinations of facts and rules increase because repairing a wrong first choice is not possible,
and this leads to low performance as evidenced in
our experimental results.
With a backward chaining approach such as
LAMBADA, on the other hand, no combinatorial
search (or approximations to it) is required: the
_Rule Selection module verifies each rule indepen-_
dently to see which one is applicable (i.e. a linear scan), the Goal Decomposition module breaks
goals into sub-goals based on each selected rule
independently of the other selected rules, and the
_Fact Check module verifies the existence of a fact_
that entails or contradicts the goal with a linear
search over the facts.
**D** **Implementation Details**
For our experiments, we used the PaLM 540B
model (Chowdhery et al., 2022) for all the models
(both LAMBADA and the baselines) served on a
4 × 4 TPU v4 architecture. The decoding temperature was set to zero. For testing CoT on PrOntoQA,
we used the same demonstration examples as the
original work but slightly changed the wording by
adding conjunctive words such as “Since” and “So”
to make the chains have a better flow. The reason
for this modification was that we found when working with PaLM, prompts that have a better flow
result in better predictions. This can be viewed
from Figure 13 where we compare the performance
for the original prompts vs. the prompts with the
conjunctive words added. It can be viewed that
while the latter slightly underperforms on Depth-1
(where the reasoning flow is not as important), it
substantially improves the results for higher depths
(especially Depth-5). For ProofWriter, we wrote
similar few-shot examples.
For SI, we used the same demonstration examples as in the original work for ProofWriter; for
PrOntoQA we wrote few-shot examples following
a similar pattern to those for ProofWriter. For each
dataset depth we used/wrote specific few-shot examples (e.g., when working with a subset of the
data that has examples requiring at most k hops
of reasoning, our CoT demonstrations also require
only k hops of reasoning), except for ProofWriter
Depth-5 where, following the original work, we
used it for testing length-generalization and only
included examples with chains up to 3 hops. For
running CoT on ProofWriter-PUD, we included
extra few-shot examples where the label is UN
KNOWN; the explanation for these examples is that
the goal cannot be proved or disproved with a combination of the facts and the rules. For running
SI on ProofWriter-PUD, after obtaining the inferences by running SI, we give the inferences and
the goal to our Fact Check module which decides
if the goal can be proved, disproved, or neither.
Since ProofWriter-PD and PrOntoQA are binary
datasets but LAMBADA makes three-way predictions (PROVED, DISPROVED, and UNKNOWN), to
test LAMBADA on these datasets, similar to SI we
combine the UNKNOWN and DISPROVED predictions into one class.
**D.1** **Datasets for Individual Module**
**Evaluation**
For creating datasets for measuring the performance of individual modules in LAMBADA, we
proceeded as follows. For Fact Check, we randomly selected 100 examples from the Depth-0 examples. We count a model prediction to be correct
if it produces the same label as the one specified
in the ProofWriter dataset. For Rule Selection, we
randomly selected 100 examples and manually enumerated every rule whose consequent unifies with
the goal. A model prediction is considered correct if it predicts all such rules correctly. For Goal
_Decomposition, we randomly selected 100 rules_
and goals such that the consequent of the rule unifies with the goal and then manually wrote the subgoals. A model prediction is considered correct if it
predicts all the sub-goals correctly. For Sign Agree_ment, we re-used the same examples from the Goal_
_Decomposition module and manually labeled them_
with respect to their sign agreement/disagreement.
**D.2** **Quality Issues in ParaRules**
We found the ParaRules dataset to has a high
amount of variation in the text, in the facts, and
in the rules thus making it a valuable benchmark
for evaluating text-based logical reasoning. We
also found a few quality issues in the ParaRules
dataset that were introduced when annotators converted facts and rules into natural language form.
Here, we describe some of the main issues that we
6564
-----
**Algorithm 3 FactCheck**
**Input:** Facts F, Goal G, Number of trials
_n_
1: for n times do do
2: f = FactSelection(F, G)
3: result = FactVerifier(f, G)
4: **if result ̸= UNKNOWN then**
5: **return result**
0.8
0.6
0.4
0.2
0.0
Depth-1 Depth-3 Depth-5
Original prompts
Prompts with better flow
6: _F = F −_ _f_
7: return UNKNOWN
**Algorithm 4 RuleSelection**
**Input: Rules R, Goal G**
1: I = RuleImplications(R)
2: selected = SelectRules(I, G)
3: return selected
Figure 13: CoT results on PrOntoQA with the original
prompts vs. the prompts with conjunctive words added
to make the sentences flow better.
found and fixed.
- Changing antecedents and consequents: We
found that in some cases where the rule was “X
and Y imply Z”, the natural language version of
the rule produced by annotators was written as if
“X implies Y and Z” or “X implies Y or Z”.
As an example, the rule “Cold, nice people
are red.” was written in natural language
form as “Some cold people can be nice
at times,and red at at other times.”.
For such cases, we modified the text to make the
antecedents and consequent match the original
rule.
- Introducing new antecedents: In some cases,
the annotator introduced new antecedents in the
rule. For example, for a rule where the antecedents were “green”, “red” and “rough”,
the annotator added another antecedent “naive”
(“If someone is green and naive ...”). For
such cases, we removed the extra antecedents.
- Turning general rules to specific ones: In
some cases, the natural language version of a
general rule was written for only a specific entity.
For example the rule “Rough, young, green
people are very round.” was written as “Tom
is a rough, young person to know ...”.
We removed the specific entities and made the
rule generally applicable.
- Introducing pronouns: For some of the facts,
we found that the annotator replaced the name of
the entity with a pronoun. As an example, “Dave
is ...” was annotated as “He is ...”. We
replaced the pronouns with the original entity
name in the theory.
The pseudo-code for the Fact Check module is
provided in Algorithm 3. For selecting a fact in
_Fact Check, our prompt looks like the following:_
Example 1
Fact1: <FACT1> Fact2: <FACT2> ...
Factn: <FACTn>
Question: <QUESTION>
Inference: For the question <QUESTION>
the most relevant fact is Facti (<FACTi>).
...
Example K
Fact1: <FACT> Fact2: <FACT> ... Factm:
<FACT>
Question: <QUESTION>
Inference:
For verifying if the goal/question can be derived from the selected fact, we use the following
prompt:
Example 1
Fact: <FACT>
Question: <QUESTION>
Inference: The fact <FACT> [X1] the
question <QUESTION> so [X2].
...
Example K
Fact: <FACT>
Question: <QUESTION>
Inference:
In the case where the goal can be proved from the
fact, we replace [X1] with “is equivalent to”
and [X2] with “so the answer is "yes"”. In
the case where the goal can be disproved from the
**D.3** **Prompts**
We provide an overview of the prompts we used
for each of the four components of our model for
the ProofWriter dataset.
6565
-----
fact, we replace [X1] with “is the negation of”
and [X2] with “so the answer is "no"”. And
in the case where the goal can neither be proved
nor disproved, we replace [X1] with “is neither
equivalent nor the negation of” and [X2]
with “so the question cannot be inferred
from the fact”.
The pseudo-code for the Rule Selection module
is provided in Algorithm 4. For finding the implication/consequent of the rules, we use the following
prompt:
Example 1
Rule1: <RULE1>, Rule2: <RULE2> ...
Rulen: <RULEn>
Inference: Rule1 implies [X1], _. . .,_
Rulen implies [Xn].
...
Example K
Rule1: <RULE1>, Rule2: <RULE2> ...
Rulem: <RULEm>
Inference:
[Xi]s depend on the consequent of each rule.
For rules such as “Rough, nice people are
red.” we write [Xi] as “(is; red)”, and for
rules such as “If the cat chases the dog
then the cat sees the dog.” we write [Xi]
as “(cat; chase; dog)”.
For rule selection based on the implications, we
use the following prompt:
Example 1
Rule1 implies <IMLP1>, Rule2 implies
<IMPL2>, ..., Rulen implies <IMPLn>
Question: <QUESTION>
Inference: The question is about
<IMPLq>: Rule1 <IMPL1> [X1] <IMPLq>, . . .,
<IMPLn> [Xn] <IMPLq>.
...
Example K
Rule1 implies <IMLP1>, Rule2 implies
<IMPL2>, ..., Rulem implies <IMPLm>
Question: <QUESTION>
Inference:
where each [X1] is either “is applicable to“
or “not applicable to“ depending on whether
the rule can be applied or not.
For goal decomposition, we use the following
prompt:
Example 1
Rule: <Rule>
Question: <QUESTION>
Inference: The question subject is
<SUBJq> and the rule premises are <PRM>*,
so the question breaks down to <SUBQ>*.
...
Example K
Rule: <RULE>
Question: <QUESTION>
Inference:
where <SUBJq> indicates the subject of the question, <PRM>* indicates the premises/antecedents in
the rule (the * indicates that there might be multiple
premises), and <SUBQ>* indicates the sub-goals.
Finally, for sign agreement, we use the following
prompt:
Example 1
Rule: <Rule>
Question: <QUESTION>
Inference: The rule implication <IMLPr>
is [Xr], the question <IMPLq> is [Xq],
so signs [Xd].
...
Example K
Rule: <RULE>
Question: <QUESTION>
Inference:
where <IMLPr> shows the implication of the
rule and <IMPLq> indicates the implication of the
question. [Xr] and [Xq] are either “positive“
or “negated“ depending on the sign of the implication. [Xd] is either “agree“ or “disagree“
depending on whether the signs agree or not.
6566
-----
**ACL 2023 Responsible NLP Checklist**
**A** **For every submission:**
□ A1. Did you describe the limitations of your work?
_Limitations section on p9_
□ A2. Did you discuss any potential risks of your work?
_Limitations section on p9_
□ A3. Do the abstract and introduction summarize the paper’s main claims?
_Abstract and Section 1 (Introduction)_
□ A4. Have you used AI writing assistants when working on this paper?
_Left blank._
**B** □ Did you use or create scientific artifacts?
_Section 3 creates a new artifact._
□ B1. Did you cite the creators of artifacts you used?
_We used three datasets referenced in Section 4 (Datasets)_
□ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
_The licenses can be found publicly on the corresponding websites: 1- ProofWriter https://allenai.org/data/proofwriter,_
_2- PrOntoQA: https://github.com/asaparov/prontoqa_
□ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided
that it was specified? For the artifacts you create, do you specify intended use and whether that is
compatible with the original access conditions (in particular, derivatives of data accessed for research
purposes should not be used outside of research contexts)?
_The datasets were used in the way they were used in the original works._
□ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any
information that names or uniquely identifies individual people or offensive content, and the steps
taken to protect / anonymize it?
_Not applicable. Left blank._
□ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and
linguistic phenomena, demographic groups represented, etc.?
_Not applicable. Left blank._
□ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits,
etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the
number of examples in train / validation / test splits, as these provide necessary context for a reader
to understand experimental results. For example, small differences in accuracy on large test sets may
be significant, while on small test sets they may not be.
_Appendix (implementation details)_
**C** □ Did you run computational experiments?
_Section 4_
□ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
_Appendix (Implementation details)_
_[The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing](https://2023.aclweb.org/)_
_[assistance.](https://2023.aclweb.org/blog/ACL-2023-policy/)_
6567
-----
□ C2. Did you discuss the experimental setup, including hyperparameter search and best-found
hyperparameter values?
_Appendix (Implementation details)_
□ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary
statistics from sets of experiments), and is it transparent whether you are reporting the max, mean,
etc. or just a single run?
_We ran our experiments only once, but there is no randomness in the experiments so running them_
_multiple times gives the same result as running once._
□ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did
you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
_We used PaLM (see appendix - implementation details)_
□ Did you use human annotators (e.g., crowdworkers) or research with human participants?
_Left blank._
□ D1. Did you report the full text of instructions given to participants, including e.g., screenshots,
disclaimers of any risks to participants or annotators, etc.?
_No response._
□ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants’ demographic
(e.g., country of residence)?
_No response._
□ D3. Did you discuss whether and how consent was obtained from people whose data you’re
using/curating? For example, if you collected data via crowdsourcing, did your instructions to
crowdworkers explain how the data would be used?
_No response._
□ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
_No response._
□ D5. Did you report the basic demographic and geographic characteristics of the annotator population
that is the source of the data?
_No response._
6568
-----
| [
"Mehran, Kazemi",
"Deepak, Ramachandran",
"Najoung, Kim",
"Deepti, Bhatia",
"Xin, Xu",
"Anna, Rogers",
"Jordan, Boyd-Graber",
"Naoaki, Okazaki"
] | 2023-07-01T00:00:00 | ACL 2023 Long Papers | true | 62 | 1 | null | https://aclanthology.org/2023.acl-long.361 | https://arxiv.org/abs/2212.13894 | https://www.semanticscholar.org/paper/03fb95e6be583ca954c3d00812a9e9a40f118e51 |
Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning | In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning. However, existing literature has highlighted the sensitivity of this capability to the selection of few-shot demonstrations. Current understandings of the underlying mechanisms by which this capability arises from regular language model pretraining objectives remain disconnected from the real-world LLMs. This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models. On this premise, we propose an algorithm to select optimal demonstrations from a set of annotated data with a small LM, and then directly generalize the selected demonstrations to larger LMs. We demonstrate significant improvement over baselines, averaged over eight GPT models on eight real-world text classification datasets. We also demonstrate the real-world usefulness of our algorithm on GSM8K, a math word problem dataset. Our empirical findings support our hypothesis that LLMs implicitly infer a latent variable containing task information. | The empirical findings support the hypothesis that LLMs implicitly infer a latent variable containing task information, and propose an algorithm to select optimal demonstrations from a set of annotated data with a small LM, and then directly generalize the selected demonstrations to larger LMs. | # Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning
**Xinyi Wang[1], Wanrong Zhu[1], Michael Saxon[1], Mark Steyvers[2], William Yang Wang[1]**
1Department of Computer Science, University of California, Santa Barbara
2Department of Cognitive Sciences, University of California, Irvine
```
{xinyi_wang, wanrongzhu, saxon}@ucsb.edu,
[email protected], [email protected]
```
**Abstract**
In recent years, pre-trained large language models (LLMs) have demonstrated
remarkable efficiency in achieving an inference-time few-shot learning capability
known as in-context learning. However, existing literature has highlighted the
sensitivity of this capability to the selection of few-shot demonstrations. Current
understandings of the underlying mechanisms by which this capability arises from
regular language model pretraining objectives remain disconnected from the realworld LLMs. This study aims to examine the in-context learning phenomenon
through a Bayesian lens, viewing real-world LLMs as latent variable models. On
this premise, we propose an algorithm to select optimal demonstrations from a set
of annotated data with a small LM, and then directly generalize the selected demonstrations to larger LMs. We demonstrate significant improvement over baselines,
averaged over eight GPT models on eight real-world text classification datasets.
We also demonstrate the real-world usefulness of our algorithm on GSM8K, a math
word problem dataset. Our empirical findings support our hypothesis that LLMs
implicitly infer a latent variable containing task information. [1]
**1** **Introduction**
Transformer-based [41] pre-trained large language models (LLMs) have demonstrated significant
advancements in a variety of natural language processing (NLP) tasks. As the size of these LLMs
increases, they gain “in-context learning” capabilities, whereby the models achieve state-of-the-art
(SOTA) or near-SOTA performance by conditioning on a small number of demonstration examples
at inference time, without any need for updating model parameters [4]. Below is an example input
sequence for semantic analysis with in-context learning:
```
Great movie. Positive.\n The worst movie ever. Negative.\n Can’t wait to
see the second movie!
```
The first two lines are two demonstrations, and the third line is a test input. We expect an LLM to
output the correct label Positive as a continuation.
In-context learning has been demonstrated to be an effective technique for a wide range of NLP tasks.
However, it is sensitive to the choice, format, and even the order of the demonstrations used [29, 20].
This makes achieving optimal performance with in-context learning a significant challenge, requiring
real human effort to adjust the format and selection of demonstration examples. Heuristic solutions,
such as selecting demonstrations based on the similarity between the demonstrations and test input
[1Code: https://github.com/WANGXinyiLinda/concept-based-demonstration-selection.](https://github.com/WANGXinyiLinda/concept-based-demonstration-selection)
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
-----
[19, 37] have been proposed, but a comprehensive understanding of why certain demonstrations
are effective while others are not remains elusive. Additionally, the mechanisms by which LLMs
acquire in-context learning capabilities through training on natural text under the standard language
model pre-training objective are not fully understood. Recent works on understanding in-context
learning provide valuable insights and theoretical results [5, 1, 42, 14, 12], but are limited in scope,
focusing on synthetic experiments to validate their hypotheses, while it remains unclear if these
results generalize to LLMs pre-trained on real-world natural language data. Xie et al. [50] introduced
a prominent result providing a latent topic (concept) variable interpretation for in-context learning.
They showed that the in-context learning predictor approaches the Bayes optimal predictor when the
number of demonstrations approaches infinity, under the assumption that both the pre-training data
distribution and task-specific data distribution are Hidden Markov Models (HMM). However, the
assumption that the data generation process is Hidden Markovian makes extrapolation of the result to
natural language questionable, and restricts empirical verification to synthetic data with toy models.
We are inspired by this prior work and introduce a more general and natural explanation built
on realistic assumptions, which gives rise to a practical demonstration selection algorithm. Our
explanation is inspired by the generation process of a topic model, i.e. a simple latent variable model:
_P_ (w1:T ) = _P_ (w1:T **_θ)P_** (θ)dθ
Θ _|_
Z
Where θ ∈ Θ represents a potentially high dimensional topic/concept variable, Θ is the space of the
topic/concept variable, and w1:T refers to the token sequence of a piece of text. Note that the topic
model here refers to the modern neural topic models [23, 22]. On the other hand, generative LLMs
model text data according to the general probabilistic decomposition:
_P_ (w1:T ) = _P_ (wi **_wi_** 1, ..., w1)
_|_ _−_
_i=1_
Y
While in practice, LLMs generate new tokens based on all previous tokens, we investigate whether a
simplified assumption similar to that of topic models can be made for LLMs:
_PM_ (wt+1:T **_w1:t) =_** _PM_ (wt+1:T **_θ)PM_** (θ **_w1:t)dθ_**
_|_ Θ _|_ _|_
Z
In this scenario, the generated tokens are assumed to be conditionally independent of previous
tokens, given the latent topic (concept) variable that acts like an approximate sufficient statistic for
the posterior information related to the prompt w1:t. For in-context learning, this concept variable
includes format and task information. By conditioning on an appropriate latent concept variable,
LLMs would generate the desired continuation with P (wt+1:T **_θ). As LLMs do not explicitly learn_**
a latent variable distribution like LDA-style topic models [3], we can instead utilize this formulation |
under an Empirical Bayesian formulation inspired by Lester et al. [17] to only approximate the
optimal latent variable value for a desired task, using a small LLM (with less than 1B parameters),
which is computationally efficient.
We empirically validate our explanation by selecting examples (w1:t in the equations) that are
most likely to infer the optimal latent variable value (those with the highest posterior probability
_P_ (θ|wt+1:T )). We then directly use them as demonstrations for in-context learning with other larger
LLMs (up to 175B parameters) and observed a significant performance improvement. The generalization of demonstrations between LLMs is likely a result of similar pre-training data distributions.
While our work is inspired by that of Xie et al. [50], our approach differs significantly in both
theoretical analysis and experimental settings. Our main contributions are as follows:
- We assume a general data generation process specified by a three-variable causal graph,
without constraints on the distribution function or the number of demonstrations.
- We prove under these realistic assumptions that the in-context learning predictor can
reach the Bayes optimal predictor with a finite number of demonstrations chosen using the
latent concept variable.
- We introduce an efficient, practical demonstration selection algorithm based on our
theoretical results, which can select demonstrations using a small LLM and then directly
generalize the demonstrations to other LLMs. The effectiveness of our algorithm is empirically validated using real-world LLMs on both text classification tasks and math word
problems.
-----
Our goal is to close the gap between theoretical understandings and real-world LLMs. To the best of
our knowledge, our proposed latent variable explanation of in-context learning is the first Bayesian
explanation that yields an effective algorithm in real-world scenarios.
**2** **Theoretical Analysis**
In in-context learning, the prompt w1:t is composed of several demonstrations and a test input. The
generated tokens wt+1:T represent the model’s prediction for the test input.
**2.1** **Notations and Problem Setting**
Suppose the objective of our task is to predict a discrete target variable Y ∈Y, given a token
sequence X ∈X, where X is the space of all possible token sequences. θ ∈ Θ is a potentially
high dimensional latent variable, where Θ is the high dimensional space of the variable. Unlike the
traditional topic model, θ is not assumed to be discrete, but continuously distributed over Θ. To
define the data generation process, we posit the existence of an underlying causal relation between
_X, Y, and θ. We examine two potential directions of this causal relation, namely X_ _Y_ **_θ and_**
_Y_ _X_ **_θ, which can be represented mathematically as the following structural equations:_**
_Y = f_ (X, θ, ϵ) _X = g(Y, θ, ϵ)_
Here ϵ ∈E is an independent noise variable, f : X × Θ × E →Y and g : Y × Θ × E →X are
two deterministic functions. Furthermore, we denote the joint data distribution by X, Y, θ ∼ _P_, and
assume that Y is sampled from a uniform distribution over Y. The distinction between these two
directions is crucial, as it allows us to utilize the direction in which the child variable (Y or X) is
independent of the other variables, given its parents.
We hypothesize that the causal direction depends on the nature of the task. For instance, in the task of
predicting the sentiment (Y ) of a movie review (X), it is reasonable to assume that the opinion about
the movie is formed before writing the review, thus making Y the cause of X, along with the task
concept of “writing a passage to express one’s opinion about the movie" (θ). Conversely, for the task
of classifying whether a product review (X) is helpful to other customers (Y ), it is the quality of the
review (X) that cause other customers to upvote it (Y ), along with the task concept of “rating the
helpfulness of this review" (θ). In the rest of the paper, we will focus on the X _Y_ **_θ direction_**
_and leave a detailed discussion of the other direction in the Appendix._
Suppose we are interested in a task (e.g. semantic analysis) denoted by d ∈T, where T is the space
of all possible tasks. We assume there is an injective function between T and Θ. i.e. for each task d,
there is a concept variable θ[d], such that each data (X _[d], Y_ _[d]) sampled from task d is generated by:_
_Y_ _[d]_ = f (X _[d], θ[d], ϵ)_
To perform in-context learning with an LLM (generically denoted by model label M ), we condition
on a fixed set of k demonstration examples (X1[d][, Y][ d]1 [)][,][ (][X]2[d][, Y][ d]2 [)][, ...,][ (][X]k[d][, Y]k[ d][)][ sampled from task][ d][.]
Following previous works [24, 26], as we are not using any instruction fine-tuned models, we
do not include a task description in the prompt, with the aim of focusing on the examination of
the demonstrations. To naturally project Y into the token space X, we define injective mappings
_τ_ _[d]_ : Y →X, which are typically defined by human understanding of the task d. e.g. for sentiment
analysis, τ _[d]_ map positive class to the token “positive" and negative class to the token “negative".
Additionally, a delimiter token w[d] is defined, typically an empty space or a new line token, to separate
the demonstrations when concatenated. We denote the LLM output probability of X, Y, and θ, with
the aforementioned preprocessing applied, by PM[d] [:]
_PM_ (τ _[d](Y )|X1[d][, τ][ d][(][Y][ d]1_ [)][,][ w][d][, ..., X]k[d][, τ][ d][(][Y][ d]k [)][,][ w][d][, X][) =][ P]M[ d] [(][Y][ |][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][)]_
**2.2** **Problem Analysis and Theoretical Results**
Suppose a set of observed data sampled from task d, denoted as D[d], is available, allowing for the
selection of the k most suitable demonstrations from it. For any incoming test example X, we have:
_PM[d]_ [(][Y][ |][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y]k[ d][, X][) =]_
_PM[d]_ [(][Y][ |][θ][, X][)][P][ d]M [(][θ][|][X]1[d][, Y]1[ d][, ..., X]k[d][, Y]k[ d][, X][)][d][θ] (1)
-----
Figure 1: An overview of our proposed two-phased algorithm. Demonstration selection and latent
concept learning share the same LLM as demonstration selection needs to reuse the learned concept
tokens, while at the in-context learning time, any other generative LLMs can be used. Here we only
illustrate the X _Y_ **_θ direction. The Y_** _X_ **_θ direction can be illustrated similarly by_**
exchanging X and Y in the above figure.
Here, we assume the sampling of the test example is independent of the sampling of the demonstrations, so Y is independent of the demonstrations given θ and X. We also assume that the pre-trained
data distribution PM[d] [is a suitable approximation of the assumed data distribution][ P] [:]
**Assumption 2.1. Assume that PM** (X) = P (X), and PM[d] [(][Y][ |][θ][, X][)][ ∝] _[P]_ [(][Y][ |][θ][, X][)][ for][ X][ ][ Y][ ][ θ][.]
Note that the assumption that a large language model captures the true distribution of language is
fairly common in the literature studying LLMs [50, 34, 47]. With this assumption, we establish:
**Proposition 2.2. If task d follows the X** _Y_ **_θ direction, then arg maxy_** _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][ is]
_∈Y_
_the Bayes optimal classifier._
In this case, only when PM[d] [(][θ][|][X]1[d][, Y][ d]1 [, ..., X] k[d][, Y][ d]k _[, X][)][ completely concentrate on][ θ][d][, can the in-]_
context learning classifier become the Bayes optimal classifier [11]:
**Theorem 2.3. If task d follows the X** _Y_ **_θ direction, then the in-context learning classifier_**
arg max _PM[d]_ [(][Y][ =][ y][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][)]_
_y_
_∈Y_
_always has a higher or equal probability of misclassification to the Bayes optimal classifier_
arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][. Equality only holds when]
_∈Y_
_∀x ∈X_ _, PM[d]_ [(][θ][d][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][ =][ x][) = 1][.]_
A similar argument can be made for the Y _X_ **_θ direction._** [2] Here, Equation (1) would become:
_PM[d]_ [(][X][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ ) =]_ _PM[d]_ [(][X][|][θ][, Y][ )][P]M[ d] [(][θ][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ )][d][θ]_ (2)
Θ
Z
Note that the left-hand side of Equation (1) and Equation (2) are similar to the direct and channel
method introduced by Min et al. [24]. However, our analysis differs from theirs in that we do not
treat (Y _X_ **_θ) as the universally superior channel direction for modeling in-context learning,_**
rather arguing that depending on the end task, the causal direction (X _Y_ **_θ) is sometimes better._**
This view is supported by our empirical results in Appendix B.
**3** **Method**
Here we demonstrate how the proposed theory can be practically applied to select optimal demonstration examples. Since latent variable θ encodes both the task and format information, the whole
distribution over Θ is too complex to model. Unlike traditional topic models, we will only focus on
estimating an optimal value θ[d] corresponding to task d.
First, we perform latent concept learning, wherein the task latent θ[d] is learned as a set of new token
embeddings using prompt tuning over the full demonstration candidate set. With this optimal task
latent, we then perform demonstration selection, where a smaller set of demonstrations is chosen to
maximize the likelihood of postfixing the latent concept tokens. We only need to use a small LLM to
do the above steps to obtain an optimal set of demonstrations that can be directly transferred to other
LLMs. Figure 1 is an overall illustration of our proposed method.
2The detailed argument of the Y X θ direction can be found in Appendix A.2.
-----
**Algorithm 1 Latent concept learning**
**Input: Dataset D = {(xi, yi, di)}i associated with a set of tasks S, LLM M**, number of concept
tokens per task c, learning rate α, and number of training steps N .
**Output: LLM M** _[′]_ with fine-tuned concept tokens.
Add c|S| new tokens to the vocabulary. i.e. The concept tokens _θ[ˆ][d]_ for each task in S. Randomly
initialize their embeddings Enew. Freeze all parameters in M except Enew;
**for step = 1 to N do**
Sample a random batch B in D and initialize gradient g ← 0;
**for each data point (x, y, d) in B do**
**end forg = g +** _[∂ℓ][(]∂E[X,Y]new[ ;ˆ]θ[d])_ ;
_Enew = Enew −_ _αg;_
**end for**
**3.1** **Latent Concept Learning**
We want to first find the optimal value of the latent concept variable θ[d] corresponding to a task d ∈T .
As arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][ is the Bayes optimal classifier according to Proposition 2.2,][ θ][d]
_∈Y_
should be able to minimize −EX,Y,d[log PM[d] [(][Y][ |][θ][d][, X][)]][ for the][ X][ ][ Y][ ][ θ][ direction. In practice,]
we try to align θ[d] to the token embedding space by adding new tokens to the vocabulary. After this
alignment, we hope to be able to use the learned new tokens of θ[d] as regular tokens.
More specifically, building upon the methodology proposed by Lester et al. [17], for each specific
task d, c new concept tokens (denoted as _θ[ˆ][d]) are added to the original vocabulary of LLM M to_
represent the corresponding task concept θ[d]. Subsequently, the embedding of these new tokens
_Enew(θ[ˆ][d]) is fine-tuned while freezing the remaining parameters of LLM M_ . The variable c is treated
as a hyperparameter. In practice, in order to condition on θ[d], the corresponding c concept tokens are
appended to the input X (or Y ) as shown in the example provided below, where c = 2:
```
<sentiment_token_1><sentiment_token_2> Can’t wait to see the second movie!
```
By giving the above input tokens, we ask the LLM to predict the correct label Positive for us. Note
that <sentiment_token_1> here is just a label assigned to the newly added concept token. It can
be anything as long as it does not overlap with the original vocabulary of LLM.
The fine-tuning objective would then be minimizing L(θ[ˆ][d]) = EX,Y [ℓ(X, Y ; θ[ˆ][d])], where
log PM[d] [(][Y][ |]θ[ˆ][d], X) if X _Y_ **_θ_**
_ℓ(X, Y ; θ[ˆ][d]) =_ _−_
( log PM[d] [(][X][|]θ[ˆ][d], Y ) if Y _X_ **_θ._**
_−_
Theoretically, if we can minimize the above loss function, a Bayes optimal classifier can be obtained,
and the concept tokens would be a reasonable delegate of the real latent concept variable:
**Proposition 3.1. When L(θ[ˆ][d]) is minimized, PM[d]** [(][Y][ |]θ[ˆ][d], X) = P (Y |θ[d], X) for X _Y_ **_θ. If the_**
_LLM M is invertible, then_ _θ[ˆ][d]_ = θ[d].[3]
We denote the LLM M with fine-tuned concept tokens by M _[′]. Since we add the concept tokens into_
the regular token vocabulary, the raw LLM output probability PM ′ (θ[ˆ][d] **_w1:t) (w1:t denote a given_**
_|_
prompt) would be in the token sequence space X instead of the concept space Θ. Since learning all
possible θ[d] _∈_ Θ is infeasible, we propose to approximate the concept space Θ by sampling a diverse
subset of tasks S ⊆T . Then the estimated conditional probability of θ[d] would be:
_PˆM[d]_ _[′]_ [(ˆ]θ[d]|w1:t) = _tP∈SM[d]_ _[P][′]_ [(ˆ]M[ t]θ[d][′]|[(ˆ]wθ[t]1:|wt)1:t)
To obtain the concept tokens for all tasks in P, we fine-tune all tasks together with the loss
_S_
_d∈S_ _[L][(][θ][d][)][. We summarize the proposed algorithm in Algorithm 1.]_
3
P More discussion can be found in Appendix A.3.
-----
**Algorithm 2 Demonstration selection**
**Input: dataset D[d]** for a task d. LLM with fine-tuned concept tokens M _[′]. The number of_
demonstrations k.
**Output: A set of selected demonstrations.**
**for each (X** _[d], Y_ _[d]) in D[d]_ **do**
Compute _P[ˆ]M[d]_ [(ˆ]θ[d]|X _[d], Y_ _[d]);_
**end for**
Select top k examples with the largest _P[ˆ]M[d]_ [(ˆ]θ[d]|X _[d], Y_ _[d]), denoted as (X1[d][, Y][ d]1_ [)][, ...,][ (][X]k[d][, Y]k[ d][)][;]
Note that the embedding matrix of a generative LLM is shared on both the input and output sides. So
while we only see the concept tokens on the input side at the training time, they can be viewed as
regular word tokens that can be generated on the output side.
**3.2** **Demonstration Selection**
According to Theorem 2.3, for a task d, to make the in-context learning classifier closer to the
Bayes optimal classifier, we need to select demonstrations (X1[d][, Y][ d]1 [)][, ...,][ (][X]k[d][, Y]k[ d][)][ that maximize]
_PM[d]_ [(][θ][d][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][)][ for all][ X][ ∈X]_ [. Then our goal then becomes selecting demonstra-]
tions that can best infer the task concept for all test inputs on average:
arg max EX [PM[d] [(][θ][d][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y]k[ d][, X][)]]_
_X1[d][,Y][ d]1_ _[,...,X]k[d][,Y][ d]k_
As test examples are sampled independent of the demonstrations, and PM (X) = P (X) according to
Assumption 2.1, we have
EX [PM[d] [(][θ][d][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][)] =][ P]M[ d]_ [(][θ][d][|][X]1[d][, Y]1[ d][, ..., X]k[d][, Y]k[ d][)]
If we assume each demonstration is also sampled independently, we have:
_k_
_i=1_ _[P][ d]M_ [(][θ][d][|][X]i[d][, Y]i[ d][)]
_PM[d]_ [(][θ][d][|][X]1[d][, Y]1[ d][, ..., X]k[d][, Y][ d]k [) =]
_PM[d]_ [(][θ][d][)][k][−][1]
Q
Assuming that θ has a uniform prior, then our goal becomes finding the top k demonstrations that
maximize _P[ˆ]M[d]_ _[′]_ [(ˆ]θ[d]|Xi[d][, Y][ d]i [)][. Note that the independence between demonstrations is a simplified]
assumption to reduce the combinatory search space of (X1[d][, Y]1[ d][)][, ...,][ (][X]k[d][, Y][ d]k [)][. In practice, selected]
demonstrations are likely correlated as some demonstrations may work well together but not necessarily work well by themselves. However, it would be too expensive to search the O(|D[d]|[k])
combinations over the candidate set D[d]. In practice, this simplification works reasonably well. We
leave this combinatory search problem to future research.
Also, as we are using an LLM to approximate the data distribution, the order of the demonstrations
might matter. We will show in the Experiment section that the order does not matter, so no reordering
of the selected demonstrations is needed. The full selection algorithm is shown in Algorithm 2.
**4** **Experiments**
**Datasets. We conduct experiments on eight datasets from five different types of NLP classification**
tasks: sentiment analysis, linguistic analysis, topic classification, emotion classification, and hate
speech detection. For sentiment analysis, we choose the Stanford Sentiment Treebank (SST2) dataset
[35] from the GLUE benchmark [43] and the financial phrase bank (FPB) dataset [21]. SST2 is
constructed based on movie reviews labeled “positive" or “negative", and FPB is based on financial
news labeled “positive", “negative", or “neutral". For linguistic analysis, we choose the Corpus
of Linguistic Acceptability (COLA) dataset [46] from the GLUE benchmark, based on sentences
collected from linguistic books, labeled with “acceptable" or “unacceptable". For topic classification,
we choose the DBpedia ontology classification dataset [52], based on DBpedia 2014 [16], labeled
with 14 different ontology classes. For emotion classification, we choose the dataset from Chatterjee
et al. [6] and Saravia et al. [33], both of which are collected from Twitter. Chatterjee et al. [6] (EmoC)
-----
Figure 2: Accuracy of 4-shot in-context learning using demonstrations selected by our method and
other baselines, averaged over eight datasets. Our demonstrations are selected using GPT2-large, and
the same set of demonstrations is then applied to all other LLMs.
predict emotion given a three-turn contextual dialogue, while Saravia et al. [33] predict emotion given
a Twitter message with clear emotion. For hate speech detection, we choose the online hate speech
detection dataset (ETHOS) [27], collected from online social media platforms. Here we detect two
types of hate speech: sexual orientation (ETHOS-SO) and religion (ETHOS-R). While in Section 2,
we assume that all tasks share the same label space Y, here we relax such assumption and allow a
different number of labels for different tasks. We use minimal formatting to process each example. A
detailed description of the datasets and our data processing procedure can be found in Appendix B.
**Experiment settings. To determine the causal direction for each task, we select the direction that can**
give higher accuracy when using random demonstrations[4]. We adopt the Y → _X ←_ **_θ direction for_**
sentiment analysis, topic classification, and emotion classification tasks, which is consistent with the
intuition that people usually have some sentiment, topic, or emotion in mind before writing a piece
of text. We adopt the X → _Y ←_ **_θ direction for the linguistic analysis and hate speech detection_**
type of tasks. While this is less intuitive, we can understand this as linguistic error and hate speech
detection are more of a post hoc task in contrast to the previous tasks.
Without specification, we use k = 4 number of demonstrations and c = 10 number of concept
tokens per dataset for our experiments, as the context length of GPT2 is 1024, and a larger number of
demonstrations may not be able to completely fit into it. We use GPT2-large to learn the concept
tokens and then compute the probability of each candidate demonstration example. We select our
demonstrations from a randomly selected 100 example subset of the train set as the candidate set
_D[d]. We use the same set of demonstrations selected by GPT2-large for all other LLMs. We test the_
performance of the selected demonstrations using at most 1000 examples randomly sampled from
the test set. Each experiment is repeated for five runs with different random seeds (the randomness
comes from the sampling of the candidate set and the sampling of the test set). We adopt a large
portion of the code from Min et al. [25], which is based on Huggingface [49].
**Baselines. We consider the following baselines:**
- Uniform: We uniformly select k demonstrations from D for each test example.
- Similar: According to Liu et al. [19], demonstrations that are semantically similar to the
test example would hare more performant. Following their method, we use a pre-trained
sentence Transformer [31] to calculate the cosine similarity between the demonstrations and
test examples. We choose the top k similar demonstrations from D for each test example.
**Main results.[5]** Figure 2 shows our main results averaged over all eight datasets, using the firstgeneration GPT2s and GPT3s, without any instruction fine-tuning [28] or Reinforcement Learning
from Human Feedback (RLHF) [36]. Our method significantly outperforms baselines on eight
different LLMs, with 12.5% relative improvement to the uniform selection baseline on average, which
shows the effectiveness of our method. The demonstrations selected by our method are exclusively
based on GPT2-large, while the same set of demonstrations can be generalized to all other GPTs.
**Results with non-GPT models. In Figure 3a, we test the demonstrations selected by our method**
using GPT2-large on more LLMs (GPT3 [4], GPT3-instruct [28, 36], GPT-J [44], OPT [51], and
LLaMA [38]) with similar sizes (6-7B), and show that the selected demonstrations improve in-context
learning performance of all of them. The fact that GPT3-curie obtains the largest performance
improvement is likely because similar pre-training data distributions help the generalization of the
4Detailed results see Figure 6 in Appendix B.
5The complete results with standard deviations in this section can be found in Appendix B.
-----
(b) Proposed method v.s. using
randomly selected tokens
(a) Proposed method v.s. randomly selected demonstrations
Figure 3: In-context learning accuracy averaged over all eight datasets.
Uniform Similar Ours w/ Llama 2 (7B) Ours w/ GPT2-XL (1.5B)
Prompt tuning - - 3.7 1.3
Llama 2 (7B) 11.4 13.1 19.3 15.9
Llama 2 (13B) 17.0 18.3 21.6 20.5
Llama 2 (70B) 50.2 53.5 54.3 52.9
ChatGPT (gpt-3.5-turbo) 76.5 78.1 81.2 80.4
Table 1: Prompt tuning and 4-shot in-context learning accuracy on a subset of GSM8K test set. Our
demonstrations are selected with either 7B Llama 2 or GPT2-XL
selected demonstrations. Different-size GPT2 models share the same pre-training corpus [30], while
GPT3s are pre-trained on a dataset expanded from the GPT2 pre-training corpus [4]. Thus the
pre-training distribution of GPT3-curie and GPT2-large can be assumed to be similar.
**Results on GSM8K. Since our primary goal is to connect the theory with real-world models and**
datasets, we did not try to include harder tasks in the main results in Figure 2. In practice, our
proposed method is most effective with hard tasks that even parameter-efficient fine-tuning with
smaller models cannot outperform in-context learning with the same or larger models. To showcase
the usefulness of our proposed algorithm, We added a new dataset, GSM8K [9], which is a math
word problem-solving dataset with chain-of-thoughts solutions. Table 1 shows the test accuracy of
the final numerical answer with greedy generation. We randomly select a test set of 200 examples
instead of using the full test set for computation efficiency. [6]
As shown in the first row of Table 1, prompt tuning with ten new tokens can only obtain less than
4% accuracy on the GSM8K test set. The last four rows show the in-context learning results with
different size Llama 2 models [39] and ChatGPT. Our proposed demonstration selection method (last
two columns) significantly outperformed the Uniform and Similar baseline. We also find that the
demonstrations selected with a larger model (7B) are more effective than those selected with a smaller
model (1.5B). The results show that our demonstration selection method is a good choice under a low
data setting, with a small computing budget and minimal inference latency. Our proposed method
can also potentially be combined with other prompting techniques [8] to boost performance further.
**Learned tokens v.s. Random tokens. To confirm the critical role of the latent concept variable in**
the proposed demonstration selection algorithm, we compare the performance of using the learned
concept tokens versus using randomly selected tokens from the original vocabulary in Figure 3b. The
demonstrations selected by random tokens only obtain the same performance as randomly selected
demonstrations, showing that the performance gain of our method comes from the learned concept
tokens containing the task and format information, not other elements of our algorithm.
_k ablation study. While we use k = 4 demonstrations for all experiments, we also test the_
effectiveness of our method using different k. As shown in Figure 4a, our method significantly
outperforms the random selection baseline with k = 2, 4, 8, and 16. To fit in large ks, we use
GPT3-ada with a longer context length (2048). Note that for real-world tasks, it is in general not true
that more demonstrations guarantee higher performance [7]. We can see that the uniform baseline
performance increases from k = 2 to k = 8, then drops a little at k = 16. Our method improves
the uniform baseline by around 5% absolute for all ks, while k = 4 improves the most (6.6%). Our
method appears to have a diminishing effect when k becomes larger, which is likely because the
effect of more demonstrations overwhelms the effect of demonstration choices.
6Note that we did not use a calculator to insert the correct result of each generated math equation during
generation for time efficiency, which resulted in slightly lower scores.
-----
_c ablation study._ While
we use c = 10 number of
concept tokens for all experiments, we also investigate the effect of different c
on our method. When c is
small (c = 5), the concept (a) k ablation study. (b) c ablation study.
tokens cannot effectively Figure 4: In-context learning accuracy of our method versus random
capture the task and format selection baseline averaged over all eight datasets with GPT3-ada.
information, thus cannot improve the performance. When c increases from 10 to 20, we observe a drop in the performance. It
is likely because the selectivity of the concept tokens decreases when c increases. The longer the
concept token sequence is, the more likely it will contain meaningless tokens that do not contribute to
demonstration selection.
**Effect of demonstrations’ order. We find that the demonstrations selected by our method are**
insensitive to their order in most cases.[7] An exception is the EmoC dataset, where our method has a
high variance. On the contrary, Lu et al. [20] found that the order of the demonstration matters, and
a good ordering cannot be transferred between different LLMs. We suspect that the ordering only
matters when the demonstration selection method is not robust. Since Lu et al. [20] randomly selects
one set of demonstrations for the whole test set, the variance in performance is high with different
demonstrations, thus ordering matters. And since such ordering is not transferable while our selected
demonstrations are highly transferable, we suspect the core task information is stored in the content
of the demonstrations, while the ordering mainly captures model-specific artifacts.
**Qualitative analysis. In Figure 5, we**
provide a t-SNE [40] projection of
the learned concept token embeddings.
The tokens corresponding to semantically similar tasks are close together.
Note that this result only aims to provide a straightforward illustration of
concept tokens. The effect of concept
tokens should be understood by the
previous quantitative results.[8]
We also list the top 4 selected demonstrations in Table 14 in Appendix B.
Compared to the examples with lower
scores, the selected examples for
GSM8K have more deductive reasoning (i.e. with the connecting words
‘so’, ‘then’, ‘thus’, etc.), instead of
listing parallel conditions. For SST2, Figure 5: t-SNE plot of the learned concept tokens for each
the selected examples are longer and task. Concept tokens that can be explained by similar tokens
more complex, sometimes including a are summarized in the graph.
‘but’. This can be understood as these
harder examples can represent the task more comprehensively. This conclusion also aligns with the
findings in [13] that hard examples in the pre-training data contribute to in-context learning the most.
The label distribution of the selected demonstrations is usually balanced in class, which reduces the
possible biases introduced by the demonstrations.
**5** **Related Work**
Heuristic solutions, such as selecting demonstrations based on the similarity between the demonstrations and test input [19, 37, 32] have been proposed. [20] propose to reorder the demonstration based
on the entropy of the predicted labels. In this paper, we use the similarity-based selection method
7Detailed results see Figure 9 in Appendix B.
8The list of similar tokens for these concept tokens can be found in Table 13 in Appendix B.
-----
as a baseline while do not include the label entropy-based reordering method as we show that the
ordering of the demonstrations does not matter for our method.
Previous research on the phenomenon of in-context learning in Transformers has identified a number
of pre-training data distributions that can lead to the emergence of this capability, including a Hidden
Markov Model distribution [50] and a skewed Zipfian distribution with high burstiness [5]. Other
studies have sought to understand the underlying mechanisms of in-context learning by making
connections with gradient descent [42, 10, 1], formalizing it as an algorithm learning problem [18],
or proposing a latent variable theory similar as ours [14, 12, 50]. While providing valuable insights
on how in-context learning works, these works are limited to synthetic datasets and toy Transformers,
while it remains unclear if these results generalize to LLMs pre-trained on real-world text data and
whether these results can help in-context learning performance. In contrast, we propose a Bayesian
explanation of in-context learning that can be verified with real-world LLMs on various NLP datasets.
Dai et al. [10] provide a practical algorithm based on the understanding that the Transformer has a
dual form of gradient descent. However, their empirical results are smaller in scale, with six datasets
and only one model (350M), and has less significant improvements (5.4% relative to baseline).
There are also works trying to understand in-context learning from an empirical perspective [2, 24].
Min et al. [26] found demonstrations’ ground truth labels do not matter for in-context learning, which
we find is not entirely accurate in Appendix B. On the other hand, chain-of-thoughts prompting
[48, 53, 45] find that providing step-by-step explanations improves in-context learning performance.
**6** **Conclusion**
In this work, we endeavor to comprehend large language models (LLMs) through a Bayesian lens and
posit them as implicit topic models that infer a latent conceptual variable from prompts. Motivated
by this understanding, we propose a two-step algorithm that first extracts latent conceptual tokens
from a small LLM and then selects demonstrations that have the greatest probability of predicting the
corresponding conceptual tokens. The selected demonstrations can then be directly generalized to
other LLMs. The efficacy of our algorithm across various text classification datasets and GPT models
validates our explanation of in-context learning.
**Acknowledgements**
This work was supported by the National Science Foundation award #2048122. The views expressed
are those of the author and do not reflect the official policy or position of the US government. We
thank Google for its generous gift to the University of California.
**References**
[1] E. Akyürek, D. Schuurmans, J. Andreas, T. Ma, and D. Zhou. What learning algorithm is
in-context learning? investigations with linear models. arXiv preprint arXiv:2211.15661, 2022.
[2] H. Bansal, K. Gopalakrishnan, S. Dingliwal, S. Bodapati, K. Kirchhoff, and D. Roth. Rethinking
the role of scale for in-context learning: An interpretability-based case study at 66 billion scale.
_arXiv preprint arXiv:2212.09095, 2022._
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. J. Mach. Learn. Res., 3
(null):993–1022, mar 2003. ISSN 1532-4435.
[4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural
_information processing systems, 33:1877–1901, 2020._
[5] S. C. Chan, A. Santoro, A. K. Lampinen, J. X. Wang, A. Singh, P. H. Richemond, J. McClelland,
and F. Hill. Data distributional properties drive emergent few-shot learning in transformers.
_arXiv preprint arXiv:2205.05055, 2022._
[6] A. Chatterjee, K. N. Narahari, M. Joshi, and P. Agrawal. Semeval-2019 task 3: Emocontext
contextual emotion detection in text. In Proceedings of the 13th International Workshop
_on Semantic Evaluation, pages 39–48, Minneapolis, Minnesota, USA, 2019. Association for_
-----
[Computational Linguistics. doi: 10.18653/v1/S19-2005. URL https://www.aclweb.org/](https://www.aclweb.org/anthology/S19-2005)
```
anthology/S19-2005.
```
[7] J. Chen, L. Chen, and T. Zhou. It takes one to tango but more make trouble? in-context training
with different number of demonstrations. arXiv preprint arXiv:2303.08119, 2023.
[8] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling
computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588,
2022.
[9] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek,
J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint
_arXiv:2110.14168, 2021._
[10] D. Dai, Y. Sun, L. Dong, Y. Hao, Z. Sui, and F. Wei. Why can gpt learn in-context? language
models secretly perform gradient descent as meta optimizers. arXiv preprint arXiv:2212.10559,
2022.
[11] L. Devroye, L. Györfi, and G. Lugosi. A probabilistic theory of pattern recognition. In Stochastic
_Modelling and Applied Probability, 1996._
[12] M. Hahn and N. Goyal. A theory of emergent in-context learning as implicit structure induction.
_arXiv preprint arXiv:2303.07971, 2023._
[13] X. Han, D. Simig, T. Mihaylov, Y. Tsvetkov, A. Celikyilmaz, and T. Wang. Understanding
in-context learning via supportive pretraining data. In Proceedings of the 61st Annual Meeting
_of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12660–12673,_
2023.
[14] H. Jiang. A latent space theory for emergent abilities in large language models. arXiv preprint
_arXiv:2304.09960, 2023._
[15] B. LeBrun, A. Sordoni, and T. J. O’Donnell. Evaluating distributional distortion in neural
language modeling. In International Conference on Learning Representations, 2022.
[16] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann,
M. Morsey, P. Van Kleef, S. Auer, et al. Dbpedia–a large-scale, multilingual knowledge base
extracted from wikipedia. Semantic web, 6(2):167–195, 2015.
[17] B. Lester, R. Al-Rfou, and N. Constant. The power of scale for parameter-efficient prompt
tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language
_Processing, pages 3045–3059, 2021._
[18] Y. Li, M. E. Ildiz, D. Papailiopoulos, and S. Oymak. Transformers as algorithms: Generalization
and implicit model selection in in-context learning. arXiv preprint arXiv:2301.07067, 2023.
[19] J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen. What makes good in-context
examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd
_Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages_
100–114, Dublin, Ireland and Online, May 2022. Association for Computational Linguistics. doi:
[10.18653/v1/2022.deelio-1.10. URL https://aclanthology.org/2022.deelio-1.10.](https://aclanthology.org/2022.deelio-1.10)
[20] Y. Lu, M. Bartolo, A. Moore, S. Riedel, and P. Stenetorp. Fantastically ordered prompts and
where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th
_Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
pages 8086–8098, 2022.
[21] P. Malo, A. Sinha, P. Korhonen, J. Wallenius, and P. Takala. Good debt or bad debt: Detecting
semantic orientations in economic texts. Journal of the Association for Information Science
_and Technology, 65, 2014._
[22] Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. In M. F.
Balcan and K. Q. Weinberger, editors, Proceedings of The 33rd International Conference on
_Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1727–1736,_
[New York, New York, USA, 20–22 Jun 2016. PMLR. URL https://proceedings.mlr.](https://proceedings.mlr.press/v48/miao16.html)
```
press/v48/miao16.html.
```
[23] Y. Miao, E. Grefenstette, and P. Blunsom. Discovering discrete latent topics with neural
variational inference. In International conference on machine learning, pages 2410–2419.
PMLR, 2017.
-----
[24] S. Min, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. Noisy channel language model prompting
for few-shot text classification. In Proceedings of the 60th Annual Meeting of the Association
_for Computational Linguistics (Volume 1: Long Papers), pages 5316–5330, 2022._
[25] S. Min, M. Lewis, L. Zettlemoyer, and H. Hajishirzi. MetaICL: Learning to learn in context.
In Proceedings of the 2022 Conference of the North American Chapter of the Association
_for Computational Linguistics: Human Language Technologies, pages 2791–2809, Seattle,_
United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.
[naacl-main.201. URL https://aclanthology.org/2022.naacl-main.201.](https://aclanthology.org/2022.naacl-main.201)
[26] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer.
Rethinking the role of demonstrations: What makes in-context learning work? In EMNLP,
2022.
[27] I. Mollas, Z. Chrysopoulou, S. Karlos, and G. Tsoumakas. Ethos: an online hate speech
detection dataset, 2020.
[28] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,
K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback.
_arXiv preprint arXiv:2203.02155, 2022._
[29] E. Perez, D. Kiela, and K. Cho. True few-shot learning with language models. In A. Beygelzimer,
Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing
_[Systems, 2021. URL https://openreview.net/forum?id=ShnM-rRh4T.](https://openreview.net/forum?id=ShnM-rRh4T)_
[30] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are
unsupervised multitask learners. 2019.
[31] N. Reimers and I. Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
[Association for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.](https://arxiv.org/abs/1908.10084)
```
10084.
```
[32] O. Rubin, J. Herzig, and J. Berant. Learning to retrieve prompts for in-context learning. arXiv
_preprint arXiv:2112.08633, 2021._
[33] E. Saravia, H.-C. T. Liu, Y.-H. Huang, J. Wu, and Y.-S. Chen. CARER: Contextualized
affect representations for emotion recognition. In Proceedings of the 2018 Conference on
_Empirical Methods in Natural Language Processing, pages 3687–3697, Brussels, Belgium,_
Oct.-Nov. 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1404. URL
```
https://www.aclweb.org/anthology/D18-1404.
```
[34] N. Saunshi, S. Malladi, and S. Arora. A mathematical exploration of why language models help
solve downstream tasks. In International Conference on Learning Representations, 2021. URL
```
https://openreview.net/forum?id=vVjIW3sEc1s.
```
[35] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts. Recursive
deep models for semantic compositionality over a sentiment treebank. In Proceedings of
_the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–_
1642, Seattle, Washington, USA, Oct. 2013. Association for Computational Linguistics. URL
```
https://aclanthology.org/D13-1170.
```
[36] N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F.
Christiano. Learning to summarize with human feedback. Advances in Neural Information
_Processing Systems, 33:3008–3021, 2020._
[37] H. Su, J. Kasai, C. H. Wu, W. Shi, T. Wang, J. Xin, R. Zhang, M. Ostendorf, L. Zettlemoyer,
N. A. Smith, et al. Selective annotation makes language models better few-shot learners. arXiv
_preprint arXiv:2209.01975, 2022._
[38] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal,
E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv
_preprint arXiv:2302.13971, 2023._
[39] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra,
P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. C. Ferrer, M. Chen, G. Cucurull, D. Esiobu,
J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini,
R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura,
M.-A. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov,
-----
P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten,
R. Silva, E. M. Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan,
P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic,
S. Edunov, and T. Scialom. Llama 2: Open foundation and fine-tuned chat models. 7 2023.
[URL http://arxiv.org/abs/2307.09288.](http://arxiv.org/abs/2307.09288)
[40] L. van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of Machine Learning
_[Research, 9(86):2579–2605, 2008. URL http://jmlr.org/papers/v9/vandermaaten08a.](http://jmlr.org/papers/v9/vandermaaten08a.html)_
```
html.
```
[41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and
I. Polosukhin. Attention is all you need. Advances in neural information processing systems,
30, 2017.
[42] J. von Oswald, E. Niklasson, E. Randazzo, J. Sacramento, A. Mordvintsev, A. Zhmoginov,
and M. Vladymyrov. Transformers learn in-context by gradient descent. _arXiv preprint_
_arXiv:2212.07677, 2022._
[43] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. Glue: A multi-task
benchmark and analysis platform for natural language understanding. EMNLP 2018, page 353,
2018.
[44] B. Wang and A. Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language
[Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.](https://github.com/kingoflolz/mesh-transformer-jax)
[45] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, and D. Zhou. Self-consistency improves chain
of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[46] A. Warstadt, A. Singh, and S. R. Bowman. Neural network acceptability judgments. arXiv
_preprint arXiv:1805.12471, 2018._
[47] C. Wei, S. M. Xie, and T. Ma. Why do pretrained language models help in downstream tasks?
an analysis of head and prompt tuning. Neural Information Processing Systems (NeurIPS),
2021.
[48] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought
prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[49] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf,
M. Funtowicz, et al. Huggingface’s transformers: State-of-the-art natural language processing.
_arXiv preprint arXiv:1910.03771, 2019._
[50] S. M. Xie, A. Raghunathan, P. Liang, and T. Ma. An explanation of in-context learning as
implicit bayesian inference. In International Conference on Learning Representations, 2022.
[URL https://openreview.net/forum?id=RdJVFCHjUMI.](https://openreview.net/forum?id=RdJVFCHjUMI)
[51] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V.
Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068,
2022.
[52] X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification.
_Advances in neural information processing systems, 28, 2015._
[53] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, O. Bousquet, Q. Le,
and E. Chi. Least-to-most prompting enables complex reasoning in large language models.
_arXiv preprint arXiv:2205.10625, 2022._
-----
**A** **Proofs**
**A.1** **Direct direction**
**Assumption A.1. (Assumption 2.1) Assume that PM** (X) = P (X), and PM[d] [(][Y][ |][θ][, X][)][ ∝] _[P]_ [(][Y][ |][θ][, X][)]
for X _Y_ **_θ._**
**Proposition A.2. (Proposition 2.2) If task d follows the X** _Y_ **_θ direction, arg maxy_** _PM[d]_ [(][Y][ =]
_∈Y_
_y|θ[d], X ) is the Bayes optimal classifier. _
_Proof. Since the data generation of the task d can be written as Y = f_ (X, θ[d], ϵ), we have
_P_ _[d](Y |X) = P_ (Y |θ[d], X).
And by Assumption A.1, we have
arg maxy∈Y _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][) = arg max]y∈Y _P_ (Y = y|θ[d], X).
Thus arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][ is the Bayes optimal classifier.]
_∈Y_
**Theorem A.3. (Theorem 2.3) If task d follows the X** _Y_ **_θ direction, then the in-context learning_**
_classifier_
arg max _PM[d]_ [(][Y][ =][ y][|][X]1[d] [, Y][ d]1 _[, ..., X] _ _k[d][, Y][ d]k_ _[, X][)]_
_y∈Y_
_always has a higher or equal probability of misclassification to the Bayes optimal classifier_
arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][. Equality only takes when]
_∈Y_
_∀x ∈X_ _, PM[d]_ [(][θ][d][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][ =][ x][) = 1][.]_
_Proof. Recall that in Equation (1), we have_
_PM[d]_ [(][Y][ |][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][) =]_
_PM[d]_ [(][Y][ |][θ][, X][)][P][ d]M [(][θ][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][)][d][θ][.]_
By Proposition A.2, arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][ is the Bayes optimal classifier. Let][ C][θ][(][X][) =]
_∈Y_
arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][, X][)][, then the risk is defined as the probability of misclassification]
_∈Y_
_R(Cθ) = P_ (Cθ(X) = Y ) = EXY [1Cθ (X)=Y ].
_̸_ _̸_
Denote the in-context learning classifier arg maxy _PM[d]_ [(][Y][ =][ y][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][)][ by][ C][k][(][X][)][.]_
_∈Y_
We then have
_R(Ck) = EXY [1Ck(X)=Y ] = EX_ [ (1 _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][))][1]Ck(X)=y[]][.]
_̸_ _−_
_yX∈Y_
Such risk is minimized if and only if Ck(X) = _Cθd_ (X), which only holds when
_PM[d]_ [(][θ][d][|][X]1[d][, Y][ d]1 _[, ..., X]k[d][, Y][ d]k_ _[, X][ =][ x][) = 1][ for all][ x][ ∈X]_ [.]
**A.2** **Channel direction**
**Assumption A.4. Assume that PM** (X) = P (X), and PM[d] [(][X][|][θ][, Y][ )][ ∝] _[P]_ [(][X][|][θ][, Y][ )][ for the][ Y][ ]
_X_ **_θ direction._**
**Proposition A.5. If task d follows the Y** _X_ **_θ causal direction, arg maxy_** _PM[d]_ [(][X][|][θ][d][, Y][ =][ y][)]
_∈Y_
_is the Bayes optimal classifier when the label assignment is balanced. _
_Proof. Since the data generation of the task d can be written as X = g(Y, θ[d], ϵ), we have_
_P_ _[d](X|Y ) = P_ (X|θ[d], Y )
-----
When the label is balanced, i.e. P _[d](Y ) =_
1
_|Y|_ [, we have]
_P_ _[d](Y_ _X) =_ _[P][ d][(][X][|][Y][ )][P][ d][(][Y][ )]_ _P_ _[d](X_ _Y )_
_|_ _P_ (X) _∝_ _|_
And by Assumption A.4, we have
arg maxy∈Y _PM[d]_ [(][X][|][θ][d][, Y][ =][ y][) = arg max]y∈Y _P_ (X|θ[d], Y = y).
Thus arg maxy _PM[d]_ [(][X][|][θ][d][, Y][ =][ y][) = arg max]y _[P][ d][(][Y][ =][ y][|][X][)][ is the Bayes optimal classifier.]_
_∈Y_ _∈Y_
**Theorem A.6. If task d follows the Y** _X_ **_θ direction, then the in-context learning classifier_**
arg max _PM[d]_ [(][X][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ =][ y][)]_
_y∈Y_
_always has a higher or equal probability of misclassification to the Bayes optimal classifier_
arg maxy _PM[d]_ [(][X][|][θ][d][, Y][ =][ y][)][. Equality only takes when]
_∈Y_
_∀y ∈Y, PM[d]_ [(][θ][d][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ =][ y][) = 1][.]_
_Proof. This theorem can be proved similarly as Theorem A.3. Recall that in Equation (2), we have_
_PM[d]_ [(][X][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ ) =]_
_PM[d]_ [(][X][|][θ][, Y][ )][P]M[ d] [(][θ][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ )][d][θ][.]_
By Proposition A.5, arg maxy _PM[d]_ [(][X][|][θ][d][, Y][ =][ y][)][ is the Bayes optimal classifier. Let][ C][θ][(][X][) =]
_∈Y_
arg maxy _PM[d]_ [(][X][|][θ][, Y][ =][ y][)][, then the risk is defined as the probability of misclassification]
_∈Y_
_R(Cθ) = P_ (Cθ(X) = Y ) = EXY [1Cθ (X)=Y ].
_̸_ _̸_
Denote the in-context learning classifier arg maxy _PM[d]_ [(][X][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ =][ y][)][ by][ C][k][(][X][)][.]_
_∈Y_
We then have
_R(Ck) = EXY [1Ck(X)=Y ] = EX_ [ (1 _PM[d]_ [(][X][|][θ][d][, Y][ =][ y][))][1]Ck(X)=y[]][.]
_̸_ _−_
_yX∈Y_
Such risk is minimized if and only if Ck(X) = _Cθd_ (X), which only holds when
_PM[d]_ [(][θ][d][|][Y][ d]1 _[, X]1[d][, ..., Y]k[ d][, X]k[d][, Y][ =][ y][) = 1][ for all][ y][ ∈Y][.]_
**A.3** **Method**
**Proposition A.7. (Proposition 3.1) When L(θ[ˆ][d]) is minimized, PM[d]** [(][Y][ |]θ[ˆ][d], X) = P (Y |θ[d], X) for
_X_ _Y_ **_θ, and PM[d]_** [(][X][|]θ[ˆ][d], Y ) = P (X|θ[d], Y ) for Y _X_ **_θ. If the LLM M is invertible, then_**
_θˆ[d]_ = θ[d].
_Proof. The proof of this proposition is straightforward._
Since
_L(θ[ˆ][d]) = H(P_ (Y |θ[d], X)) + KL(P (Y |θ[d], X)||PM[d] [(][Y][ |]θ[ˆ][d], X))
when L(θ[ˆ][d]) is minimized, we have PM[d] [(][Y][ |]θ[ˆ][d], X) = P (Y |θ[d], X) for X _Y_ **_θ, and_**
_PM[d]_ [(][X][|]θ[ˆ][d], Y ) = P (X|θ[d], Y ) for Y _X_ **_θ._**
If M is invertible, since the embedding matrix is invertible with or without new concept tokens,
_PM[d]_ [(][Y][ |]θ, X[ˆ] ) = PM[d] [(][Y][ |]θ[ˆ][′], X) implies that θ[ˆ] = θ[ˆ][′]. Thus θ is identifiable, which means _θ[ˆ][d]_ = θ[d].
-----
Table 2: Prompt template and label mapping for the datasets we use. Since almost all sentences from
ETHOS contain offensive content, we mask out the key offensive words in the examples below.
Dataset Prompt Label Mapping
sentence: well worth revisiting as many times
SST-2 negative/positive
positive
The company anticipates its turnover for the whole 2010 to
FPB surpass that of the previous year when it was EUR 67.1 million . negative/neutral/positive
positive
It is this hat that I know the boy who is wearing.
COLA acceptable/unacceptable
unacceptable
Album/Animal/Artist/
Athlete/Building/Company/
EducationalInstitution/Film/
MeanOfTransportation/
NaturalPlace/OfficeHolder/
Plant/Village/WrittenWork
The Nucet River is a tributary of the Chiojdeanca
DBPedia River in Romania.
NaturalPlace
fast i mean fastingis a way of skipping meals i mena
EmoC you move on too fast angry/happy/others/sad
others
i feel this place was tragic anger/fear/joy/love/
EmoS
sadness sadness/surprise
[Masked] should be removed from the face of the earth
ETHOS-SO false/true
true
I hate being a [Masked], wish I was a [Masked]
ETHOS-R and no [Masked] on earth existed false/true
false
**B** **Experiments**
**Dateset. In Table 2, we show how we process the text classification datasets into prompts. For each**
dataset, we take at most 16384 examples from the training set for training, and uniformly sample
at most 1000 examples from the test set to test the in-context learning performance. In Table 3, we
show the train size and test size we used for each dataset. We also list the set of diverse tasks trained
with each dataset, which are denoted by their name in Huggingface datasets.[9] The license for SST2,
ETHOS-SO and ETHOS-R is GNU General Public License v3. FPB is under a Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 Unported License. Note that these two datasets are
hate speech detection datasets for different kinds of hate speech and contain many offensive texts.
COLA is excerpted from the published works available on the website, and the copyright (where
applicable) remains with the original authors or publishers. DBpedia is under a Creative Commons
Attribution-ShareAlike License and the GNU Free Documentation License. EmoC and EmoS should
be used for educational and research purposes only.
**Experiment details. We run our experiments on A100, V100, and A6000 GPUs. We adopt a large**
portion of the code from the MetaICL repository [25][10]. The training takes around 20 to 40 hours on
a single GPU. We use a learning rate of 1e-4 and a batch size of 16, and train for 10k steps in total.
**Main results. In Table 4, we list the detailed results of our method and baselines with different LLMs**
on different datasets in Figure 2.
**Causal direction results. The detailed results with anti-causal direction (the opposite direction to**
what we described in Section 4 are in Table 7) are shown in Table 7, corresponding to Figure 6 in the
main text.
**Other LLMs results. The detailed results with other LLMs are shown in Table 6, corresponding to**
Figure 3a in the main text.
**Random token results. The detailed results with random tokens are shown in Table 5, corresponding**
to Figure 3b in the main text.
[9https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index)
[10https://github.com/facebookresearch/MetaICL](https://github.com/facebookresearch/MetaICL)
-----
datset d train size test size task set S
glue-cola/glue-mnli/glue-qqp/
SST2 (glue-sst2) 16384 1000
glue-mrpc/glue-qnli/glue-rte/glue-sst2/glue-wnli
glue-sst2/glue-mnli/math_qa/sciq/
social_i_qa/wino_grande/glue-qqp/
ag_news/financial_phrasebank/
poem_sentiment/anli/quarel/quartz/
medical_questions_pairs/paws/dbpedia_14
FPB (financial_phrasebank) 1811 453
glue-cola/glue-mnli/glue-qqp/glue-mrpc/
COLA (cola-sst2) 8551 1000
glue-qnli/glue-rte/glue-sst2/glue-wnli
glue-sst2/glue-mnli/math_qa/sciq/
social_i_qa/wino_grande/glue-qqp/
ag_news/financial_phrasebank/
poem_sentiment/anli/quarel/quartz/
medical_questions_pairs/paws/dbpedia_14
glue-sst2/amazon_polarity/
financial_phrasebank/poem_sentiment/
yelp_polarity/glue-cola/blimp/ag_news/
dbpedia_14/ethos/emo/emotion
glue-sst2/amazon_polarity/
financial_phrasebank/poem_sentiment/
yelp_polarity/glue-cola/blimp/ag_news/
dbpedia_14/ethos/emo/emotion
glue-sst2/amazon_polarity/
financial_phrasebank/poem_sentiment/
yelp_polarity/glue-cola/blimp/ag_news/
dbpedia_14/ethos/emo/emotion
glue-sst2/amazon_polarity/
financial_phrasebank/poem_sentiment/
yelp_polarity/glue-cola/blimp/ag_news/
dbpedia_14/ethos/emo/emotion
DBpedia (dbpedia_14) 16384 1000
EmoC (emo) 16384 1000
EmoS (emotion) 16000 1000
ETHOS-SO (ethos-sexual_orientation) 346 87
ETHOS-R (ethos-religion) 346 87
Table 3: Dataset details
Figure 6: Accuracy of randomly selected demonstrations averaged over seven different LLMs except
for GPT3-davinci, using the adopted causal direction and the anti-causal direction.
_k-ablation study results. The detailed results of k ablation study are shown in Table 10, correspond-_
ing to Figure 4a in the main text. In this experiment, we do not reorder the selected demonstrations
according to Equation (3), as we need to use GPT2-large for the reordering, and it cannot fit in all the
demonstrations. Instead, we order the selected demonstrations from the largest _P[ˆ]M[d]_ [(][θ][d][|][X] _[d][, Y][ d][)][ to]_
the smallest.
_c-ablation study results. The detailed results of c ablation study are shown in Table 11, corresponding_
to Figure 4b in the main text.
**Effect of using ground truth labels. According to [26], the ground truth label is not necessary**
for demonstrations to have a good in-context learning performance, which we found is not entirely
true for all the tasks. We compare our method with the randomly selected demonstration baseline
under three scenarios: (a) Original: demonstrations with the correct labels; (b) Random words:
using a random label projection map τ _[d]_ instead of a meaningful one. i.e., map each label to a fixed
random word. In this case, the mapping from the input tokens X to the labels Y is still preserved; (c)
**Random labels: assign a random label to each demonstration, with the original label projection map**
_τ_ _[d]. As shown in Figure 7, by using a random label projection map or randomly assigning the labels,_
the performance of the randomly selected demonstration baseline drops considerably. And randomize
the label assignment gives a larger performance drop than only using a random label projection map,
-----
Figure 7: In-context learning accuracy of our method versus random selection baseline, with (a)
ground truth labels (original), (b) random label mapping (random words), or random label assignments
(random label), averaged over all eight datasets. Numbers are obtained with GPT2-large.
Figure 8: Accuracy of in-context learning using our method versus the theoretical maximum accuracy
obtained using the learned concept tokens as prefixes. Numbers are obtained with GPT2-large.
which shows that the mapping between X and Y in the demonstrations matters. This indicates that
in-context learning infers the mapping between X and Y from the demonstrations instead of merely
invoking some learned function stored in the LLM parameters based on the appearance of X and
_Y . We also show that the demonstrations selected by our method represent the X −_ _Y mapping_
better, as under the Random words condition, our method performs better than the random selection
baseline, while our method does not improve the random selection baseline under the Random labels
condition. The detailed results with random words and random labels are shown in Table 8
**Optimal performance As stated in Theorem 2.3, the optimal performance of an in-context learning**
classifier is the Bayes optimal classifier arg maxy _PM[d]_ [(][Y][ =][ y][|][θ][d][, X][)][, which is approximated by]
_∈Y_
using the learned concept tokens as prefixes. Note that this approximated Bayes optimal classifier
cannot be transferred across different LLMs, as the learned concept tokens embeddings are aligned
with a specific LLM. The advantage of in-context learning with our method is that the demonstrations
can be transferred to any LLMs without training. Here we only compare the accuracy of in-context
learning with our method and the approximated Bayes optimal classifier using GPT2-large, as it is
the LLM that concept tokens are fine-tuned with. As shown in Figure 8, our method comes close
to the optimal accuracy on many datasets, while there are some datasets that our method is lagging.
This indicates that there are two ways to improve our method: the first is to improve the performance
of the optimal classifier, by introducing a better latent concept learning algorithm. The other way
is to reduce the performance gap between our method and the optimal classifier, by improving the
demonstration selection algorithm. The detailed results using the learned concept tokens as prefixes
are shown in Table 9.
**Reordering results. We reorder the selected demonstrations to maximize the posterior of the concept**
tokens:
arg max _PˆM[d]_ [(][θ][d][|][π][((][X]1[d][, Y]1[ d][)][, ...,][ (][X]k[d][, Y][ d]k [)))] (3)
_π∈Π_
Where π((X1[d][, Y][ d]1 [)][, ...,][ (][X]k[d][, Y]k[ d][))][ is a permutation of][ (][X]1[d][, Y]1[ d][)][, ...,][ (][X]k[d][, Y][ d]k [)][.][ Π][ is the set of all]
possible permutations of the k demonstrations. The detailed results with and without reordering are
shown in Table 12, corresponding to Figure 9.
**Similar tokens. We show the top ten similar tokens to some learned concept tokens in Table 13, as**
summarized in Figure 5 in the main text.
-----
Figure 9: In-context learning accuracy of our method versus random selection baseline, with and
without reordering. The red error bars represent the standard deviation across five runs. Numbers are
obtained with GPT2-large.
Table 4: Accuracy of selected demonstration. Our demonstrations are selected using GPT2-large,
and the same set of demonstrations is applied to all different LLMs. All LLMs are pre-trained only
with the language modeling objective, while the pre-training data size of GPT2s is much smaller than
GPT3s.
LLM Method SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
GPT2 Uniform 69.7 ± 1.8 52.9 ± 2.3 61.9 ± 1.4 48.0 ± 0.7 35.3 ± 1.7 26.4 ± 1.0 64.1 ± 4.8 71.0 ± 1.8 53.7
(124M) Similar 69.5 ± 0.6 55.9 ± 1.7 63.2 ± 1.2 44.7 ± 3.1 36.4 ± 2.0 26.6 ± 1.3 77.7 ± 2.7 80.0 ± 3.7 56.8
**Ours** 76.8 ± 2.9 64.5 ± 3.2 69.1 ± 0.2 53.5 ± 2.95 37.2 ± 11.1 30.6 ± 4.8 80.9 ± 1.9 76.8 ± 2.6 61.2
GPT2-m Uniform 70.8 ± 1.3 52.0 ± 1.7 57.8 ± 1.3 49.3 ± 2.0 34.2 ± 1.8 34.2 ± 1.8 76.3 ± 4.9 74.7 ± 2.2 56.2
(355M) Similar 75.0 ± 1.9 57.7 ± 2.0 57.5 ± 2.2 47.9 ± 6.0 37.2 ± 3.6 35.2 ± 1.8 86.9 ± 2.9 84.6 ± 4.3 60.3
**Ours** 81.2 ± 1.3 59.3 ± 4.3 69.0 ± 0.2 52.9 ± 2.3 40.4 ± 21.5 37.2 ± 2.4 83.7 ± 1.1 76.8 ± 1.1 62.6
GPT2-l Uniform 77.1 ± 1.2 51.3 ± 2.4 62.7 ± 0.8 54.4 ± 0.9 38.7 ± 2.1 34.5 ± 1.2 67.6 ± 4.3 72.9 ± 2.8 57.4
(774M) Similar 80.7 ± 1.6 54.8 ± 3.8 50.9 ± 1.4 51.1 ± 5.2 39.9 ± 2.6 35.1 ± 2.1 80.9 ± 2.8 84.4 ± 2.6 59.7
**Ours** 86.2 ± 1.4 60.4 ± 2.5 69.1 ± 0.2 56.5 ± 3.2 48.4 ± 17.0 38.6 ± 2.8 82.5 ± 1.5 76.6 ± 1.2 64.8
GPT2-xl Uniform 74.7 ± 0.9 53.2 ± 1.9 55.8 ± 1.6 53.0 ± 1.9 38.2 ± 1.5 38.2 ± 1.5 67.8 ± 6.4 72.6 ± 4.1 56.7
(1.5B) Similar 80.6 ± 1.3 53.0 ± 2.5 55.0 ± 2.5 51.6 ± 5.9 39.9 ± 2.0 32.9 ± 2.1 82.8 ± 2.2 83.9 ± 4.5 60
**Ours** 83.1 ± 3.6 62.0 ± 2.5 68.9 ± 0.2 58.6 ± 3.3 43.6 ± 16.4 43.6 ± 16.4 83.0 ± 1.3 77.9 ± 1.3 65.1
GPT3-a Uniform 76.9 ± 0.7 56.6 ± 1.1 53.1 ± 1.8 62.1 ± 1.4 38.6 ± 1.4 27.7 ± 1.3 65.5 ± 5.7 74.0 ± 3.0 56.8
(350M) Similar 78.7 ± 1.0 52.2 ± 2.7 53.1 ± 1.8 54.6 ± 1.7 42.4 ± 3.5 37.2 ± 1.1 84.1 ± 2.2 87.8 ± 3.5 61.3
**Ours** 85.4 ± 1.7 61.9 ± 10.5 58.2 ± 7.0 64.0 ± 4.4 43.0 ± 7.2 37.9 ± 2.3 84.4 ± 1.4 78.9 ± 0.9 64.2
GPT3-b Uniform 80.8 ± 0.6 55.2 ± 3.3 46.8 ± 2.0 66.5 ± 1.4 42.0 ± 0.7 27.0 ± 1.2 71.0 ± 4.6 72.6 ± 3.1 57.7
(1.3B) Similar 83.9 ± 1.3 56.2 ± 2.3 45.1 ± 1.8 59.8 ± 1.8 42.9 ± 3.5 38.1 ± 1.7 86.7 ± 3.0 86.4 ± 3.0 62.4
**Ours** 87.3 ± 2.0 64.3 ± 5.9 67.2 ± 0.9 70.2 ± 3.2 43.6 ± 13.0 38.9 ± 5.0 84.6 ± 0.9 78.9 ± 1.2 66.9
GPT3-c Uniform 84.2 ± 1.4 52.6 ± 1.8 59.1 ± 1.5 70.6 ± 0.8 44.3 ± 2.5 32.3 ± 1.9 77.5 ± 4.7 77.5 ± 0.6 62.3
(6.7B) Similar 85.7 ± 1.4 62.2 ± 0.9 58.0 ± 1.7 62.2 ± 2.0 47.4 ± 4.3 39.8 ± 1.7 89.2 ± 1.4 89.7 ± 1.9 66.8
**Ours** 88.8 ± 0.7 64.1 ± 5.7 69.0 ± 0.3 73.6 ± 2.9 50.3 ± 11.9 43.1 ± 4.6 86.2 ± 0.0 78.2 ± 0.0 69.2
GPT3-d Uniform 86.5 ± 0.9 59.2 ± 2.4 45.5 ± 2.8 73.6 ± 1.9 39.4 ± 0.7 40.6 ± 1.7 77.2 ± 2.6 76.8 ± 3.5 62.4
(175B) Similar 88.5 ± 0.8 55.4 ± 3.3 45.4 ± 1.5 67.2 ± 1.8 37.6 ± 1.6 39.8 ± 1.4 86.9 ± 2.4 89.0 ± 3.8 63.7
**Ours** 87.8 ± 3.4 62.7 ± 3.3 58.5 ± 8.2 75.5 ± 2.4 41.3 ± 3.6 42.7 ± 3.9 85.1 ± 0.0 79.3 ± 0.0 66.6
Avg Uniform 77.6 54.1 55.3 59.7 38.8 32.6 70.9 74.0 57.9
Similar 80.3 55.9 53.5 54.9 40.5 35.6 84.4 85.7 61.4
**Ours** 84.6 62.4 66.1 63.1 43.5 39.1 83.8 77.9 65.0
**Likelihood histogram. We also show histograms of the probability of each example predicting**
corresponding concept tokens in different datasets. We can see that the probability of prediction
concept tokens can well differentiate examples in a dataset.
**Selected demonstrations. Table 14 shows the selected top 4 demonstration by our proposed**
algorithm.
Task Selected demonstrations
-----
GSM8K `Question:` `It takes Bryan 5 minutes to walk from his house`
```
to the bus station. Then he rides the bus for 20 minutes.
After that, he walks 5 minutes from the bus station to his
job. It takes the same amount of time in the morning and
the evening. How many hours per year does Bryan spend
traveling to and from work, if he works every day? Bryan
spends 5+20+5 =«5+20+5=30»30 minutes traveling to work. He
does this twice a day, so he spends 30*2=«30*2=60»60 minutes
traveling per day. Thus, he spends 60/60=«60/60=1»1 hour
traveling to and from work every day. Thus, he spends 1
hour*365 days =«1*365=365»365 hours traveling per year. The
answer is: 365
```
GSM8K `Question:` `Cherry put up a delivery service.` `She charges`
```
$2.50 for a 3-5 kilograms cargo and $4 for a 6-8 kilograms
cargo. If she delivers four 5 kilograms cargo and two
8 kilograms cargo per day, how much money will she earn
in a week? Cherry earns $2.50 x 4= $«2.5*4=10»10 after
delivering four 5 kilograms of cargo per day. She earns $4
x 2= $«4*2=8»8 after delivering two 8 kilograms of cargo per
day. So, her total earning per day is $8+$10= $«8+10=18»18.
Therefore, she will earn $18 x 7= $«18*7=126»126 in a week.
The answer is: 126
```
GSM8K `Question:` `Bill is laying power cable for a new neighborhood.`
```
There are going to be 18 east-west streets that are 2 miles
long and 10 north-south streets that are four miles long.
It takes 5 miles of cable to electrify 1 mile of street. If
cable costs $2000/mile, what is the total cost of cable
for the neighborhood? First find the total distance
of the east-west streets: 18 streets * 2 miles/street
= «18*2=36»36 miles. Then find the total distance of
the north-south streets: 10 streets * 4 miles/street =
«10*4=40»40 miles. Then add the number of miles from each
type of street to find the total distance: 36 miles + 40
miles = «36+40=76»76 miles. Then multiply that number by
5 to find the number of miles of cable needed: 76 miles
street * 5 miles cable/mile street = «76*5=380»380 miles of
cable. Then multiply that number by the cost of one mile
of cable to find the total cost: 380 miles * $2000/mile =
$«380*2000=760000»760,000. The answer is: 760000
```
GSM8K `Question:` `John buys a gaming PC for $1200.` `He decides to`
```
replace the video card in it. He sells the old card for
$300 and buys a new one for $500. How much money did he
spend on his computer, counting the savings from selling
the old card? He spent an extra 500-300=$«500-300=200»200
on the video card. That means the total cost was
1200+200=$«1200+200=1400»1400. The answer is: 1400
```
SST2 `sentence:` `faced and spindly attempt at playing an ingenue`
```
makes her nomination as best actress even more of a an a
positive
```
SST2 `sentence:` `holofcener’s film offers just enough insight to`
```
keep it from being simpleminded, and positive
```
SST2 `sentence:` `i’m not a fan of the phrase ‘ life affirming’`
```
because it usually means ‘ schmaltzy,’ but real women have
curves truly is life affirming negative
```
-----
SST2 `sentence:` `the script is about as interesting as a recording`
```
of conversations at the wal-mart checkout line negative
```
DBpedia `OfficeHolder Lucie Papin (born September 7 1936) is a former`
```
Canadian politician who served in both the House of Commons
and Senate.
```
DBpedia `Village Kunkalamarru is very renowned village under`
```
Karamchedu Mandal which is located about 15 km from the
busy commercial town of Chirala in Prakasam district in the
state of Andhra Pradesh India.Its neighbouring villages are
Karamchedu Veerannapalem.
```
DBpedia `EducationalInstitution The Pontifical Catholic University`
```
of Puerto Rico at Mayagez is a university located in the
city of Mayagez Puerto Rico. It is part of the Pontifical
Catholic University of Puerto Rico. The university began
as an extension of the Catholic University of Puerto Rico
in the early 1960s. In 1982 it was awarded the official
title of Center and later it became the Mayagez Campus of
the Pontifical Catholic University of Puerto Rico at Mayagez
in 1996.
```
DBpedia `Artist Choi Dong-wook [citation needed]; born November 9`
```
1984) better known by his stage name Se7en is a South Korean
singer from YG Entertainment. He has also advanced into
Japan China and the United States.
```
Table 14: Selected demonstrations by our method.
**C** **Limitations and Future Work**
While the assumption that a large language model captures the true distribution of language is
fairly common in the literature studying LLMs [50, 34], this assumption is not entirely accurate in
practice. According to [15], LLMs systematically underestimate rare text sequences, which constitute
a significant portion of the long-tail distribution of language. Although this assumption is adequate to
achieve favorable empirical results, it is expected that more accurate language models will, in theory,
lead to improved outcomes.
The selection of the accompanying diverse tasks S is currently left to the user’s discretion. A better
approach to constructing such a task set is needed to gain a deeper understanding of latent concept
variables and to improve the latent concept learning algorithm.
Our algorithm currently only applies to classification tasks. More complex latent variables could
be designed to improve the in-context learning performance of more complex tasks like math word
questions and logical reasoning problems.
**D** **Broader Impact**
The utilization of language models (LLMs) for specific tasks is often hindered by the high cost
associated with training or fine-tuning them. However, the in-context learning paradigm offers a
cost-effective and convenient alternative for utilizing the power of pre-trained LLMs. Our work has
demonstrated a significant improvement in the performance of in-context learning through a relatively
low-cost and simple approach, thus making the use of LLMs more accessible for individuals with
limited resources.
However, it is important to consider the broader implications of the increasing use of LLMs. As
LLMs are not infallible and may make mistakes, it is crucial to explicitly warn users of the potential
for misleading output and to regulate the distribution of LLMs in order to prevent any negative
societal impact. Additionally, it is possible that LLMs could be intentionally misused, thus it is
important to consider the ethical implications of their use and to take appropriate measures to mitigate
-----
Table 5: Accuracy of selected demonstration. Our demonstrations are selected using GPT2-large,
and the same set of demonstrations is applied to all different LLMs. All LLMs are pre-trained only
with the language modeling objective, while the pre-training data size of GPT2s is much smaller than
GPT3s.
LLM Method SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
GPT2 Uniform 69.7 ± 1.8 52.9 ± 2.3 61.9 ± 1.4 48.0 ± 0.7 35.3 ± 1.7 26.4 ± 1.0 64.1 ± 4.8 71.0 ± 1.8 53.7
(124M) Random 69.8 ± 3.3 51.1 ± 1.7 69.0 ± 0.1 49.0 ± 4.5 33.7 ± 15.5 24.2 ± 7.6 66.4 ± 17.5 66.2 ± 16.2 53.7
**Ours** 76.8 ± 2.9 64.5 ± 3.2 69.1 ± 0.2 53.5 ± 2.95 37.2 ± 11.1 30.6 ± 4.8 80.9 ± 1.9 76.8 ± 2.6 61.2
GPT2-l Uniform 77.1 ± 1.2 51.3 ± 2.4 62.7 ± 0.8 54.4 ± 0.9 38.7 ± 2.1 34.5 ± 1.2 67.6 ± 4.3 72.9 ± 2.8 57.4
(774M) Random 81.9 ± 4.5 46.5 ± 4.7 64.9 ± 7.8 50.3 ± 4.3 42.5 ± 16.7 36.1 ± 6.5 67.6 ± 20.4 67.8 ± 15.0 57.2
**Ours** 86.2 ± 1.4 60.4 ± 2.5 69.1 ± 0.2 56.5 ± 3.2 48.4 ± 17.0 38.6 ± 2.8 82.5 ± 1.5 76.6 ± 1.2 64.8
Table 6: We test our method on other similar sizes (6-7B) LLMs.
LLM Method SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
GPT2-l Random 77.1 ± 1.2 51.3 ± 2.4 62.7 ± 0.8 54.4 ± 0.9 38.7 ± 2.1 34.5 ± 1.2 67.6 ± 4.3 72.9 ± 2.8 57.4
**Ours** 86.2 ± 1.4 60.4 ± 2.5 69.1 ± 0.2 56.5 ± 3.2 48.4 ± 17.0 38.6 ± 2.8 82.5 ± 1.5 76.6 ± 1.2 64.8
GPT3-c Random 84.2 ± 1.4 52.6 ± 1.8 59.1 ± 1.5 70.6 ± 0.8 44.3 ± 2.5 32.3 ± 1.9 77.5 ± 4.7 77.5 ± 0.6 62.3
**Ours** 88.8 ± 0.7 64.1 ± 5.7 69.0 ± 0.3 73.6 ± 2.9 50.3 ± 11.9 43.1 ± 4.6 86.2 ± 0.0 78.2 ± 0.0 69.2
GPT-J Random 78.5 ± 1.0 53.1 ± 1.7 58.3 ± 2.2 55.6 ± 1.2 38.5 ± 2.0 33.3 ± 1.5 76.6 ± 3.7 76.6 ± 1.4 58.8
**Ours** 87.8 ± 1.9 56.7 ± 4.3 69.1 ± 0.2 60.0 ± 3.6 32.5 ± 16.1 33.2 ± 2.8 85.3 ± 0.5 77.0 ± 0.0 62.7
OPT Random 72.4 ± 0.8 32.8 ± 0.3 34.8 ± 0.6 29.4 ± 1.4 67.1 ± 1.8 36.9 ± 0.6 86.2 ± 0.0 78.2 ± 0.0 54.7
**Ours** 74.2 ± 3.0 34.1 ± 6.1 35.7 ± 3.1 28.8 ± 2.1 76.7 ± 4.1 39.0 ± 3.4 86.2 ± 0.0 78.2 ± 0.0 56.6
LLaMA Random 57.7 ± 1.5 23.7 ± 1.3 30.8 ± 0.2 15.8 ± 0.8 4.4 ± 0.7 35.2 ± 0.7 66.2 ± 5.8 57.2 ± 5.1 36.4
**Ours** 60.5 ± 4.7 19.1 ± 1.9 30.8 ± 0.2 16.9 ± 1.3 4.3 ± 0.7 35.3 ± 0.6 77.2 ± 13.6 56.3 ± 10.8 37.6
any potential negative effects. We posit that these regulations and measures should be put in place at
the time of distributing LLMs to ensure the safe and responsible use of these models. Furthermore,
as we publicly release our code, we will also provide clear warnings and guidelines to users to ensure
that the potential risks associated with the use of our method are fully understood and addressed.
-----
Table 7: We test random selection baseline with anti-causal direction.
LLM SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R
GPT2 57.4 ± 1.9 56.6 ± 2.1 55.9 ± 1.7 11.3 ± 1.0 24.6 ± 2.4 22.1 ± 1.1 64.1 ± 4.8 58.6 ± 5.5
GPT2-m 56.7 ± 1.6 48.7 ± 2.1 55.3 ± 1.8 13.9 ± 1.2 22.4 ± 1.9 24.9 ± 2.3 44.8 ± 1.9 45.5 ± 3.5
GPT2-l 58.7 ± 0.7 33.7 ± 1.3 50.8 ± 1.6 13.6 ± 1.3 28.2 ± 3.6 26.2 ± 2.7 48.7 ± 3.7 53.6 ± 5.3
GPT2-xl 54.2 ± 0.5 46.8 ± 1.2 50.6 ± 1.1 12.6 ± 1.5 31.4 ± 2.8 25.9 ± 3.2 65.5 ± 4.9 61.8 ± 1.5
GPT3-a 55.8 ± 0.9 58.9 ± 2.1 51.6 ± 1.4 14.3 ± 0.8 54.2 ± 3.1 27.7 ± 1.3 49.2 ± 3.3 54.9 ± 6.4
GPT3-b 64.4 ± 1.6 58.9 ± 2.6 53.4 ± 1.1 14.6 ± 1.1 52.0 ± 2.5 27.0 ± 1.3 48.3 ± 2.7 51.0 ± 4.0
GPT3-c 78.2 ± 1.6 52.3 ± 2.3 53.7 ± 0.7 23.0 ± 2.5 49.1 ± 2.6 32.2 ± 1.9 57.9 ± 2.7 64.1 ± 5.0
Avg 60.8 50.8 53 14.8 37.4 26.6 54.1 55.6
Table 8: We test our method with random words and random labels using GPT2-large.
Method SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
R words Random 54.1 ± 4.2 43.4 ± 1.9 62.2 ± 4.9 11.2 ± 0.9 32.4 ± 5.2 19.1 ± 1.8 80.7 ± 4.8 77.0 ± 3.6 47.5
**Ours** 50.3 ± 1.3 44.9 ± 4.2 69.2 ± 0.2 13.9±1.2 37.8 ± 12.1 23.5 ± 7.4 86.0 ± 0.5 77.9 ± 0.5 50.5
R labels Random 51.5 ± 0.9 32.5 ± 1.2 49.3 ± 3.0 6.7 ± 1.0 25.1 ± 0.6 17.2 ± 0.9 48.0 ± 2.5 56.8 ± 3.1 35.9
**Ours** 49.6 ± 0.9 36.2 ± 2.5 49.3 ± 1.6 6.6± 0.2 24.7 ± 0.6 16.6 ± 1.0 51.0 ± 4.9 48.7 ± 3.5 35.3
Table 9: Accuracy using concept tokens as prefixes.
SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R
90.3 ± 0.0 86.1 ± 0.0 75.0 ± 0.1 92.6 ± 0.6 57.3 ± 1.8 53.8 ± 0.7 86.2 ± 0.0 78.2 ± 0.0
Table 10: k ablation study using GPT2-large, without reordering.
Method SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
_k = 2_ Random 74.4 ± 1.0 48.5 ± 1.1 48.9 ± 1.6 52.9 ± 2.0 42.8 ± 0.6 37.1 ± 1.2 66.9 ± 4.7 66.4 ± 6.8 54.7
**Ours** 78.1 ± 4.5 50.1 ± 2.9 54.3 ± 8.8 57.3 ± 5.1 41.1 ± 9.8 36.1 ± 2.6 84.6 ± 1.6 76.8 ± 4.5 59.8
_k = 4_ Random 76.9 ± 0.7 56.6 ± 1.1 53.1 ± 1.8 62.1 ± 1.4 38.6 ± 1.4 27.7 ± 1.3 65.5 ± 5.7 74.0 ± 3.0 56.8
**Ours** 86.2 ± 1.4 59.7 ± 2.8 69.1 ± 0.2 56.5 ± 3.2 38.2 ± 21.8 37.7 ± 2.5 83.0 ± 1.3 76.6 ± 1.2 63.4
_k = 8_ Random 79.9 ± 0.2 57.1 ± 1.6 51.3 ± 1.0 66.5 ± 1.2 37.6 ± 1.5 36.2 ± 0.6 68.5 ± 3.5 72.9 ± 3.3 58.8
**Ours** 87.0 ± 2.4 59.9 ± 3.3 55.3 ± 9.7 67.0 ± 0.9 39.9 ± 5.3 38.8 ± 2.6 77.0 ± 11.1 78.9 ± 0.9 63
_k = 16_ Random 79.9 ± 1.1 54.9 ± 2.7 54.5 ± 2.8 69.1 ± 1.1 33.7 ± 2.2 33.5 ± 1.4 64.8 ± 4.0 69.0 ± 3.2 57.4
**Ours** 84.6 ± 1.9 60.4 ± 6.4 62.0 ± 7.0 71.0 ± 1.9 37.2 ± 6.1 37.1 ± 2.2 72.4 ± 7.6 74.7 ± 4.7 62.4
Table 11: c ablation study using GPT2-large
SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
_c = 5_ 78.9 ± 2.4 59.8 ± 10.8 34.3 ± 5.0 62.9 ± 2.4 44.9 ± 9.5 38.1 ± 2.4 71.7 ± 5.9 62.1 ± 19.7 56.6
_c = 10_ 85.4 ± 1.7 61.9 ± 10.5 58.2 ± 7.0 64.0 ± 4.4 43.0 ± 7.2 37.9 ± 2.3 84.4 ± 1.4 78.9 ± 0.9 64.2
_c = 15_ 80.1 ± 1.4 64.3 ± 7.7 63.1 ± 9.4 58.7 ± 3.2 36.4 ± 11.5 38.6 ± 1.9 80.9 ± 3.9 76.3 ± 5.9 62.3
_c = 20_ 78.5 ± 4.1 51.8 ± 8.0 66.5 ± 2.3 58.0 ± 3.4 36.3 ± 4.3 41.8 ± 5.8 80.7 ± 4.5 73.8 ± 5.4 60.92
Table 12: Reorder versus not reorder using our method, with GPT2-large.
SST2 FPB COLA DBpedia EmoC EmoS ETHOS-SO ETHOS-R Avg
reorder 86.2 ± 1.4 60.4 ± 2.5 69.1 ± 0.2 56.5 ± 3.2 48.4 ± 17.0 38.6 ± 2.8 82.5 ± 1.5 76.6 ± 1.2 64.8
not reorder 86.2 ± 1.4 59.7 ± 2.8 69.1 ± 0.2 56.5 ± 3.2 38.2 ± 21.8 37.7 ± 2.5 83.0 ± 1.3 76.6 ± 1.2 63.4
-----
Table 13: We list the top 10 similar words (tokens) to some of the learned concept tokens.
concept token similar words
FPB-2 milo coordinate notify rendering benefiting routing EntityItem routed Messages Plot
FPB-3 unlocked updating deleting dropping damage updates drops Gained taken dropped
FPB-4 FX Safari Fixes advertisers Links Coins Operator marketers Guidelines
FPB-5 674 592 693 696 498 593 793 504 691 683
COLA-1 exha trunc curv fragmented elong iterator initialized bounds Iter filament
COLA-2 Sp spa contributed cerv borrower paper tiger Erica USH Schwartz
COLA-7 democr Barack WH ophobic neum Democrats Rachel WH Democrats
DBpedia-4 often impede blockade incarcerated LEASE pollutants pesticides uphe lawmakers fossils
DBpedia-5 categorized closes therapies antidepressant retrospective clinically physicians therapists randomized clinicians
DBpedia-7 JS provided Killed richness Compet Nevertheless Probably Proceedings horizontally
ETHOS-SO-3 Revolution Spread itu Million Pascal stabil Indy Georgian Figure resy
ETHOS-R-2 council Chocobo Shant uyomi aditional cumbers subur ThumbnailImage araoh Pharaoh
ETHOS-R-8 seems outlines emitted grin outline circuitry sized flips emits flipped
ETHOS-R-9 223 asel Cyrus Sith Scorpion Snape Jas Leia Ned Morty
EmoC-6 behavi checkpoints unintention crib eleph looph np mosquit blat pione
EmoC-8 depressed bullied choked stricken devastated unsuccessful cheated distraught troubled failing
EmoS-1 frightened rebellious depressed careless bullied restless reluctant distraught clumsy disgruntled
EmoS-5 obsessive crappy demonic delusions psychosis psychotic childish stupidity reckless insanity
EmoS-7 benevolent charismatic perfected volunte unintention pione innocuous fearless glamorous ruthless
EmoS-9 whispers pundits Sadly horribly curiously noticeably Sadly gaping painfully shockingly
-----
(a) SST2 (b) FBP
(c) COLA (d) DBpedia
(e) EmoC (f) EmoS
(g) ETHOS-SO (h) RTHOS-R
Figure 10: Historgrams of the probability of train examples in each dataset predicting corresponding
concept tokens.
-----
| [
"Wanrong, Zhu",
"Xinyi, Wang",
"Michael, Saxon",
"William Yang, Wang",
"Mark, Steyvers"
] | 2023-11-02T00:00:00 | NeurIPS 2023 Poster | true | 62 | 0 | null | https://openreview.net/forum?id=BGvkwZEGt7 | https://arxiv.org/abs/2301.11916 | https://www.semanticscholar.org/paper/29bd550d0ab53296790ceba31dfe0a06754bcdde |
Self-Evaluation Guided Beam Search for Reasoning | Breaking down a problem into intermediate steps has demonstrated impressive performance in Large Language Model (LLM) reasoning. However, the growth of the reasoning chain introduces uncertainty and error accumulation, making it challenging to elicit accurate final results. To tackle this challenge of uncertainty in multi-step reasoning, we introduce a stepwise self-evaluation mechanism to guide and calibrate the reasoning process of LLMs. We propose a decoding algorithm integrating the self-evaluation guidance via stochastic beam search. The self-evaluation guidance serves as a better-calibrated automatic criterion, facilitating an efficient search in the reasoning space and resulting in superior prediction quality. Stochastic beam search balances exploitation and exploration of the search space with temperature-controlled randomness. Our approach surpasses the corresponding Codex-backboned baselines in few-shot accuracy by $6.34$%, $9.56$%, and $5.46$% on the GSM8K, AQuA, and StrategyQA benchmarks, respectively. Experiment results with Llama-2 on arithmetic reasoning demonstrate the efficiency of our method in outperforming the baseline methods with comparable computational budgets. Further analysis in multi-step reasoning finds our self-evaluation guidance pinpoints logic failures and leads to higher consistency and robustness. Our code is publicly available at [https://guideddecoding.github.io/](https://guideddecoding.github.io/). | A stepwise self-evaluation mechanism to guide and calibrate the reasoning process of LLMs through stochastic beam search and a decoding algorithm integrating the self- evaluation guidance via stochastically beam search are proposed. | # Self-Evaluation Guided Beam Search for Reasoning
**Yuxi Xie[1][∗]** **Kenji Kawaguchi[1]** **Yiran Zhao[1]** **James Xu Zhao[1]**
**Min-Yen Kan[1][†]** **Junxian He[2][†]** **Michael Qizhe Xie[1][†]**
1 National University of Singapore 2 The Hong Kong University of Science and Technology
**Abstract**
Breaking down a problem into intermediate steps has demonstrated impressive
performance in Large Language Model (LLM) reasoning. However, the growth
of the reasoning chain introduces uncertainty and error accumulation, making it
challenging to elicit accurate final results. To tackle this challenge of uncertainty in
multi-step reasoning, we introduce a stepwise self-evaluation mechanism to guide
and calibrate the reasoning process of LLMs. We propose a decoding algorithm
integrating the self-evaluation guidance via stochastic beam search. The selfevaluation guidance serves as a better-calibrated automatic criterion, facilitating an
efficient search in the reasoning space and resulting in superior prediction quality.
Stochastic beam search balances exploitation and exploration of the search space
with temperature-controlled randomness. Our approach surpasses the corresponding Codex-backboned baselines in few-shot accuracy by 6.34%, 9.56%, and 5.46%
on the GSM8K, AQuA, and StrategyQA benchmarks, respectively. Experiment
results with Llama-2 on arithmetic reasoning demonstrate the efficiency of our
method in outperforming the baseline methods with comparable computational
budgets. Further analysis in multi-step reasoning finds our self-evaluation guidance
pinpoints logic failures and leads to higher consistency and robustness. Our code is
[publicly available at https://guideddecoding.github.io/.](https://guideddecoding.github.io/)
**1** **Introduction**
The remarkable empirical achievements of Large Language Models (LLMs) have recently ushered
in a new era in machine reasoning through few-shot prompting techniques (Brown et al., 2020;
Chowdhery et al., 2022; Touvron et al., 2023a; OpenAI, 2023). In particular, breaking down a
problem into intermediate stages, or a reasoning chain, can significantly improve model performance
on reasoning tasks (Cobbe et al., 2021). Various prompting approaches have been proposed to define
these chains, such as scratchpads (Nye et al., 2021), chain-of-thought (CoT) (Wei et al., 2022b),
_least-to-most (Zhou et al., 2023), and program-aided language models (PAL) (Gao et al., 2023; Chen_
et al., 2022). However, as the complexity and length of reasoning chains increase with the difficulty
of tasks, LLMs struggle with errors and imperfections that accumulate across multiple intermediate
steps (Wu et al., 2016; Guo et al., 2018; Chen et al., 2022). Furthermore, the growing number of
steps leads to an exponential growth in the search space for reasoning, making it exceedingly difficult
to obtain accurate final outcomes.
Confronted with the challenges of uncertainty in multi-step chaining, several previous studies have
worked on different aspects to alleviate the impact of reasoning errors. For instance, Wang et al.
(2023) introduce self-consistency as a method to determine the final answer through majority voting
using multiple sampled reasoning paths, while Li et al. (2022) investigate various prompts to diversify
the sampling outcomes. Gao et al. (2023) and Chen et al. (2022) utilize Python programs to
_∗Correspondence to: Yuxi Xie ([email protected])._
_†Equal advising. Ordering is determined by dice rolling._
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
-----
**Multi-step Reasoning Chain**
**Question** **Basic Greedy Decoding** marilyn_copies = 88000
ratio = 10
Marilyn's 1[st] record sold answer = marilyn_copies // ratio
10 times as many
copies as Harald's. If _marilyn_copies = 88000_
they sold combined, how many 88,000 copies _total_sold = 88000_ _copies_mar = total_sold / 10_ **Answer: 8,800❌**
copies did Harald sell? _ratio = 10_ _answer = total_sold // marilyn_harald_ratio_
_marilyn_copies = 10_
_marilyn_harald_ratio = 10_ _answer = total_sold / (marilyn_harald_ratio + 1)_
**_…_** **_…_** **_…_** **Multi-step Reasoning Chain**
total_sold = 88000
**Self-Evaluation Guided Stochastic Beam Search** marilyn_harald_ratio = 10
answer = total_sold / (marilyn_harald_ratio + 1)
🤖 **Generation**
**LLM** **Evaluation** **Answer: 8,000✔**
Figure 1: Self-Evaluation can calibrate the decoding direction in multi-step reasoning. We illustrate
our method in the form of stepwise stochastic beam search with the beam size equal to 1. The scale
of the self-evaluation score is visualized in the colormap. We adopt Program-Aided Language models
(PAL) reasoning (Gao et al., 2023; Chen et al., 2022) for this math word problem.
achieve higher accuracy in mathematical computations. While these approaches have contributed
to significant performance improvements in reasoning, the process of generating reasoning chains
has been parameterized as a standard autoregressive process and intrinsically faces the challenge of
sampling within an exponentially large search space.
Motivated by this challenge, we employ LLM self-evaluation (Kadavath et al., 2022) as a bettercalibrated criterion to automatically guide the search in the reasoning space, drawing inspiration
from prior works on utilizing LLMs for self-evaluation (Rae et al., 2021; Paul et al., 2023; Madaan
et al., 2023; Shinn et al., 2023). We integrate the self-evaluation guidance for reasoning in a stepwise
and generalizable manner. Specifically, we formulate the reasoning chain generation as a decoding
process consisting of multiple intermediate steps. Unlike traditional text decoding where each step
produces a single token, we consider each decoding step as a reasoning logic composed of a sequence
of tokens. This framework enables us to employ beam search (Jurafsky and Martin, 2009; Graves,
2012) decoding tailored for intermediate steps and guide the beam searching process by controlling
the error of each reasoning step to prevent potential error accumulation throughout the chaining.
Figure 1 illustrates an example of decoding a chain of program-aided reasoning steps. Furthermore,
we incorporate temperature-controlled randomness (Ackley et al., 1985; Kool et al., 2019; Meister
et al., 2021) into the traditional (deterministic) beam search to balance the quality–diversity trade-off
in searching for better reasoning chains. Our approach has resulted in respectable improvements
across various arithmetic, symbolic, and commonsense reasoning tasks. For instance, by guiding the
reasoning decoding process of the Codex model (Chen et al., 2021), we achieve accuracies of 85.5%,
64.2%, and 77.2% on the GSM8K, AQuA, and StrategyQA benchmarks, compared to the vanilla
reasoning-enhanced Codex performance of 80.4%, 58.6%, and 73.2%, respectively. Our further
analysis on Llama-2 (Touvron et al., 2023b) demonstrates the efficiency of our method in surpassing
the self-consistency baseline under equivalent computational budgets.
**2** **Self-Evaluation Guided Stochastic Beam Search**
Considering the input prompt and question Q represented as x, we formulate the answer distribution
_P_ (a | x) by decomposing it as a reasoning chain generation process P (R | x) and an answer
generation process P (a | R, x):
_P_ (a _x) = ER_ _P (R_ _x)P_ (a _R, x),_ (1)
_|_ _∼_ _|_ _|_
where R is the intermediate reasoning chain variable that is typically a text sequence. P (a _R, x) =_
1A(a) _|_
max ( _A_ _,1)_ [, where][ A][ =][ execute][(][R][)][ represents the set of predicted answer(s) interpreted from][ R][,]
_|_ _|_
and 1A is the indicator function of the subset A. In practice, |A| ≥ 0 can be 0 or larger than 1 when
the reasoning R returns no valid answer or produces more than one possible answers, respectively.
-----
|s1 s2 … sT R = [s1, s2, …, sT] = s1:T|s1|s2|…|sT|
|---|---|---|---|---|
||R = [s1, s2, …, sT] = s1:T||||
||||||
|Gen st|Col2|
|---|---|
|…||
|marilyn = total_sold //|marilyn_harald_ratio|
|# Is the above step of re|asoning:|
|# (A) Correct||
|# (B) Incorrect||
|# The above step of reas|oning is:|
|(B), because Marilyn sol|d 10 times as many …|
|Col1|Self-Eval|Col3|
|---|---|---|
|PLM(s1:t1)||C(s1:t1)|
|…||…|
|PLM(s1:tn-1)||C(s1:tn-1)|
|PLM(s1:tn)||C(s1:tn)|
|PLM(s1:tn+1)||C(s1:tn+1)|
|…||…|
|PLM(s1:t2n-1)||C(s1:t2n-1)|
|PLM(s1:t2n)||C(s1:t2n)|
|Multi-step Reasoning Kept Candidate 1 Q s1 … st-1 Gen Kept Candidate 2 Q s1 … st-1 Gen|Col2|Multi-step Reasoning|
|---|---|---|
||Decoding at Timestep t||
**Timestep 1** **Timestep 2** **Timestep T** **Prompting Framework**
**_s[2]1_** **_s[T]1_** **Reasoning Chain**
**_ss[1][1]12_** **_s…[2]n_** **…** **_s…[T]2_** **_Rs = [[1]_** **_s[1], ss[2][2], …, …s[T]] = ss[1:][T][T]_** **LLM🤖** **_R_** **_Q_**
**_Q_** **_ss[1]…[1]nn-1_** **_ss[2][2]…nn+1+2_** **…** **_ss[T]…[T]nn+1_** **Predicted Final Answera** **Gen** **_s[t]_**
**Multi-step Reasoning** **_s[2]2n_** **_s[T]2n_** **marilyn = total_sold // marilyn_harald_ratio…**
**_s[t]1_** **Self-Eval** **# Is the above step of reasoning:**
**Kept Candidate Q** **_s[1]_** **…** **1** **_s[t][-1]_** **Gen** **_s[t]…n-1_** _PLM…(s[1:][t]1)_ _C(…s[1:][t]1)_ **Beam SearchStochastic(k = 2)** **# (A) Correct# (B) Incorrect**
**_s[t]n_** _PPLMLM((ss[1:][1:][t]n[t]n-1))_ _CC((ss[1:][1:][t]n[t]n-1))_ **# The above step of reasoning is:(B), because Marilyn sold 10 times as many …**
**Kept Candidate 2** **_s[t]n+1_** _PLM(s[1:][t]n+1)_ _C(s[1:][t]n+1)_ **Select k Paths**
**_Q_** **_s[1]_** **…** **_s[t][-1]_** **Gen** **_s[t]2…n-1_** _PLM(s…[1:][t]2n-1) C(s[1:]…[t]2n-1)_ **Self-Eval**
**Decoding at Timestep t** **_s[t]2n_** _PLM(s[1:][t]2n)_ _C(s[1:][t]2n)_
Figure 2: Our framework of self-evaluation guided stochastic beam search for multi-step reasoning.
The schema of the decoding process is on the left, where we keep k = 2 candidates at each timestep,
with the detailed illustration of timestep t at the bottom. Here “Gen” and “Self-Eval” represent the
generation and evaluation LLMs, respectively. The corresponding prompt formulations are provided
on the right, where the questions Q, reasoning steps R, and evaluation scripts are highlighted in
orange, green, and yellow, respectively. Steps in light green (e.g., s[t]) are for models to generate
or evaluate at the current timestep. Specifically, we follow Kadavath et al. (2022) to prompt the LLM
evaluation by answering the multiple-choice question, i.e., the lines starting with #.
Prior research has modeled the reasoning chain generation P (R | x) by prompting LLMs to explicitly
elaborate on the required intermediate steps R. Through setting different prompting schemes, the
reasoning process P (R | x) can be modeled as chain-of-thought free-text reasoning (Kojima et al.,
2022; Wei et al., 2022b), a two-stage question decomposition and answering pipeline (Zhou et al.,
2023), or program-aided reasoning to generate a python program (Gao et al., 2023; Chen et al., 2022).
While effective, previous work mostly uses a single sample of R from the LLMs to approximate the
expectation in Eq. 1 – the generated reasoning chain is often unreliable and causes incorrect answers.
To mitigate this issue, Wang et al. (2023) conduct majority voting to approximate the expectation via
sampling and aggregating multiple reasoning chains. Li et al. (2022) take a further step to diversify
the sampling and calibrate P (R | x) with a task-specific fine-tuned verifier. Another line of work
focuses on improving P (a | R, x) instead. For example, Gao et al. (2023) and Chen et al. (2022)
employ Python programs for more accurate calculations in math word problems.
In this work, we focus on improving P (R | x) to enhance the consistency of the sampled reasoning
chains. To this end, we propose to explicitly break down the reasoning process into multiple steps, as
shown in Figure 2, where each step yields a semantically integrated sequence of tokens, representing
a single step within the overall reasoning chain. From this perspective, we can approach the task of
enhancing P (R | x) as a decoding problem over the reasoning chains. Considering the exponentially
large search space and the potential unreliability of LLM-produced chains in reasoning, we propose a
constrained stochastic beam search decoding approach to improve the reasoning step by step and
obtain high-quality reasoning with a limited number of samples. We detail our approach next.
**2.1** **Multi-step Reasoning via Stochastic Beam Search**
In multi-step reasoning, a reasoning chain of T steps is sequentially generated through several
timesteps as R = [s[1], s[2], · · ·, s[T] ] = s[1:][T], where s[t] represents a sequence of tokens as the t-th step.
Formally, the reasoning generation process P (R | x) can be factorized in an autoregressive manner:
_P_ (R = s[1:][T] _| x) =_ _P_ (s[t] _| x, s[1:][t][−][1]),_ (2)
_t_
Y
which resembles the typical token-level autoregressive distribution of language models. Stepwise
reasoning allows us to formulate the process as a step-by-step decoding problem, where we can
utilize widely-used strategies such as beam search for the generation. Different from the typical text
-----
decoding process where each step consists of a single token, here we view a sequence of reasoning
tokens as a single step. One of the most severe issues in LLM-based reasoning is the potential
unreliability and inaccuracy of each reasoning step generated by the model. Furthermore, errors
from individual steps may accumulate throughout the reasoning chain, exacerbating the problem. To
address the issue, we define a constraint function C(s[t], s[1:][t][−][1]) ∈ [0, 1] within each reasoning step[3]
that outputs the LLM confidence in the correctness of the reasoning sequence s[t] based on the previous
context s[1:][t][−][1]. Then, we present a constrained decoding approach that combines the language model
probability and the correctness confidence as a new decoding objective function E(s[1:][T] ):
_E(s[1:][T]_ ) =
_PLM[λ]_ _G_ [(][s][t][ |][ x, s][1:][t][−][1][)][C][1][−][λ][(][s][t][)][,] (3)
where PLMG is the language model distribution [4]. λ ∈ [0, 1] is a weight hyperparameter to balance
the LM score and the confidence score. We will detail the design of C(s[t]) in Section 2.2. Eq 3 follows
an autoregressive factorization form, and thus traditional token-level decoding methods such as beam
search can be applied here on the chain level. As it is desirable to obtain high-quality reasoning
chains with limited samples that are scored high by E(s[1:][T] ), it is natural to utilize greedy or beam
search decoding to approximate the reasoning sequences that maximize E(s[1:][T] ).
Additionally, multiple diverse reasoning chains could be aggregated to further improve the final
accuracy, as suggested by Eq 1 and empirically confirmed by self-consistency reasoning (Wang et al.,
2023). To this end, we propose a variant of stochastic beam search (Kool et al., 2019; Meister et al.,
2021) to strike a tradeoff between exploration and exploitation. Concretely, for beam size k, at each
reasoning step we draw n samples of s[t] following PLMG (s[t] _| x, s[1:][t][−][1]) for each beam, and we end up_
with nk chain hypotheses of s[1:][t] to form the candidate set S, then we perform beam pruning through
sampling – we sample k reasoning beams without replacement, rather than finding the arg max k,
following a distribution defined by the accumulated score:[5]
_Pbeam(s[1:][t])_ exp( (s[1:][t])/τ ), _s[1:][t]_ (4)
_∝_ _E_ _∈S_
where the temperature τ is a hyperparameter to control the randomness in stochastic beam search;
when τ → 0, stochastic beam search becomes the vanilla beam search algorithm. The reasoning
beams s[1:][t] can be sampled efficiently since |S| = nk is a finite set. To enable fine-grained control
of sampling randomness in decoding, we also introduce a hyperparameter α ∈ [0, 1] so that τ can
decay step by step as τ → _ατ_ . By annealing τ with α, we can mitigate the error accumulation due to
aggregated randomness throughout chaining, as discussed in Section 3.4.
By incorporating controllable randomness, we not only achieve a more reliable single reasoning
chain generation by setting randomness to be small, but also leverage multiple diverse reasoning
chains with larger variance. Next, we introduce our constraint function C(s[t], s[1:][t][−][1]) that utilizes a
self-evaluation scheme to improve the consistency of each reasoning step.
**2.2** **Self-Evaluation as Correctness Control**
Inspired by the recent success of self-evaluation (Kadavath et al., 2022; Shinn et al., 2023; Madaan
et al., 2023; Paul et al., 2023), a scheme to prompt LLMs to evaluate their own generation, we use
LLMs to judge the correctness of s[t] based on s[1:][t][−][1]. Specifically, the evaluation and generation
models use the same backend LLM with different prompts, which consist of few-shot exemplars.
We follow previous works of CoT (Wei et al., 2022b) or PAL (Gao et al., 2023) to formulate the
generation prompts. To construct the in-context exemplars prompt for the self-evaluation LLM
_C_
LM, we provide stepwise evaluation examples (as question answering with rationales) in each
_C_
instance. Inspired by Kadavath et al. (2022), we design prompt in the form of multiple-choice
_C_
questioning (as shown in Figure 2) to better calibrate the model predictions, where we adopt the
token-level probability of option A to represent the correctness score as:
_C(s[t]) = PLMC_ (A | promptC, Q, s[1:][t]) (5)
354We will denote the LM generation probability byFor ease of notation, we will useIn Appendix A.1, we justify the approximation error rate of Eq C(st) throughout the paper when there is no confusion. P throughout the paper for simplification. 4, which computes normalized probability
on the subset S instead of on the entire set.
-----
**3** **Experiments**
**3.1** **Setup**
We present and analyze the results of our self-evaluation guided beam search with different LLM
backbones on various reasoning benchmarks. Implementation details including prompt examples and
hyperparameter setup can be found in Appendix A.3.
**Benchmarks.** We evaluate the effectiveness of our approach across three types of reasoning tasks:
(1) Arithmetic Reasoning on five math word problem benchmarks, including GSM8K (Cobbe et al.,
2021) on math word problems, AQuA (Ling et al., 2017) on algebraic word problems, SVAMP (Patel
et al., 2021) on structure variations of math word problems, ASDiv (Miao et al., 2020) on diverse
math word problems, and TabMWP (Lu et al., 2023) on tabular math word problems; (2) Symbolic
Reasoning on BIG-Bench (Srivastava et al., 2022), which involves Date Understanding of contextbased date inferring and Object Counting of enumerating and counting objects of different types;
(3) Commonsense Reasoning on three benchmarks, including CommonsenseQA (Talmor et al., 2019)
of commonsense questions that require prior world knowledge to answer, StrategyQA (Geva et al.,
2021) of questions that require a multi-hop strategy to answer, and Sports Understanding from
BIG-Bench (Srivastava et al., 2022) to determine whether a sports-related sentence is plausible.
**Baselines.** We consider two types of baselines: (1) Chain-of-Thought (CoT) (Wei et al., 2022b)
prompting in free-text reasoning and (2) Program-Aided Language models (PAL) (Ling et al., 2017)
and Program-of-Thought (PoT) (Chen et al., 2022) prompting in program-aided reasoning. We
also include their self-consistency (Wang et al., 2023) variants for multiple-chain reasoning. For
generation, we follow the few-shot exemplars of baselines. For self-evaluation, we manually create a
set of few-shot exemplars based on the baseline outputs on corresponding training data. We formulate
self-evaluation as a task of multiple-choice question answering, following Kadavath et al. (2022). For
baselines, we represent the cost as the number of generated tokens. For the cost of our method, we
also include the number of additional input tokens in self-evaluation for a fair comparison.
**Backboned LLMs.** We assess our approach on closed- and open-source LLMs using both PAL
and CoT prompting. For closed-source LLMs, we choose Codex (code-davinci-002) (Chen et al.,
2021) to report and compare the results on all datasets. We use Llama-2 (13B) (Touvron et al., 2023b)
as our open-source LLM to conduct cost–performance analysis on different datasets.
**3.2** **Main Results**
**Arithmetic and Symbolic Reasoning.** Table 1 shows the results for arithmetic and symbolic
reasoning. Our method achieves significant performance improvements on most benchmarks in both
single- (τ = 0) and multiple-chain scenarios, with PAL as the baseline. For arithmetic reasoning, we
observe absolute increases in accuracy of 5.3%, 8.3%, and 0.7% on GSM8K, AQuA, and SVAMP,
respectively. One possible explanation for this discrepancy in improvements is the reduced diversity
in LLM generations due to higher confidence in predictions, as evidenced by the relatively high
performance on the tasks. This highlights the importance of incorporating controllable randomness
into the candidate generations to expand the search space for self-evaluation guided decoding. We
further explore the impact of generation diversity by varying the temperature γ in Section 3.4.
For symbolic reasoning, our approach also leads to consistent performance gains. However, when the
baseline itself performs well on the task (e.g., 96.7% on Object Counting), our approach may not
yield substantial improvement. This can also be attributed to the constrained accessible search space
for self-evaluation guidance to refine the generation distribution. This limit suggests the inherent
deficiency in our LLM-based prompting method that it becomes increasingly challenging to calibrate
the generation direction when the model LM is more confident in its predictions. In other words, the
_G_
high baseline performance usually indicates lower diversity in the LLM generations even with a large
temperature γ, resulting in a limited accessible search space for the model to find a better solution.
**Commonsense Reasoning.** Table 2 compares methods using CoT prompting in commonsense
reasoning. Our approach shows consistent performance improvements across several tasks. For
example, on StrategyQA, we achieve an accuracy of 77.2% compared with 73.2% of the baseline.
-----
Table 1: Result comparison (accuracy %) on arithmetic and symbolic reasoning tasks. The best result
is in bold and the lowest cost is in green. We report methods all with Codex backbone for a fair
comparison. Similar to Huang et al. (2022), Diverse (Li et al., 2022) fine-tune task-specific verifiers to
apply weights on samples in self-consistency (SC). Other fine-tuning methods include reward-based
supervision (Uesato et al., 2022) and content-specific training (Lewkowycz et al., 2022). We also
report the number of tokens (# Tokens) on GSM8K to compare the costs of different methods.
Arithmetic Symbolic
Approach
GSM8K # Tokens AQuA SVAMP ASDiv TabMWP `DATE` `OBJECT`
single reasoning chain
CoT 65.6 0.2k 45.3 74.8 76.9 65.2 64.8 73.0
PoT 71.6 _−_ 54.1 85.2 _−_ 73.2 _−_ _−_
PAL 72.0 0.3k _−_ 79.4 79.6 _−_ 76.2 96.7
Ours-PAL 80.2 27.7k 55.9 89.6 84.9 79.1 **78.6** **96.8**
multiple reasoning chains
CoT, SC 78.0 5.3k 52.0 86.8 _−_ 75.4 _−_ _−_
CoT, Diverse 82.3 _−_ _−_ 87.0 **88.7** _−_ _−_ _−_
PoT, SC 80.0 _−_ 58.6 89.1 _−_ **81.8** _−_ _−_
PAL, SC 80.4 7.4k _−_ _−_ _−_ _−_ _−_ _−_
Ours-PAL **85.5** 550.0k **64.2** **90.3** 85.8 80.9 _−_ _−_
Table 2: Result comparison (accuracy %) on commonsense reasoning tasks, with Codex backbone.
Here we only report results in the single reasoning chain scenario following Wei et al. (2022b). We
report # Tokens on StrategyQA for cost comparison.
Approach StrategyQA # Tokens CommonsenseQA `Sports`
CoT 73.2 0.06k 77.9 **98.5**
Ours-CoT **77.2** 11.6k **78.6** 98.4
Human 87.0 _−_ 88.9 _−_
Likewise, the performance of our approach is constrained by the low diversity of LLM generations
on Sporting Understanding, as we observe on Object Counting in symbolic reasoning.
**Computational Cost Overhead.** Despite the fact that our approach achieves significant improvement on various benchmarks, we observe an overhead of computational cost compared with the
corresponding baselines. For example, the single-chain version of our approach using PAL costs
about 3 times more than the self-consistency baseline on GSM8K. As detailed in Appendix A.3, this
is due to a relatively large hyperparameter – the number of rollouts per beam n – which we set as 16
for better performance. To strike a balance between performance and cost and present a complete
picture, we adopt n = 2 and conduct cost–performance analysis on our approach in Section 3.3.
**3.3** **Cost Analysis**
Table 3 compares the baseline and our approach under comparable computational budgets (measured
in # Tokens). Our method consistently outperforms self-consistency on the arithmetic reasoning tasks
even when benchmarked for relatively less computational cost. For example, we achieve 46.1% on
GSM8K with a cost of 12.6k tokens, compared with the accuracy of 41.8% of self-consistency which
costs 13.9k tokens. Figure 4 further illustrates the cost-efficiency of our approach on GSM8K using
different prompting methods under various levels of costs. Our approach significantly outperforms
the corresponding equal-cost baseline especially when the computational budget increases, indicating
the improvement in the performance upper bound brought by our method.
However, our approach lags behind the CoT baseline on commonsense reasoning. This implies
the limitation of our method when applied to shorter reasoning chains, i.e., decreasing the number
-----
Table 3: Cost (# Tokens) and result (accuracy %) comparison on arithmetic and commonsense
reasoning tasks. We base our experiments on Llama-2 (13B) since Codex is not available. We show
the results of the baseline and our method both in the multiple-chain scenario for a fair comparison.
Here we use PAL and CoT prompting for arithmetic and commonsense reasoning, respectively.
Arithmetic (PAL) Commonsense (CoT)
Approach
GSM8K AQuA SVAMP ASDiv TabMWP StrategyQA CommonsenseQA
Baseline 41.8 30.7 71.2 66.2 43.7 **71.0** **74.4**
# Tokens 13.9k 6.6k 5.9k 2.7k 1.9k 2.7k 1.2k
Ours **46.1** **31.5** **74.6** **67.7** **49.6** 70.6 74.0
# Tokens 12.6k 6.0k 5.0k 2.5k 1.2k 2.6k 1.2k
45.4
46
44
42
40
38
36
34
44
42
40
38
36
34
2000 4000 6000 8000 10000 12000 14000
|Col1|Col2|Col3|46.1|
|---|---|---|---|
||||46.1|
|||45.2||
||42.9|41.7|41.|
||40.5 40|.4||
||38.9|||
|||||
|6.|1|PAL-SC Ours-PAL (sin|gle)|
|35. 3|0 35.7 4.6|Ours-PAL (m|ultiple)|
|||||
Cost (# Tokens)
38.8
37.1
2000 4000 6000 8000 10000 12000
|Col1|Col2|Col3|45|
|---|---|---|---|
||||4 44.9|
|43||.3 42.9|43.0|
|40.8 40.6||||
|.8||||
|||||
|1 36.5||Co Ou|T-SC rs-CoT (single)|
|||||
|34.5||Ou|rs-CoT (multip|
|||||
Cost (# Tokens)
(a) PAL Prompting Methods on GSM8K
(b) CoT Prompting Methods on GSM8K
Figure 4: Accuracy curves on GSM8K of different methods with the change of the cost. We conduct
the performance comparison using both PAL and CoT prompting with Llama-2 (13B) backbone.
500
0.0 0.2 0.4 0.6 0.8 1.0
0
0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0
|ove cor|rall rect|Col3|
|---|---|---|
|wro|ng||
||||
||||
|rall rect|0.7|1 0.84|
|---|---|---|
|ong|||
||||
overall 0.83 0.88
correct
wrong
overall 0.71 0.84
correct
wrong
generation probability correctness confidence
|ov cor|erall rect|Col3|0.|76 0.86|
|---|---|---|---|---|
|wr|ong||||
||||||
||||||
overall 0.76 0.86
correct
wrong
(a) Score distribution of PAL baseline predictions on GSM8K.
0.0 0.2 0.4 0.6 0.8 1.0
1000
0
0.0 0.2 0.4 0.6 0.8 1.0
|Col1|Col2|(a) S|
|---|---|---|
|ov|erall|0.630.6|
|co wr|rrect ong||
||||
overallcorrect 0.63 0.65
wrong
generation probability
0.0 0.2 0.4 0.6 0.8 1.0
|erall|0.7|0 0.75|
|---|---|---|
|rrect ong|||
||||
overallcorrect 0.70 0.75
wrong
correctness confidence
|ov|erall|Col3|0.66|0.69|
|---|---|---|---|---|
|co wr|rrect ong||||
||||||
overall 0.66 0.69
correct
wrong
(b) Score distribution of CoT baseline predictions on StrategyQA.
Figure 5: Distributions of the self-evaluation score and its components (i.e., generation confidence
_P and correctness confidence C) on correct/incorrect baseline predictions. We highlight the median_
scores of the positive and negative cases using lines of the same colors respectively.
of intermediate steps weakens the effect of stepwise self-evaluation in beam search in reducing
error accumulation. On the other hand, self-consistency can directly improve performance through
instance-level aggregation without additional cost for self-evaluation. We analyze how our method
benefits longer reasoning chains on different tasks in Section 3.4.
**3.4** **Further Analysis**
We now provide a detailed analysis of why our method achieves significant gains.
**Generation and Self-evaluation Calibration.** We investigate the distributions of generation confidence (i.e., the LM probability P) and correctness confidence C in our self-evaluation score E. By
-----
Table 4: Absolute accuracy (in %) increases on instances of different complexity determined by the
length of reasoning chains (represented as # Steps).
GSM8K
# Steps # Ins. PAL Ours ∆Accu.
_< 7_ 437 85.8 91.3 +5.49
_∈_ (7, 9] 524 74.8 82.6 +7.82
_≥_ 9 358 72.9 82.6 +9.78
StrategyQA
# Steps # Ins. CoT Ours ∆Accu.
_< 4_ 637 84.6 84.9 +0.31
_∈_ [4, 5) 1, 301 78.6 79.1 +0.46
_≥_ 5 351 68.4 71.8 +3.42
82
80
78
76
74
72
70
42
40
38
36
34
84
82
80
78
|Col1|80.8|83.3 Ours-PA Ours-PA|84.5 L (single) L (multip|84. le)|
|---|---|---|---|---|
||79.1|79.2|78.4|77.|
0.2 0.5 0.8 1.0
|Col1|Ours-P PAL 36|AL (mult 38.3 .7|iple)|42|.8|
|---|---|---|---|---|---|
|2.6|35.7 34 33.4|.1 34.2||35 33|.7 .2|
10
|Col1|Col2|81.3|
|---|---|---|
||77.8 Ours-CoT (sin Ours-CoT (mu|gle) ltiple)|
||71.8|71.4|
0.0 0.5
81.3
77.8
Ours-CoT (single)
Ours-CoT (multiple)
71.8 71.4
Sampling temperature ( = 0.5, = 0.5)
84.5 84.8
83.3
Ours-PAL (single)
Ours-PAL (multiple)
80.8
79.1 79.2
78.4
77.9
Ours-PAL (single) 42.8
Ours-PAL (multiple)
PAL
38.3
36.7
35.7 35.7
34.1
33.4 34.2 33.2
32.6
Beam Size (k)
(a) effect of beam size
(b) effect of generation and sampling diversity
Figure 6: Accuracy curves and distributions of our approach on GSM8K with different hyperparameter
settings: (a) Changes in performance (Llama-2 backboned) when the beam size k varies. Methods
of the same k have equal computational costs; (b) Accuracy distributions (Codex backboned) with
different generation temperature γ and sampling temperature τ (with decay ratio α).
comparing the score distributions for correct and wrong predictions, we aim to gain an intuitive
understanding of whether these confidence scores are reliable. Figure 5 shows different score distributions on correct and [wrong] baseline predictions. The difference in distribution between the
two prediction sets is substantial for arithmetic reasoning, but negligible for commonsense reasoning.
Notably, in both instances, correctness confidence is more discriminatory than generation confidence.
To achieve a balance between these two confidence scores, we utilize a tunable hyperparameter λ,
setting λ = 0.5 for all datasets. Nevertheless, varying its value can lead to distinct outcomes. For
instance, when setting λ to 1 (E = C) or 0 (E = P), the performance on GSM8K decreases from
80.2% to 74.5% and 77.1%, respectively. This indicates that both scores play a crucial role in our
final performance. A more comprehensive analysis of λ can be found in Appendix A.2.
**Reasoning Complexity.** We investigate if our approach is more beneficial for instances needing
more reasoning steps. Table 4 shows that performance gains (in absolute accuracy % increase) increase
as reasoning chains become longer on both GSM8K and StrategyQA. Notably, the improvement
on StrategyQA primarily comes from improvements in longer reasoning chains, showcasing the
effectiveness of our method in navigating lengthy and intricate reasoning chains.
**Hyperparameters in Stochastic Beam Search.** We examine the significance of hyperparameters
associated with stochastic beam search, including the beam size k and the temperatures γ and τ
controlling the generation and sampling diversity, respectively.
Figure 6a shows the trend of performance improvement with the increase of beam size k. Notably,
our beam search approach inherently enables majority voting on the final beam without additional
cost, resulting in a more significant performance improvement in the multiple-chain reasoning when
the beam size is larger (e.g., 42.8% compared with 35.7% when k = 10).
For generation and sampling diversity, it is clear that more diversity resulting from higher temperatures
generally leads to a decline in performance when only considering a single reasoning chain. However,
diversity significantly benefits majority voting on multiple reasoning chains [6]. This benefit comes
6In this study, we did not explore higher generation temperatures (i.e., γ > 1.0) since this hyperparameter is
limited to 1.0 in the OpenAI API.
-----
|C|P|Ɛ|grace_weight = 125|
|---|---|---|---|
|C|P|Ɛ|alex_weight = 4 * grace_weight - 2|
|C|P|Ɛ|answer = grace_weight + alex_weight|
|[Q] Grace weighs 125 pounds. Alex weighs 2 pounds less than 4 times what 1 Grace weighs. What are their combined weights in pounds? [Ground-Truth a*] 623.0 1|Col2|Col3|Col4|[Q] Mariah and grandma used 1/4 and 1/2, respectively, from 364 2 yards in a skein of yarn. How many yards of yarn did they use in total? [Ground-Truth a*] 273.0 2|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|[Predicted a 11] 623✔.0 [R ] in Python 11 C P Ɛ grace_weight = 125 C P Ɛ alex_weight = 4 * grace_weight - 2 C P Ɛ answer = grace_weight + alex_weight||||[Predicted a 21] 273✔.0 [R ] in Python 21 C P Ɛ yards_per_skein = 364 C P Ɛ mariah_yards = 1 / 4 * yards_per_skein C P Ɛ grandma_yards = 1 / 2 * yards_per_skein C P Ɛ answer = mariah_yards + grandma_yards||||
|||||C|P|Ɛ|yards_per_skein = 364|
|||||C|P|Ɛ|mariah_yards = 1 / 4 * yards_per_skein|
|||||C|P|Ɛ|grandma_yards = 1 / 2 * yards_per_skein|
|||||C|P|Ɛ|answer = mariah_yards + grandma_yards|
|[Predicted a ] 627.0 12 ❌ [R ] in Python 12 C P Ɛ grace_weight = 125 C P Ɛ alex_weight = 2 C P Ɛ weight_multiplier = 4 C P Ɛ alex_total = alex_weight + weight_multiplier * grace_weight C P Ɛ answer = grace_weight + alex_total||||[Predicted a ] 273.0 22 ✔ [R ] in Python 22 C P Ɛ yarn_mariah = 1 / 4 C P Ɛ yarn_grandma = 1 / 2 C P Ɛ yards_per_skein = 364 C P Ɛ total_yards = yarn_mariah + yarn_grandma C P Ɛ yards_used = total_yards * yards_per_skein||||
|C|P|Ɛ|grace_weight = 125|C|P|Ɛ|yarn_mariah = 1 / 4|
|C|P|Ɛ|alex_weight = 2|C|P|Ɛ|yarn_grandma = 1 / 2|
|C|P|Ɛ|weight_multiplier = 4|C|P|Ɛ|yards_per_skein = 364|
|C|P|Ɛ|alex_total = alex_weight + weight_multiplier * grace_weight|C|P|Ɛ|total_yards = yarn_mariah + yarn_grandma|
|C|P|Ɛ|answer = grace_weight + alex_total|C|P|Ɛ|yards_used = total_yards * yards_per_skein|
**[Grace Q1]** **Graceweighs. What are their weighs 125 pounds. combined Alex weighs weights in pounds? 2** pounds less than 4 times what **[yards in a skein of yarn. How many yards of yarn did they use Q2] Mariah** and **grandma** used **1/4 and** **1/2, respectively, from in total364 ?**
**[Ground-Truth a1*]** 623.0 **[Ground-Truth a2*]** 273.0
**[Predicted [R11] in Pythona11]** 623.0✔ **[Predicted [R21] in Pythona21]** 273.0✔
C P Ɛ **grace_weight = 125** C P Ɛ **yards_per_skein = 364**
C P Ɛ **alex_weight = 4 * grace_weight - 2** C P Ɛ **mariah_yards = 1 / 4 * yards_per_skein**
C P Ɛ **answer = grace_weight + alex_weight** C P Ɛ **grandma_yards = 1 / 2 * yards_per_skein**
C P Ɛ **answer = mariah_yards + grandma_yards**
**[Predicted [R12] in Pythona12]** 627.0❌ **[Predicted [R22] in Pythona22]** 273.0✔
C P Ɛ **grace_weight = 125** C P Ɛ **yarn_mariah = 1 / 4**
C P Ɛ **alex_weight = 2** C P Ɛ **yarn_grandma = 1 / 2**
C P Ɛ **weight_multiplier = 4** C P Ɛ **yards_per_skein = 364**
C P Ɛ **alex_total = alex_weight + weight_multiplier * grace_weight** C P Ɛ **total_yards = yarn_mariah + yarn_grandma**
C P Ɛ **answer = grace_weight + alex_total** C P Ɛ **yards_used = total_yards * yards_per_skein**
(a) Examples of self-evaluation score distribution of different predictions on the GSM8K dataset.
|C|P|Ɛ|Columbus was Italian and the Portugese Empire was Portugese.|
|---|---|---|---|
|C|P|Ɛ|Thus, Columbus didn’t obtain funding ... Portugese Empire.|
|C|P|Ɛ|So the answer is no.|
|[Q] Did Columbus obtain his funding from the rulers of the Portugese Empire? 3 [Ground-Truth a*] no 3 [Explanation] King Ferdinand and Queen Isabella funded Columbus' voyage. They were the joint rulers of the Iberian Peninsula, which included Spain but excludes Portugal. King John II of Portugal rejected Columbus' request for funding.|Col2|Col3|Col4|[Q] Is Freya a combination of Athena and Aphrodite? [G4round-Truth a*] yes [Explanation] At4hena was the Greek goddess of war. Aphrodite was the Greek goddess of love. Freya was the Norse goddess of war, love, and fertility.|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||[ [RPre ]d ii nc t Ce hd a i4 n1] n oo f❌ a - -Thought 41 C P Ɛ Freya is a Norse goddess. C P Ɛ Athena and Aphrodite are Greek goddesses. C P Ɛ Thus, Freya is not a combination of Athena and Aphrodite. C P Ɛ So the answer is no.||||
|||||C|P|Ɛ|Freya is a Norse goddess.|
|[ [RPre ]d ii nc t Ce hd a i3 n1] n oo f✔ a - -Thought 31 C P Ɛ Columbus obtained his funding from the rulers of the Spanish Empire. C P Ɛ Thus, Columbus didn’t obtain his funding ... Portugese Empire. C P Ɛ So the answer is no.||||||||
|||||C|P|Ɛ|Athena and Aphrodite are Greek goddesses.|
|||||C|P|Ɛ|Thus, Freya is not a combination of Athena and Aphrodite.|
|C|P|Ɛ|Columbus obtained his funding from the rulers of the Spanish Empire.|||||
|||||C|P|Ɛ|So the answer is no.|
|C|P|Ɛ|Thus, Columbus didn’t obtain his funding ... Portugese Empire.|||||
|||||[ [RPre ]d ii nc t Ce hd a n2] y oe fs✔ i4 a - -Thought 42 C P Ɛ Freya is the goddess of love, beauty and fertility. C P Ɛ You can compare these to the Greek gods Athena and Aphrodite. C P Ɛ It is not clear how their powers exactly compare. C P Ɛ However, it seems that Freya must have attributes that match both. C P Ɛ So the answer is yes.||||
|C|P|Ɛ|So the answer is no.|||||
|[ [RPre ]d ii nc t Ce hd a i3 n2] n oo f✔ a - -Thought 32 C P Ɛ Columbus was Italian and the Portugese Empire was Portugese. C P Ɛ Thus, Columbus didn’t obtain funding ... Portugese Empire. C P Ɛ So the answer is no.||||42 C|P|Ɛ|Freya is the goddess of love, beauty and fertility.|
|||||C|P|Ɛ|You can compare these to the Greek gods Athena and Aphrodite.|
|||||C|P|Ɛ|It is not clear how their powers exactly compare.|
|||||C|P|Ɛ|However, it seems that Freya must have attributes that match both.|
|||||C|P|Ɛ|So the answer is yes.|
❌
✔
✔
✔
(b) Examples of self-evaluation score distribution of different predictions on the StrategyQA dataset. We also
provide explanations corresponding to the ground-truth answers for reference.
Figure 7: Comparisons among predictions of high and low self-evaluation scores on arithmetic
(7a for GSM8K) and commonsense (7b for StrategyQA) reasoning tasks. Scores from low to high
are visualized from [orange] (0.0), yellow (0.4), to [green] (1.0). Here C, P, and E represent the
evaluation confidence, the generation confidence, and their combination as the final score, respectively.
from the improved coverage of the plausible generations and the ensembling effect. Nevertheless, one
can adjust the sampling-related parameters (i.e., τ and α) to incorporate more randomness into the
generations. In practice, we find that a moderate temperature decay (e.g., α = 0.5) results in improved
performance. We conduct further analysis of the effect of sampling diversity in Appendix A.2.
**Qualitative Analysis.** We examine particular instances to investigate the behavior of correctness
confidence scores C and generation probabilities P in different scenarios. From the comparison
shown in Figure 7, we have the following main observations:
_• In general, the correctness confidence is more effective at identifying logical errors, taking into_
account the accumulated mistakes from prior steps, while the generation probability focuses more on
text perplexity as the confidence of the generation LLM.
_• When comparing arithmetic and commonsense tasks, LLMs exhibit greater confidence in dealing_
with structured and objective reasoning chains such as problems in GSM8K, for both generation and
self-evaluation, as opposed to reasoning chains in StrategyQA.
_• Reasoning chains that appear logically plausible can achieve high correctness confidence scores_
but still result in incorrect answers, as demonstrated in R41 in Figure 7b. Moreover, the correctness
confidence can be influenced by minor details (e.g., imperfect variable naming in PAL reasoning) and
assign low scores regardless of the correctness of the final answers as shown in R22 in Figure 7a.
_• Incoherence due to a sudden jump in reasoning (e.g., R32 in Figure 7b) can lead to low correctness_
confidence. Additionally, the correctness confidence tends to be lower when the generation LLM
makes a probability statement with less certainty, such as “it seems” as illustrated by R42 in Figure 7b.
-----
**4** **Related Work**
**Reasoning Formulation.** Several studies have attempted to better formulate the reasoning problem.
One approach is to generate rationales to enhance model interpretability (Zhou et al., 2020; Wiegreffe
and Marasovic, 2021; Wiegreffe et al., 2021). Recently, the focus has shifted towards decomposing
the reasoning process into intermediate steps before reaching the final answer (Wei et al., 2022b;
Zhou et al., 2023; Gao et al., 2023; Chen et al., 2022). Various decomposition techniques have been
explored, such as question reduction (Zhou et al., 2023; Yang et al., 2022), iterative prompting (Wang
et al., 2022), and chaining the steps (Wu et al., 2022). While incorporating intermediate reasoning
steps has resulted in substantial performance improvements, errors or imperfections can accumulate,
especially when the chains become longer (Wu et al., 2016; Guo et al., 2018). As such, we utilize
LLM self-evaluation as a stepwise criterion to improve the chaining process.
**LLM Self-Evaluation.** Recent research on LLM calibration shows that current LLMs’ probabilistic
predictions correspond well with actual token occurrence frequencies, leading to well-calibrated
predictions for specific tasks (Rae et al., 2021; Kadavath et al., 2022; Guo et al., 2017; Kadavath
et al., 2022; Jiang et al., 2021; Kuhn et al., 2023). Notably, scaling model size plays a crucial role
in enhancing calibration (Rae et al., 2021; Wei et al., 2022a). As LLMs exhibit good calibration,
an increasing number of studies focus on prompting LLMs to perform self-evaluation as a means
of verification (Zhang et al., 2023; Shinn et al., 2023; Madaan et al., 2023; Paul et al., 2023). Selfevaluation provides an effective and efficient assessment method without requiring task-specific
verifier fine-tuning, which typically involves additional annotations (Li et al., 2022). In contrast to
existing works that refine generation results through instance-level self-evaluation, our approach
applies self-evaluation results as a stepwise criterion to calibrate generation at a finer granularity.
By focusing on step-by-step self-evaluation, our method enables fine-grained guided decoding,
addressing the challenges associated with complex or lengthy reasoning.
**Decoding Strategies.** A tradeoff typically exists between diversity and quality. Deterministic
decoding methods such as greedy decoding and beam search (Jurafsky and Martin, 2009; Graves,
2012) often produce high-quality results but lack diversity (Stahlberg and Byrne, 2019; Meister et al.,
2020). Temperature sampling (Ackley et al., 1985), top-k sampling (Fan et al., 2018), and top-p
sampling (Holtzman et al., 2020) are various techniques used to enhance diversity. The recent work of
_tree-of-thought (Yao et al., 2023) explores different search algorithms such as breadth-first and depth-_
first searches tailored for different tasks. Differently, we propose a unified framework of stochastic
beam search (Caccia et al., 2020; Kool et al., 2019; Meister et al., 2021), which combines beam
search and temperature sampling to balance the quality–diversity trade-off in multi-step reasoning.
**5** **Discussion**
We have introduced a multi-step decoding method that calibrates reasoning with stepwise selfevaluation guidance via stochastic beam search for current large language models. The empirical
success of our method across a broad range of tasks, from arithmetic and symbolic to commonsense
reasoning, demonstrates its robustness and generalizability in various application areas. The significant performance gains of our method on long reasoning chains also highlight its applicability to
other multi-step tasks, such as multi-hop question answering and more complex scenarios involving
multi-modal understanding, reasoning, and planning. In future work, we will investigate how to utilize
external tools to further enhance the calibration and explore its generalizability on other multi-step
scenarios to deal with more complex information such as external knowledge and multimodalities.
**Potential Impacts and Limitations**
We propose self-evaluation guided stochastic beam search to facilitate multi-step reasoning. However,
our approach, based on stepwise self-evaluation guidance, has certain limitations. It requires access
to LLM logits to calculate the self-evaluation score, restricting its applicability to more powerful
LLMs, such as GPT-4, which do not provide token likelihoods. Plus, multi-step decoding inherently
causes additional costs from candidate sampling and self-evaluation. For optimal balance between
efficiency and cost, our approach is best applied to longer reasoning chains, where the cumulative
effect of calibration across multiple steps can improve the overall performance more significantly.
-----
**Acknowledgments and Disclosure of Funding**
The computational work for this article was partially performed on resources of the National Supercomputing Centre (NSCC), Singapore[7]. We would like to thank Prof. Hwee Tou Ng for his insightful
discussions that enhanced the depth and quality of our study.
**References**
[David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. 1985. A learning algorithm for boltzmann](https://doi.org/https://doi.org/10.1016/S0364-0213(85)80012-4)
[machines. Cognitive Science, 9(1):147–169.](https://doi.org/https://doi.org/10.1016/S0364-0213(85)80012-4)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen
Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter,
Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark,
[Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
[models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
_on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual._
Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2020.
[Language gans falling short. In 8th International Conference on Learning Representations, ICLR 2020, Addis](https://openreview.net/forum?id=BJgza6VtPB)
_Ababa, Ethiopia, April 26-30, 2020. OpenReview.net._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri
Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael
Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov,
Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such,
Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob
[McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large](http://arxiv.org/abs/2107.03374)
[language models trained on code.](http://arxiv.org/abs/2107.03374)
[Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022. Program of thoughts prompting:](https://doi.org/10.48550/arXiv.2211.12588)
[Disentangling computation from reasoning for numerical reasoning tasks. CoRR, abs/2211.12588.](https://doi.org/10.48550/arXiv.2211.12588)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha
Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael
Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk
Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito,
David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani
Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor
Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang,
Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck,
[Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. CoRR,](https://doi.org/10.48550/arXiv.2204.02311)
abs/2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,
[Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training](http://arxiv.org/abs/2110.14168)
[verifiers to solve math word problems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
[Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of](https://doi.org/10.18653/v1/P18-1082)
_the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages_
889–898, Melbourne, Australia. Association for Computational Linguistics.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham
[Neubig. 2023. PAL: program-aided language models. In International Conference on Machine Learning,](https://proceedings.mlr.press/v202/gao23f.html)
_ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning_
_Research, pages 10764–10799. PMLR._
[7https://www.nscc.sg/](https://www.nscc.sg/)
-----
[Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://doi.org/10.1162/tacl_a_00370)
[use a laptop? A question answering benchmark with implicit reasoning strategies. Trans. Assoc. Comput.](https://doi.org/10.1162/tacl_a_00370)
_Linguistics, 9:346–361._
[Alex Graves. 2012. Sequence transduction with recurrent neural networks. CoRR, abs/1211.3711.](http://arxiv.org/abs/1211.3711)
[Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In](http://proceedings.mlr.press/v70/guo17a.html)
_Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia,_
_6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1321–1330. PMLR._
[Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via](https://doi.org/10.1609/aaai.v32i1.11957)
[adversarial training with leaked information. Proceedings of the AAAI Conference on Artificial Intelligence,](https://doi.org/10.1609/aaai.v32i1.11957)
32(1).
[Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text](https://openreview.net/forum?id=rygGQyrFvH)
[degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa,](https://openreview.net/forum?id=rygGQyrFvH)
_Ethiopia, April 26-30, 2020. OpenReview.net._
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022.
[Large language models can self-improve.](http://arxiv.org/abs/2210.11610)
[Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know When language models](https://doi.org/10.1162/tacl_a_00407)
[know? on the calibration of language models for question answering. Trans. Assoc. Comput. Linguistics,](https://doi.org/10.1162/tacl_a_00407)
9:962–977.
[Dan Jurafsky and James H. Martin. 2009. Speech and language processing : an introduction to natural language](http://www.amazon.com/Speech-Language-Processing-2nd-Edition/dp/0131873210/ref=pd_bxgy_b_img_y)
_[processing, computational linguistics, and speech recognition. Pearson Prentice Hall, Upper Saddle River,](http://www.amazon.com/Speech-Language-Processing-2nd-Edition/dp/0131873210/ref=pd_bxgy_b_img_y)_
N.J.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer,
Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones,
Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny
Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson,
Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris
[Olah, and Jared Kaplan. 2022. Language models (mostly) know what they know. CoRR, abs/2207.05221.](https://doi.org/10.48550/arXiv.2207.05221)
[Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
[models are zero-shot reasoners. In NeurIPS.](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
[Wouter Kool, Herke van Hoof, and Max Welling. 2019. Stochastic beams and where to find them: The gumbel-](http://proceedings.mlr.press/v97/kool19a.html)
[top-k trick for sampling sequences without replacement. In Proceedings of the 36th International Conference](http://proceedings.mlr.press/v97/kool19a.html)
_on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings_
_of Machine Learning Research, pages 3499–3508. PMLR._
[Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for](https://openreview.net/pdf?id=VD-AYtP0dve)
[uncertainty estimation in natural language generation. In The Eleventh International Conference on Learning](https://openreview.net/pdf?id=VD-AYtP0dve)
_Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V. Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari,
[and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. In NeurIPS.](http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html)
[Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2022. On the](http://arxiv.org/abs/2206.02336)
[advance of making language models better reasoners.](http://arxiv.org/abs/2206.02336)
[Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation:](https://doi.org/10.18653/v1/P17-1015)
[Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the](https://doi.org/10.18653/v1/P17-1015)
_Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1:_
_Long Papers, pages 158–167. Association for Computational Linguistics._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and
[Ashwin Kalyan. 2023. Dynamic prompt learning via policy gradient for semi-structured mathematical](https://openreview.net/pdf?id=DHyHRBwJUTN)
[reasoning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali,](https://openreview.net/pdf?id=DHyHRBwJUTN)
_Rwanda, May 1-5, 2023. OpenReview.net._
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha
Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta,
[Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. CoRR,](https://doi.org/10.48550/arXiv.2303.17651)
abs/2303.17651.
-----
[Clara Meister, Afra Amini, Tim Vieira, and Ryan Cotterell. 2021. Conditional poisson stochastic beam search.](http://arxiv.org/abs/2109.11034)
_CoRR, abs/2109.11034._
[Clara Meister, Ryan Cotterell, and Tim Vieira. 2020. Best-first beam search. Trans. Assoc. Comput. Linguistics,](https://doi.org/10.1162/tacl_a_00346)
8:795–809.
[Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing](https://doi.org/10.18653/v1/2020.acl-main.92)
[english math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for](https://doi.org/10.18653/v1/2020.acl-main.92)
_Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 975–984. Association for Computational_
Linguistics.
Maxwell I. Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021.
[Show your work: Scratchpads for intermediate computation with language models. CoRR, abs/2112.00114.](http://arxiv.org/abs/2112.00114)
[OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.](https://doi.org/10.48550/arXiv.2303.08774)
[Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math](https://doi.org/10.18653/v1/2021.naacl-main.168)
[word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association](https://doi.org/10.18653/v1/2021.naacl-main.168)
_for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021,_
pages 2080–2094. Association for Computational Linguistics.
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi
[Faltings. 2023. REFINER: reasoning feedback on intermediate representations. CoRR, abs/2304.01904.](https://doi.org/10.48550/arXiv.2304.01904)
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, H. Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick,
Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, PoSen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John
Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar,
Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre,
Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic
Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev,
Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien
de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego
de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew J. Johnson, Blake A. Hechtman, Laura
Weidinger, Iason Gabriel, William Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer,
Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and
[Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. CoRR,](http://arxiv.org/abs/2112.11446)
abs/2112.11446.
[Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory](https://doi.org/10.48550/arXiv.2303.11366)
[and self-reflection. CoRR, abs/2303.11366.](https://doi.org/10.48550/arXiv.2303.11366)
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz,
Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv,
Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane,
Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew
La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna
Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun
Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas,
[and et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language](https://doi.org/10.48550/arXiv.2206.04615)
[models. CoRR, abs/2206.04615.](https://doi.org/10.48550/arXiv.2206.04615)
[Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In](https://doi.org/10.18653/v1/D19-1331)
_Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th_
_International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3356–3362, Hong_
Kong, China. Association for Computational Linguistics.
[Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question](https://doi.org/10.18653/v1/n19-1421)
[answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the](https://doi.org/10.18653/v1/n19-1421)
_North American Chapter of the Association for Computational Linguistics: Human Language Technologies,_
_NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages_
4149–4158. Association for Computational Linguistics.
-----
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR,](https://doi.org/10.48550/arXiv.2302.13971)
abs/2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov,
Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya
Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao,
Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas,
Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux,
Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan
Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang,
Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela
Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.](https://doi.org/10.48550/arXiv.2307.09288)
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell,
[Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with process- and outcome-based](http://arxiv.org/abs/2211.14275)
[feedback.](http://arxiv.org/abs/2211.14275)
[Boshi Wang, Xiang Deng, and Huan Sun. 2022. Shepherd pre-trained language models to develop a train of](https://doi.org/10.48550/arXiv.2203.08383)
[thought: An iterative prompting approach. CoRR, abs/2203.08383.](https://doi.org/10.48550/arXiv.2203.08383)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The](https://openreview.net/pdf?id=1PL1NIMMrw)
_Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023._
OpenReview.net.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten
Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean,
[and William Fedus. 2022a. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022.](https://openreview.net/forum?id=yzkSU5zdwD)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and
[Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
[Sarah Wiegreffe and Ana Marasovic. 2021. Teach me to explain: A review of datasets for explainable natural](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/698d51a19d8a121ce581499d7b701668-Abstract-round1.html)
[language processing. In Proceedings of the Neural Information Processing Systems Track on Datasets and](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/698d51a19d8a121ce581499d7b701668-Abstract-round1.html)
_Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual._
[Sarah Wiegreffe, Ana Marasovic, and Noah A. Smith. 2021. Measuring association between labels and free-text](https://doi.org/10.18653/v1/2021.emnlp-main.804)
[rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing,](https://doi.org/10.18653/v1/2021.emnlp-main.804)
_EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 10266–10284._
Association for Computational Linguistics.
[Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. Ai chains: Transparent and controllable human-ai](https://doi.org/10.1145/3491102.3517582)
[interaction by chaining large language model prompts. In Proceedings of the 2022 CHI Conference on Human](https://doi.org/10.1145/3491102.3517582)
_Factors in Computing Systems, CHI ’22, New York, NY, USA. Association for Computing Machinery._
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim
Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu,
Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian,
Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado,
[Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap](http://arxiv.org/abs/1609.08144)
[between human and machine translation. CoRR, abs/1609.08144.](http://arxiv.org/abs/1609.08144)
[Jingfeng Yang, Haoming Jiang, Qingyu Yin, Danqing Zhang, Bing Yin, and Diyi Yang. 2022. SEQZERO:](https://doi.org/10.18653/v1/2022.findings-naacl.5)
[few-shot compositional semantic parsing with sequential prompts and zero-shot models. In Findings of the](https://doi.org/10.18653/v1/2022.findings-naacl.5)
_Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages_
49–60. Association for Computational Linguistics.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan.
[2023. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601.](https://doi.org/10.48550/arXiv.2305.10601)
Tianyi Zhang, Tao Yu, Tatsunori Hashimoto, Mike Lewis, Wen-Tau Yih, Daniel Fried, and Sida Wang. 2023.
[Coder reviewer reranking for code generation. In International Conference on Machine Learning, ICML](https://proceedings.mlr.press/v202/zhang23av.html)
_2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research,_
pages 41832–41846. PMLR.
-----
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui,
[Olivier Bousquet, Quoc V. Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in](https://openreview.net/pdf?id=WZH7099tgfM)
[large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023,](https://openreview.net/pdf?id=WZH7099tgfM)
_Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, and Jian Tang.
[2020. Towards interpretable natural language understanding with explanations as latent variables. In Advances](https://proceedings.neurips.cc/paper/2020/hash/4be2c8f27b8a420492f2d44463933eb6-Abstract.html)
_in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems_
_2020, NeurIPS 2020, December 6-12, 2020, virtual._
-----
**A** **Appendix**
**A.1** **Theoretical Analysis of Eq. 4**
In Eq. 4, we use sampled from the language model LM generations. This is an approximation for sampling
_S_ _G_
from the infinite set of all possible chaining paths. And the finite set S is constructed based on the generation
LM PLM, which is different from our target distribution as shown in Eq. 4.
_G_
Specifically, denote the infinite set of all possible generated completions till thesampling from Pbeam[∗] [(][s][1:][t][) =] _s[1:][t]exp (E(s[1:][t])/τ_ ) _s t[1:]-th step asexp ([t]_ [exp (]E(s[1:][E][t]) S[(]/τ[s][1:][∗])[t], we approximate[)][/τ] [)] [, where][ S][ is]
_∈S[∗]_ [exp (][E][(][s][1:][t][)][/τ] [)][ via][ P][beam][(][s][1:][t][) =] _∈S_
the approximation of S _[∗]_ with |S|P = nk = M ≤|S _[∗]|._ P
Define the upper bound ¯c and the lower bound c on each exp (E(s[1:][t])/τ ) as ¯c ≥ exp (E(s[1:][t])/τ ) ≥ _c for all_
_s[1:][t]_ _∈S_ _[∗]. Define the ratio as r = ¯c/c. Note that c ≥_ 1 since E(s[1:][t])/τ ≥ 0. Thus, we can take r ≤ _c¯._
We now give the following proposition which shows that |Pbeam[∗] [(][s][1:][t][)][ −] _[P][beam][(][s][1:][t][)][|][ decreases at the rate of]_
( [1][−][M/]M[|S][∗][|] ) toward 0 as M increases. Note that as M increases toward, the numerator 1 _M/_
_Odecreases toward 0 while the factor for the denominator_ _M1_ [also decreases.] _|S_ _[∗]|_ _−_ _|S_ _[∗]|_
**Proposition 1. For any s[1:][t], the difference between Pbeam[∗]** [(][s][1:][t][)][ and][ P][beam][(][s][1:][t][)][ is bounded by]
1 _M/_ _∗_
_|Pbeam[∗]_ [(][s][1:][t][)][ −] _[P]beam[(][s][1:][t][)][| ≤]_ _[r][2]_ _−_ _M_ _|S_ _|_
_Proof. We now prove the second statement by analyzing the absolute difference:_
(6)
_|Pbeam[∗]_ [(][s][1:][t][)][ −] _[P]beam[(][s][1:][t][)][|]_ (7)
exp ( (s[1:][t])/τ ) exp ( (s[1:][t])/τ )
= _E_ _E_ (8)
_s[1:][t]_ _s[1:][t]_ [exp (][E][(][s][1:][t][)][/τ] [)]
_∈S[∗]_ [exp (][E][(][s][1:][t][)][/τ] [)][ −] _∈S_
P _s[1:][t]_ P _s[1:][t]_ [exp (][E][(][s][1:][t][)][/τ] [)]
= [exp (][E][(][s][1:][t][)][/τ] [)] _∈S[∗]_ [exp (][E][(][s][1:][t][)][/τ] [)][ −] [P] _∈S_ (9)
_s[1:][t]_ [exp (][E][(][s][1:][t][)][/τ] [)][ P]s[1:][t]
_∈S[P]_ _∈S[∗]_ [exp (][E][(][s][1:][t][)][/τ] [)]
exp ( P(s[1:][t])/τ ) _s[1:][t]_ [exp (][E][(][s][1:][t][)][/τ] [)]
_E_ _∈S[∗]\S_
= (10)
_s[1:][t]_ [exp (][E][(][s][1:][t][)][/τ] [)] _s[1:][t]_
_∈S_ [P] _∈S[∗]_ [exp (][E][(][s][1:][t][)][/τ] [)]
Since exp (E(s[1:][t])/τ P) is nonnegative, using the upper bound on each P exp (E(s[1:][t])/τ ), we have:
_c¯[2](_ _M_ )
_|Pbeam[∗]_ [(][s][1:][t][)][ −] _[P]beam[(][s][1:][t][)][| ≤]_ _s[1:][t]_ [exp (][E][(][s][1:][t][)][/τ]|S[)][∗]| − _s[1:][t]_ (11)
_∈S_ _∈S[∗]_ [exp (][E][(][s][1:][t][)][/τ] [)]
Similarly, using the lower bound on each exp ( PE(s[1:][t])/τ ), P
_c[2](_ _M_ ) 1 _M/_ _∗_
_|Pbeam[∗]_ [(][s][1:][t][)][ −] _[P]beam[(][s][1:][t][)][| ≤]_ [¯] _c|S[2]_ _[∗]| −M_ = r[2] _−_ _M_ _|S_ _|_ (12)
_|S_ _[∗]|_
-----
**A.2** **Extended Experiments**
78
76
76.6
0.00 0.25 0.50 0.75 1.00
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
|80 79.0|.2 7|79.4 8.8|||
||||7|7 7|
||||= 0.8||
||LM tempe|rature|= 0.8||
||LM tempe|rature|= 1.0||
||||||
Sampling temperature ( = 0.5)
0.2 0.4 0.6 0.8 1.0
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|78.4||79. 78.8|4|||
|77.9||||||
|LM||||= 0.8||
||LM|tempera|ture|= 0.8||
|LM|LM|tempera|ture|= 1.0|7 3|
|||||||
Decay ratio ( = 0.5)
Figure 8: Accuracy curves with different sampling diversity. The two plots show the changes in
performance on GSM8K when the sampling temperature τ and its decay ratio α vary, respectively.
**Sampling Diversity.** In accordance with Figure 6b, we observe similar results when ablating the sampling
hyperparameters τ and α for the single reasoning chain case, as shown in Figure 8. Increasing τ and α generally
adds more diversity to the decoding process, but excessive randomness negatively impacts the performance of
the single-chain decoding. Generally, a moderate temperature decay results in improved performance. Therefore,
we set α = 0.5 throughout our experiments for simplicity and only tune τ for randomness control.
0.62
0.61
0.60
0.59
0.58
0.82
0.80
0.78
0.76
0.74
0.2 0.4 0.6 0.8
0.2 0.4 0.6 0.8
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
|||||= 0.2 = 0.4|
|||||= 0.8|
||||||
||||||
||||||
|= 0.2 = 0.5 = 0.8|Col2|Col3|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||||
= 0.2
= 0.4
= 0.8
= 0.2
= 0.5
= 0.8
(a) λ-AUC curves of Eλ on GSM8K (PAL). (b) λ-AUC curves of Eλ on StrategyQA (CoT).
Figure 9: The change of AUC scores with different values of λ in _λ. We calculate the AUC score as_
_E_
how Eλ can successfully determine whether the corresponding predicted reasoning chain can produce
the ground-truth answer. The predictions here are from the baseline methods (i.e., CoT & PAL) with
different LM temperatures γ, as represented by curves of different colors.
1.0
0.8
0.6
0.4
0.2
1.0
0.8
0.6
0.4
0.2
0.0
High-scored ( > 0.90) Predictions Low-scored ( < 0.70) Predictions
(a) GSM8K (PAL prompting)
0.0
High-scored ( > 0.70) Predictions Low-scored ( < 0.50) Predictions
(b) StrategyQA (CoT prompting)
Figure 10: Comparison between predictions of high v.s. low self-evaluation scores on instance-level
accuracy.
-----
**More Analysis on Self-Evaluation.** Recall that we use a combination of generation confidence and
faithfulness score as Eλ = C[λ] _· P_ [(1][−][λ][)], with λ ∈ [0, 1]. In our experiments, we set λ = 0.5 for all tasks for
simplicity. However, we investigate its effects here since, intuitively, it is an important hyperparameter for
distinguishing correct / incorrect predictions and might require different values for various reasoning tasks and
datasets. Its effect is also coupled with the language model temperature γ.
Figure 9 demonstrates how λ functions on arithmetic (GSM8K) and commonsense (StrategyQA). In general, we
observe that the performance remains relatively stable with different choices of λ on different datasets, although
fine-tuning this hyperparameter might lead to further improvements. This stability suggests that the choice of λ
is not overly sensitive across various reasoning tasks and datasets, but exploring its optimal value for specific
tasks could potentially lead to even better performances.
To examine the influence of incorporating faithfulness on LLM final predictions, we plot the distributions of
changes in different scores, specifically the faithfulness score C, the generation confidence P, and the overall
decoding score Eλ on the baseline reasoning chains and the reasoning chains generated by our method. We
categorize the data points into 4 sets based on whether our approach changes the final prediction. Since the
majority of the data points belong to the “both correct” set (in blue), where both baselines and our method
generate accurate predictions, we particularly highlight the last two sets (in green and red), where our method
results in improvement and degradation, respectively.
As shown in Figure 11, faithfulness typically works by significantly increasing the evaluation confidence
_C of model predictions, while the generation confidence P remains similar to that of the baseline methods._
Specifically, for the evaluation confidence C, our approach corrects the original predictions by increasing the
confidence scores. This indicates that evaluation confidence plays a crucial role in guiding the decoding toward a
better reasoning choice in decomposed reasoning. The increase is more significant for PAL when compared with
CoT. This demonstrates that LLMs are generally better at judging the logic in reasoning that is more structured,
while the free-text intermediate steps (e.g., CoT reasoning) may be challenging to conduct information extraction
and soundness checking.
A similar conclusion can be drawn from Figure 10, where the difference in instance-level accuracy distributions
between high-scored and low-scored predictions is more significant on the GSM8K dataset. For StrategyQA,
while the incorporation of faithfulness helps, the level of the score value does not align well with whether the
prediction is correct. For example, most of the low-scored predictions can still obtain the correct answers, as
shown by the plot on the right of Figure 10b.
-----
|Col1|both correct both wrong wrong correc|t|Col4|
|---|---|---|---|
||correct wron|g||
|||||
both correct
both wrong
wrong correct
correct wrong
change in generation probability ( )
10
0
|Col1|Col2|wrong correc correct wron|t g|
|---|---|---|---|
|||||
|||||
|||||
wrong correct
correct wrong
change in generation probability ( )
|Col1|both correct both wrong wrong correct|
|---|---|
||correct wrong|
|Col1|wrong correct correct wrong|
|---|---|
|||
|||
both correct
both wrong
wrong correct
correct wrong
change in correctness confidence ( )
change in correctness confidence ( )
|Col1|Col2|both correct both wrong wrong correct|
|---|---|---|
|||correct wrong|
||||
|ange in score ( ),||= (1 ), = 0|
|||wrong correct correct wrong|
||||
||||
||||
both correct
both wrong
wrong correct
correct wrong
change in score ( ), = (1 ), = 0.5
(a) Distributions of score shifts on GSM8K using PAL prompting.
25
|Col1|both correc|t|
|---|---|---|
||both wrong||
||wrong co correct w|rrect rong|
||||
both correct
both wrong
wrong correct
correct wrong
change in generation probability ( )
10
0
|Col1|wrong co|rrect|
|---|---|---|
||correct w|rong|
||||
||||
||||
wrong correct
correct wrong
change in generation probability ( )
|Col1|Col2|both correct|
|---|---|---|
|||both wrong|
|||wrong correct correct wrong|
||||
|Col1|Col2|wrong correct|
|---|---|---|
|||correct wrong|
||||
||||
both correct
both wrong
wrong correct
correct wrong
change in correctness confidence ( )
change in correctness confidence (
|Col1|Col2|both correct|
|---|---|---|
|||both wrong|
|||wrong correct correct wrong|
||||
|ange in score ( ),||= (1 ), = 0|
|||wrong correct|
|||correct wrong|
||||
||||
||||
both correct
both wrong
wrong correct
correct wrong
change in score ( ), = (1 ), = 0.5
(b) Distributions of score shifts on GSM8K using CoT prompting.
150
50
0
15
10
|both correct|Col2|Col3|
|---|---|---|
|both wrong wrong correc|t||
|correct wron|g||
||||
|Col1|wrong co|rrect|
|---|---|---|
||correct w|rong|
||||
||||
||||
||||
both correct
both wrong
wrong correct
correct wrong
change in generation probability ( )
wrong correct
correct wrong
change in generation probability (
|Col1|Col2|both correct|
|---|---|---|
|||both wrong wrong correct|
|||correct wrong|
||||
|Col1|Col2|wrong correct|
|---|---|---|
|||correct wrong|
||||
||||
||||
both correct
both wrong
wrong correct
correct wrong
change in correctness confidence ( )
change in correctness confidence (
|Col1|Col2|both correct|
|---|---|---|
|||both wrong wrong correct|
|||correct wrong|
||||
|ange in score ( ),||= (1 ), = 0|
|||wrong correct|
|||correct wrong|
||||
||||
||||
||||
both correct
both wrong
wrong correct
correct wrong
change in score ( ), = (1 ), = 0.5
(c) Distributions of score shifts on StrategyQA using CoT prompting.
Figure 11: Distributions of changes in scores from baselines to our method. Since the prediction
correctness keeps unchanged most of the time (i.e., “both correct/incorrect” in blue/orange), we
specifically plot how the scores shift on data points where the predictions get corrected or incorrect,
as shown in green and red, respectively.
-----
Table 5: Impact of LLM backends (Codex vs. ChatGPT vs. GPT-4) and prompting methods (PAL vs.
CoT). The results of ChatGPT (gpt-3.5-turbo) were obtained on 20 March 2023.
Model GSM8K StrategyQA
CoT Codex 65.6 73.2
PAL Codex 72.0 _−_
CoT ChatGPT 80.8 65.9
PAL ChatGPT 78.7 _−_
CoT GPT−4 **92.0** _−_
Ours (CoT) Codex 71.9 **77.2**
Ours (PAL) Codex 80.2 _−_
**LLM Backbone Study.** We are interested in how stronger LLMs (i.e., ChatGPT, GPT-4 (OpenAI, 2023))
work, but they are not directly compatible with our approach since the API does not return token logits.
Table 5 compares the results of various backend LLMs (i.e., Codex, ChatGPT, and GPT-4) on GSM8K. In
arithmetic reasoning with PAL prompting, our Codex-based method achieves competitive results (80.2% vs.
78.7%) even when compared with ChatGPT. The results are consistent across other datasets, including AQuA
(55.9% vs. 54.7%), SVAMP (89.6% vs. 84.1%), ASDiv (84.9% vs. 84.1%), and TabMWP (79.1% vs. 80.6%).
In commonsense reasoning, our method using Codex significantly outperforms ChatGPT-based methods across
different datasets, including StrategyQA (77.2% vs. 65.9%), CommonsenseQA (78.6% vs. 75.2%) and
```
Sports Understanding (98.4% vs. 95.9%). One possible explanation is that ChatGPT lacks sufficient
```
world knowledge for effective fact checking and commonsense reasoning. Given the significant performance
improvement of GPT-4, we conduct further analysis about how to synergistically combine it with our method.
**GPT-4 Experiments** The recently launched GPT-4 has demonstrated notable improvements in reasoning
capabilities across a variety of tasks. In this section, we examine and compare the reasoning skills of different
large language models (LLMs), specifically Codex and GPT-4, in assessing and determining the accuracy of
each step in a reasoning chain. We contrast the confidence scores and corresponding explanations for Codex
(C) and GPT-4 (S) in the context of both arithmetic and commonsense reasoning, as shown in Figure 13
and Figure 14, respectively. For ease of visualization, we employ the same colormap (shown in Figure 12)
as in Figure 7 to represent the scale of scores. Since OpenAI has not provided access to the token-wise
likelihood of generated text, we directly request GPT-4 to score the reasoning steps using binary values 8.
Moreover, we report the average of three evaluation results to reduce the variance of sampling discrete values,
_i.e., S = (S1 + S2 + S3)/3, Si ∈_ [0, 1].
As illustrated in Figure 13, GPT-4 demonstrates greater effectiveness in pinpointing the central logical error
in arithmetic reasoning. For instance, we can observe that S < C for alex_total = alex_weight +
```
weight_multiplier * grace_weight and S > C for answer = grace_weight + alex_total, where
```
the former leads to an incorrect final answer. Additionally, GPT-4 typically offers detailed explanations
and alternative solutions. As seen in the step answer = grace_weight + alex_total, GPT-4 can correct
minor errors even when it arrives at the correct final answer. However, GPT-4 may still encounter difficulties in
detecting small errors within the text, which can have a significant impact on logical consistency. This challenge is
illustrated by the substantial variance in S for the step alex_total = alex_weight + weight_multiplier
```
* grace_weight.
```
The benefits of well-crafted explanations in GPT-4 become more significant when handling complex reasoning
tasks, as demonstrated in Figure 14. For instance, in the R42 of Q4 shown in Figure 7b, Codex generally assigns
high evaluation scores for all steps. Although this reasoning chain leads to the correct final answer, it makes
some overly definitive assumptions without reasonable justification (e.g., “must have attributes that match both”).
In such cases, GPT-4 can accurately identify these vague statements through meticulous analysis. Moreover,
the comprehensive analysis helps address the growing uncertainty arising from the ambiguity in understanding
commonsense questions. For example, in the final step, GPT-4 offers extensive explanations for assigning low S
scores, considering the limited information available to comprehend the question.
In summary, GPT-4 demonstrates promising improvements in LLM evaluation by providing more rigorous and
in-depth explanations. While there is significant variation in its generated explanations for some challenging
cases, these explanations are typically beneficial as they elaborate on understanding and interpretation based on
the given information. This suggests the potential for using LLM explanations to enhance self-evaluation in
future research.
8The results using GPT-4 were obtained on 22 April 2023.
-----
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
self-evaluation scores: | |
Figure 12: Score from low to high values visualized using colormap from [orange ]to [green ].
|[R12] in Python|Col2|Col3|
|---|---|---|
|grace_weight = 125|||
|Ɛ|C|correct|
|S|S 1|correct|
||S 2|The line of code correctly assigns the given weight of Grace (125 pounds) to the variable grace_weight|
||S 3|The given line of code correctly assigns the value of 125 to the variable grace_weight, representing Grace's weight in pounds.|
|alex_weight = 2|||
|Ɛ|C|alex_weight should be calculated based on grace_weight|
|S|S 1|alex_weight should be calculated as 4 times grace_weight minus 2|
||S 2|The line of code defining alex_weight should be calculated as 4 times Grace's weight minus 2, which is (4 * grace_weight) - 2.|
||S 3|The calculation for Alex's weight is not correct. It should be calculated as 4 * grace_weight - 2.|
|weight_multiplier = 4|||
|Ɛ|C|correct|
|S|S 1|The line of code is correct as it defines the multiplier for Alex's weight based on Grace's weight.|
||S 2|Correct because it assigns the correct value (4) to the variable weight_multiplier, which represents the factor by which Grace's weight is multiplied to calculate Alex's weight.|
||S 3|The code correctly assigns the value 4 to the variable weight_multiplier, which will be used in the calculation of Alex's weight.|
|alex_total = alex_weight + weight_multiplier * grace_weight|||
|Ɛ|C|correct formula to calculate alex_total|
|S|S 1|The formula used to calculate Alex's weight is incorrect.|
||S 2|The above line of code calculates Alex's weight correctly as 2 pounds less than 4 times Grace's weight (4 * grace_weight - 2).|
||S 3|Alex weighs 2 pounds less than 4 times what Grace weighs, not 2 pounds more.|
|answer = grace_weight + alex_total|||
|Ɛ|C|should be the sum of grace_weight and alex_weight|
|S|S 1|The code calculates Alex's weight correctly using the given relationship and adds it to Grace's weight to calculate the combined weight. The only minor issue is the unnecessary use of the separate variable alex_weight which could directly be replaced with the calculation (4 * grace_weight - 2). However, this does not affect the correctness of the answer.|
||S 2|The result variable holds the correct combined weight of Grace and Alex, and this line of code correctly calculates the value of answer.|
||S 3|The code above correctly calculates the total weight of Grace and Alex based on the given problem statement. The only minor issue is that the alex_weight variable is initialized as 2, which might be a bit confusing. It would be better to directly calculate alex_total as (4 * grace_weight - 2), but the current code does not produce an incorrect result.|
**[Q1]** **Grace weighs 125 pounds. Alex weighs 2** pounds less than 4 times what Grace weighs.
What are their combined weights in pounds?
**[Ground-Truth a1*]** 623.0
**[Predicted a12]** 627.0❌
**[R12] in Python**
**grace_weight = 125**
Ɛ C correct
S1 correct
The line of code correctly assigns the given weight of Grace (125 pounds) to the variable
S S2 **grace_weight**
The given line of code correctly assigns the value of 125 to the variable grace_weight, representing
S3 Grace's weight in pounds.
**alex_weight = 2**
Ɛ C **alex_weight should be calculated based on grace_weight**
S1 **alex_weight should be calculated as 4 times grace_weight minus 2**
The line of code defining alex_weight should be calculated as 4 times Grace's weight minus 2, which
S S2 is (4 * grace_weight) - 2.
S3 The calculation for Alex's weight is not correct. It should be calculated as 4 * grace_weight - 2.
**weight_multiplier = 4**
Ɛ C correct
S1 The line of code is correct as it defines the multiplier for Alex's weight based on Grace's weight.
Correct because it assigns the correct value (4) to the variable weight_multiplier, which represents
S S2 the factor by which Grace's weight is multiplied to calculate Alex's weight.
The code correctly assigns the value 4 to the variable weight_multiplier, which will be used in the
S3 calculation of Alex's weight.
**alex_total = alex_weight + weight_multiplier * grace_weight**
Ɛ C correct formula to calculate alex_total
S1 The formula used to calculate Alex's weight is incorrect.
The above line of code calculates Alex's weight correctly as 2 pounds less than 4 times Grace's weight
S S2 (4 * grace_weight - 2).
S3 Alex weighs 2 pounds less than 4 times what Grace weighs, not 2 pounds more.
**answer = grace_weight + alex_total**
Ɛ C should be the sum of grace_weight and alex_weight
The code calculates Alex's weight correctly using the given relationship and adds it to Grace's weight to
calculate the combined weight. The only minor issue is the unnecessary use of the separate variable
S1 **alex_weight which could directly be replaced with the calculation (4 * grace_weight - 2).**
However, this does not affect the correctness of the answer.
The result variable holds the correct combined weight of Grace and Alex, and this line of code correctly
S S2 calculates the value of answer.
The code above correctly calculates the total weight of Grace and Alex based on the given problem
statement. The only minor issue is that the alex_weight variable is initialized as 2, which might be a
S3 bit confusing. It would be better to directly calculate alex_total as (4 * grace_weight - 2), but the
current code does not produce an incorrect result.
Figure 13: Comparison between Codex (C) and GPT-4 (S) on evaluating the reasoning chains in
arithmetic reasoning. The example (R12 for Q1) is the same one from Figure 7a. We underline the
key statements in explanations that justify the judgment of GPT-4.
-----
|[Q ] Is Freya a combination of Athena and Aphrodite? 4 [Ground-Truth a *] yes 4 [Explanation] Athena was the Greek goddess of war. Aphrodite was the Greek goddess of love. Freya was the Norse goddess of war, love, and fertility.|Col2|Col3|Col4|
|---|---|---|---|
|[Predicted a ] yes 42 ✔ [R ] in Chain-of-Thought 42||||
|Freya is the goddess of love, beauty and fertility.||||
|Ɛ|C|correct||
|S|S 1|Freya indeed is associated with love, beauty, and fertility in Norse mythology. However, it is important to note that while the comparison to Athena and Aphrodite is understandable, it is not a direct or complete equivalence.||
||S 2|The statement correctly identifies Freya. However, the statement does not mention that Freya is a Norse goddess, which could slightly impact the understanding of the comparison to Greek goddesses Athena and Aphrodite.||
||S 3|The statement is mostly correct as Freya is indeed a goddess associated with love, beauty, and fertility. However, it might be worth mentioning that she is a Norse goddess, while Athena and Aphrodite are Greek goddesses, to provide more context.||
|You can compare these to the Greek gods Athena and Aphrodite.||||
|Ɛ|C|correct||
|S|S 1|It is true that Freya can be compared to the Greek gods Athena and Aphrodite in terms of their respective domains. However, the comparison is not entirely accurate as Freya comes from Norse mythology, and the attributes of the gods may not be an exact match. The comparison is a simplification, which is why the probability is not higher.||
||S 2|The statement establishes a comparison between Freya and the Greek goddesses Athena and Aphrodite. While this step of reasoning is generally correct, it could be improved by specifically mentioning that Freya is a Norse goddess and by providing more information about the roles and attributes of Athena and Aphrodite for a clearer comparison.||
||S 3|While it is true that Freya shares some similarities with Athena and Aphrodite, the comparison is not entirely accurate. Freya is a Norse goddess, while Athena and Aphrodite are Greek goddesses. The comparison is correct in the sense that they share some domains, but it might be misleading to suggest that Freya is a direct "combination" of these two goddesses.||
|It is not clear how their powers exactly compare.||||
|Ɛ|C|correct||
|S|S 1|This step of reasoning acknowledges that it is not entirely clear how the powers of Freya, Athena, and Aphrodite compare. Although there are some similarities, the differences between Norse and Greek mythologies make it difficult to determine whether Freya is a direct combination of Athena and Aphrodite. The high probability reflects the acknowledgment of the uncertainty in comparing the powers of the gods across different mythologies.||
||S 2|The statement acknowledges that it is not clear how the powers of Freya, Athena, and Aphrodite exactly compare. However, the question asks if Freya is a combination of Athena and Aphrodite, which implies a synthesis or blending of their roles or attributes. The reasoning could be improved by specifically mentioning the roles and attributes of Athena and Aphrodite, and whether or not they overlap with Freya's roles and attributes.||
||S 3|The reasoning here is more accurate, acknowledging that while there are similarities between Freya and Athena and Aphrodite, it is not clear how their powers exactly compare. It is important to consider that Freya is a Norse goddess and Athena and Aphrodite are Greek goddesses, so their roles, domains, and characteristics may not be directly comparable.||
|However, it seems that Freya must have attributes that match both.||||
|Ɛ|C|correct||
|S|S 1|incorrect because it assumes that Freya must have attributes that match both Athena and Aphrodite. While there may be some similarities in their attributes, it is not appropriate to conclude that Freya is a combination of the two Greek goddesses. Norse and Greek mythologies are distinct, and the deities in these pantheons may have different origins and roles.||
||S 2|The statement acknowledges that Freya has attributes that match both Athena and Aphrodite. However, it would be more accurate to discuss the specific attributes of Athena and Aphrodite to determine the extent of the similarities.||
||S 3|It assumes that Freya must have attributes that match both Athena and Aphrodite, but this may not be the case. While there are similarities between these goddesses, they are from different mythologies (Freya – Norse, Athena & Aphrodite – Greek), and their roles, domains, and characteristics may not be directly comparable or combined. It is not accurate to assume that Freya is a combination of Athena and Aphrodite without a more in-depth analysis in their respective mythologies.||
|So the answer is yes.||||
|Ɛ|C|correct||
|S|S 1|This step of reasoning is incorrect because it is just based on some similarities in their attributes. While there may be some similarities, it is not accurate to say that Freya is a combination of the two Greek goddesses. Norse and Greek mythologies are distinct, and the deities in these pantheons have different origins and roles.||
||S 2|Although Freya shares some attributes with both Athena and Aphrodite, it is an oversimplification to claim that she is a combination of the two. The question seems to imply that Freya might be a direct mix of the two, but this is not accurate.||
||S 3|The conclusion that Freya is a combination of Athena and Aphrodite is not accurate based on the information provided. While there are similarities between these goddesses, they belong to different mythologies (Freya is Norse, Athena and Aphrodite are Greek), and their roles, domains, and characteristics may not be directly comparable or combined.||
|||||
✔
Figure 14: Comparison between Codex (C) and GPT-4 (S) on evaluating the reasoning chains in
commonsense reasoning. The example (R42 for Q4) is the same one from Figure 7b. We underline
the key points in GPT-4 rationales that explain the detailed understanding and analysis on the steps.
-----
**A.3** **Implementation Details**
Similar to beam search, we maintain k distinct candidates in the beam and sample n completions for each one.
Thus, for each reasoning step s[t], the search space has a size of k · n. After acquiring k · n samples, we retain
_k candidates by sampling from Pbeam as Eq. 4. We set k = 5, n = 16 with Codex backbone to balance the_
quality and efficiency. The maximum number of steps to decode is capped at 16. To control the computational
cost and time complexity, one can also reduce the number of rollouts per beam and the beam size to n = 2 and
_k ∈_ [3, 4] respectively, as we illustrate with Llama-2 backbone.
We set generation temperatures differently for various tasks and baselines. Regarding the generation temperature
_γ on Codex, for arithmetic and symbolic reasoning with PAL using deterministic beam search (τ = 0.0), we find_
that γ ∈ [0.4, 0.8] generally works well. In contrast, for commonsense reasoning with CoT, a lower temperature
(γ ∈ [0.1, 0.5]) is more effective, likely due to the increased randomness from the free-text format. Differently,
when using Llama-2 backbone, PAL generally works better with lower generation temperature γ ≤ 0.5, while
CoT can tolerate larger γ > 0.5 with better or comparable performance. This difference between Codex and
Llama-2 may come from the different training tasks and data adopted for the two models, where the PAL
reasoning is especially enhanced in Codex.
In majority voting, higher γ is preferred to better explore the search space in reasoning, e.g., γ ≥ 0.5 for
arithmetic reasoning. To further introduce sampling randomness in stochastic beam search for majority voting
on multiple reasoning chains, we use α = 0.5 for all datasets but different values of τ for each task. Specifically,
we choose τ = 0.5 for PAL and τ = 0.2 for CoT, as PAL typically decomposes the reasoning problem into more
steps than CoT. Here we tune τ instead of α to be smaller in CoT as CoT naturally contains more randomness
due to its free-text formulation as we observe in practice, where a smaller τ is more efficient to balance this
randomness.
In previous works, majority voting on multiple reasoning chains involves sampling N (usually ≥ 20) reasoning
chains and conducting a vote to determine the final answer, which can be time-consuming. In our approach,
we simply perform majority voting among the N candidates in the last step of beam search from only a few
times (≤ 10) of searching. As a result, our method does not introduce additional time complexity compared to
the conventional majority voting method, although we sacrifice some diversity in the final outcomes due to the
similarity of candidates within a beam.
**Prompts.** We show examples of both the generation and evaluation prompts we use on different tasks in the
following tables, where we only show one instance for each case. Full prompts and detailed formulations can be
found in our code.
-----
Table 6: Examples of few-shot exemplars of generation and evaluation CoT prompts on GSM8K. We
adopt 8-shot and 5-shot learning for generation and evaluation, respectively.
**GENERATION**
**Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are**
done, there will be 21 trees. How many trees did the grove workers plant today?
**A:**
There are 15 trees originally.
And there were 21 trees after some more were planted.
So 21 - 15 = 6 trees were planted.
So the answer is 6.
**EVALUATION**
**Q: Allison brought some CDs online. Each CD cost $7. There was an additional charge of $4 per**
order for shipping costs. The total bill came to $60. How many CDs did Allison buy?
**A:**
Each CD cost 7 dollars.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is (A)
And there was an additional charge of 4 dollars.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is (A)
So the total cost of each CD is 7 + 4 = 11 dollars.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is (B), because each CD costs 7 dollars.
So 60 / 11 = 5.45.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is (B), because it cost 4 dollars for shipping costs. So the cost of CDs
is 60 - 4 = 56 dollars. So Allison bought 56 / 7 = 8 CDs.
So the answer is 5.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is (A), but the value of the number of CDs is incorrect.
-----
Table 7: Examples of few-shot exemplars of generation and evaluation PAL prompts on GSM8K.
We adopt 9-shot and 5-shot learning for generation and evaluation, respectively.
**GENERATION**
**Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have**
left?
```
def solution():
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
money_left = money_initial - money_spent
result = money_left
return result
```
**EVALUATION**
**Q: A piece of square paper has a perimeter of 32 centimeters. Nicky’s dog, Rocky, tore off**
1/4 of the paper. What is the area of the remaining paper?
```
def solution():
perimeter = 32
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
fraction_torn = 1 / 4
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
area_total = (perimeter / 4) ** 2
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), because the total area of the square
```
_,→_ `can be calculated by (perimeter / 4) ** 2`
```
area_remaining = (1 - fraction_torn) * area_total
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
result = area_total
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (B), because the result should be
```
_,→_ `area_remaining`
```
return result
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), but the value of result is incorrect
```
-----
Table 8: Examples of few-shot exemplars of generation and evaluation PAL prompts on AQuA. Here
```
... represents the same evaluation script as those in the previous steps. We adopt 8-shot and 5-shot
```
learning for generation and evaluation, respectively.
**GENERATION**
**Question: In a flight of 600 km, an aircraft was slowed down due to bad weather. Its average speed**
for the trip was reduced by 200 km/hr and the time of flight increased by 30 minutes. The duration of
the flight is:
**Answer Choices: A)1 hour; B)2 hours; C)3 hours; D)4 hours; E)5 hours**
```
def solution():
duration = Symbol('duration', positive=True)
delay = 30 / 60
total_disntace = 600
original_speed = total_disntace / duration
reduced_speed = total_disntace / (duration + delay)
solution = solve_it(original_speed - reduced_speed - 200, duration)
duration = solution[duration]
result = duration
return result
```
**EVALUATION**
**Question: Two trains of length 150 m and 200 m are 100 m apart. They start moving towards each**
other on parallel tracks, at speeds 54 kmph and 72 kmph. In how much time will the trains cross each
other?
**Answer Choices: A)100/7 sec; B)80/7 sec; C)57/7 sec; D)110/7 sec; E)50/7 sec**
```
def solution():
train_1_speed = 54 / 60
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
train_2_speed = 72 / 60
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
distance_between_trains = 100
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
train_1_length = 150
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
train_2_length = 200
# ...
# The above line of code is: (A)
time_to_cross = distance_between_trains / (train_1_speed +
```
_,→_ `train_2_speed)`
```
# ...
# The above line of code is: (B), because to cross each other, the
```
_,→_ `total distance should also contain the train length`
```
result = time_to_cross
# ...
# The above line of code is: (B), because the final result should be in
```
_,→_ `seconds, and the value of time_to_cross is incorrect`
```
return result
# ...
# The above line of code is: (A), but the value of result is incorrect
```
-----
Table 9: Examples of few-shot exemplars of generation and evaluation PAL prompts on SVAMP and
**ASDiv. Here we utilize the same prompts as they have the same task formulation. We adopt 7-shot**
and 5-shot learning for generation and evaluation, respectively.
**GENERATION**
**Passage: James bought 93 red and 10 blue stickers, he used 31 red sticker on his fridge and 7 blue**
stickers on his laptop.
**Question: How many red stickers does James have?**
```
def solution():
original_red_stickers = 93
used_red_stickers = 31
red_stickers = original_red_stickers - used_red_stickers
result = red_stickers
return result
```
**EVALUATION**
**Passage: A piece of square paper has a perimeter of 32 centimeters. Nicky’s dog, Rocky, tore off 1/4**
of the paper.
**Question: What is the area of the remaining paper?**
```
def solution():
perimeter = 32
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
side_length = perimeter / 4
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
area = side_length ** 2
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
result = area
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (B), because should calculate the
```
_,→_ `remaining area after torn off as result`
```
return result
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), but the value of result is incorrect
```
-----
Table 10: Examples of few-shot exemplars of generation and evaluation PAL prompts on TabMWP.
We adopt 4-shot and 5-shot learning for generation and evaluation, respectively.
**GENERATION**
**Table of "Coin collections":**
Name | Number of coins
Braden | 76 \\ Camilla | 94 \\ Rick | 86
Mary | 84 \\ Hector | 80 \\ Devin | 83
Emily | 82 \\ Avery | 87
**Question: Some friends discussed the sizes of their coin collections. What is the mean of the**
numbers?
```
def solution():
number_of_coins_for_different_person = [76, 94, 86, 84, 80, 83, 82, 87]
mean_of_the_numbers = sum(number_of_coins_for_different_person) /
```
_,→_ `len(number_of_coins_for_different_person)`
```
result = mean_of_the_numbers
return result
```
**EVALUATION**
**Table of "Roller coasters per amusement park":**
Stem | Leaf
1 | 0, 0, 1, 6, 8, 9 \\ 2 | 4, 4, 5, 7, 8, 8
3 | 1, 2, 4, 4, 9, 9 \\ 4 | 2, 3, 5, 6, 8, 9, 9
**Question: Rodrigo found a list of the number of roller coasters at each amusement park in the state.**
How many amusement parks have fewer than 40 roller coasters?
```
def solution():
```
```
number_of_roller_coasters_per_amusement_park = [10, 14, 14, 15, 16, 18, 19, 20, 24, 25, 26, 28, 29, 29, 29,
```
_→_ `30, 34, 35, 36, 39, 40, 40, 40, 41, 42, 43, 44, 44, 45, 45, 46, 46, 47, 48, 48, 49, 49, 49, 50, 50, 51,`
_→_ `51, 52, 52, 53, 53, 54, 54, 55, 55, 56, 56, 57, 57, 58, 58, 59, 59, 60, 60, 61, 61, 62, 62, 63, 63, 64,`
_→_ `64, 65, 65, 66, 66, 67, 67, 68, 68, 69, 69, 70, 70, 71, 71, 72, 72, 73, 73, 74, 74, 75, 75, 76, 76, 77,`
_→_ `77, 78, 78, 79, 79, 80, 80, 81, 81, 82, 82, 83, 83, 84, 84, 85, 85, 86, 86, 87, 87, 88, 88, 89, 89, 90,`
_→_ `90, 91, 91, 92, 92, 93, 93, 94, 94, 95, 95, 96, 96, 97, 97, 98, 98, 99, 99]`
```
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (B), beacuse values in the rows of Stem and Leaf represent the decimal and
```
_→_ `individual digits, respectively`
```
number_of_amusement_parks_with_fewer_than_40_roller_coasters = 0
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), because this is to initialize the
```
_,→_ `number_of_amusement_parks_with_fewer_than_40_roller_coasters`
```
for number_of_roller_coasters in
```
_,→_ `number_of_roller_coasters_per_amusement_park:`
```
if number_of_roller_coasters < 40:
number_of_amusement_parks_with_fewer_than_40_roller_coasters +=
```
```
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), but the value of
```
_,→_ `number_of_roller_coasters_per_amusement_park is incorrect`
```
result = number_of_amusement_parks_with_fewer_than_40_roller_coasters
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), but the value of
```
_,→_ `number_of_amusement_parks_with_fewer_than_40_roller_coasters is`
_,→_ `incorrect`
```
return result
# ...
# The above line of code is: (A), but the value of result is incorrect
```
-----
Table 11: Examples of few-shot exemplars of generation and evaluation PAL prompts on Date
```
Understanding from Big-Bench. We adopt 6-shot and 3-shot learning for generation and evaluation,
```
respectively.
**GENERATION**
**Q: 2015 is coming in 36 hours. What is the date one week from today in MM/DD/YYYY?**
```
def solution():
# If 2015 is coming in 36 hours, then today is 36 hours before.
today = datetime(2015, 1, 1) - relativedelta(hours=36)
# One week from today,
one_week_from_today = today + relativedelta(weeks=1)
# The answer formatted with %m/%d/%Y is
result = one_week_from_today.strftime('%m/%d/%Y')
return result
```
**EVALUATION**
**Q: Yesterday was April 30, 2021. What is the date yesterday in MM/DD/YYYY?**
```
def solution():
# Yesterday was April 30, 2021.
yesterday = datetime(2021, 4, 30)
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A)
# Yesterday,
yesterday = yesterday - relativedelta(days=1)
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (B), because the date of yesterday was
```
_,→_ `already got and this is step is redundant`
```
# The answer formatted with %m/%d/%Y is
result = yesterday.strftime('%m/%d/%Y')
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), but the value of yesterday is
```
_,→_ `incorrect`
```
return result
# Is the above line of code:
# (A) Correct
# (B) Incorrect
# The above line of code is: (A), but the value of result if incorrect
```
-----
Table 12: Examples of few-shot exemplars of generation and evaluation PAL prompts on Object
```
Counting from Big-Bench. We adopt 4-shot and 1-shot for generation and evaluation, respectively.
```
**GENERATION**
**Q: I have a drum, a flute, a clarinet, a violin, four accordions, a piano, a trombone, and a trumpet.**
How many musical instruments do I have?
```
def solution():
musical_instruments_to_count = {
'drum': 1,
'flute': 1,
'clarinet': 1,
'violin': 1,
'accordion': 4,
'piano': 1,
'trombone': 1,
'trumpet': 1
}
num_musical_instruments = sum(musical_instruments_to_count.values())
result = num_instruments
return result
```
**EVALUATION**
**Q: I have a peach, an onion, two garlics, a nectarine, a yam, a carrot, a potato, a head of broccoli, a**
cabbage, a banana, and a stalk of celery. How many vegetables do I have?
```
def solution():
# note: I'm not counting the peach, nectarine, banana, or celery as
```
_,→_ `vegetables`
```
vegetables_to_count = {
'onion': 1,
# ...
# The above line of code is: (A)
'garlic': 2,
# ...
# The above line of code is: (A)
'yam': 1,
# ...
# The above line of code is: (A)
'carrot': 1,
# ...
# The above line of code is: (A)
'potato': 1,
# ...
# The above line of code is: (A)
'broccoli': 1,
# ...
# The above line of code is: (A)
'cabbage': 1
# ...
# The above line of code is: (A)
}
# ...
# The above line of code is: (B), because celery is vegetable
num_vegetables = sum(vegetables_to_count.values())
# ...
# ... (A), but the value of vegetables_to_count is incorrect
result = num_vegetables
# ...
# ... (A), but the value of num_vegetables is incorrect
return result
# ...
# ... (A), but the value of result is incorrect
```
-----
Table 13: Examples of few-shot exemplars of generation and evaluation CoT prompts on CSQA. We
adopt 7-shot and 3-shot learning for generation and evaluation, respectively.
**GENERATION**
**Q: What do people use to absorb extra ink from a fountain pen?**
**Answer Choices: (a) shirt pocket; (b) calligrapher’s hand; (c) inkwell; (d) desk drawer; (e)**
blotter
**A:**
The answer must be an item that can absorb ink.
Of the above choices, only blotters are used to absorb ink.
So the answer is (e).
**EVALUATION**
**Q: John’s took his appointment book with him to a checkup but couldn’t find it anywhere.**
He looked and he looked but he didn’t know where it was. It wasn’t until he bought a new
one, went home, took a shower that he found it. Where was it?
**Answer Choices: (a) office supply store; (b) wallet; (c) doctor’s office; (d) handbag; (e)**
pocket
**A:**
The answer must be a place where John could have put his appointment book.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (B), because it fails to reason out that John will check the place
when taking a shower (taking off clothes)
Of the above choices, only wallet makes sense.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (B), because wallet is too small to put his appointment book, and it
cannot be the only one which makes sense (pocket is also plausible)
So the answer is (b).
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (A), but the previous step is incorrect
-----
Table 14: Examples of few-shot exemplars of generation and evaluation CoT prompts on StrategyQA.
We adopt 6-shot and 4-shot learning for generation and evaluation, respectively.
**GENERATION**
**Q: Do hamsters provide food for any animals?**
**A:**
Hamsters are prey animals.
Prey animals are food for predators.
Thus, hamsters provide food for some animals.
So the answer is yes.
**EVALUATION**
**Q: While viewing "Scary Movie" is the viewer likely to experience an increase in**
adrenaline?
**A:**
Adrenaline is a hormone that is released when the body is under stress.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (A)
Scary movies are designed to make the viewer feel stressed.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (B), because Scary Movie is a film that is a comedy take on horror,
intended to make viewers laugh
Thus, the viewer is likely to experience an increase in adrenaline.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (A), but the previous step is incorrect
So the answer is yes.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (A), but the previous step is incorrect
-----
Table 15: Examples of few-shot exemplars of generation and evaluation CoT prompts on Sports
```
Understanding on Big-Bench. We adopt 8-shot and 2-shot learning for generation and evaluation,
```
respectively.
**GENERATION**
**Q: Is the following sentence plausible?**
“KYLE PALMIERI WAS CALLED FOR SLASHING.”
**A:**
Kyle Palmieri is a hockey player.
Being called for slashing is part of hockey.
So the answer is yes.
**EVALUATION**
**Q: Is the following sentence plausible?**
“ROBIN LEHNER LAUNCHED THE DESPERATION HEAVE”
**A:**
Robin Lehner is a hockey player.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (B), because Robin Lehner is a goaltender which is different from a
player
Launching the desperation heave is part of hockey.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (B), because launching the desperation heave is for player, not
goaltender
So the answer is yes.
# Is the above step of reasoning:
# (A) Correct
# (B) Incorrect
# The above step of reasoning is: (A), but the previous step is incorrect
-----
| [
"Yuxi, Xie",
"Kenji, Kawaguchi",
"Yiran, Zhao",
"Junxian, He",
"Xu, Zhao",
"Min-Yen, Kan",
"Qizhe, Xie"
] | 2023-11-02T00:00:00 | NeurIPS 2023 Poster | true | 61 | 4 | null | https://openreview.net/forum?id=Bw82hwg5Q3 | https://arxiv.org/abs/2305.00633 | https://www.semanticscholar.org/paper/ef018d9fad6167cfddb7d6654c5422df1e953730 |
ENIGMA-NG: Efficient Neural and Gradient-Boosted Inference Guidance for E | N/A | The resulting methods improve on the manually designed clause guidance, providing the first practically convincing application of gradient-boosted and neural clause guidance in saturation-style automated theorem provers. | null | [
"Karel, Chvalovskỳ",
"Josef, Urban",
"Jan, Jakub\\uuv",
"Martin, Suda"
] | 2019-01-01T00:00:00 | null | false | 60 | 3 | null | https://www.semanticscholar.org/paper/5eaa3d6e630f1b5369d8253fb1408d252ac3dc8d | null | https://www.semanticscholar.org/paper/5eaa3d6e630f1b5369d8253fb1408d252ac3dc8d |
Semantically-Aligned Universal Tree-Structured Solver for Math Word Problems | A practical automatic textual math word problems (MWPs) solver should be able to solve various textual MWPs while most existing works only focused on one-unknown linear MWPs. Herein, we propose a simple but efficient method called Universal Expression Tree (UET) to make the first attempt to represent the equations of various MWPs uniformly. Then a semantically-aligned universal tree-structured solver (SAU-Solver) based on an encoder-decoder framework is proposed to resolve multiple types of MWPs in a unified model, benefiting from our UET representation. Our SAU-Solver generates a universal expression tree explicitly by deciding which symbol to generate according to the generated symbols’ semantic meanings like human solving MWPs. Besides, our SAU-Solver also includes a novel subtree-level semanticallyaligned regularization to further enforce the semantic constraints and rationality of the generated expression tree by aligning with the contextual information. Finally, to validate the universality of our solver and extend the research boundary of MWPs, we introduce a new challenging Hybrid Math Word Problems dataset (HMWP), consisting of three types of MWPs. Experimental results on several MWPs datasets show that our model can solve universal types of MWPs and outperforms several state-of-the-art models. | A simple but efficient method to make the first attempt to represent the equations of various MWPs uniformly, and a semantically-aligned universal tree-structured solver (SAU-Solver) based on an encoder-decoder framework is proposed to resolve multiple types of MWPs in a unified model, benefiting from the UET representation. | ## Semantically-Aligned Universal Tree-Structured Solver for Math Word Problems
**Jinghui Qin[1], Lihui Lin[2], Xiaodan Liang[1,2,][∗], Rumin Zhang[2], Liang Lin[1,2]**
1
Sun Yat-sen University
2
Dark Matter AI Inc.
[email protected], [email protected],
[email protected], rm [email protected],
[email protected]
**Abstract**
A practical automatic textual math word problems (MWPs) solver should be able to solve
various textual MWPs while most existing
works only focused on one-unknown linear
MWPs. Herein, we propose a simple but
efficient method called Universal Expression
Tree (UET) to make the first attempt to represent the equations of various MWPs uniformly. Then a semantically-aligned universal tree-structured solver (SAU-Solver) based
on an encoder-decoder framework is proposed
to resolve multiple types of MWPs in a unified model, benefiting from our UET representation. Our SAU-Solver generates a universal
expression tree explicitly by deciding which
symbol to generate according to the generated
symbols’ semantic meanings like human solving MWPs. Besides, our SAU-Solver also
includes a novel subtree-level semanticallyaligned regularization to further enforce the semantic constraints and rationality of the generated expression tree by aligning with the contextual information. Finally, to validate the
universality of our solver and extend the research boundary of MWPs, we introduce a
new challenging Hybrid Math Word Problems
dataset (HMWP), consisting of three types
of MWPs. Experimental results on several
MWPs datasets show that our model can solve
universal types of MWPs and outperforms several state-of-the-art models[1].
Arithmetic
Word
Problem
|= * * Equation 3 x 5 y Set Problem = + 24 x y|; = = * * + 24 3 x 5 y x y|
|---|---|
**=**
***** **6**
**3** **x**
**=**
***** **6**
**3** **x**
Expression Trees Universal Expression Trees
Figure 1: Universal Expression Trees (UET). In our
UET representation, multiple expression trees underlying a MWP will be integrated as an universal expression tree (UET) via symbol extension. UET can enable
a solver to handle multiple types of MWPs in an unified
manner like a single expression tree of an equation.
unknown quantity or multiple unknown quantities.
Thus, a machine should have the ability of natural
language understanding and reasoning. To solve an
MWP, the relevant quantities need to be identified
from the text, and the correct operators and their
computation order among these quantities need to
be determined.
Many traditional methods (Yuhui et al., 2010;
Kushman et al., 2014; Shi et al., 2015) have been
proposed to address this problem, but they relied
on tedious hand-crafted features and template annotation, which required extensive human efforts and
knowledge. Recently, deep learning has opened
a new direction towards automatic MWPs solving (Wang et al., 2017; Huang et al., 2018; Wang
et al., 2018b, 2019; Xie and Sun, 2019; Chiang and
Chen, 2019). Most of deep learning-based methods
try to train an end-to-end neural network to automatically learn the mapping function between problems and their corresponding equations. However,
there are some limitations hindering them from
**1** **Introduction**
Math word problems (MWPs) solving aims to automatically answer a math word problem by understanding the textual description of the problem
and reasoning out the underlying answer. A typical MWP is a short story that describes a partial
state of the world and poses a question about an
_∗Corresponding Author_
1The code and the new HMWP dataset are available at
[https://github.com/QinJinghui/SAU-Solver.](https://github.com/QinJinghui/SAU-Solver)
-----
being applied in real-world applications. First, although seq2seq model (Wang et al., 2017) can be
applied to solve various MWPs, it suffers from fake
numbers generation and mispositioned numbers
generation due to all data share the same target vocabulary without problem-specific constraints. Second, some advanced methods (Wang et al., 2018b,
2019; Xie and Sun, 2019) only target at arithmetic
word problems without any unknown or with one
unknown that do not need to model the unknowns
underlying in MWPs, which prevent them from
generalizing to various MWPs, such as equation
set problems. Thus, their methods can only handle arithmetic problems with no more than one unknown. Besides, they also lack an efficient equation
representation mechanism to handle those MWPs
with multiple unknowns and multiple equations,
such as equation set problems. Finally, though
some methods (Wang et al., 2017; Huang et al.,
2018; Chiang and Chen, 2019) can handle multiple
types of MWPs, they neither generate next symbol
by taking full advantage of the generated symbols
like a human nor consider the semantic transformation between equations in a problem, resulting in
poor performance on the multiple-unknown MWPs,
such as the MWPs involving equation set.
To address the above issues, we propose a simple
yet efficient method called Universal Expression
Tree (UET) to make the first attempt to represent
the equations of various MWPs uniformly like the
expression tree of one-unknown linear word problems with considering unknowns. Specifically, as
shown in Fig. 1, UET integrates all expression trees
underlying in an MWP into an ensemble expression
tree via math operator symbol extension so that the
grounded equations of various MWPs can be handled in a unified manner as handling one-unknown
linear MWPs. Thus, it can significantly reduce the
difficulty of modeling equations of various MWPs.
Then, we propose a semantically-aligned universal tree-structured solver (SAU-Solver), which is
based on our UET representation and an EncoderDecoder framework, to solve multiple types of
MWPs in a unified manner with a single model. In
our SAU-Solver, the encoder is designed to understand the semantics of MWPs and extract number
semantic representation while the tree-structured
decoder is designed to generate the next symbol
based on the problem-specific target vocabulary
in a semantically-aligned manner by taking full
advantage of the semantic meanings of the gener
ated expression tree like a human uses problem’s
contextual information and all tokens written to
reason next token for solving MWPs. The problemspecific target vocabulary can help our solver to
mitigate the problem of fake numbers generation
as much as possible.
Besides, to further enforce the semantic constraints and rationality of the generated expression
tree, we also propose a subtree-level semanticallyaligned regularization to further improve subtreelevel semantic representation by aligning with the
contextual information of a problem, which can
improve answer accuracy effectively.
Finally, to validate the universality of our solver
and push the research boundary of MWPs to math
real-word applications better, we introduce a new
challenging Hybrid Math Word Problems dataset
(HMWP), consisting of one-unknown linear word
problems, one-unknown non-linear word problems,
and equation set problems with two unknowns. Experimental results on HWMP, ALG514, Math23K,
and Dolphin18K-Manual show the universality and
superiority of our approach compared with several
state-of-the-art methods.
**2** **Related Works**
Numerous methods have been proposed to attack
the MWPs task, ranging from rule-based methods (Bakman, 2007; Yuhui et al., 2010), statistical
machine learning methods (Kushman et al., 2014;
Zhou et al., 2015; Mitra and Baral, 2016; Huang
et al., 2016; Roy and Roth, 2018),semantic parsing methods (Shi et al., 2015; Koncelkedziorski
et al., 2015; Huang et al., 2017), and deep learning methods (Ling et al., 2017; Wang et al., 2017,
2018b; Huang et al., 2018; Wang et al., 2018a; Xie
and Sun, 2019; Wang et al., 2019). Due to space
limitations, we only review some recent advances
on deep leaning-based methods. (Wang et al.,
2017) made the first attempt to generate expression
templates using Seq2Seq model. Seq2seq method
has achieved promising results, but it suffers from
generating spurious numbers, predicting numbers
at wrong positions, or equation duplication problem (Huang et al., 2018; Wang et al., 2018a). To
address them, (Huang et al., 2018) proposed to add
a copy-and-alignment mechanism to the standard
Seq2Seq model. (Wang et al., 2018a) proposed
equation normalization to normalize the duplicated
equations by considering the uniqueness of an expression tree.
-----
Different from seq2seq-based works, (Xie and
Sun, 2019) proposed a tree-structured decoder to
generate an expression tree inspired by the goaldriven problem-solving mechanism. (Wang et al.,
2019) proposed a two-stage template-based solution based on a recursive neural network for math
expression construction. However, they do not
model the unknowns underlying in MWPs, resulting in only handling one-unknown linear word
problems. Besides, they also lack an efficient mechanism to handle those MWPs with multiple unknowns and multiple equations, such as equation
set problems. Therefore, their solution can not
solve other types of MWPs that are more challenging due to larger search space, such as equation set
problems, non-linear equation problems, etc. (Chiang and Chen, 2019) is a general equation generator
that generates expression via the stack, but they did
not consider the semantic transformation between
equations in a problem, resulting in poor performance on the multiple-unknown MWPs, such as
equation set problems.
**3** **The design of SAU-Solver**
**3.1** **Universal Expression Tree (UET)**
The primary type of textual MWPs can be divided
into two groups: arithmetic word problems and
equation set problems. For a universal MWPs
solver, it is highly demanded to represent various
equations of various MWPs in a unified manner so
that the solver can generate equations efficiently.
Although most of the existing works can handle
one-unknown linear word problems well, it is more
challenging and harder for current methods to handle the equation set MWPs with multiple unknowns
well since they not only do not model the unknowns
in the MWPs but also lack of an efficient equations
representation mechanism to make their decoder
generate required equations efficiently. To handle the above issue, an intuitive way is treating
the equation set as a forest of expression trees and
all trees are processed iteratively in a certain order. Although this is an effective way to handle
equations set problems, it increases the difficulty
of equation generation since the model needs to
reason out the number of equations before starting
equation generation and the prediction error will
influence equation generation greatly. Besides, it
is also challenging to take full advantage of the
context information from the problem and the generated trees. Another way is that we can deploy
Seq2Seq-based architecture to handle various equations in infix order like in previous works (Wang
et al., 2017; Huang et al., 2018), but there are some
limitations, such as generating invalid expression,
generating spurious numbers, and generating numbers at wrong positions.
To overcome the above issues and maintain simplicity, we propose a new equation representation
called Universal Expression Tree (UET) to make
the first attempt to represent the equations of various MWPs uniformly. Specially, we extend the
math operator symbol table by introducing a new
operator ; as the lowest priority operator to integrate one or more expression trees into a universal
expression tree, as shown in Fig. 1. With UET, a
solver can handle the underlying equations of various textual MWPs easier in a unified manner like
the way on arithmetic word problems. Although
our UET is simple, it provides an efficient, concise,
and uniform way to utilize the context information
from the problem and treat the semantic transformation between equations as simple as treating the
semantic transformation between subtrees in an
equation.
**3.2** **SAU-Solver**
Based on our proposed UET representation, we design a universal tree-structured solver to generate
a universal expression tree explicitly according to
the problem context and explicitly model the relationships among unknown variables, quantities,
math operations, and constants in a tree-structured
way, as shown in Fig. 2. Our solver consists of
a Bi-GRU-based problem encoder and an explicit
tree-structured equation decoder. When a problem
is entered, our model first encodes each word of
the problem to generate the problem’s contextual
representation g0 by our problem encoder. Then,
the g0 will be used as the initial hidden state by our
tree-structured equation decoder to guide the equation generation in prefix order with two intertwined
processes: top-down tree-structured decoding and
bottom-up subtree semantic transformation. With
the help of top-down tree-structured decoding and
bottom-up subtree semantic transformation, SAUSolver can generate the next symbol by taking full
advantage of generated symbols in a semanticallyaligned manner like human solving MWPs. Finally,
we apply infix traversal and inverse number mapping to generate the corresponding human-readable
-----
**g0**
**h0** **h1** **hn** **=**
**Semantically-** **g1** **g4**
**Aligned**
**Regularization** **t3**
… **t3**
**g2** **+** **g3** **n1**
GRU GRU … GRU
GRU GRU … GRU **n2** **t1** **t1** **t2** **x**
Encoder Tree-structured Decoder
Embedding …
Model Output(Pre-order expression tree): = + x n2 n1
infix traversal &
inverse number mapping
Equation : x + 6 = 16; Solution: [10]
Hidden State Propagation
Subtree State Propagation
Dan and ?
Model Input: Dan and Jessica have NUM pens in total. Jessica
has NUM pens. How many pens does Dan has?
Number mapping:
{n1=16,n2=6}
Problem: Dan and Jessica have 16 pens in total. Jessica
has 6 pens. How many pens does Dan has?
Figure 2: An overview of our SAU-Solver. When a problem preprocessed by number mapping and replacement
is entered, our problem encoder encodes the problem text as context representation. Then our equation decoder
generates an expression tree explicitly in pre-order traversal for the problem according to the context representation.
Finally, infix traversal and inverse number mapping are applied to generate the corresponding equation.
equation that can be computed by SymPy[2], which
is a python library for symbolic mathematics.
**3.2.1** **Problem Encoder**
Bidirectional Gated Recurrent Unit (BiGRU) (Cho
et al., 2014) is an efficient method to encode sequential information. Formally, given an input
math word problem sentence P = {xt}t[n]=1[, we first]
embed each word into a vector xt. Then these
embeddings are fed into a two-layer BiGRU from
beginning to end and from end to beginning to
model the problem sequence:
where _−h→[p]n_ [and] _←h−[p]0_ [are the hidden states of forward]
sequence and backward sequence respectively.
**3.2.2** **Equation Decoder**
For decoding, inspired by previous works (Xie and
Sun, 2019; Chiang and Chen, 2019), we build a
semantically-aligned tree decoder to decide which
symbol to generate by taking full advantage of
the semantic meanings of the generated symbols
with two intertwined processes: top-down treestructured decoding and bottom-up subtree semantic transformation. Our decoder takes tree-based information gparent (left node) or (gparent, tl) (right
node) as the input and maintains two auxiliary
stacks G and T to enforce semantically-aligned
decoding procedure. The stack G maintains the
hidden states generated from the parent node while
the stack T helps the model decide which symbol to
generate by maintaining subtree semantic information of generated symbols. Benefiting from UET,
our decoder can automatically end the decoding
procedure without any special token. If the predicted token yt is an operator, then we generate
two children hidden states gl and gr according to
the current node embedding n of yt, and push them
into the stack G to maintain the state transition
among nodes and be used to predict token and its
node embedding. Besides, we also push the token
embedding e(yt|P ) of yt into the stack T so that
_−h→[p]t_ [=][ GRU] [(]−h−[p]t→1[,][ x][t][)]
_−_
_←h−[p]t_ [=][ GRU] [(]←h[p]t−+1−[,][ x][t][)]
**h[p]t** [=] _−h→[p]t_ [+] _←h−[p]t_
(1)
where GRU (·, ·) represents the function of a twolayer GRU. h[p]t [is the sum of the hidden states] _−h→[p]t_
and _←h−[p]t_ [, which are from both forward and backward]
GRUs. These representation vectors are then fed
into our tree-structured equation decoder for ensemble expression tree generation. Besides, we also
construct the hidden state g0 as the initial hidden
state of our equation decoder:
**g0[p]** [=][ −]h→[p]n [+] _←h−[p]0_ (2)
[2https://www.sympy.org/](https://www.sympy.org/)
-----
we can maintain subtree semantic information of
generated symbols after right child node generation. If the predicted token yt is not an operator,
we check the size of the stack T to judge whether
the current node is a right node. If the current
node is a right node, we transform the embedding
of parent node op, left sibling node l and current
node e(yt _P_ ) to a subtree semantic representation
_|_
**t, which represents the semantic meanings of gen-**
erated symbols for current subtree and is used to
help the right node generation of the upper subtree. In this way, our equation decoder can decode
out an equation as a human writes out an equation
according to the problem description.
**Token Embedding. For a problem P**, its target
vocabulary V _[tar]_ consists of 4 parts: math operators
_Vop, unknowns Vu, constants Vcon that are those_
common-sense numerical values occurred in the
target expression but not in the problem text (e.g. a
chick has 2 legs.), and the numbers np occurred in
_P_ . For each token y in V _[tar], its token embedding_
**e(y|P** ) is defined as:
**Mop(y)** if y _Vop_
_∈_
**e(y** _P_ ) = **Mu(y)** if y ∈ _Vu_ (3)
_|_ **Mcon(y)** if y _Vcon_
_∈_
**h[p]loc[(][y, P]** [)] if y ∈ _nP_
where Mop, Mu, and Mcon are three trainable
word embedding matrices independent of the specific problem. However, for a numeric value in nP,
we take the corresponding hidden state h[p]loc [from]
encoder as its token embedding, where loc(y, P )
is the index position of numeric value y in P .
**Gating Mechanism and Attention Mechanism.**
To better flow important information and ignore
useless information, we apply a gating mechanism
to generate node state n which will be used for
predicting the output and generating child hidden
states gl and gr for descendant nodes if the output
of the current node is a math operator:
_q = σ (WqI)_
the concatenation of the current node state n, the
contextual vector c aggregating relevant information of the problem as a weighted representation of
the input tokens by attention mechanism, and the
token embedding e(yt _P_ ) of the predicted token
_|_
_yt._
For better predicting a token yt by utilizing contextual information, we deploy an attention mechanism to aggregate relevant information from the
input vectors. Formally, given current node state n
and the encoder outputscontextual vector c as follows: {h[p]t _[}]t[n]=1[, we calculate the]_
exp (Va tanh (Wa [n, h[p]s[]))]
_s_ [(5)]
_i_ [exp (][V][a][ tanh (][W][a][ [][n][,][ h]i[p][]))] **[h][p]**
**c =**
Based on the contextual vector c and current node
state n, we can predict the token yt as follows:
exp(s(y **n, c, P** ))
_y = arg max_ _|_ (6)
_i_ [exp (][s][ (][y][i][|][n][,][ c][, P] [))]
P
where
**s(y|n, c, P** ) = Vn tanh (Ws[n, c, e(y|P )]) (7)
**Subtree Semantic Transformation.** Although
our decoder decodes a universal expression tree
in the prefix, to help our model to generate the next
symbol in a semantically-aligned manner by taking
full advantage of the semantic meanings of the generated expression tree, we design a recursive neural
network to transform the semantic representations
of the current node and its two child subtrees tl and
**tr into a high-level embedding t in a bottom-up**
manner. Formally, let t be a subtree, and y denotes
the predicted token of the root node of the subtree.
If y is a math operator, which means that the current subtree t must have two child subtrees tl and
**tr, the high-level embedding t should fuse the se-**
mantic information from the operator token y, the
left child subtree tl and the right child subtree tr
as follows:
(4)
_Q = tanh (WQI)_
_O = q ⊙_ _Q_
_gt = σ (Wgt [tl, tr, e(ˆy|P_ )])
_Ct = tanh (Wct |tl, tr, e(ˆy|P_ )])
**t = gt** _Ct_
_⊙_
(8)
where O can be a left node state nl, a right node
state, a left child hidden state gl, or a right child hidden state gr. For nl, I is gl generated by the parent
node. For nr, I is [gr, tl] which is the concatenation of the hidden state gr generated by the parent
node and the subtree semantic embedding tl of left
sibling. For gl and gr, I is [n, c, e(yt|P )] which is
Otherwise, t is the embedding e(y|P ) of the predicted token y because y is a numeric value, an
unknown variable, or a constant quantity and the
recursion stops.
-----
**3.2.3** **Semantically-Aligned Regularization**
When a subtree t is produced by our model, this
means that we have a computable unit. The semantics of this computable unit should be consistent
with the problem text P . To achieve this goal, we
propose a subtree-level semantically-aligned regularization to help train a better model with higher
performance. For each subtree embedding t and
encoder outputs **h[P]1** _[,][ h]1[P]_ _[,][ · · ·][,][ h]n[P]_, we first apply
an attention function to compute a semantically
aligned vector a as Equation(5), then we use a
two-layer feed-forward neural network with tanh
activation to transform t and a into same semantic
space respectively. The procedure can be formulated as:
**esa = We2 tanh (We1a)**
(9)
**dsa = Wd2 tanh (Wd1t)**
where We1, We2, Wd1, and Wd2 are trainable
parameter matrices. With the vectors esa and dsa
Let m be the number of subtrees in a universal
expression tree, we can regularize our model by
minimizing the following loss:
real-word MWPs better than GTS and StackDecoder which either can only handle single-var linear
MWPs without considering unknowns or can handle equations set problem iteratively. Second, we
introduce subtree-level semantically-aligned regularization for better enforcing the semantic constraints and rationality of generated expression tree
during training, leading to higher answer accuracy,
as illustrated in Table 2.
**4** **Hybrid Math Word Problem Dataset**
Most public datasets for automatic MWPs solving either are quite small such as Alg514 (Kushman et al., 2014), DRAW-1K (Upadhyay and
Chang, 2017), MaWPS (Koncel-Kedziorski et al.,
2016) or exist some incorrect labels such as Dolphin18K (Huang et al., 2016). An exception is
the Math23K dataset which contains 23161 problems labeled well with structured equations and
answers. However, it only contains one-unknown
linear MWPs, which is not sufficient to validate
the ability of a math solver about solving multiple types of MWPs. Therefore, we introduce a
new high-quality MWPs dataset, called HMWP, in
which each sample is extracted from a Chinese K12
math word problem bank, to validate the universality of math word problem solvers and push the
research boundary of MWPs to match real-world
scenes better. Our dataset contains three types of
MWPs: arithmetic word problems, equations set
problems, and non-linear equation problems. There
are 5491 MWPs, including 2955 one-unknownvariable linear MWPs, 1636 two-unknown-variable
linear MWPs, and 900 one-unknown-variable nonlinear MWPs. It should be noticed that our dataset
is sufficient for validating the universality of math
word problem solvers since these problems can
cover most cases about MWPs. We labeled our
data with structured equations and answers as
Math23K (Wang et al., 2017). The data statistics of
our dataset and several publicly available datasets
are shown in Table 1. From the statistics, we can
see that the #AVG EL (average equation length),
#Avg PN (average number of quantities occurred
in problems and their corresponding equations),
and #Avg Ops (average numbers of operators in
equations) are the largest among the serval publicly
available datasets. (Xie and Sun, 2019) showed the
higher these values, the more difficult it is. Therefore, our dataset is more challenging for MWPs
solvers.
_sa(T_ _P_ ) = [1]
_L_ _|_ _m_
**dsa** **esa** 2 (10)
_i=1_ _∥_ _−_ _∥_
X
**3.2.4** **Training Objective**
Given the training dataset D={(P _[i], T_ [1]), (P [2], T [2]),
_· · ·,(P_ _[N]_ _, T_ _[N]_ ) }, where T _[i]_ is the universal expression tree of problem P _[i], we minimize the following_
loss function:
(T _P_ ) = [ log p(T _P_ )+λ _sa(T_ _P_ )]
_L_ _|_ _−_ _|_ _∗L_ _|_
(P,TX)∈D
(11)
where
prob(yt **gt, ct, P** ) (12)
_|_
_t=1_
Y
_p(T_ _|P_ ) =
where m denotes the size of T, and gt and ct are
the hidden state vector and its contextual vector at
the t-th node. We set λ as 0.01 empirically.
**3.3** **Discussion**
The methods most relevant to our method are
GTS (Xie and Sun, 2019) and StackDecoder (Chiang and Chen, 2019). However, compared with
them, our method is different from them as follows. First, our method applies a universal expression tree to represent the diverse equations underlying different MWPs uniformly, which match
-----
|Dataset|# Problems|# Templates|# Sentences|# Words|# Avg EL|# Avg SNI|# Avg Constants|# Avg Ops|Problem types|
|---|---|---|---|---|---|---|---|---|---|
|Alg514|514|28|1.62k|19.3k|9.67|3.54|0.44|5.69|algebra, linear|
|Dolphin1878|1,878|1,183|3.30k|41.4k|8.18|2.58|0.63|4.97|linear + nonlinear|
|DRAW-1K|1,000|230|6.23k|81.5k|9.985|3.386|0.747|5.852|algebra, linear|
|MaWPS|2373|-|2373|73.3k|4.55|2.31|0.26|1.78|algebra, linear|
|Math23K|23,161|2,187|70.1k|822k|5.55|3.0|0.28|2.28|algebra, linear|
|Dolphin18k|18,460|5,871|49.9k|604k|9.19|3.15|1.09|4.96|linear + nonlinear|
|HMWP|5470|2779|9.56k|342k|10.73|3.42|1.35|5.96|linear + nonlinear|
Table 1: Statistics of our dataset and several publicly available datasets. Avg EL, Avg SNI, Avg Constants, and Avg
Ops represent average equation length, average number of quantities occurred in problems and their corresponding
equations, average numbers of constants only occurred in equations, and average numbers of operators in equations,
respectively. The higher these values, the more difficult it is. This has been shown in (Xie and Sun, 2019).
**5** **Experiments**
**5.1** **Experimental Setup and Training Details**
**Datasets, Baselines, and Evaluation metric.**
We conduct experiments on four datasets, such
as HMWP, Alg514 (Kushman et al., 2014),
Math23K (Wang et al., 2017) and Dolphin18KManual (Huang et al., 2016). The data statistics
of four datasets are shown in Table 1. The main
state-of-the-art learning-based methods to be compared are as follows: Seq2Seq-attn w/ SNI (Wang
et al., 2017) is a universal solver based on the
seq2seq model with significant number identification(SNI). GTS (Xie and Sun, 2019) is a goaldriven tree-structured MWP solver only for oneunknown-variable non-linear MWPs. **StackDe-**
**coder (Chiang and Chen, 2019) is a semantically-**
aligned MWPs solver. SAU-Solver w/o SSAR
and SAU-Solver are two universal tree-structured
solvers proposed in this paper without and with
subtree semantically-aligned regularization. Following our baselines, we use answer accuracy as
the evaluation metric: if the calculated value of
the predicted expression tree equals to the true answer, it is thought of correct since the predicted
expression is equivalent to the target expression.
|Model|HMWP|ALG514|Math23K|Dolphin18K Manual|
|---|---|---|---|---|
|Seq2Seq-attn w/ SNI|23.2%|16.1%|58.1%|5.9%|
|GTS|-|-|73.9%|-|
|StackDecoder|27.4%|28.86%|66.0%|9.8%|
|SAU-Solver w/o SSAR (ours)|44.40%|55.44%|74.53%|11.02%|
|SAU-Solver (ours)|44.83%|57.39%|74.84%|11.41%|
Model HMWP ALG514 Math23K [Dolphin18K]
Manual
Seq2Seq-attn w/ SNI 23.2% 16.1% 58.1% 5.9%
GTS - - 73.9%
StackDecoder 27.4% 28.86% 66.0% 9.8%
SAU-Solver w/o SSAR (ours) 44.40% 55.44% 74.53% 11.02%
SAU-Solver (ours) **44.83% 57.39%** **74.84%** **11.41%**
Table 2: Model comparison on answer accuracy via 5fold cross-validation. “-” means either the code is not
released or the model is not suitable on those datasets.
**Implementation Details. We use PyTorch[3]** to implement our model on Linux with NVIDIA RTX
2080Ti. All the words with less than five occurrences are converted into a special token UNK. We
3http://pytorch.org
set the dimensionality of word embedding and the
size of all hidden states for other layers as 128 and
512, respectively. But for HMWP and Dolphin18KManual, we set the size of all hidden states for other
layers as 384 since the memory consumption exceeds the capacity of NVIDIA RTX 2080Ti. Our
model is trained by ADAM optimizor (Kingma
and Ba, 2015) with β1 = 0.9, β2 =0.999, and ϵ =
10[−][8]. The mini-batch size is set to 32. The initial
learning rate is set to 10[−][3] and then decreases to
half every 20 epochs. To prevent overfitting, we
set the dropout probability as 0.5 and weight decay
as 1e[−][5]. Finally, we set beam size as 5 in beam
search to generate expression trees.
|Col1|linear (One-VAR)|linear (Two-VAR)|non-linear (One-VAR)|All|
|---|---|---|---|---|
|# Num|1944|1614|1912|5470|
|# Avg EL|10.50|12.10|9.83|10.73|
|# Avg SNI|3.59|3.59|3.12|3.42|
|# Avg Constants|1.21|1.41|1.45|1.35|
|# Avg Ops|5.70|7.10|5.26|5.96|
|Correct number (Retrieval-Jaccard)|222|348|618|1188|
|Accuracy ( Retrieval-Jaccard )|11.42%|21.56%|32.32%|21.72%|
|Correct number (Seq2Seq-attn w/SNI)|244|312|711|1267|
|Accuracy (Seq2Seq-attn w/SNI)|12.55%|19.33%|37.19%|23.2%|
|Correct number (SAU-Solver (ours))|593|673|1186|2452|
|Accuracy (SAU-Solver (ours))|30.50%|41.70%|62.03%|44.83%|
linear linear non-linear
All
(One-VAR) (Two-VAR) (One-VAR)
# Num 1944 1614 1912 5470
# Avg EL 10.50 12.10 9.83 10.73
# Avg SNI 3.59 3.59 3.12 3.42
# Avg Constants 1.21 1.41 1.45 1.35
# Avg Ops 5.70 7.10 5.26 5.96
Correct number
222 348 618 1188
(Retrieval-Jaccard)
Accuracy
11.42% 21.56% 32.32% 21.72%
( Retrieval-Jaccard )
Correct number
244 312 711 1267
(Seq2Seq-attn w/SNI)
Accuracy
12.55% 19.33% 37.19% 23.2%
(Seq2Seq-attn w/SNI)
Correct number
**593** **673** **1186** **2452**
(SAU-Solver (ours))
Accuracy
**30.50%** **41.70%** **62.03%** **44.83%**
(SAU-Solver (ours))
Table 3: The data statistics and performance on different subset of HMWP.
**5.2** **Results and Analyses**
**Answer Accuracy.** We conduct 5-fold crossvalidation to evaluate the performances of baselines
and our models on all four datasets. The results
are shown in Table 2. Several observations can be
made from the results in Table 2 as follows:
First, our SAU-Solver has achieved significantly
better than the baselines on four datasets. It proves
that our model is feasible for solving multiple types
-----
|Case 1: 鸡兔同笼,上数有NUM(n0 [20]) 个头,下数有NUM(n1 [50]) 条腿,可知鸡数量为多少?( An unknown number of rabbits and chickens were locked in a cage, counting from the top, there were NUM(n0 [20]) heads, counting from the bottom, there were NUM(n1 [50]) feet. How many chickens were locked in this cage? )|Col2|Col3|
|---|---|---|
|Seq2Seq: (x-n1)/n0=(x-n1)/n2; (error)|SAU-Solver w/o SSAR: n0+x+4.0*x=n1; (correct)|SAU-Solver: 2.0*x+4.0*(n0-x)=n1; (correct)|
|Case 2: NUM(n0 [1]) 艘轮船航行于A 、B NUM(n1 [2]) 个码头之间,顺水需NUM(n2 [5]) 小时,逆水需NUM(n3 [7]) 小时,已知水流速度为每 小时NUM (n4 [5]) 千米,则A 、B 之间距离为多少千米? ( NUM(n0 [1]) boat sailing between NUM(n1 [2]) docks, it takes NUM(n2 [5]) hours to sail from A to B downstream, while NUM(n3 [7]) hours sailing upstream. Knowing the velocity of the water flow is 5 km/h, what is the distance between A and B? )|||
|Seq2Seq: x/(n2+n1)+n1=x-/n2; (error)|SAU-Solver w/o SSAR: x/n2-n4=x/n3+n4; (correct)|SAU-Solver: x/n2-n4=x/n3+n4; (correct)|
|Case 3: 整理NUM(n0 [1]) 批图书,如果由NUM(n1 [1]) 个人单独做, 要花NUM(n2 [60]) 小时.现在由一部分人用NUM(N3 [1]) 小时整理, 随后 增加NUM(n4[15]) 人和他们一起又做了NUM(n5 [2]) 小时, 恰好完成整理工作.假设每个人的工作效率相同,那么先安排整理的人员有 多少人? ( Given NUM(n0 [1]) stack of books, NUM(n1 [1]) student can sort them in NUM(n2 [60]) hours. In the first NUM(N3 [1]) hours, there were several students sorting books, later, NUM(n4[15]) more students joined them, and they finished the job in another NUM(n5 [2]) hours together. If each student is as efficient as the others, how many students were working at the beginning?|||
|Seq2Seq: n1*(x/n2)+n5*(x+n4)/n2=1.0; (error)|SAU-Solver w/o SSAR: x/n2+n5*(x+n4)/n2=1.0; (correct)|SAU-Solver: x/n2+n5*(x+n4)/n2=1.0; (correct)|
|Case 4: 某农场老板准备建造NUM(n0 [1]) 个矩形羊圈,他打算让矩形羊圈的NUM(n1 [1]) 面完全靠墙,墙可利用的长度为NUM(n2 [25]) m ,另外NUM(n3 [1]) 面用长度为NUM(n4 [50]) m 的篱笆围成( 篱笆正好要全部用完,且不考虑接头的部分) ,若要使矩形羊圈的面积 为NUM(n5 [300]) m ˆ NUM(n6 [2]) ,求垂直于墙的边长.( A farm owner plans to build a rectangle sheepfold, with NUM(n1 [1]) side against the wall. The wall is 25 meters long, and he used NUM(n3 [1]) NUM(n4 [50])-meter-long fence to build the rest of the sheepfold (the fence should be exactly used up, neglecting the joining part). If the area of the sheepfold is NUM(n5 [300]) m ˆ NUM(n6 [2]), find the length of the side vertical to the wall.|||
|Seq2Seq: x*(n3-2.0*x)=n4; (error)|SAU-Solver w/o SSAR: (n2-2.0*x)*(n4-2.0*x)= n5; (error)|SAU-Solver: x*(n4-2.0*x)= n5; (correct)|
Table 4: Typical cases. Note that the results are represented as infix traversal of expression trees which is more
readable than prefix traversal.
drill down to analyse the performance of Retrieval**Jaccard, Seq2seq-attn w/SNI, and SAU-Solver**
on different types of MWPs in HMWP. The data
statistics and performance results are shown in Table 3. We can observe that our model outperforms
the other two models by a large margin on all subsets. Intuitively, the longer the expression length
is, the more complex the mathematical relationship of the problem is, and the more difficult it is.
And the average expression length of our dataset is
much longer than Math23K according to the data
statistics of Table 3 and Table 1. Therefore, we can
observe that the accuracy of our model on linear
(One-VAR) is lower than Math23K in Table 2.
of MWPs. It also proves that our model is more
general and more effective than other state-of-theart models on the real-word scenario that need to
solve multiple types of MWPs with a unified solver.
Second, with our subtree-level semanticallyaligned regularization on training procedure,
our SAU-Solver has gained additional absolute
0.43% accuracy on HMWP, absolute 1.95% accuracy on ALG514, absolute 0.31% accuracy
on Math23k, and absolute 0.39% accuracy on
Dolphin18k-Manual. This shows that subtree-level
semantically-aligned regularization is helpful for
improving subtree semantic embedding, resulting
in improving expression tree generation, especially
for the generation of the right child node. Although
StackDecoder can be a universal math word problem solver via simple operator extension, the performances on HMWP, ALG514, and Dolphin18kManual are very poor, since it generates expression trees independently and only considers the
semantic-aligned transformation in an expression
tree. Different from it, our SAU-Solver generates
multiple expression trees as a universal expression
tree and conducts subtree-level semantic-aligned
transformation for subsequent tree node generation
in our universal expression tree. In this way, we
can deliver the semantic information of the previous expression tree to help the generation of the
current expression tree. Therefore we can achieve
better performance than StackDecoder.
Overall, our model is more general and effective than other state-of-the-art models on multiple
MWPs and outperforms the compared state-of-theart models by a large margin on answer accuracy.
**Performance on different types of MWPs. We**
|Expression Tree Sizes|Math23K|Col3|Col4|HMWP|Col6|Col7|
|---|---|---|---|---|---|---|
||Correct|Error|Acc(%)|Correct|Error|Acc(%)|
|3-|729|168|81.27%|0|0|0%|
|5|1872|435|81.14%|3|1|75.00%|
|7|620|291|68.06%|32|25|56.14%|
|9|147|143|50.69%|159|69|69.74%|
|11|66|74|47.14%|102|111|47.89%|
|13+|20|66|23.26%|197|395|33.28%|
Expression Math23K HMWP
Tree Sizes Correct Error Acc(%) Correct Error Acc(%)
3- 729 168 81.27% 0 0 0%
5 1872 435 81.14% 3 1 75.00%
7 620 291 68.06% 32 25 56.14%
9 147 143 50.69% 159 69 69.74%
11 66 74 47.14% 102 111 47.89%
13+ 20 66 23.26% 197 395 33.28%
Table 5: Accuracy of different expression tree size.
**5.3** **Error Analysis**
In Table 5, we show the results of how the accuracy
changes as the expression tree size becomes larger.
We can observe that as the expression tree size becomes larger, our model’s performance becomes
lower. This shows that although our model can handle various equations in a unified manner, it still
has limitations at predicting long equations since
longer equations often match with more complex
MWPs which are more difficult to solve. Thus, our
model still has room for improvement in reasoning,
inference, and semantic understanding. Besides,
-----
compared with performances on Math23K which
has only a few examples with complex templates,
our model achieves significant improvement on the
subset of HMWP with expression tree size 13+.
This shows that constructing datasets with abundant complex examples can improve the model’s
ability to handle complex problems.
**5.4** **Case Study**
Further, we conduct a case analysis and provide
four cases in Table 4, which shows the effectiveness of our approach. Our analyses are summarized as follows. From Case 1, Seq2Seq generates
a spurious number n2 not in problem text while
both SAU-Solver w/o SSAR and SAU-Solver predict correctly owning to the problem-specific target
vocabulary. Besides, although both SAU-Solver
w/o SSAR and SAU-Solver can generate correct
an equation, the equation generated by our SAUSolver is more semantically-aligned with a human
than the equation generated by SAU-Solver. From
**Case 2, we can see that Seq2Seq generates an in-**
valid expression containing consecutive operators
while our models can guarantee the validity of expressions since they generate expression trees directly. From Case 3, we find it interesting that
tree-based models can avoid generating redundant
operations, such as “n1*”. From Case 4, we can
see that SAU-Solver can prevent generating the
similar subtree as its left sibling when the parent
node is “*”.
**6** **Conclusion**
We propose an SAU-Solver, which is able to solve
multiple types of MWPs, to generate the universal express tree explicitly in a semantically-aligned
manner. Besides, we also propose a subtree-level
semantically-aligned regularization to improve subtree semantic representation. Finally, we introduce a new MWPs datasets, called HMWP, to validate our solver’s universality and push the research
boundary of MWPs to math real-world applications
better. Experimental results show the superiority
of our approach.
**Acknowledgements**
We thank all anonymous reviewers for their constructive comments. This work was supported
in part by National Key RD Program of China
under Grant No. 2018AAA0100300, National
Natural Science Foundation of China (NSFC)
under Grant No.U19A2073 and No.61976233,
Guangdong Province Basic and Applied Basic Research (Regional Joint Fund-Key) Grant
No.2019B1515120039, Nature Science Foundation of Shenzhen Under Grant No. 2019191361,
Zhijiang Lab’s Open Fund (No. 2020AA3AB14),
Sichuan Science and Technology Program (No.
2019YJ0190).
**References**
Yefim Bakman. 2007. Robust understanding of word
problems with extraneous information. Computing
_Research Repository, arXiv:math/0701393._
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-aligned equation generation for
solving and reasoning math word problems. In
_Proceedings of the 2019 Conference of the North_
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
_Volume 1 (Long and Short Papers), pages 2656–_
2668. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. 2014. Learning
phrase representations using RNN encoder–decoder
for statistical machine translation. In Proceedings of
_the 2014 Conference on Empirical Methods in Nat-_
_ural Language Processing (EMNLP), pages 1724–_
1734. Association for Computational Linguistics.
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
2018. Neural math word problem solver with reinforcement learning. In Proceedings of the 27th Inter_national Conference on Computational Linguistics,_
pages 213–223. Association for Computational Linguistics.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian
Yin. 2017. Learning fine-grained expressions to
solve math word problems. In Proceedings of the
_2017 Conference on Empirical Methods in Natural_
_Language Processing, pages 805–814. Association_
for Computational Linguistics.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
887–896. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In international
_conference on learning representations._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
-----
A math word problem repository. In Proceedings of
_the 2016 Conference of the North American Chap-_
_ter of the Association for Computational Linguistics:_
_Human Language Technologies, pages 1152–1157._
Rik Koncelkedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52th Annual Meeting of the Association for Compu-_
_tational Linguistics, volume 1, pages 271–281._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167. Associa-_
tion for Computational Linguistics.
Arindam Mitra and Chitta Baral. 2016. Learning to
use formulas to solve simple arithmetic problems.
In Proceedings of the 54th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 2144–2153. Association for_
Computational Linguistics.
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transac_tions of the Association for Computational Linguis-_
_tics, 6:159–172._
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Processing,_
pages 1132–1142. Association for Computational
Linguistics.
Shyam Upadhyay and Mingwei Chang. 2017. Annotating derivations: A new evaluation strategy and
dataset for algebra word problems. In 15th Confer_ence of the European Chapter of the Association for_
_Computational Linguistics, pages 494–504._
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math word
problem to a expression tree. In Proceedings of the
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 1064–1069. Associa-_
tion for Computational Linguistics.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Thirty-Second AAAI Con_ference on Artificial Intelligence, pages 5545–5552._
Lei Wang, Dongxiang Zhang, Zhang Jipeng, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers
with recursive neural networks. In Thirty-Third
_AAAI Conference on Artificial Intelligence, pages_
7144–7151.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854. Association for Computational Linguistics.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven
tree-structured neural model for math word problems. In Proceedings of the Twenty-Eighth In_ternational Joint Conference on Artificial Intelli-_
_gence, IJCAI-19, pages 5299–5305. International_
Joint Conferences on Artificial Intelligence Organization.
Ma Yuhui, Zhou Ying, Cui Guangzuo, Ren Yun, and
Huang Ronghuai. 2010. Frame-based calculus of
solving arithmetic multi-step addition and subtraction word problems. In International Workshop on
_Education Technology and Computer Science, vol-_
ume 2, pages 476–479.
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using
quadratic programming. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 817–822. Association for_
Computational Linguistics.
-----
| [
"Lihui, Lin",
"Jinghui, Qin",
"Rumin, Zhang",
"Liang, Lin",
"Xiaodan, Liang"
] | 2020-10-14T00:00:00 | EMNLP 2020 Main | true | 58 | 12 | null | http://arxiv.org/abs/2010.06823 | https://arxiv.org/abs/2010.06823 | https://www.semanticscholar.org/paper/f9c07ed1d2113c858b38861790af6a26310b8465 |
Solving Math Word Problems with Multi-Encoders and Multi-Decoders | Math word problems solving remains a challenging task where potential semantic and mathematical logic need to be mined from natural language. Although previous researches employ the Seq2Seq technique to transform text descriptions into equation expressions, most of them achieve inferior performance due to insufficient consideration in the design of encoder and decoder. Specifically, these models only consider input/output objects as sequences, ignoring the important structural information contained in text descriptions and equation expressions. To overcome those defects, a model with multi-encoders and multi-decoders is proposed in this paper, which combines sequence-based encoder and graph-based encoder to enhance the representation of text descriptions, and generates different equation expressions via sequence-based decoder and tree-based decoder. Experimental results on the dataset Math23K show that our model outperforms existing state-of-the-art methods. | A model with multi-encoders and multi-decoders is proposed in this paper, which combines sequence-based encoder and graph- based encoder to enhance the representation of text descriptions, and generates different equation expressions via sequence- based decoder and tree-based decoder. | # Solving Math Word Problems with Multi-Encoders and Multi-Decoders
**Yibin Shen**
East China Normal University
[email protected]
**Cheqing Jin[B]**
East China Normal University
[email protected]
**Abstract**
Math word problems solving remains a challenging task where potential semantic and mathematical logic need to be mined from natural language. Although previous researches employ
the Seq2Seq technique to transform text descriptions into equation expressions, most of them
achieve inferior performance due to insufficient consideration in the design of encoder and decoder. Specifically, these models only consider input/output objects as sequences, ignoring the
important structural information contained in text descriptions and equation expressions. To
overcome those defects, a model with multi-encoders and multi-decoders is proposed in this
paper, which combines sequence-based encoder and graph-based encoder to enhance the representation of text descriptions, and generates different equation expressions via sequence-based
decoder and tree-based decoder. Experimental results on the dataset Math23K show that our
model outperforms existing state-of-the-art methods.
**1** **Introduction**
Math word problems (MWPs) solving, a task that transforms text descriptions into solvable equation
expressions, is considered a crucial step towards general AI (Wang et al., 2018b). Since semantic understanding and mathematical logic reasoning both contribute to correct answers, MWPs solving remains a
challenging topic in NLP. Table 1 shows a typical example of MWPs.
**Problem:** A slow car drives 58(n1) km/h, and a fast car drives **AST:**
85(n2) km/h. The two cars drive at the same time in ×
inverse direction, and they meet after 5(n3) hours.
How many kilometers does the fast car drive more
than the slow car when they meet? −
**Equation:** (n2 − _n1) × n3_
**PrefixSuffix::** _n× −2n1n −2nn13n×3_
**Answer:** 135
×
−
Table 1: A typical example of MWPs.
Researches on MWPs solving has a long history. Early researches focused on rule-based methods (Fletcher, 1985; Bakman, 2007; Yuhui et al., 2010) and statistical machine learning methods (Kushman et al., 2014; Hosseini et al., 2014; Mitra and Baral, 2016) that map problems into predefined templates. The main drawbacks of these methods lie in their heavy dependency on manual features and
incapacity to generate new templates for new problems. Consequently, they can only achieve satisfactory results on small-scale datasets (Zhang et al., 2018).
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://
creativecommons.org/licenses/by/4.0/.
2924
-----
Recently, more researchers have been introducing Seq2Seq models, which are capable of generating
new equation expressions that do not exist in the training set (Wang et al., 2017; Wang et al., 2018a; Wang
et al., 2019; Li et al., 2019). However, these models may generate invalid expressions since the sequencebased decoder cannot control the generation process. Based on the fact that each equation expression
could be transformed into an abstract syntax tree (AST), some studies (Liu et al., 2019; Xie and Sun,
2019) changed the pattern of sequence generation from left to right and followed the top-down decoding
process. Such tree-based decoders match the prefix order of AST. Although these models considered
the structural information of equation expressions, they ignored that text descriptions also contain rich
structural information, such as dependency parse tree and numerical comparison information.
The dependency parse tree represents various grammatical relationships between pairs of text words,
for example, nouns are usually matched with verbs, and numerals are usually matched with quantifiers.
In Table 1, n1 can be subtracted from n2 because n1 and n2 have the same quantifiers. Therefore, considering the dependency parse tree can reduce the situation of unreasonable operators between number
pairs. In addition, most of MWPs solving replace numbers with special tokens (i.e. n1, n2), which loses
important numerical comparison information contained in text descriptions. For example, in Table 1,
the underlined words ‘slow car’ and ‘fast car’ imply the fact that ‘n1 < n2’. Similarly, we incline to
ask ‘How many kilometers does the fast car drive more than the slow car?’ rather than ‘How many
kilometers does the slow car drive more than the fast car?’. In other words, text descriptions match the
numerical comparison information. Provided that a model knows numerical comparison information in
advance, the model can better understand potential semantic without wasting a lot of time in mining
these established facts from a large number of corpus.
Now back to the design of decoder, existing methods only adopt one decoder, which limits the generation ability of the model. (Wang et al., 2018a) provided an ensemble model that selects the result
according to various models’ generation probability. However, there is still one decoder for a single
model. (Meng and Rumshisky, 2019) integrated two decoders in one model, but both are sequence-based
decoders. The same type of decoder cannot significantly improve generalization performance.
With the aim of solving aforementioned challenges, we propose a novel model with multi-encoders
and multi-decoders, which combines sequence-based encoder and graph-based encoder to enhance the
representation of text descriptions, and obtains different equation expressions via sequence-based decoder and tree-based decoder. Specifically, we leverage a sequence-based encoder to get the context
representation of text descriptions, and integrate the dependency parse tree and numerical comparison
information via a graph-based encoder. In the decoding stage, a sequence-based decoder is used to generate the suffix order of AST, and a tree-based decoder is used to generate the prefix order. The final
result is selected according to the generation probability of different decoders. The main contributions
of this paper are summarized as follows:
- We integrate the dependency parse tree and numerical comparison information in the model, which
enhances the representation of text descriptions.
- We use two types of decoders to generate different equation expressions, which strengthens the
generation ability of the model.
- We evaluate our model on a large-scale dataset Math23K. The experimental results show that our
model outperforms all existing state-of-the-art methods.
**2** **Related Work**
MWPs solving may date back to the 1960s and continues attracting current NLP researchers. Here we
will introduce recent studies based on the Seq2Seq framework. The work presented in (Zhang et al.,
2018) reviews more early approaches.
(Wang et al., 2017) made the first attempt to directly generate equation expressions by using the
Seq2Seq model and published a high-quality Chinese dataset Math23K. (Wang et al., 2018a) found that
using the suffix order of AST can eliminate brackets in the original expressions, and proposed an equation
2925
-----
normalization method to reduce the number of duplicated equations. (Wang et al., 2019) proposed a twostage model that first used a Seq2Seq model to generate expressions without operators, and then used
a recursive neural network to predict the operator between numbers. (Chiang and Chen, 2019) adopted
a stack to track the semantic meanings of numbers. (Li et al., 2019) added different functional multihead attentions to the Seq2Seq framework. (Meng and Rumshisky, 2019) applied double sequencebased decoders in one model. However, these Seq2Seq models only consider input/output objects as
sequences, ignoring the important structural information of equation expressions. Consequently, they
cannot guarantee the generation of valid equation expressions.
The idea of the tree-based decoder was proposed in (Liu et al., 2019; Xie and Sun, 2019). They
changed the pattern of sequence generation from left to right and followed the top-down decoding process. However, these methods ignored rich structural information contained in text descriptions.
(Li et al., 2020; Zhang et al., 2020) proposed the graph-based encoder. (Li et al., 2020) integrated
the dependency parse tree and constituency tree of text descriptions. (Zhang et al., 2020) constructed
the quantity cell graph and quantity comparison graph. Since these methods considered the structure
information of text descriptions, they have been current state-of-the-art models.
The encoders and decoders designed by these Seq2Seq models are summarized in Table 2. As we can
see, our model is the first model to adopt multi-encoders and multi-decoders.
**Model** **Seq-Encoder** **Graph-Encoder** **Seq-Decoder** **Tree-Decoder**
**DNS (Wang et al., 2017)** ✓ ✓
**Math-EN (Wang et al., 2018a)** ✓ ✓
**T-RNN(Wang et al., 2019)** ✓ ✓
**S-Aligned (Chiang and Chen, 2019)** ✓ ✓
**Group-ATT (Li et al., 2019)** ✓ ✓
**D-Decoder (Meng and Rumshisky, 2019)** ✓ ✓
**AST-Dec (Liu et al., 2019)** ✓ ✓
**GTS (Xie and Sun, 2019)** ✓ ✓
**Graph2Tree (Li et al., 2020)** ✓ ✓ ✓
**Graph2Tree (Zhang et al., 2020)** ✓ ✓ ✓
**Ours** ✓ ✓ ✓ ✓
Table 2: The encoders and decoders designed by various Seq2Seq models.
**3** **Methodology**
The framework of our model is shown in Figure 1, which consists of four components: the sequencebased encoder obtains the context representation of text descriptions; the graph-based encoder integrates
the dependency parse tree and numerical comparison information; the sequence-based decoder generates
the suffix order of AST, and the tree-based decoder generates the prefix order. The final generation result
is selected according to the generation probability of different decoders.
**3.1** **Sequence-Based Encoder**
The goal of sequence-based encoder is to get the context representation of text descriptions. Without loss
of generality, we use a BiGRU to encode text words. Formally, given the text words P = _x1,_ _, xn_,
_{_ _· · ·_ _}_
we first embed each word token xi to a word embedding vector ei[1], and then feed these embedding
vectors into a BiGRU to produce hidden state sequences H = **_h1,_** _, hn_ .
_{_ _· · ·_ _}_
**3.2** **Graph-Based Encoder**
**3.2.1** **Dependency Parse Tree**
As is discussed in Section 1, the dependency parse tree represents various grammatical relationships
between pairs of text words, which is helpful to find reasonable operators between number pairs. We can
1For each word token, we also embed its POS tagging.
2926
-----
|||Col3|
|---|---|---|
|A|car|det|
|slow|car|amod|
|car|drives|nsubj|
||km/h|nummod|
Input: A slow car drives km/h, … Dependency Parse Tree Tree-Based Decoder
the fast car drive than the slow car ×
when they meet? _A_ _car_ _det_
_slow_ _car_ _amod_
_car_ _drives_ _nsubj_
… _km/h_ _nummod_ −
⋯⋯⋯
…
GRU GRU GRU GRU GRU
…
…
Sequence-Based Encoder Output: × −
Parse Graph Numerical Graphs
Output: − ×
[] []
− × _E_
⋯ GCN ⋯
× []
[] ⋯ MaxPool GRU GRU GRU GRU GRU GRU
GCN ⋯ _S_
× Graph-Based Encoder Sequence-Based Decoder
Figure 1: The framework of our model. We first exploit a sequence-based encoder to obtain the context
representation of text descriptions. Later, a graph-based encoder is used to integrate the dependency
parse tree and numerical comparison information. In the decoding process, the sequence-based decoder
and tree-based decoder generate different equation expressions.
easily obtain the graph-based structure of the dependency parse tree by using dependency relationships
in the parse tree. Hence, we consider the following parse graph.
- Parse Graph (G): For two words xi, xj ∈ _P_, there is an edge eij = (xi, yj) ∈G if the pair has
dependency relationship in the dependency parse tree, referring to the table in Figure 1.
Note that the parse graph is an undirected graph. After building the graph-based structure of the
dependency parse tree, we need to find an effective way to learn the graph representation. Here we
introduce GraphSAGE (Hamilton et al., 2017), which is a flexible graph neural network. Specifically,
we first use the sequence H = **_h1,_** _, hn_ obtained by the sequence-based encoder as the initial
_{_ _· · ·_ _}_
embedding of each node. Then each node updates its embedding vector from neighborhood nodes,
which can be expressed as
**_PN[k]_** [=][ GCN][(][P][ k][−][1][,][ G][) =][ ReLU][(][ e]D[−] 2[1] **_AD[−]_** 2[1] P _[k][−][1]W )_ (1)
**_P_** _[k]_ = ReLU([P _[k][−][1]; PN[k]_ []][ ·][ W][P] [)] (2)
[e] [e]
where PN[k] [denotes the aggregating information from neighborhood nodes,][ P][ k][ denotes the updated em-]
bedding of each node and P [0] = H. **_D = D + L,_** **_A = A + L, D represents the degree matrix and A_**
represents the adjacency matrix of the parse graph. k 1, _, K_ is the iteration index and **_W, WP_**
_∈{_ _· · ·_ _}_ _{_ _}_
are parameter matrices. [e] [e]
**3.2.2** **Numerical Comparison Information**
Numerical comparison information also plays an important role in enhancing text descriptions. We also
use a graph-based structure to represent the numerical comparison information. We denote the numbers
in the text words as Vn = {n1, · · ·, nl} and consider the following two types of numerical graphs.
- Greater Graph (Gg): For two numbers ni, nj ∈ _Vn, there is an edge eij = (ni, nj) ∈Gg if ni > nj,_
referring to the red solid lines in Figure 1.
2927
-----
- Lower Graph (Gl): For two numbers ni, nj ∈ _Vn, there is an edge eij = (ni, nj) ∈Gl if ni ≤_ _nj,_
referring to the red dashed lines in Figure 1.
Unlike the parse graph, there are two types of numerical graphs and they are directed graphs. Hence
we extend GraphSAGE to fit the integration of numerical comparison information. The updating rule of
each number can be expressed as
**_Q[k]Ng_** [=][ GCN][(][Q][k][−][1][,][ G][g][) =][ ReLU][(][ e]Dg[−][1]AegQ[k][−][1]Wg) (3)
**_Q[k]Nl_** [=][ GCN][(][Q][k][−][1][,][ G][l][) =][ ReLU][(][ e]Dl[−][1]AlQ[k][−][1]Wl) (4)
**_Q[k]N_** [=][ M][a] _[∗]_ **_[Q]N[k]_** _g_ [+ (][1][ −] **_[M][a][)][ ∗]e[Q]N[k]_** _l_ (5)
**_Ma = σ([Q[k]Ng_** [;][ Q]N[k] _l[;][ Q]N[k]_ _g_ [+][ Q]N[k] _l[;][ Q]N[k]_ _g_ _[−]_ **_[Q]N[k]_** _l[]][ ·][ W][a][)]_ (6)
**_Q[k]_** = ReLU([Q[k][−][1]; Q[k]N []][ ·][ W][Q][)] (7)
where {Q[k]Ng _[,][ Q]N[k]_ _l[}][ represent the aggregating information from neighborhood nodes in two graphs,][ Q][k]_
represents the updated embedding of each node and Q[0] = P _[K]. Ma controls the weight of two graphs,_
‘∗’ denotes element-wise multiplication and ‘σ’ denotes the ‘Sigmoid’ function. k ∈{1, · · ·, K} is the
iteration index and **_Wg, Wl, Wa, WQ_** are parameter matrices.
_{_ _}_
The final encoder vectors of text descriptions incorporate the node embedding vectors in the parse
graph and numerical graphs, which can be calculated as
**_Z = P_** _[K]_ + Q[K] (8)
**_g = MaxPool(Z)_** (9)
where Z = **_z1,_** _, zn_ denotes the final encoder vectors of each word, and g represents the global
_{_ _· · ·_ _}_
vector of text descriptions for further decoding.
**3.3** **Sequence-Based Decoder**
The sequence-based decoder is used to generate the suffix order of AST. We use a GRU with attention
layer to generate the sequence, which can be expressed as
**_si = GRU(ˆyi_** 1, si 1, ci) (10)
_−_ _−_
_αijzj_ (11)
_j=1_
X
**_ci =_**
_αij =_ _nexp(score(si−1, zj))_ (12)
_j=1_ [exp(][score][(][s][i][−][1][,][ z][j][))]
P
score(si−1, zj) = vs[T] _[·][ tanh(][W][s]_ _[·][ [][s][i][−][1][;][ z][j][])]_ (13)
where si denotes the hidden state vector of the decoder, ci denotes the context vector. αij controls the
attention weight of every encoder vector. ˆyi is the output and {vs, Ws} are parameter matrices.
**3.4** **Tree-Based Decoder**
The tree-based decoder is used to generate the prefix order of AST. We follow the Goal-driven Tree
Structure (GTS) proposed in (Xie and Sun, 2019), which not only realized the top-down decoding process
but also used the bottom-up subtree embedding manners. Here we simply introduce the decoding process,
which can be expressed as
2928
-----
- Step 1 (Root Goal Generation): GTS followed the pre-order traversal manner, so the primary goal
is to generate the root node. We use g as the initial goal vector of the root node, and apply the same
attention mechanism in the sequence-based decoder to get the context vector **_c1._**
**_c1 = Attention(g, Z)_** (14)
e
_yˆ1 = Predict(g,_ **_c1)_** (15)
e
Note that the algorithm terminates directly if ˆy1 is a number; otherwise, we will go to step 2.
e
- Step 2 (Left Goal Generation): The left goal gl is generated according to the goal vector and the
predicted token of its parent node, which can be expressed as
**_gl = Left(ˆyp, gp,_** **_cp)_** (16)
_yˆl = Predict(gl,_ **_cl)_** (17)
e
where ˆyp, gp and **_cp stand for the predicted token, goal vector and content vector of the parent node_**
e
respectively. The process of generating the left goal continues until ˆyl is a number, referring to the
red dashed lines in Figure 1. Later, we will go to step 3. e
- Step 3 (Right Goal Generation): When the right goal node is being generated, its left sibling
node has been completed. Therefore, GTS considered the subtree embedding of its sibling node to
generate the right goal gr, which can be expressed as
**_gr = Right(ˆyp, gp,_** **_cp, tl)_** (18)
**_tl = SubTree(ˆyl, gl)_** (19)
e
_yˆr = Predict(gr,_ **_cr)_** (20)
Here, tl is the tree embedding of the left goal, as illustrated by the blue solid lines in Figure 1.
e
Similarly, we will go back to step 2 if ˆyr is an operator. The algorithm backtracks to check whether
there are right goals in the tree that need to be generated if ˆyr is a number. When the model cannot
find any generation goal, the algorithm terminates; otherwise, we will continue step 3.
**3.5** **Model Training**
Since our model integrates two types of decoders, we combine the loss functions of the sequence-based
decoder and tree-based decoder. For each sample of problem-expression (P, T ), the optimization objective of our model is defined as
_m_
_L =_ (log p(yi **_si, ci, Ts) + log p(yi_** **_gi,_** **_ci, Tt))_** (21)
_−_ _m[1]_ _|_ _|_
_i=1_
X
where e
_p(yi_ **_si, ci, Ts) = softmax(W1_** tanh(W2 [si; ci])) (22)
_|_ _·_ _·_
_p(yi_ **_gi,_** **_ci, Tt) = softmax(W3_** tanh(W4 [gi; **_ci]))_** (23)
_|_ _·_ _·_
where m denotes the number of tokens in the equation expression. Ts represents the suffix order and Tt
e e
represents the prefix order. **_W1, W2, W3, W4_** are parameter matrices.
_{_ _}_
Finally, we use the log probability scores to perform a beam search. After obtaining the top equation
expression from double decoders respectively, we select the one with a higher score as the final result.
**4** **Experiments**
In this section, we evaluate our model on a large-scale dataset Math23K. We compare our model with
several state-of-the-art methods and demonstrate the effectiveness of our model via a series of controlled
experiments. Our code can be downloaded at https://github.com/YibinShen/MultiMath.
2929
-----
**4.1** **Experimental Setup**
**4.1.1** **Dataset**
**Math23K (Wang et al., 2017): Math23K is a large-scale Chinese dataset that contains 23,162 elementary**
school level MWPs and corresponding equation expressions and answers. Although there are other
large-scale datasets, such as Dolphin18K (Huang et al., 2016) (with 18,460 MWPs) and AQuA (Ling
et al., 2017) (with 100,000 MWPs), they contain either some unlabeled problems or informal equation
expressions (mixed with texts). Therefore, Math23K is still the most ideal large-scale and high-quality
publish dataset.
**4.1.2** **Hyperparameters**
In the sequence-based encoder, we use a two-layer BiGRU with 512 hidden units as the encoder, and the
dimension of word embedding is set as 128. In the graph-based encoder, we set the number of iteration
steps as K = 2. We also use a two-layer GRU with 512 hidden units as the decoder in the sequence-based
decoder. The hyper-parameters of the tree-based decoder are consistent with GTS. As to the optimizer,
we use Adam with an initial learning rate at 0.001, and the learning rate will be halved every 20 epochs.
The number of epochs, batch size and dropout rate are set 80, 64 and 0.5 respectively. At last, we use
a beam search with beam size 5 in the sequence-based decoder and tree-based decoder. Our model is
implemented in PyTorch 1.4.0 and runs on a server with one NVIDIA Tesla V100. We use pyltp 0.2.1 to
preform dependency parsing and POS tagging.
**4.1.3** **Metric**
Since a math word problem can be solved by multiple equation expressions, we use the answer accuracy
as the evaluation metric. For Math23K, some of the previous studies were evaluated on the publish test
set, while others used the 5-fold cross-validation. We evaluate our model on the two situations.
**4.1.4** **Baselines**
We compare our model with some state-of-the-art methods, including: DNS (Wang et al., 2017) made
the first attempt to solve MWPs by using a Seq2Seq model. Math-EN (Wang et al., 2018a) proposed
an equation normalization method to reduce the number of duplicated equations. T-RNN (Wang et al.,
2019) used a two-stage model to generate expressions. S-Aligned (Chiang and Chen, 2019) adopted a
stack to track the semantic meanings of numbers. Group-ATT (Li et al., 2019) added different functional multi-head attentions to the Seq2Seq framework. AST-Dec (Liu et al., 2019) used TreeLSTM
to realize top-down decoding process. GTS (Xie and Sun, 2019) followed goal-driven tree structure.
**Graph2Tree (Zhang et al., 2020) integrated the quantity cell graph and quantity comparison graph.**
**4.2** **Experimental Results**
**Type** **Model** **Math23K(%)** **Math23K[∗](%)**
DNS - 58.1
Math-EN 66.7 -
Seq2Seq T-RNN 66.9 -
S-Aligned - 65.8
Group-ATT 69.5 66.9
AST-Dec 69.0 -
Seq2Tree
GTS 75.6 74.3
Graph2Tree Graph2Tree 77.4 75.5
Multi-E/D Ours **78.4** **76.9**
Table 3: Performance comparison on Math23K. Note that Math23K denotes results on public test set and
Math23K[∗] denotes the 5-fold cross-validation.
2930
-----
Table 3 depicts the performance comparison of different models on Math23K. As we can see, Seq2Seq
models cannot exceed 70% accuracy because they ignored the structural information of text descriptions and equation expressions. Seq2Tree models made full use of tree-based structure expressions and
followed the top-down decoding process, which outperforms most of Seq2Seq models. In particular,
GTS also realized bottom-up subtree embedding manners and have a good performance on Math23K.
Graph2Tree considered the structure information of text descriptions, integrating the quantity cell graph
and quantity comparison graph, so it achieves sub-optimal performance in all models. As to our model,
our model not only uses multi-encoders to integrate the structural information of the dependency parse
tree and numerical comparison graphs, but also enhances the generation ability of the model via multidecoders, which outperforms aforementioned models.
**4.3** **Experimental Analysis**
In Table 4, we show the accuracy of the top-5 most frequent expressions on Math23K[∗]. Intuitively, our
model achieves more than 90% accuracy in all situations and outperforms the other two models in most
cases. Note that our model has a significant improvement over GTS under expressions with ‘÷’ or ‘−’.
This is because the division and subtraction operators do not meet the commutative law, which requires
the model to learn the correct arithmetic order. Since GTS doesn’t integrate the numerical comparison
information in the model, it cannot deal with these expressions well.
**Expression (prefix)** **Pro(%)** **GTS(%)** **Graph2Tree(%)** **Ours(%)**
_×n1n2_ 4.77 89.05 89.05 **90.23**
_÷n1n2_ 4.40 88.61 90.37 **91.85**
_÷n2n1_ 3.43 86.40 88.16 **90.81**
_×n1 −_ 1n2 2.31 89.55 90.67 **91.23**
_÷ × n1n2n3_ 2.27 90.49 **92.40** 92.21
Table 4: Accuracy of the top-5 most frequent expressions on Math23K[∗].
Figure 2 depicts the accuracy of different expression
lengths. The gray line represents the proportion of different expression lengths. Results show that our model outperforms GTS and Graph2Tree in all situations. However,
the performance of our model has a rapid drop when the
expression becomes longer. There are two reasons for this
phenomenon: (1) longer expressions contain more operators, and the neural network cannot save the results of intermediate variables well; (2) longer expressions only account
for a small part of the dataset (e.g. each expression longer
than 9 can be matched to 1.67 problems on average), and
the model lacks samples for training. In future work, we
will consider question generation technology to generate
more MWPs, which may solve this problem.
**4.4** **Case Study**
Figure 2: Accuracy of different expression lengths on Math23K[∗].
To demonstrate the effectiveness of our model, we conduct a case study in Table 5. Test 1 exchanges the
order of text descriptions and Test 2 changes the form of question description. These two simple tests are
used to investigate whether the model can mine the correct mathematical logic from natural language.
In the original problem, GTS obtains a negative answer, which conflicts with the problem. It is funny
that GTS obtains the correct answer when we change the order of text description. Note that GTS generates the same expression in Test 1, which implies that GTS only remembers the order of the numbers
instead of the real mathematical logic within the problem.
2931
-----
**Problem:** A slow car drives 58(n1) km/h, and a fast car drives 85(n2) km/h. The two cars drive at
the same time in inverse direction, and they meet after 5(n3) hours. How many kilometers
does the fast car drive more than the slow car when they meet?
**Result:** **GTS: × −** _n1n2n3 = −135 (error)_ **Ours: × −** _n2n1n3 = 135 (correct)_
**Test 1:** **A fast car drives 85(n1) km/h, and a slow car drives 58(n2) km/h. The two cars**
drive at the same time in inverse direction, and they meet after 5(n3) hours. How many
kilometers does the fast car drive more than the slow car when they meet?
**Result:** **GTS: × −** _n1n2n3 = 135 (correct)_ **Ours: × −** _n1n2n3 = 135 (correct)_
**Test 2:** A slow car drives 58(n1) km/h, and a fast car drives 85(n2) km/h. The two cars drive at
the same time in inverse direction, and they meet after 5(n3) hours. How many kilome**ters does the slow car drive less than the fast car when they meet?**
**Result:** **GTS: × −** _n1n2n3 = −135 (error)_ **Ours: × −** _n2n1n3 = 135 (correct)_
Table 5: Case study of MWPs solving, where Test 1 and Test 2 are generative cases.
In Test 2, we change the form of question description, GTS and our model obtain the same expressions
that generated in the original problem. This is because we use the attention mechanism in the model,
changing the form of question description has no impact on generating correct expressions.
Since Graph2Tree also considered the quantity comparison graph in the model, the same results are
obtained in this case as our model.
**4.5** **Ablation Study**
Last but not least, we conduct an ablation study to better understand the effect of encoders and decoders
in the model, as is shown in Table 6. When we use a fully connected layer to replace the sequencebased encoder, the performance of our model observably drops. This is because other encoders and
decoders depend on the context representation obtained by the sequence-based encoder. We find that the
performance has a drop if we discard any type of graph-based structure, which proves the importance
of considering the structure information in text descriptions. When the model has only one decoder, the
generation ability is limited, which indicates the necessity of designing multi-decoders.
**Model** **Math23K (%)**
Full Model 78.4
- Sequence-Based Encoder 69.7
- Graph-Based Encoder (Parse Graph) 76.4
- Graph-Based Encoder (Numerical Graphs) 76.1
- Sequence-Based Decoder 76.6
- Tree-Based Decoder 71.3
Table 6: Effect of encoders and decoders in the model.
**5** **Conclusion and Future Work**
Inspired by the fact that text descriptions and equation expressions have structural information, a model
with multi-encoders and multi-decoders is proposed in this paper. To be specific, we use the sequencebased encoder to obtain the context representation, and the graph-based encoder is used to integrate the
structure information of text descriptions. Two types of decoders generate different expressions, which
strengthens the generation ability of the model. Experimental results on Math23K proves the advantages
of our model over existing state-of-the-art methods. The Experiment analysis shows that the effectiveness
of mathematical logic mining from the problem. In future work, we will explore the question generation
technique to increase samples of the dataset and solve the problems with complex expressions.
2932
-----
**Acknowledgements**
Thanks to the anonymous reviewers for their helpful comments and suggestions. This work is partially supported by National Science Foundation of China (U1811264, U1911203 and 61877018) and
ECNU Academic Innovation Promotion Program for Excellent Doctoral Students (YBNLTS2019-022).
Cheqing Jin is the corresponding author.
**References**
Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. arXiv preprint
_math/0701393._
Ting-Rui Chiang and Yun-Nung Chen. 2019. Semantically-aligned equation generation for solving and reasoning
math word problems. In Proceedings of the 2019 Conference of the North American Chapter of the Association
_for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), volume 1,_
pages 2656–2668.
Charles R. Fletcher. 1985. Understanding and solving arithmetic word problems: A computer simulation. Behav_ior Research Methods Instruments & Computers, 17(5):565–571._
William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs.
In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan,
and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on
_Neural Information Processing Systems, pages 1024–1034._
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve
arithmetic word problems with verb categorization. In Alessandro Moschitti, Bo Pang, and Walter Daelemans,
editors, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages
523–533.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve
math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 887–896._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra
word problems”. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics
_(Volume 1: Long Papers), volume 1, pages 271–281._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai, and Dongxiang Zhang. 2019. Modeling intrarelation in math word problems with different functional multi-head attentions. In Proceedings of the 57th
_Conference of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6162–6167._
Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu, and Sheng Zhong. 2020. Graph-to-tree neural
networks for learning structured input-output translation with applications to semantic parsing and math word
problem. arXiv preprint arXiv:2004.13781.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation:
Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the
_Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 158–167._
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke Kawahara. 2019. Tree-structured decoding for solving math
word problems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing
_and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2370–_
2379.
Yuanliang Meng and Anna Rumshisky. 2019. Solving math word problems with double-decoder transformer.
_arXiv preprint arXiv:1908.10924._
Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In Pro_ceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
volume 1, pages 2144–2153.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings
_of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 845–854._
2933
-----
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang, and Xiaojiang Liu. 2018a. Translating a math word problem to a expression tree. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language
_Processing, pages 1064–1069._
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn:
Solving arithmetic word problems via deep reinforcement learning. In Thirty-Second AAAI Conference on
_Artificial Intelligence, pages 5545–5552._
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen. 2019.
Template-based math word problem solvers with recursive neural networks. In Thirty-Third AAAI Conference
_on Artificial Intelligence, pages 7144–7151._
Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In
_Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pages 5299–5305._
Ma Yuhui, Zhou Ying, Cui Guangzuo, Ren Yun, and Huang Ronghuai. 2010. Frame-based calculus of solving
arithmetic multi-step addition and subtraction word problems. In 2010 Second International Workshop on
_Education Technology and Computer Science, volume 2, pages 476–479._
Dongxiang Zhang, Lei Wang, Nuo Xu, Bing Tian Dai, and Heng Tao Shen. 2018. The gap of semantic parsing: A
survey on automatic math word problem solvers. arXiv preprint arXiv:1808.07290.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-to-tree
learning for solving math word problems. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault,
editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3928–
3937.
2934
-----
| [
"Yibin, Shen",
"Cheqing, Jin"
] | 2020-01-01T00:00:00 | null | false | 56 | 7 | null | https://www.aclweb.org/anthology/2020.coling-main.262 | null | https://www.semanticscholar.org/paper/53953a19e2fdf9c5ad5ff445c07ce36cc70f551b |
Teacher-Student Networks with Multiple Decoders for Solving Math Word Problem | Math word problem (MWP) is challenging due to the limitation in training data where only one “standard” solution is available. MWP models often simply fit this solution rather than truly understand or solve the problem. The generalization of models (to diverse word scenarios) is thus limited. To address this problem, this paper proposes a novel approach, TSN-MD, by leveraging the teacher network to integrate the knowledge of equivalent solution expressions and then to regularize the learning behavior of the student network. In addition, we introduce the multiple-decoder student network to generate multiple candidate solution expressions by which the final answer is voted. In experiments, we conduct extensive comparisons and ablative studies on two large-scale MWP benchmarks, and show that using TSN-MD can surpass the state-of-the-art works by a large margin. More intriguingly, the visualization results demonstrate that TSN-MD not only produces correct final answers but also generates diverse equivalent expressions of the solution. | This paper proposes a novel approach, TSN-MD, by leveraging the teacher network to integrate the knowledge of equivalent solution expressions and then to regularize the learning behavior of the student network to addressMath word problem challenges. | null | [
"Jipeng, Zhang",
"Lei, Wang",
"Wei, Qin",
"Ee-Peng, Lim",
"Jie, Shao",
"Qianru, Sun",
"Roy Ka-Wei, Lee"
] | 2020-07-01T00:00:00 | IJCAI 2020 Natural Language Processing | false | 56 | 8 | null | https://www.ijcai.org/proceedings/2020/555 | null | https://www.semanticscholar.org/paper/7f5b28f0719354be493bd346abc08b9095b5affc |
MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? | The remarkable progress of Multi-modal Large Language Models (MLLMs) has garnered unparalleled attention, due to their superior performance in visual contexts. However, their capabilities in visual math problem-solving remain insufficiently evaluated and understood. We investigate current benchmarks to incorporate excessive visual content within textual questions, which potentially assist MLLMs in deducing answers without truly interpreting the input diagrams. To this end, we introduce MathVerse, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into six distinct versions, each offering varying degrees of information content in multi-modality, contributing to 15K test samples in total. This approach allows MathVerse to comprehensively assess whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning. In addition, we propose a Chain-of-Thought (CoT) evaluation strategy for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps, and then score each step with detailed error analysis, which can reveal the intermediate CoT reasoning quality by MLLMs. We hope the MathVerse benchmark may provide unique insights to guide the future development of MLLMs. Project page: https://mathverse-cuhk.github.io | The MathVerse benchmark is introduced, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs, and a Chain-of-Thought evaluation strategy is proposed for a fine-grained assessment of the output answers. | ## MATHVERSE: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
**Renrui Zhang[∗‡][1][,][2], Dongzhi Jiang[∗][1], Yichi Zhang[∗][2], Haokun Lin[2], Ziyu Guo[2], Pengshuo Qiu[2]**
**Aojun Zhou[1], Pan Lu[3], Kai-Wei Chang[3], Peng Gao[†][2], Hongsheng Li[†][1]**
1CUHK MMLab 2Shanghai Artificial Intelligence Laboratory
3University of California, Los Angeles
```
{zhangrenrui, dzjiang, ziyuguo}@link.cuhk.edu.hk
[email protected], [email protected], [email protected]
```
**Abstract**
The remarkable progress of Multi-modal Large Language Models (MLLMs) has
garnered unparalleled attention, due to their superior performance in visual contexts.
However, their capabilities in visual math problem-solving remain insufficiently
evaluated and understood. We investigate current benchmarks to incorporate excessive visual content within textual questions, which potentially assist MLLMs
in deducing answers without truly interpreting the input diagrams. To this end,
we introduce MATHVERSE, an all-around visual math benchmark designed for
an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612
high-quality, multi-subject math problems with diagrams from publicly available
sources. Each problem is then transformed by human annotators into six distinct
versions, each offering varying degrees of information content in multi-modality,
contributing to 15K test samples in total. This approach allows MATHVERSE to
comprehensively assess whether and how much MLLMs can truly understand the
_visual diagrams for mathematical reasoning. In addition, we propose a Chain-_
of-Thought (CoT) evaluation strategy for a fine-grained assessment of the output
answers. Rather than naively judging True or False, we employ GPT-4(V) to
adaptively extract crucial reasoning steps, and then score each step with detailed
error analysis, which can reveal the intermediate CoT reasoning quality by MLLMs.
With MATHVERSE, we unveil that, most existing MLLMs struggle to understand
math diagrams, relying heavily on textual questions. Surprisingly, some of them
even achieve 5%+ higher accuracy without the visual input, e.g., Qwen-VL-Max
and InternLM-XComposer2. In contrast, GPT-4V and ShareGPT4V demonstrate
relatively better comprehension of the visual content for mathematical reasoning.
We hope MATHVERSE may provide unique insights to guide the future develop[ment of MLLMs. Project page: https://mathverse-cuhk.github.io.](https://mathverse-cuhk.github.io)
**1** **Introduction**
With the substantial advances of big data and computational power, Large Language Models
(LLMs) [4, 28, 55, 56, 13], such as ChatGPT [45] and GPT-4 [46], have emerged as a central
point of interest in both industry and academia. To broaden their applicability across diverse contexts, Multi-modal Large Language Models (MLLMs) [66, 20, 52, 11, 61, 70] have recently become
1∗ Equal contribution _‡ Project lead_ _† Corresponding author_
Preprint. Under review.
-----
**GeoQA** **MathVista** **MMMU**
Original Questions
w/o Text Redundancy
w/o Diagram Input
51.7
43.3
35.8
33.3
29.2
25.0
Open-source
MLLMs
**Question:**
As shown in the figure,
AB is parallel to CD,
and a straight line EF
intersects AB at point E,
intersects CD at point F,
EG bisects angle BEF,
and it intersects CD at
point G, angle 1 = 50°,
angle 2 is equal to ()
**Question:**
OD ⊥ AC at point D, if
AB is the diameter of
⊙O, C is the point on
⊙O, passing point C is
the tangent of ⊙O and
intersects the extended
line of AB at point E,
∠E = 30°, CE = 6.0, the
value of OD is ()
**Question:**
The curve y = f(x) and
the line y = -3, as shown
in the figure, intersect at
the points (0, -3), (𝑎, -3),
and (𝑏, -3). The sum of
the area of the shaded
region enclosed by the
curve and the line is
given by ()
Closed-source
MLLMs
(a) Text Redundancy within Existing Benchmarks (b) Ablation Study
Figure 1: (a) We showcase three examples of Text Redundancy (highlighted in red) within existing
visual math benchmarks [9, 41, 63]. (b) We report an ablation study by respectively removing the
redundant texts and input diagrams on 120 randomly selected problems, for closed-sourced [47, 22, 3]
and open-sourced [21, 38, 16] MLLMs.
a fast-evolving track, exemplified by the latest GPT-4V [47], Gemini [22], and the open-source
LLaVA [39, 34, 32, 30] and SPHINX [36, 21]. Concurrently, a diverse array of evaluation benchmarks [17, 40, 33, 18, 53] are curated to assess their visual comprehension performance across
different domains. Notably, the capability to solve mathematical problems involving diagrams serves
as a critical measure, offering insights into the multi-modal logical thinking prowess of MLLMs. This
task demands MLLMs to accurately decode the visual elements within input diagrams (characters
and figures), and correlate them with the condition specified by textual questions for mathematical
reasoning. Previous efforts [42, 51], e.g., GeoQA [9, 5] and UniGeo [7], concentrate on the challenging geometric problems, while the recent MathVista [41] and MMMU [63] expand the scope to
encompass broader disciplines, including functions, charts, and scientific problems.
However, through our comprehensive observation and analysis, we identify three primary issues in
current mathematical benchmarks for evaluating MLLMs:
i. Do MLLMs truly see the math diagrams in evaluation? This is the most fundamental
question concerning the accurate assessment of visual math problem-solving. In Figure 1
(a), we showcase three examples from current benchmarks. We observe their texts contain
too much duplicate information (highlighted in red) that is also depicted in the diagram.
This redundancy might inadvertently provide MLLMs with a shortcut to resolve the problem
by mostly reading the text, rather than interpreting the diagram. Our hypothesis gains
support from the experiment in Figure 1 (b). For 40 randomly sampled problems from each
benchmark, we remove such redundant texts from the question, challenging MLLMs to
capture the corresponding information exclusively from visual inputs. The results reveal
a significant drop in accuracy among most MLLMs (the blue column), even falling below
the scores without taking diagrams as input (the grey column). This outcome suggests that
**_MLLMs primarily depend on textual cues rather than the visual diagrams themselves_**
**_to solve these problems in evaluation. Given this, we demonstrate that current visual_**
math benchmarks might not be comprehensive enough to assess the genuine multi-modal
mathematical reasoning capabilities of MLLMs.
ii. Is it equitable to assess solely by the final answer? Most existing multi-modal benchmarks
directly compare model outputs with ground truths to derive a binary evaluation result.
While this approach may suffice for general visual contexts, it falls short in math problems
that require intricate step-by-step reasoning. In Figure 2, we examine three model outputs.
Although they all arrive at incorrect answers in the end, they demonstrate varying levels
of precision in the intermediate reasoning processes. Merely categorizing these outputs as
‘Incorrect’ fails to capture the nuanced differences in the reasoning quality of MLLMs.
-----
**Question:**
For what x does f reach its local maximum? **GPT-4V:** **Gemini-Pro:** **SPHINX-MoE:**
**Answer:** x = 3
✓ Correct **× Incorrect**
The graph shown is the derivative
of the function f, denoted by f′(x).
The derivative of a function is
zero at the relative extrema of
the original function …
Since f′(x) is positive on the
interval 2<x<7, we know that f is
increasing over that interval …
f has a local minimum at x=2 and a
local maximum at x=7 …
f reaches a local maximum at x=7.
The graph shows two functions:
f(x) in blue, and f’(x) in orange …
To find f(x)’s local maximum, we
look at where f'(x) is zero.
The graph of f'(x) crosses the xaxis at two points, and a local
maximum is where the derivative
changes from positive to negative …
This occurs just to the left of the
y-axis, a little before x=0 …
It appears to be around x = -1.
The graph shows f(x) and
its derivative f’(x) …
f(x) is a parabola with a
vertical axis of symmetry.
This means that f(x) will
have a U-shape, with the
vertex at the point (4, 0).
The local maximum of f(x)
occurs when x = 4, the
point where the graph of
f’(x) intersects the x-axis.
Figure 2: Comparison of Visual Mathematical Reasoning by Three MLLMs. Despite the
incorrect final answer, GPT-4V [47], Gemini-Pro [22], and SPHINX-MoE [21] exhibit different
levels of quality in the intermediate reasoning process.
iii. Do they specialize in mathematical reasoning evaluation? GeoQA, UniGeo, and other
previous works narrowly target specific aspects of plane geometry. This limits the evaluation
of broader mathematical capabilities, e.g., functions and solid geometry. Instead, MathVista
expands its scope by including a wide array of peripheral tasks (19 out of 28), encompassing
natural images, statistic plots, and charts, which do not directly evaluate professional math
skills. Furthermore, the math problems in MMMU are of college-level complexity with extensive domain-specific knowledge, potentially hindering MLLMs from fully demonstrating
their reasoning capacity.
Therefore, in light of the issues discussed, we present MATHVERSE, a holistic and specialized
visual math benchmark crafted to evaluate the multi-modal mathematical reasoning skills of MLLMs.
This benchmark encompasses a meticulously collected dataset of 2,612 visual math problems, with
1,236 newly acquired from public question repositories and 1,376 selected from existing benchmarks,
ensuring a diverse range of challenges. To specialize in mathematical reasoning, MATHVERSE
spans three primary areas: plane geometry, solid geometry, and functions. Each problem has been
rigorously reviewed by expert annotators and classified into twelve detailed categories, emphasizing
different fine-grained problem-solving capabilities. Notably, MATHVERSE distinguishes itself by
introducing two novel strategies for evaluating MLLMs.
First, we investigate the influence of textual redundancy and validate whether MLLMs can interpret
the diagrams for mathematical reasoning. As illustrated in Figure 3 (Left), we categorize the textual
content within the questions into three different types: Descriptive Information, Implicit Property,
and Essential Condition. These categories, arranged in ascending order of significance for problemsolving, correspond to information directly observable from the diagram, implicit spatial properties
that demand advanced visual perception, and specific measurements crucial for computing the
solution, respectively. Based on this problem formulation, expert annotators progressively remove the
textual information from the questions in MATHVERSE, while incrementally incorporating elements
into the visual diagrams to ensure problems are adequately defined. As shown in Figure 3 (Right),
this process results in six unique versions of each problem characterized by a reduction in textual
content and an enhancement in visual elements, creating a total of 15K test samples. These delicately
curated problems can indicate the various multi-modal capabilities of MLLMs, such as geometric
element understanding, function curve perception, and numerical value recognition, which thoroughly
unveils whether and how much they comprehend the visual diagram for mathematical reasoning.
Second, to rigorously assess the visual Chain-of-Thought (CoT) capabilities [58], we propose a
**CoT Evaluation strategy for the step-by-step reasoning assessment of MLLMs. For each model’s**
output, we leverage GPT-4 to first extract several crucial steps exclusively from the solving process,
deliberately omitting the input of the question and answer. This approach aims to mitigate the
bias towards GPT-4’s inherent question-answering propensities. Then, the corresponding question,
diagram, and ground-truth answer are fed into GPT-4 to evaluate each identified critical step, and
provide detailed error analysis. Finally, the overall score is obtained by considering every single
step within reasoning. Note that, we do not pre-define a ground-truth key-step template, since each
-----
**Plane Geometry:** **Solid Geometry:** **Functions:**
**Three Categories of Texts:**
Descriptive Information
Implicit Property
Essential Condition
📖 Decrease Text Content
📖
🔍 Increase Vision Content
🔍
**Six Versions of a Problem:**
**Question:**
AB and CD are two
diameters of circle O,
chord DE parallel AB,
arc DE is the arc of
50°, then angle BOC
is (). Choices: …
**Question:**
Find the surface area
of the cylinder shown.
The height is 10 cm
and the radius is 6 cm.
Give your answer to
two decimal places.
**Question:**
for 𝑦# as shown.
The graph shows 𝑦! =
𝑥["] passing (0,0) and a
vertical or horizontal
translation 𝑦# passing (2,0). Write an equation
Vision-intensive
Vision-dominant
Vision-only
Text-dominant
Text-lite
Text-only
A: 115° **Answer:** 603.19 cm[#] **Answer:** 𝑦# = 𝑥+ 2 ["]
**Answer:**
Figure 3: Three Categories of Question Texts in MATHVERSE. According to the significance for
problem-solving, we categorize the question texts into three categories, and transform each problem
into six versions for evaluation, with varying content in multi-modality. We present three examples in
MATHVERSE for illustration.
math problem may encompass a variety of solution pathways, and different MLLMs tend to exhibit
variable reasoning lengths. With CoT scoring, MATHVERSE showcases a fine-grained evaluation of
the intermediate logical deduction of MLLMs, demonstrating visual mathematical CoT capabilities.
We conduct extensive experiments on MATHVERSE with popular closed-source [47, 3, 22] and
open-source [37, 38, 16, 21] MLLMs. Comparing different problem versions, we unveil that,
most existing MLLMs struggle to understand math diagrams, relying heavily on textual questions.
Therein, GPT-4V [47] achieves the best overall performance across different problem versions and
subjects. Surprisingly, some of the MLLMs even attain much higher results without the diagram
input, e.g., +5.1% for Qwen-VL-Max [3] and +5.6% for InternLM-XComposer2 [16]. With the
fine-grained error analysis produced by our CoT evaluation strategy, we demonstrate such results are
due to their deficient visual encoding capacity for mathematical diagrams, which instead acts as a
distraction for problem-solving. In contrast, GPT-4V and ShareGPT4V [12] demonstrate relatively
better comprehension of the visual content for mathematical reasoning. Our experimental results
suggest that inadequate mathematical visual interpretation capabilities represent the most significant
impediment for MLLMs in addressing multi-modal math problems, indicating substantial potential
for advancement.
The contributions of this paper are summarized as follows:
- We investigate primary issues within existing benchmarks and introduce MATHVERSE,
an all-around multi-modal benchmark evaluating the visual mathematical reasoning of
MLLMs. The meticulously curated dataset contains 20K test problems with diagrams for a
comprehensive assessment.
- By modifying problems with varying information content in multi-modality, we explore
whether and how much MLLMs can understand the visual diagrams for mathematical
reasoning, rather than relying on question texts.
- We propose a CoT evaluation strategy with GPT-4 to extract and assess each key step in
the reasoning process of MLLMs, which provides a detailed error analysis and fine-grained
evaluation of their multi-modal mathematical CoT capabilities.
**2** **MATHVERSE**
In Section 2.1, we first present an overview of the curated visual math dataset in MATHVERSE. Then,
in Section 2.2, we introduce our data formulation approach for investigating the visual mathematical
comprehension of Multi-modal Large Language Models (MLLMs). Finally, in Section 2.3, we
elaborate on the methodology of our proposed Chain-of-Thought (CoT) evaluation strategy.
-----
Figure 4: Subject Distribution of MATH**VERSE. Solid G: Solid Geometry, Plane**
G: Plane Geometry.
Property Analytic Area
Applied
Expression Functions
Coordinate 20.5% Applied
Volume Solid G Plane G
Area 12.7% 66.8%
Length
Length
Angle
er num
Table 1: Key Statistics of MATHVERSE.
**Statistic** **Number**
Total questions 2,612
- Multiple-choice questions 1,631 (62.4%)
- Free-form questions 981 (37.6%)
- Newly collected questions **1,236 (47.3%)**
- Existing-dataset questions 1,376 (52.7%)
- Questions with explanations **1,236 (47.3%)**
**Total test samples** **15,672**
- Newly annotated samples **10,448 (66.7%)**
- Samples of each version 2,612 (16.7%)
Number of unique images 2,420 (92.6%)
Number of unique questions 2,573 (98.5%)
Number of unique answers 847 (32.4%)
Maximum question length 203
Maximum answer length 17
Average question length 35.7
Average answer length 1.4
**2.1** **Visual Math Dataset**
To thoroughly assess visual mathematical proficiency, we compile a comprehensive problem set
covering a broad spectrum of math subjects, diagram patterns, and specialized knowledge domains.
This widespread collection for MATHVERSE aims to pose diverse challenges to MLLMs, ensuring a
robust evaluation of their capabilities in visual contexts.
**Data Composition and Categorization.** MATHVERSE comprises a total of 2,612 visual math
problems, which contribute to the final created 15K test samples. Detailed statistics for data composition are presented in Table 1. This meticulously collected dataset covers three fundamental math
subjects, i.e., plane geometry (1,746), solid geometry (332), and functions (534), where the latter
two are all composed of newly collected problems. The choice of these three subjects is not only
due to their rigorous demands on multi-modal reasoning, but also for two other considerations. For
one thing, as we specialize MATHVERSE in mathematical problem-solving, other peripheral tasks
in MathVista [41] are not included, e.g., statistical reasoning, table question-answering, and puzzle
tests. For another, we expect the evaluation to fully display the reasoning capabilities of MLLMs
with moderate-level mathematical knowledge. This avoids limiting their performance with overly
complex domain-specific theorems or prior commonsense knowledge. Therefore, we deliberately
focus the collected problems on the high school level, excluding advanced college-level disciplines
like calculus and graph theory featured in MMMU [63]. Furthermore, expert annotators subdivide the
problems into twelve fine-grained categories, as depicted in Figure 4, showcasing various dimensions
of visual mathematical skills.
**Data Collection and Review Process.** Our collection procedure for high-quality visual math
problems involves a rigorous selection from both pre-existing datasets and public question repositories.
In the domain of plane geometry, we initially select 750 problems from GeoQA [9], 119 from
GEOS [51], and 507 from Geometry3K [42], based on their original data quality and distribution.
We exclude questions that are extremely simple or excessively complex, as well as those that appear
dubious or lack necessary conditions. To enhance the diversity of question types and diagram styles,
we further enrich our dataset with additional 370 plane geometry problems by manually collecting
from other sources[1][,][2][,][3]. Given the scarcity of solid geometry and function-related problems in
existing benchmarks, we purposefully gather these two types of problems (332 and 534, respectively)
from new sources[1][,][2][,][3] to address this gap. Problems that include multiple diagrams or require
visual illustrations within solutions are excluded, considering the current limitations of MLLMs in
resolving such information. Note that, all the newly collected problems (1,236) accompany detailed
1https://homework.study.com
2https://www.ixl.com/math
3https://mathspace.co/us
-----
Descriptive Information Implicit Property Essential Condition
Text Dominant Text Lite Text Only Vision Intensive Vision Dominant Vision Only
AB and CD are two AB and CD are two
📖 InputText diameters of circle O,chord DE parallel AB,arc DE is the arc of Chord DE parallel AB,arc DE is the arc of diameters of circle O,chord DE parallel AB,arc DE is the arc of Arc DE is the arc of Chord DE parallel AB,then angle BOC is ().
50°, then angle BOC 50°, then angle BOC 50°, then angle BOC 50°, then angle BOC Choices: …
is (). Choices: … is (). Choices: … is (). Choices: … is (). Choices: …
Chord DE parallel AB, thenangle BOC is (). Choices: …
Vision 50° 50°
🔍 Input
Text Dominant Text Lite Vision Dominant Text Dominant Text Lite Vision Dominant
Find the surface area The graph shows 𝑦! = The graph shows 𝑦! = The graph shows 𝑦! and
of the cylinder shown. Find the surface area Find the surface area 𝑥["] passing (0,0) and a 𝑥["] and a vertical or a vertical or horizontal
The height is 10 cm of the cylinder shown. of the cylinder shown. vertical or horizontal horizontal translation translation 𝑦#. Write an
and the radius is 6 cm. and the radius is 6 cm. Give your answer to translation 𝑦# passing (- 𝑦#. Write an equation equation for 𝑦# as shown.
Give your answer to Give your answer to two decimal places. 2,0). Write an equation for 𝑦# as shown.
two decimal places. two decimal places. for 𝑦# as shown.
6 cm
y# y! = x["]
Figure 5: Six Versions of Each Problem in MATHVERSE. Expert annotators meticulously transform
each visual math problem within MATHVERSE into six versions. They contain different visionlanguage content for a holistic visual mathematical evaluation.
explanations. After the preliminary collection, we undertake a comprehensive review to verify
the accuracy of the answers, ensure consistency between questions and diagrams, and confirm the
relevance of each problem to the defined twelve categories. This meticulous review guarantees the
dataset’s quality and precision.
**2.2** **Whether MLLMs Truly See the Diagrams?**
In this section, we detail our data formulation approach to transform each problem in MATHVERSE
into six different versions with varying information content in multi-modality. In this way, we explore
the visual diagram understanding capabilities of MLLMs for mathematical reasoning.
**Three Types of Textual Information.** Considering the textual redundancy in original math problems, we first define three distinct categories for the textual information within the questions, as
illustrated in Figure 3 and the following:
- Descriptive Information (DI) refers to the directly observable and clearly portrayed content
in the diagram. It depicts the basic figure composition, spatial arrangement, and annotated
entities, such as the presence of geometric shapes or intersection points of functions. These
sentences normally help establish the context and frame the problem to orient the solver.
Nevertheless, such information is repetitive to the visual components present in the diagram,
thus regarded as redundant information for problem-solving. More importantly, it may
assist MLLMs in bypassing the process of diagram interpretation, thereby undermining the
assessment for visual mathematical reasoning, as evidenced in Figure 1.
- Implicit Property (IP) involves the information that requires a higher level of visual perception but less mathematical knowledge to discern from the diagram. It signifies strong
visual conditions for problem-solving, such as the parallelism and perpendicularity between
_lines, the similarity and congruence among triangles, and the category and periodicity of_
_functions. They can, in theory, be fully extracted from the diagrams alone, giving adequate_
capability for visual recognition and comprehension of MLLMs.
-----
- Essential Condition (EC) denotes the specific numerical or algebraic measurements, which
are indispensable conditions to derive the solution and cannot be derived from the visual
diagram. This category encompasses precise values of angles, lengths, and function expressions, such as an angle being 45 degrees, the length of BC being 6 units, and the functional
_equation f_ (x) = x[2] + 3. Without these details in textual information, solving the visual
math problem would be impossible.
**Creating Six Versions of Each Problem.** Based on the three categories, expert annotators systematically remove different textual information within questions, and incrementally incorporate the critical
elements into diagrams. This approach can progressively reduce textual redundancy and information
content, thereby increasingly compelling MLLMs to capture mathematical conditions from the visual
input. As compared in Figure 5, we generate six versions of each problem in MATHVERSE, obtaining
15,672 test instances. With this curated problem set, we can provide a holistic evaluation of the
genuine visual comprehension of MLLMs, and whether it can facilitate multi-modal mathematical
reasoning. The details of each problem version are as follows:
- Text-dominant Version retains the entire textual content, including the three types of
textual information and the question statement. If the original problem contains limited
_Descriptive Information, we manually add it within the textual content. This version may_
induce MLLMs to regard the text as the primary source of information, treating the diagram
more as a supplementary visual aid. This serves as the baseline point for evaluation.
Text: DI + IP + EC + Question Vision: Diagram (1)
- Text-lite Version diminishes the Descriptive Information from the Text-dominant version,
assuming this information can be observed from the diagram. This creates a condensed
question without redundancy, forcing MLLMs to interpret the diagram for basic information.
Text: IP + EC + Question Vision: Diagram (2)
- Text-only Version directly discards the diagram input from the Text-dominant version.
Comparing this to the Text-lite version helps identify where MLLMs mainly obtain the
contextual visual information for problem-solving, the Descriptive Information or diagram.
Text: DI + IP + EC + Question Vision: ∅ (3)
- Vision-intensive Version further removes the Implicit Property from the Text-lite version.
Without the strong visual condition in texts, MLLMs are challenged to intensively leverage
their visual interpretation skills to gather sufficient cues for mathematical reasoning. The
outcome demonstrates their proficiency in understanding mathematical relationships visually.
Text: EC + Question Vision: Diagram (4)
- Vision-dominant Version, building upon the Text-lite version, excludes the Essential
_Condition from texts, instead annotating these measurements visually in diagrams. The_
textual content is narrowed down to Implicit Property and question statements. It demands
MLLMs to recognize the Essential Condition exclusively from diagrams, and accurately
correlate it with corresponding visual elements for problem-solving.
Text: IP + Question Vision: Diagram + EC (5)
- Vision-only Version strips away the entire textual input, conveying the problem solely
through the diagram. We render the remaining textual content in the Vision-dominant
version onto the diagram, and minimize the text to an almost negligible prompt. This
compels MLLMs to extract figures and recognize questions both from the visual information,
standing as an ultimate test for their visual reasoning capabilities in mathematics.
Text: ∅ Vision: Diagram + EC + IP + Question (6)
-----
Model Output
The line passes through the points (-3, 0) and (0, -6). We can use
these points to find the equation of the line. Let‘s use the slopeintercept form of a linear equation: y = mx + b, where m is the slope
and b is the y-intercept. First, we’ll find the slope (m) …
Key-step Extraction
1. Use the slope-intercept form of a linear equation: y = mx + b.
2. Find the slope (m) using the formula: m = (y2 - y1) / (x2 - x1). Using the
points (-3, 0) and (0, -6): m = (-6 - 0) / (0 - (-3)), m = -2.
3. Find the y-intercept (b) using one of the points and the slope-intercept
form of the equation: 0 = -2 * (-3) + b, b = -6.
4. Since the line is solid, the inequality is either y ≤ -2x - 6 or y ≥ -2x - 6.
5. The region below the line is shaded, which means the values of y should
be less than or equal to -2x - 6.
6. The inequality describing the region drawn in the plane is: y ≤ -2x – 6.
Model Output
Since B C and E D are parallel, they form a proportion. Therefore, we
can set up the following equation: A B / B C = C D / D E. Substituting
the given values, we get: 16 / (x - 3) = (x + 5) / 20. Cross-multiplying, we
get: 16(x + 5) = 20(x - 3). Expanding and simplifying, we get: 16x + 80 =
20x – 60, 4x = 140, x = 35. …
Key-step Extraction
1. Identify that BC and ED are parallel, forming a proportion equation.
2. Set up the equation: AB / BC = CD / DE.
3. Substitute the given values into it: 16 / (x - 3) = (x + 5) / 20. Expand and
simplify the equation to get: 16x + 80 = 20x – 60.
4. Solve for x: 4x = 140, x = 35.
5. Substitute x into CD = x + 5 to find CD: CD = 35 + 5 = 40.
6. The final answer is D: 40.
Multi-step Scoring
1. The model correctly states that the slope-intercept form of a
linear equation is y = mx + b.
2. The model correctly calculates the slope (m) using the given
points (-3, 0) and (0, -6).
3. The model correctly finds the y-intercept (b) using one of the
points and the slope-intercept form of the equation.
4. The equation of the line is correctly stated as y = -2x - 6.
5. The model incorrectly states that the region below the line is
shaded. However, based on the diagram and the ground truth
answer, the region above the line is shaded.
6. The final inequality provided by the model is y ≤ -2x - 6, which
does not match the ground truth answer of y ≥ -2x - 6.
✓ 1/1
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**Question:**
The x-intercept of the line
is -3 and the y-intercept is
-6. State the inequality
that describes the region
drawn in the plane.
**Answer: y ≥** -2x - 6.
**Version: Text Dominant**
**Question:**
BC and ED are parallel. Find
CD if AC = x - 3, BE = 20, AB
= 16, and CD = x + 5. Choices:
A:32 B:35 C:36 D:40
**Answer: D: 40**
Multi-step Score: (1 + 1 + 1 + 1 + 0 + 0) / 6 = 2/3 ≈ 0.67
Final CoT Evaluation Score: 0.67 × 0.7 + 0 × 0.3 = 0.47
Multi-step Scoring
1. The model correctly identifies that BC and ED are parallel. ✓ 1/1
2. The equation AB / BC = CD / DE is incorrectreflect the correct sides according to similar triangles. because it doesn't **×** 0/1
3. The calculation of the equation is correctly performedsince the equation itself is incorrect, this step is also incorrect., but **×** 0/1
4. The modelequation. Since the equation is wrong, this step is also incorrect. correctly solves for x based on the incorrect **×** 0/1
5. The model substitutes x into CD = x + 5 to find CD, which would
be correct if the value of x was correct.based on an incorrect equation, this step is also incorrect. Since the value of x is **×** 0/1
6. The final answer provided by the model is D: 40, which matches
the ground truth answer. Despite the incorrect steps, the final ✓ 1/1
answer is coincidentally correct.
Multi-step Score: (1 + 0 + 0 + 0 + 0 + 1) / 6 = 1/3 ≈ 0.33
Final CoT Evaluation Score: 0.33 × 0.7 + 1 × 0.3 = 0.53
**Subject:**
Length
**Version: Text Lite**
Figure 6: Examples of the CoT Evaluation Strategy for MATHVERSE. We present two outputs
from Qwen-VL-Max [3] with our CoT evaluation strategy, which assesses the fine-grained reasoning
capabilities with a detailed explanation for error analysis.
**2.3** **CoT Evaluation Strategy**
Compared to visual question-answering in general scenarios, the solving process of MLLMs for
mathematical problems requires nuanced, step-by-step CoT reasoning. Considering two cases in
Figure 6, one arrives at the correct solution albeit through incorrect intermediary steps, while the other
demonstrates the opposite phenomenon. Therefore, the binary ‘Correct’ or ‘Incorrect’ evaluative
approach of existing benchmarks is inadequate to examine the depth and precision of the multi-step
reasoning process. To this end, we propose a CoT evaluation strategy to thoroughly assess their
mathematical CoT skills in visual contexts, involving two prompting phases with GPT-4(V) [47, 46].
**Key-step Extraction.** Given the output of an MLLM, we first employ GPT-4, the languageonly version, to extract N pivotal steps within the reasoning sequence, denoted as [s1, s2, . . ., sN ],
including the final answer sA. Such key steps include significant computational outcomes, the
identification of visual components, and critical immediate inferences. Note that, we only prompt
GPT-4 with the MLLM’s output, deliberately omitting the original questions, diagrams, and groundtruth answers. This approach aims to mitigate the inherent bias of GPT-4 itself towards problemsolving and visual diagram interpretation, thereby concentrating solely on the logical coherence
of the model output. In addition, we do not pre-define a ground-truth key-step template for each
problem, but perform the extraction adaptively for the unique output of every MLLM. Since the
problem potentially encompasses diverse possible solution pathways, and different MLLMs exhibit
varying reasoning lengths and styles, the rigid template would harm the CoT evaluation accuracy.
**Multi-step Scoring.** After the extraction phase, we utilize GPT-4V, the multi-modal version, to
evaluate each critical step and culminate a comprehensive score. We feed the extracted key steps, the
original questions, diagrams, and ground-truth answers all into GPT-4V, contributing to a holistic
assessment, e.g., numerical computations, logical deductions, and visual interpretations. Therein, we
observe that GPT-4V occasionally struggles with accurately recognizing elements within functional
diagrams, leading to unstable evaluation for related problems. We thereby annotate additional
-----
Table 2: Mathematical Evaluation on Six Problem Versions in MATHVERSE’s testmini Set. We
calculate the ‘All’ score without averaging the ‘Text Only’ version. ‘CoT-E’ or ‘Acc’ denotes whether
to employ the proposed CoT evaluation strategy or not. The highest accuracy for closed-source and
open-source MLLMs is marked in red and blue respectively.
|Text Text Text Vision Vision Vision All Model Dominant Lite Only Intensive Dominant Only CoT-E Acc CoT-E Acc CoT-E Acc CoT-E Acc CoT-E Acc CoT-E Acc CoT-E Acc|All|Text Dominant|Text Lite|Text Only|Vision Intensive|Vision Dominant|Vision Only|
|---|---|---|---|---|---|---|---|
||CoT-E Acc|CoT-E Acc|CoT-E Acc|CoT-E Acc|CoT-E Acc|CoT-E Acc|CoT-E Acc|
_Baselines_
|Random Chance Human|- 12.4 - 64.9|- 12.4 - 71.2|- 12.4 - 70.9|- 12.4 - 41.7|- 12.4 - 61.4|- 12.4 - 68.3|- 12.4 - 66.7|
|---|---|---|---|---|---|---|---|
_LLMs_
|ChatGPT [48] GPT-4 [46]|- - - -|51.3 33.3 63.4 46.5|38.5 18.9 40.7 20.7|51.3 33.3 63.4 46.5|- - - -|- - - -|- - - -|
|---|---|---|---|---|---|---|---|
_Closed-source MLLMs_
|Qwen-VL-Plus [3] Gemini-Pro [22] Qwen-VL-Max [3] GPT-4V [47]|21.3 11.8 35.3 23.5 37.2 25.3 54.4 39.4|26.0 15.7 39.8 26.3 42.8 30.7 63.1 54.7|21.2 11.1 34.7 23.5 37.7 26.1 56.6 41.4|25.2 14.5 44.5 27.3 47.9 28.9 60.3 48.7|18.5 9.0 32.0 23.0 33.6 24.1 51.4 34.9|19.1 13.0 36.8 22.3 35.9 24.1 50.8 34.4|21.8 10.0 33.3 22.2 35.9 21.4 50.3 31.6|
|---|---|---|---|---|---|---|---|
_Open-source MLLMs_
|LLaMA-Adapter V2 [20] ImageBind-LLM [24] mPLUG-Owl2 [62] MiniGPT-v2 [11] LLaVA-1.5 [37] SPHINX-Plus [21] G-LLaVA [19] LLaVA-NeXT [38] ShareGPT4V [12] SPHINX-MoE [21] InternLM-XC2. [16]|5.8 5.7 10.0 9.2 10.3 5.9 10.9 11.0 12.7 7.6 14.0 12.2 15.7 16.6 17.2 15.6 17.4 13.1 22.8 15.0 25.9 16.5|7.8 6.2 13.2 11.4 11.6 6.6 13.2 12.1 17.1 8.8 16.3 13.9 22.2 20.9 21.6 19.4 21.8 16.2 33.3 22.2 36.9 22.3|6.3 5.9 11.6 11.3 11.4 6.3 12.7 12.0 12.0 7.6 12.8 11.6 20.4 20.7 19.7 15.2 20.6 16.2 21.9 16.4 28.3 17.0|3.9 2.7 12.9 11.7 13.8 6.1 15.3 11.7 22.6 11.5 15.8 14.9 21.6 21.1 25.1 18.1 14.6 6.6 40.7 18.3 42.5 16.5|6.2 6.1 9.8 8.9 11.1 6.3 11.1 13.1 12.6 7.4 12.9 11.6 16.5 17.2 17.6 16.8 18.6 15.5 21.1 14.8 20.1 15.7|4.5 4.2 11.8 11.2 9.4 5.6 11.3 10.3 12.7 7.4 14.7 13.5 12.7 14.6 14.9 15.2 16.2 13.8 19.6 12.6 24.4 16.4|4.4 6.1 3.5 3.4 8.0 4.9 6.4 7.4 9.0 6.9 13.2 10.4 6.6 9.4 12.1 11.3 9.7 3.7 18.3 9.1 19.8 11.0|
|---|---|---|---|---|---|---|---|
information for function problems and together feed into GPT-4V, ensuring the quality of visual
evaluation. Specifically, GPT-4V assesses each N intermediate step with a binary score of ‘1’ (correct)
or ‘0’ (incorrect), and derives the overall score by aggregating the correctness of the final answer. We
formulate the scoring process as
1 _N_
Scorefinal = α Score(si) + (1 _α)Score(sA),_ (7)
_N_ _−_
_i=1_
X
where α denotes a balancing factor between the intermediate steps and the final answer sA. We set α
as 0.7 by default to underscore the significance of CoT reasoning. As exemplified in Figure 6, besides
the fine-grained scoring, the CoT evaluation can also provide a detailed error analysis of each step,
which is valuable and instructive for the development of MLLMs in the field.
**3** **Experiments**
In this section, we conduct a systematic evaluation of existing Multi-modal Large Language Models
(MLLMs) on MATHVERSE. We first introduce the experimental setup in Section 3.1. Then, we detail
the quantitative results in Section 3.2 and narrate the error analysis in Section 3.3.
**3.1** **Experimental Setup**
**Division of the testmini Subset.** MATHVERSE encompasses a comprehensive collection of 2,612
visual math problems, alongside 15,672 corresponding test instances. To enable faster evaluation and
model development validation, we extract a smaller subset termed testmini including 788 problems
and 4,728 instances. In constructing testmini, we employ a random sampling strategy across different
subfields, maintaining a sample size proportional to the overall dataset to preserve its statistical
representativeness. The remaining test set features 1,824 problems and 10,944 samples will be
utilized for standard evaluation and publicly released in the future. In the subsequent experiments,
**_all quantitative results are assessed using the testmini subset of MATHVERSE._**
**Evaluation Models.** We examine the performance of foundation models across three distinct
categories on MATHVERSE: (a) Large Language Models (LLMs) as the text-only baseline, which
only take textual questions as input, including ChatGPT [45] and GPT-4 [46], (b) Closed-source
-----
Table 3: Mathematical Evaluation on Different Subjects and Subfields in MATHVERSE’s testmini
**Set. We report the scores averaging five problem versions except for the ‘Text Only’ version, and**
employ the CoT evaluation strategy by default. Len: Length; Anal: Analytic; Apply: Applied; Vol:
Volume; Coord: Coordinate; Prop: Property; Exp: Expression; Apply: Applied. The highest accuracy
for closed-source and open-source MLLMs is marked in red and blue respectively.
|Model|All|Plane Geometry|Solid Geometry|Functions|
|---|---|---|---|---|
|||All Len Area Angle Anal Apply|All Len Area Vol|All Coord Prop Exp Apply|
_Closed-source MLLMs_
|Qwen-VL-Plus [3] Gemini-Pro [22] Qwen-VL-Max [3] GPT-4V [47]|21.3 35.3 37.2 54.4|17.3 19.1 16.4 16.1 23.6 13.2 33.0 32.2 42.6 28.4 30.2 32.3 38.4 41.7 46.4 32.6 40.6 38.7 56.9 60.8 63.4 52.6 48.5 60.9|24.8 18.1 18.7 33.4 33.4 35.0 29.3 36.1 33.7 25.4 28.3 42.6 50.2 54.8 39.9 56.8|31.3 52.5 25.1 10.8 50.3 28.3 25.7 26.6 10.8 51.3 38.4 43.7 35.5 13.6 61.0 52.8 72.3 47.1 30.9 70.1|
|---|---|---|---|---|
_Open-source MLLMs_
|LLaMA-Adapter V2 [20] ImageBind-LLM [24] mPLUG-Owl2 [62] MiniGPT-v2 [11] LLaVA-1.5 [37] SPHINX-Plus [21] G-LLaVA [19] LLaVA-NeXT [38] ShareGPT4V [12] SPHINX-MoE [21] InternLM-XC2. [16]|5.8 10.0 10.3 10.9 12.7 14.0 15.7 17.2 17.4 22.8 25.9|5.9 4.0 5.9 6.6 13.4 3.3 9.7 12.1 9.9 9.2 10.2 4.8 7.7 8.2 6.0 5.7 12.4 10.6 11.6 10.0 9.8 14.3 9.1 11.8 11.8 13.1 15.1 9.7 9.4 13.2 14.4 14.2 10.5 14.1 16.5 16.8 20.2 17.3 13.6 26.5 5.9 23.1 15.9 14.8 13.1 16.3 17.7 17.8 16.9 16.2 17.9 16.9 12.2 21.1 24.5 26.3 28.4 21.1 26.6 24.4 26.2 27.1 29.7 20.6 18.5 22.2|4.6 5.3 3.1 5.7 4.6 4.9 3.5 5.3 11.0 9.2 6.7 15.7 1.7 2.2 1.6 0.5 10.6 12.1 8.7 11.6 7.0 7.2 6.1 7.6 5.0 10.3 4.4 3.1 19.6 33.3 11.7 12.6 15.0 13.6 10.9 19.7 15.8 9.4 10.7 26.3 20.1 34.5 14.1 25.2|6.2 6.7 6.1 4.5 7.9 14.9 12.3 13.8 4.6 25.9 17.4 22.8 18.6 5.3 22.2 11.2 4.2 15.7 4.0 21.1 14.8 18.8 12.7 9.5 23.7 17.9 11.1 19.1 6.3 27.7 9.2 9.1 9.1 1.3 15.5 23.1 24.5 23.4 8.0 33.1 20.2 19.9 22.2 8.4 25.8 19.5 23.5 19.3 9.2 30.3 23.7 24.4 24.9 10.6 36.3|
|---|---|---|---|---|
_MLLMs, represented by models like GPT-4V [47], Gemini-Pro [22], Qwen-VL-Max [3], and Qwen-_
VL-Plus, and (c) Open-source MLLMs, featuring models such as LLaVA-1.5 [37] (Vicuna-13B [13]),
LLaVA-NeXT [38] (Vicuna-13B), SPHINX-MoE [21] (Mixtral-8×7B [28]), SPHINX-Plus (LLaMA213B [56]), InternLM-XComposer2 [16] (InternLM2-7B [54]), LLaMA-Adapter V2 [20] (LLaMA7B [55]), ImageBind-LLM [24] (LLaMA-7B), MiniGPT-v2 [11] (LLaMA2-7B), mPLUG-Owl2 [62]
(LLaMA-7B), G-LLaVA [19] (LLaMA2-7B), and ShareGPT-4V [12] (Vicuna-13B).
**Implementation Details.** All our experiments are conducted under a zero-shot setting, showcasing
the generalization capacity of MLLMs for mathematical reasoning, without few-shot prompting or
further fine-tuning. By default, we employ the Chain-of-Thought (CoT) prompting technique [58],
which encourages MLLMs to perform complete reasoning steps for a fine-grained evaluation. A
baseline representing random chance is established for comparison, for which we select one option
at random for multiple-choice questions and utilize empty for free-form questions. In addition,
we recruit ten qualified college students, and ask them to solve the problems in MATHVERSE
independently, serving as a baseline for human performance. We conduct all experiments on NVIDIA
A100 GPUs. As the text-only LLMs can only take text questions as input, we evaluate them with the
first three problem versions, i.e., Text Dominant, Text Lite, and Text Only. For the ‘w/o’ results, we
utilize the template in MathVista [41] to prompt GPT-4 [46] for answer extraction, and directly score
the final answer without the intermediate reasoning process.
**3.2** **Experimental Analysis**
To best investigate the visual mathematical reasoning capabilities, we report the evaluation results of
different models on MATHVERSE for the six transformed problem versions in Table 2 and twelve
detailed subjects in Table 3. We mainly analyze the performance by the proposed Chain-of-Though
(CoT) evaluation, and derive the following observations.
**MLLMs Rely More on DI than Seeing Diagrams.** Comparing the Text-dominant and Text-only
versions, with the elimination of visual input, most MLLMs even obtain an unexpected performance
improvement, e.g., +5.1% for Qwen-VL-Max and +5.6% for InternLM-XComposer2. This suggests
that the unsatisfactory visual encoding for mathematical diagrams instead severely harms the original
problem-solving capacity of MLLMs. As exemplified in Figure 7, from the error analysis of our
CoT evaluation strategy, we observe that Gemini-Pro can deduce the correct answer exclusively by
the visual information within the Descriptive Information. Instead, the inaccurate visual perception
of mathematical elements directly interferes with the outcome of problem-solving, turning correct
answers into incorrect ones. In contrast, GPT-4V and ShareGPT-4V achieve better results in Text
Dominant than in Text Only, indicating their relatively better visual encoding, which would not
degrade the performance. However, they still encounter a larger performance drop by removing the
-----
Text Dominant
1. This step is correct as the model correctly states the definition of a
one-to-one function, which is necessary to determine if a function has
an inverse.
2. Analyze the function graph. This step is incorrect. The model_output
incorrectly states that the value 0 on the y-axis corresponds to both
the value 4 and the value -4 on the x-axis, which is not supported by
the image.
1.
Visual Perception Error
2.
3. Conclude that the function does not have an inverse function because it
is not one-to-one. This conclusion is based on the incorrect analysis in
step 2. Since step 2 is incorrect, this conclusion is also incorrect.
4. The final answer provided by the model_output is incorrect because
the function appears to be one-to-one based on the image.
Multi-step Score: (1 + 0 + 0 + 0) / 4 = 0.25
Final CoT Evaluation Score: 0.25 × 0.7 + 0 × 0 = 0.175
Text Only
1. Correct, the horizontal line test is a
valid method to determine if a
function has an inverse.
2. Correct, based on the provided
information, it appears that no
horizontal line would intersect the
function more than once.
1/1
1/1
1/1
1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**Question:**
For the curve graphed
below, whose asymptote is
x-axis and x=4, determine
if it has an inverse function.
Choices: A:Yes B:No
**Answer: A: Yes**
3. Correct, following the logic of steps
1 and 2, the conclusion is valid.
4. Correct, the answer A corresponds
to the conclusion that the function
does have an inverse.
Multi-step Score: 1
Final CoT Evaluation Score: 1
Figure 7: A Typical Visual Perception Error by our CoT Evaluation Strategy. The example is an
output from Gemini-Pro [22], where the correct reasoning of the Text-only version is distracted by
the visual perception error within the diagram.
redundant Descriptive Information than the diagram input, e.g., GPT-4V and ShareGPT-4V. This
pattern demonstrates that they tend to capture more visual information for mathematical reasoning
from the text content, instead of seeing the diagram itself.
**MLLMs are Moderately Effective at Perceiving IP.** By discarding the Implicit Property in
question texts, a negligible decline in accuracy is noted from the Text-lite to Vision-intensive versions
for most MLLMs. This is because the Implicit Property mainly encompasses the spatial layouts and
geometric relationships, which demand minimal mathematical domain knowledge for interpretation.
This outcome underscores the favorable visual perception skills of MLLMs for non-mathematical
elements, which is not the primary obstacle hindering MLLMs in solving visual math problems.
**MLLMs are Challenged to interpret EC from Diagrams.** Incorporating the Essential Condition
within diagrams challenges MLLMs to accurately identify and understand these conditions in vision
modality for mathematical problem-solving. Evidence from the Vision-dominant results indicates
a notable decline in the performance of most MLLMs compared to the Text-lite accuracy, such as
-5.8% for GPT-4V and -3.9% for InterLM-XComposer2. This reveals their inaccurate identification of
mathematical symbols and an insufficient grasp of domain-specific knowledge required to associate
identified measurements with relevant concepts.
**MLLMs struggle to Solve Problems Entirely by Diagrams.** The scenario of Vision-only problems
aligns more closely with real-world applications, where capturing an image is often more convenient
than transcribing the problem into text. However, by rendering the whole question within the diagram,
the mathematical problem-solving capacity of MLLMs is further diminished. This experiment unveils
the great challenge for MLLMs to simultaneously understand mathematical conditions, questions,
and figures from the visual input alone.
**Closed-source MLLMs are Better-performed.** From the performance in both tables, we observe a
consistently better performance achieved by closed-source MLLMs than open-sourced ones. Despite
the gap with humans, GPT-4V attains the leading position among MLLMs, showcasing superior
mathematical capabilities over problem versions and subjects, especially the challenging subfields
like ‘Coord’ and ‘Prop’ (the property and coordinate solving of function problems). InternLMXComposer2 and SPHINX-MoE are the best-performing open-source MLLMs, while still lagging
behind Gemini-Pro with a margin of 9.4% and 12.5% overall accuracy, respectively, suggesting large
improvement space.
**LLMs Achieve Competitive Results to MLLMs.** Utilizing solely question texts as input, two
LLMs, i.e., GPT-4 and ChatGPT, attain superior accuracy to most MLLMs in Text Dominant and Lite
versions. Even in the absence of redundant Descriptive Information within Text-lite problems, GPT-4
outperforms InternLM-XComposer2 and SPHINX-MoE by substantial margins of 12.4% and 18.8%,
respectively. These findings not only indicate the strong mathematical reasoning skills of LLMs, but
-----
Results with and without CoT Evaluation
w/o CoT evaluation
60 Increase by CoT evaluation
Decrease by CoT evaluation
40
20
0
Qwen-VL-Plus Gemini-Pro Qwen-VL-Max GPT-4V LLaMA-Adapter V2ImageBind-LLMmPLUG-Owl2 MiniGPT-v2 LLaVA-1.5 SPHNIX-Plus G-LLaVA LLaVA-NeXT ShareGPT4V SPHINX-MoE InternLM-XC2
Figure 8: Results with and without CoT Evaluation in MATHVERSE. Referring to Table 2, we
denote the ‘w/o’ results in blue pillars, and highlight the increase and decrease magnitude with
‘CoT-E’ by green and red colors, respectively.
further emphasize the deficiencies in diagram interpretation of existing MLLMs. Importantly, the
performance of GPT-4 is only exceeded by GPT-4V, which demonstrates that a satisfactory diagram
perception capability can enhance problem-solving for visual mathematics.
**GPT-4(V) Beats Human in the Text-only Version.** Without the visual content provided in diagrams, human solvers often face challenges in deducing the correct answers due to the lack of
sufficient information, e.g., 41.7% ‘w/o’ scores in Text-only problems. In contrast, GPT-4V and GPT4 achieve the ‘w/o’ scores of 41.1% and 46.5%, respectively, which surpass the human performance.
This comparison highlights their advanced reasoning capabilities in handling extreme scenarios,
exhibiting more robustness for mathematical problem-solving given missing visual conditions.
**Mathematical Training Benefits the Performance.** In addition to foundational visual instructionfollowing datasets, both SPHINX-MoE and InternLM-XComposer2 extend their training regimes to
include specialized mathematical problems that are either text-only or visual, such as MathQA [2],
Geometry3K [43], and MathInstruct [64]. This approach of math-specific tuning contributes to
their leading performance in MATHVERSE. Furthermore, G-LLaVA fine-tunes LLaVA-1.5 by a
large-scale visual geometric dataset containing 170K enriched problems. This targeted refinement
can improve several fields (‘Len’, ‘Angle’, and ‘Apply’) within the plane geometry subject. However,
since G-LLaVA’s fine-tuning data does not include problems of analytic geometry, solid geometry,
and functions, it harms the related results of LLaVA-1.5 due to catastrophic forgetting, e.g., -3.5%
in ‘Anal’, -5.6% in ‘Solid Geometry’, and -5.6% in ‘Functions’. This phenomenon underscores the
critical role of developing extensive, high-quality visual math data for effectively training MLLMs.
**Discrepancy Between ‘CoT-E’ and ‘w/o’ Scores.** As illustrated by Table 2, the ‘CoT-E’ scores
for MLLMs, in most cases, are much higher than ‘w/o’ scores, e.g., +16.1% for GPT-4V and
+9.6% for InternLM-XComposer2. This observation demonstrates that our proposed CoT evaluation
strategy identifies numerous correct intermediate reasoning steps, despite the final incorrect answer,
highlighting the effectiveness of fine-grained assessment. In Figure 8, we present the statistics of
variance between ‘CoT-E’ and ‘w/o’ scores within different MLLMs. Although GPT-4V attains
top-tier performance, it exhibits a pronounced gap of 16.1% concerning the evaluation of CoT
reasoning quality, similar to the 12.4% gap of Qwen-VL-Max. Conversely, SPHINX-MoE showcases
favorable precision among open-source MLLMs, while preserving a relatively lower variance of
two evaluation methods, i.e., 6.0% compared to InternLM-XComposer’s 9.6%. This indicates its
consistent step-by-step reasoning throughout the problem-solving process.
**3.3** **Error Analysis**
To delve into the fine-grained predictions, we select the best-performing MLLM, GPT-4V [47], to
understand its modes of success and failure. Our proposed CoT evaluation strategy has produced a
detailed assessment of model output, including step-wise scores and explanation, reducing extensive
manual effort in identifying and analyzing errors. We conduct our analysis on the two-step output
from the CoT evaluation across the entire dataset, focusing on two key dimensions.
-----
Text Dominant
**× Ans.** **× Ans.**
**× Rea.** **× Rea.**
**2.6%** **3.3%** **× Ans.**
**× Ans.** **× Rea.**
**Partial × Rea.** **18.5%**
**44.4%** **× Ans.** **× Ans.**
**Partial × Rea.** **Partial × Rea.**
✓✓ **Rea.Ans.** ✓✓25.8%Rea.Ans. **55.6%** ✓✓ **Rea.Ans.** **31.9%**
**35.1%** **37.1%**
**Partial✓** **Ans. × Rea.** ✓× Ans.Rea. ✓× Ans.Rea.
**14.6%** **2.6%** **7.9%**
✓ **Ans.**
**× Rea.** ✓ **Ans.**
**0.7%** **Partial × Rea.** ✓ **Ans.** **× Ans.** ✓ **Ans.** ✓ **Ans.**
Vision Intensive
Text Lite
✓ **Ans.**
✓ **Rea.**
**25.8%**
✓ **Ans.**
**Partial × Rea.** ✓ **Ans.**
**11.3%** **× Rea.**
**0.0%**
Text Only
**× Ans.**
**× Rea.**
**18.5%**
**Partial**
✓ **Ans.**
✓ **Rea.**
**37.1%**
**×**
✓
✓ **Ans.**
**Partial × Rea.**
**4.6%**
Vision Only
**× Ans.** **× Ans.**
**× Rea.** **× Rea.**
**6.6%** **6.0%** **× Ans.**
**× Rea.**
**24%**
**× Ans.** **× Ans.**
✓✓27.2%Rea.Ans. **Partial54.3% × Rea.** ✓✓ **Rea.Ans.** **Partial54.3% × Rea.** ✓✓24.8%Rea.Ans. **Partial× Ans. × Rea.**
**29.8%** **39.7%**
✓ **Ans.**
**× Ans.** **× Ans.** **Partial × Rea.**
✓ **Ans.** ✓ **Ans.** ✓ **Rea.** ✓ **Ans.** ✓ **Ans.** ✓ **Rea.** **4.1%** ✓ **Ans.**
**× Rea.** ✓ **Rea.**
**0.0%** **7.4%**
✓ **Ans.**
**× Rea.**
**0.0%**
**0.7%**
**× Ans.**
✓ **Rea.**
**4.0%**
Vision Dominant
**× Ans.**
✓ **Rea.**
**2.0%**
✓ **Ans.**
**× Rea.**
**0.0%**
✓ **Ans.**
**× Rea.**
**1.3%**
**Partial × Rea.**
**10.6%**
**Partial × Rea.**
**6.6%**
**1.3%**
Figure 9: Distribution of GPT-4V’s [47] Errors in Reasoning and Answers. For the six problem
versions in MATHVERSE, we provide the statistics of errors made by GPT-4V based on their
occurrence in answers (‘Ans.’) and reasoning processes (‘Rea.’).
**Errors in Reasoning or Answer?** In Figure 9, we showcase the statistics of different error
distributions in six problem versions of MATHVERSE. We define the following six error categories:
correct final answer with correct/partially correct/incorrect CoT reasoning and incorrect final answer
with correct/partially correct/incorrect CoT reasoning. For all six versions, the incorrect final answers
are mostly caused by the partially incorrect reasoning process. In addition, a number of problems
with correct answers are accompanied by partially or entirely incorrect reasoning, e.g., 15.3% in Text
Dominant, which cannot be detected by the traditional True or False evaluation. As we remove the
content within textual questions and enrich the visual diagram, e.g., from Text Dominant and Lite
to Vision Dominant and Only, we observe a progressive increase in the error rate of ‘incorrect final
answer with incorrect CoT reasoning’, indicating that MLLMs are challenged to conduct high-quality
intermediate reasoning by capturing more information from the visual input.
**What Types of Errors?** To further investigate the specific error types, we survey the problems with
errors that occur either within the reasoning process or the final answer. As depicted in Figure 10, we
divide the errors of GPT-4V into four distinct types: visual perception error, reasoning error, knowledge error, and calculation error. Consistent with our findings in the main paper, the primary source
of errors in problem-solving attributes to the inaccurate interpretation of mathematical diagrams,
which significantly impedes the performance of MLLMs. For the problem versions that demand
advanced diagram interpretation, e.g., Vision Dominant and Only, we observe a notable increase in
the rate of visual perception errors, demonstrating an urgent need for stronger visual encoders in
MLLMs. Moreover, reasoning errors also account for a considerable percentage, indicating that the
logical deduction skills of MLLMs still require improvement. As expected, knowledge errors do not
significantly hinder the mathematical reasoning capabilities of MLLMs in MATHVERSE.
-----
Text Dominant Text Lite Text Only
**Knowledge Error8.8%** **Knowledge Error5.9%** **Knowledge Error10.3%**
**Calculation Error**
**Calculation Error** **17.6%**
**20.6%** **Calculation Error**
**37.9%**
**Reasoning Error**
**32.4%**
**Visual Perception Error32.4%** **Reasoning Error38.2%** **Visual Perception 44.1%Error** **Reasoning Error51.8%**
**Visual Perception**
**Error**
**0.0%**
Vision Intensive Vision Dominant Vision Only
**Knowledge Error** **Calculation Error9.1%** **Knowledge Error** **Calculation Error11.8%** **Knowledge Error**
**12.1%** **5.9%** **4.5%**
**Calculation Error**
**22.7%**
**Reasoning Error**
**Reasoning Error** **35.2%**
**36.4%** **Reasoning Error**
**Visual Perception** **Visual Perception** **Visual Perception Error** **27.3%**
**Error** **Error** **45.5%**
**42.4%** **47.1%**
Figure 10: Distribution of GPT-4V’s [47] Errors within Different Types. We present the statistics
of four error types by GPT-4V in the six problem versions, i.e., Visual Perception Error, Reasoning
Error, Calculation Error, and Knowledge Error.
**4** **Conclusion**
In this paper, we propose a comprehensive and specialized benchmark, MATHVERSE, for the visual
mathematical problem-solving capacity of MLLMs. We meticulously collect high-quality math
problems with diagrams spanning three primary subjects and twelve subfields. Given the issues
within current benchmarks, we transform each problem into six versions, investigating whether and
how much MLLMs can interpret the visual math diagrams. We also propose a CoT evaluation strategy
for finer-grained assessment of the intermediate reasoning process of MLLMs. By evaluating various
closed-source and open-source models, MATHVERSE unveils that most existing MLLMs struggle to
accurately understand mathematical diagrams, and even attain higher results without visual input.
This indicates the potential of developing more advanced math-specific vision encoders for stronger
multi-modal mathematical reasoning.
**References**
[1] Alayrac, J.B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A.,
Millican, K., Reynolds, M., et al.: Flamingo: a visual language model for few-shot learning.
Advances in Neural Information Processing Systems 35, 23716–23736 (2022)
[2] Amini, A., Gabriel, S., Lin, P., Koncel-Kedziorski, R., Choi, Y., Hajishirzi, H.: Mathqa:
Towards interpretable math word problem solving with operation-based formalisms. arXiv
preprint arXiv:1905.13319 (2019)
[3] Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou, C., Zhou, J.: Qwen-vl: A
versatile vision-language model for understanding, localization, text reading, and beyond. arXiv
preprint arXiv:2308.12966 (2023)
[4] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A.,
Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: Advances
in neural information processing systems. pp. 1877–1901 (2020)
-----
[5] Cao, J., Xiao, J.: An augmented benchmark dataset for geometric question answering through
dual parallel text encoding. In: Proceedings of the 29th International Conference on Computational Linguistics. pp. 1511–1520 (2022)
[6] Chen, G., Zheng, Y.D., Wang, J., Xu, J., Huang, Y., Pan, J., Wang, Y., Wang, Y., Qiao, Y.,
Lu, T., et al.: Videollm: Modeling video sequence with large language models. arXiv preprint
arXiv:2305.13292 (2023)
[7] Chen, J., Li, T., Qin, J., Lu, P., Lin, L., Chen, C., Liang, X.: Unigeo: Unifying geometry logical
reasoning via reformulating mathematical expression. arXiv preprint arXiv:2212.02746 (2022)
[8] Chen, J., Li, T., Qin, J., Lu, P., Lin, L., Chen, C., Liang, X.: Unigeo: Unifying geometry logical
reasoning via reformulating mathematical expression. ArXiv abs/2212.02746 (2022)
[9] Chen, J., Tang, J., Qin, J., Liang, X., Liu, L., Xing, E.P., Lin, L.: Geoqa: A geometric question answering benchmark towards multimodal numerical reasoning. arXiv preprint arXiv:2105.14517
(2021)
[10] Chen, J., Tang, J., Qin, J., Liang, X., Liu, L., Xing, E.P., Lin, L.: Geoqa: A geometric question
answering benchmark towards multimodal numerical reasoning. ArXiv abs/2105.14517 (2021),
```
https://api.semanticscholar.org/CorpusID:235253782
```
[11] Chen, J., Li, D.Z.X.S.X., Zhang, Z.L.P., Xiong, R.K.V.C.Y., Elhoseiny, M.: Minigpt-v2: Large
language model as a unified interface for vision-language multi-task learning. arXiv preprint
arXiv:2310.09478 (2023)
[12] Chen, L., Li, J., wen Dong, X., Zhang, P., He, C., Wang, J., Zhao, F., Lin, D.: Sharegpt4v:
Improving large multi-modal models with better captions. ArXiv abs/2311.12793 (2023),
```
https://api.semanticscholar.org/CorpusID:265308687
```
[13] Chiang, W.L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang, H., Zheng, L., Zhuang, S., Zhuang, Y.,
Gonzalez, J.E., Stoica, I., Xing, E.P.: Vicuna: An open-source chatbot impressing gpt-4 with
[90%* chatgpt quality. https://lmsys.org/blog/2023-03-30-vicuna/ (March 2023)](https://lmsys.org/blog/2023-03-30-vicuna/)
[14] Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J.,
Hilton, J., Nakano, R., et al.: Training verifiers to solve math word problems. arXiv preprint
arXiv:2110.14168 (2021)
[15] Dai, W., Li, J., Li, D., Tiong, A.M.H., Zhao, J., Wang, W., Li, B., Fung, P., Hoi, S.: Instructblip:
Towards general-purpose vision-language models with instruction tuning (2023)
[16] Dong, X., Zhang, P., Zang, Y., Cao, Y., Wang, B., Ouyang, L., Wei, X., Zhang, S., Duan,
H., Cao, M., et al.: Internlm-xcomposer2: Mastering free-form text-image composition and
comprehension in vision-language large model. arXiv preprint arXiv:2401.16420 (2024)
[17] Fu, C., Chen, P., Shen, Y., Qin, Y., Zhang, M., Lin, X., Yang, J., Zheng, X., Li, K., Sun, X., Wu,
Y., Ji, R.: Mme: A comprehensive evaluation benchmark for multimodal large language models.
arXiv preprint arXiv:2306.13394 (2023)
[18] Fu, C., Zhang, R., Lin, H., Wang, Z., Gao, T., Luo, Y., Huang, Y., Zhang, Z., Qiu, L., Ye, G.,
et al.: A challenger to gpt-4v? early explorations of gemini in visual expertise. arXiv preprint
arXiv:2312.12436 (2023)
[19] Gao, J., Pi, R., Zhang, J., Ye, J., Zhong, W., Wang, Y., Hong, L., Han, J., Xu, H., Li, Z., et al.:
G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint
arXiv:2312.11370 (2023)
[20] Gao, P., Han, J., Zhang, R., Lin, Z., Geng, S., Zhou, A., Zhang, W., Lu, P., He, C., Yue, X., Li,
H., Qiao, Y.: Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint
arXiv:2304.15010 (2023)
[21] Gao, P., Zhang, R., Liu, C., Qiu, L., Huang, S., Lin, W., Zhao, S., Geng, S., Lin, Z., Jin, P.,
et al.: Sphinx-x: Scaling data and parameters for a family of multi-modal large language models.
arXiv preprint arXiv:2402.05935 (2024)
-----
[22] Gemini Team, G.: Gemini: a family of highly capable multimodal models. arXiv preprint
arXiv:2312.11805 (2023)
[23] Guo, Z., Zhang, R., Zhu, X., Tang, Y., Ma, X., Han, J., Chen, K., Gao, P., Li, X., Li, H.,
et al.: Point-bind & point-llm: Aligning point cloud with multi-modality for 3d understanding,
generation, and instruction following. arXiv preprint arXiv:2309.00615 (2023)
[24] Han, J., Zhang, R., Shao, W., Gao, P., Xu, P., Xiao, H., Zhang, K., Liu, C., Wen, S., Guo, Z.,
et al.: Imagebind-llm: Multi-modality instruction tuning. arXiv preprint arXiv:2309.03905
(2023)
[25] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., Steinhardt, J.: Measuring
massive multitask language understanding. Proceedings of the International Conference on
Learning Representations (ICLR) (2021)
[26] Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., Steinhardt, J.:
Measuring mathematical problem solving with the math dataset. NeurIPS (2021)
[27] Hong, Y., Zhen, H., Chen, P., Zheng, S., Du, Y., Chen, Z., Gan, C.: 3d-llm: Injecting the 3d
world into large language models. Advances in Neural Information Processing Systems 36
(2024)
[28] Jiang, A.Q., Sablayrolles, A., Roux, A., Mensch, A., Savary, B., Bamford, C., Chaplot, D.S.,
de Las Casas, D., Hanna, E.B., Bressand, F., Lengyel, G., Bour, G., Lample, G., Lavaud, L.R.,
Saulnier, L., Lachaux, M., Stock, P., Subramanian, S., Yang, S., Antoniak, S., Scao, T.L., Gervet,
T., Lavril, T., Wang, T., Lacroix, T., Sayed, W.E.: Mixtral of experts. Arxiv 2401.04088 (2024)
[29] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S.,
Berg, A.C., Lo, W.Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023)
[30] Li, B., Zhang, K., Zhang, H., Guo, D., Zhang, R., Li, F., Zhang, Y., Liu, Z., Li, C.:
Llava-next: Stronger llms supercharge multimodal capabilities in the wild. https://llavavl.github.io/blog/2024-05-10-llava-next-stronger-llms/ (2024)
[31] Li, B., Zhang, Y., Chen, L., Wang, J., Pu, F., Yang, J., Li, C., Liu, Z.: Mimic-it: Multi-modal
in-context instruction tuning. arXiv preprint arXiv:2306.05425 (2023)
[32] Li, B., Zhang, Y., Guo, D., Zhang, R., Li, F., Zhang, H., Zhang, K., Li, Y., Liu, Z., Li, C.:
Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326 (2024)
[33] Li, B., Wang, R., Wang, G., Ge, Y., Ge, Y., Shan, Y.: Seed-bench: Benchmarking multimodal
llms with generative comprehension. ArXiv abs/2307.16125 (2023)
[34] Li, F., Zhang, R., Zhang, H., Zhang, Y., Li, B., Li, W., Ma, Z., Li, C.: Llava-nextinterleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint
arXiv:2407.07895 (2024)
[35] Li, J., Li, D., Xiong, C., Hoi, S.: Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In: International Conference on Machine
Learning. pp. 12888–12900. PMLR (2022)
[36] Lin, Z., Liu, C., Zhang, R., Gao, P., Qiu, L., Xiao, H., Qiu, H., Lin, C., Shao, W., Chen, K.,
et al.: Sphinx: The joint mixing of weights, tasks, and visual embeddings for multi-modal large
language models. arXiv preprint arXiv:2311.07575 (2023)
[37] Liu, H., Li, C., Li, Y., Lee, Y.J.: Improved baselines with visual instruction tuning (2023)
[38] Liu, H., Li, C., Li, Y., Li, B., Zhang, Y., Shen, S., Lee, Y.J.: Llava-next: Improved rea[soning, ocr, and world knowledge (January 2024), https://llava-vl.github.io/blog/](https://llava-vl.github.io/blog/2024-01-30-llava-next/)
```
2024-01-30-llava-next/
```
[39] Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. In: NeurIPS (2023)
-----
[40] Liu, Y., Duan, H., Zhang, Y., Li, B., Zhang, S., Zhao, W., Yuan, Y., Wang, J., He, C.,
Liu, Z., et al.: Mmbench: Is your multi-modal model an all-around player? arXiv preprint
arXiv:2307.06281 (2023)
[41] Lu, P., Bansal, H., Xia, T., Liu, J., yue Li, C., Hajishirzi, H., Cheng, H., Chang, K.W., Galley,
M., Gao, J.: Mathvista: Evaluating math reasoning in visual contexts with gpt-4v, bard, and
other large multimodal models. ArXiv abs/2310.02255 (2023)
[42] Lu, P., Gong, R., Jiang, S., Qiu, L., Huang, S., Liang, X., Zhu, S.C.: Inter-gps: Interpretable
geometry problem solving with formal language and symbolic reasoning. arXiv preprint
arXiv:2105.04165 (2021)
[43] Lu, P., Gong, R., Jiang, S., Qiu, L., Huang, S., Liang, X., Zhu, S.C.: Inter-gps: Interpretable
geometry problem solving with formal language and symbolic reasoning. In: Annual Meeting
[of the Association for Computational Linguistics (2021), https://api.semanticscholar.](https://api.semanticscholar.org/CorpusID:234337054)
```
org/CorpusID:234337054
```
[44] Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., Zhang, D.:
Wizardmath: Empowering mathematical reasoning for large language models via reinforced
evol-instruct. arXiv preprint arXiv:2308.09583 (2023)
[[45] OpenAI: Chatgpt. https://chat.openai.com (2023)](https://chat.openai.com)
[46] OpenAI: Gpt-4 technical report. ArXiv abs/2303.08774 (2023)
[47] OpenAI: GPT-4V(ision) system card (2023), `https://openai.com/research/`
```
gpt-4v-system-card
```
[48] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal,
S., Slama, K., Gray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A.,
Welinder, P., Christiano, P., Leike, J., Lowe, R.: Training language models to follow instructions
with human feedback. In: Advances in Neural Information Processing Systems (2022)
[49] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell,
A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models
from natural language supervision. In: International Conference on Machine Learning (2021),
```
https://api.semanticscholar.org/CorpusID:231591445
```
[50] Roy, S., Roth, D.: Solving general arithmetic word problems. ArXiv abs/1608.01413 (2016),
```
https://api.semanticscholar.org/CorpusID:560565
```
[51] Seo, M., Hajishirzi, H., Farhadi, A., Etzioni, O., Malcolm, C.: Solving geometry problems:
Combining text and diagram interpretation. In: Proceedings of the 2015 conference on empirical
methods in natural language processing. pp. 1466–1476 (2015)
[52] Su, Y., Lan, T., Li, H., Xu, J., Wang, Y., Cai, D.: Pandagpt: One model to instruction-follow
them all. arXiv preprint arXiv:2305.16355 (2023)
[53] Sun, K., Pan, J., Ge, Y., Li, H., Duan, H., Wu, X., Zhang, R., Zhou, A., Qin, Z., Wang, Y., et al.:
Journeydb: A benchmark for generative image understanding. Advances in Neural Information
Processing Systems 36 (2024)
[54] Team, I.: Internlm: A multilingual language model with progressively enhanced capabilities
(2023)
[55] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T., Rozière, B.,
Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient foundation language models.
arXiv preprint arXiv:2302.13971 (2023)
[56] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra,
S., Bhargava, P., Bhosale, S., et al.: Llama 2: Open foundation and fine-tuned chat models.
arXiv preprint arXiv:2307.09288 (2023)
-----
[57] Wang, K., Ren, H., Zhou, A., Lu, Z., Luo, S., Shi, W., Zhang, R., Song, L., Zhan, M., Li, H.:
Mathcoder: Seamless code integration in LLMs for enhanced mathematical reasoning. In: The
[Twelfth International Conference on Learning Representations (2024), https://openreview.](https://openreview.net/forum?id=z8TW0ttBPp)
```
net/forum?id=z8TW0ttBPp
```
[58] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V., Zhou, D., et al.:
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
Information Processing Systems 35, 24824–24837 (2022)
[59] Xu, P., Shao, W., Zhang, K., Gao, P., Liu, S., Lei, M., Meng, F., Huang, S., Qiao, Y., Luo, P.:
Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv
preprint arXiv:2306.09265 (2023)
[60] Xu, R., Wang, X., Wang, T., Chen, Y., Pang, J., Lin, D.: Pointllm: Empowering large language
models to understand point clouds. arXiv preprint arXiv:2308.16911 (2023)
[61] Ye, Q., Xu, H., Xu, G., Ye, J., Yan, M., Zhou, Y., Wang, J., Hu, A., Shi, P., Shi, Y., Jiang, C.,
Li, C., Xu, Y., Chen, H., Tian, J., Qian, Q., Zhang, J., Huang, F.: mplug-owl: Modularization
empowers large language models with multimodality (2023)
[62] Ye, Q., Xu, H., Ye, J., Yan, M., Hu, A., Liu, H., Qian, Q., Zhang, J., Huang, F., Zhou, J.:
mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration
(2023)
[63] Yue, X., Ni, Y., Zhang, K., Zheng, T., Liu, R., Zhang, G., Stevens, S., Jiang, D., Ren, W., Sun,
Y., Wei, C., Yu, B., Yuan, R., Sun, R., Yin, M., Zheng, B., Yang, Z., Liu, Y., Huang, W., Sun, H.,
Su, Y., Chen, W.: Mmmu: A massive multi-discipline multimodal understanding and reasoning
benchmark for expert agi. arXiv preprint arXiv:2311.16502 (2023)
[64] Yue, X., Qu, X., Zhang, G., Fu, Y., Huang, W., Sun, H., Su, Y., Chen, W.: Mammoth: Building
math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653
(2023)
[65] Zhang, H., Li, X., Bing, L.: Video-llama: An instruction-tuned audio-visual language model for
video understanding. arXiv preprint arXiv:2306.02858 (2023)
[66] Zhang, R., Han, J., Zhou, A., Hu, X., Yan, S., Lu, P., Li, H., Gao, P., Qiao, Y.: LLaMA-adapter:
Efficient fine-tuning of large language models with zero-initialized attention. In: The Twelfth
[International Conference on Learning Representations (2024), https://openreview.net/](https://openreview.net/forum?id=d4UiXAHN2W)
```
forum?id=d4UiXAHN2W
```
[67] Zhang, R., Hu, X., Li, B., Huang, S., Deng, H., Li, H., Qiao, Y., Gao, P.: Prompt, generate, then
cache: Cascade of foundation models makes strong few-shot learners. CVPR 2023 (2023)
[68] Zhang, R., Jiang, Z., Guo, Z., Yan, S., Pan, J., Dong, H., Gao, P., Li, H.: Personalize segment
anything model with one shot. ICLR 2024 (2023)
[69] Zhang, R., Wang, L., Qiao, Y., Gao, P., Li, H.: Learning 3d representations from 2d pre-trained
models via image-to-point masked autoencoders. CVPR 2023 (2023)
[70] Zhang, R., Wei, X., Jiang, D., Zhang, Y., Guo, Z., Tong, C., Liu, J., Zhou, A., Wei, B., Zhang, S.,
et al.: Mavis: Mathematical visual instruction tuning. arXiv preprint arXiv:2407.08739 (2024)
[71] Zhou, A., Wang, K., Lu, Z., Shi, W., Luo, S., Qin, Z., Lu, S., Jia, A., Song, L., Zhan, M.,
et al.: Solving challenging math word problems using gpt-4 code interpreter with code-based
self-verification. arXiv preprint arXiv:2308.07921 (2023)
[72] Zhu, D., Chen, J., Shen, X., Li, X., Elhoseiny, M.: Minigpt-4: Enhancing vision-language
understanding with advanced large language models. arXiv preprint arXiv:2304.10592 (2023)
-----
**Appendix Overview**
- Section A: Related work.
- Section B: Additional experimental details.
- Section C: More dataset details.
- Section D: Comparison to current benchmarks.
- Section E: Limitation and future work.
- Section F: Qualitative examples.
**A** **Related Work**
**Multi-modal Large Language Models (MLLMs),** building upon the prevalence of Large Language Models (LLMs) [55, 56, 45, 28, 4] and large vision models [49, 29, 68, 67, 69], have become
increasingly prominent in the field. They extend LLMs to tackle a diverse range of tasks and
domains, including the mainstream 2D images [35, 15, 1, 31] and other modalities, such as 3D
point clouds [23, 60, 27], audio [24, 52], and video [65, 6]. Noteworthy examples like OpenAI’s
GPT-4V [47] and Google’s Gemini [22] exhibit exceptional visual understanding and reasoning capabilities, setting new benchmarks in multi-modal performance. However, their closed-source nature
poses a barrier to the broader application and development of MLLMs. Concurrently, another line of
work is dedicated to exploring advanced MLLMs open-source to the community. Prior efforts like
LLaMA-Adapter [66, 20], LLaVA [39, 38, 37], and MiniGPT-4 [72, 11] leverage a frozen CLIP [49]
model for image encoding, and inject the visual cues into LLaMA [55] for multi-modal instruction
tuning. The subsequent mPLUG-Owl [61, 62], Qwen-VL [3], InternLM-XComposer [16], and
SPHINX [36, 21] further push the frontier of MLLMs in understanding and generalizing across visual
contexts. Despite comprehensive benchmarks [17, 40, 33, 59] on general visual instruction-following
scenarios, the specific potential of MLLMs for visual mathematical problem-solving remains underexplored. In this paper, we introduce the MATHVERSE benchmark to comprehensively evaluate
the visual mathematical reasoning and diagram understanding skills of MLLMs, providing unique
perspectives for future research directions.
**Mathematical Reasoning Benchmarks** have emerged as a significant area of focus, posing
considerable challenges for large foundational models, e.g., LLMs and MLLMs. Initially, datasets in
this realm are designed to address basic algebraic [26] and arithmetic [50] word problems, which
are relatively limited in scope and volume. Subsequent efforts, including MATH [26], GSM8K [14],
and MMLU [25], expand the range and quality of textual mathematical problems. These datasets
feature a broader spectrum of difficulties, establishing a robust benchmark for the evaluation of
general and math-specific LLMs [71, 64, 57, 19, 44]. Besides the text-only assessment, there is a
growing demand for comparable, high-quality benchmarks for evaluating mathematical problemsolving in visual contexts, with the rapid progress of MLLMs. There are prior attempts, such as
GeoQA [10], UniGeo [8], and Geometry3K [43], which focused exclusively on geometric problems.
The recently proposed MathVista [41] broadens the scope to incorporate a variety of multi-modal
tasks involving mathematical reasoning, and MMMU [63] covers college-level questions demanding
intricate, domain-specific knowledge. However, our analysis identifies three main shortcomings
within the current visual math benchmarks, as elaborated in Section 1 of the main paper. Therefore, we
propose MATHVERSE specialized in the multi-modal mathematical evaluation of MLLMs, comprising
twelve subjects, six problem versions, and 20K test samples. Our objective is to thoroughly investigate
whether and how much MLLMs genuinely interpret visual diagrams for mathematical reasoning.
**B** **Additional Experimental Details**
**Model Sources.** For different MLLMs, we select their latest models and best-performing configurations for evaluation to fully reveal their visual mathematical proficiency. Table 4 presents the release
time and model sources of MLLMs used in MATHVERSE.
-----
Table 4: The Release Time and Model Source of MLLMs Used in MATHVERSE.
**Release**
**Model** **Source**
**Time**
ChatGPT [48] 2022-11 `https://platform.openai.com/`
GPT-4 [46] 2023-03 `https://platform.openai.com/`
```
https://help.aliyun.com/zh/
dashscope/developer-reference/
vl-plus-quick-start
```
Qwen-VL-Plus [3] 2023-11
Gemini-Pro [22] 2023-12 `https://ai.google.dev/`
```
https://help.aliyun.com/zh/
dashscope/developer-reference/
vl-plus-quick-start
```
Qwen-VL-Max [3] 2024-01
GPT-4V [47] 2023-09 `https://platform.openai.com/`
```
https://github.com/OpenGVLab/
LLaMA-Adapter/tree/main/llama_
adapter_v2_multimodal7b
```
LLaMA-Adapter V2 [20] 2023-04
```
https://huggingface.co/
```
LLaVA-1.5 [37] 2023-10
```
liuhaotian/llava-v1.5-13b
https://github.com/
```
MiniGPT-v2 [11] 2023-10
```
Vision-CAIR/MiniGPT-4
https://huggingface.co/
```
mPLUG-Owl2 [62] 2023-11
```
MAGAer13/mplug-owl2-llama2-7b
https://github.com/pipilurj/
```
G-LLaVA [19] 2023-12
```
G-LLaVA/tree/main
```
```
https://github.com/OpenGVLab/
LLaMA-Adapter/tree/main/
imagebind_LLM
```
ImageBind-LLM [24] 2023-05
```
https://huggingface.co/
```
ShareGPT4V [12] 2023-11
```
Lin-Chen/ShareGPT4V-13B
```
```
https://huggingface.co/
Alpha-VLLM/LLaMA2-Accessory/
tree/main/finetune/mm/SPHINX/
SPHINX-v2-1k
https://huggingface.co/
liuhaotian/llava-v1.
6-vicuna-13b
https://huggingface.co/
Alpha-VLLM/LLaMA2-Accessory/
tree/main/finetune/mm/SPHINX/
SPHINX-MoE
https://huggingface.
co/internlm/
internlm-xcomposer2-vl-7b
```
SPHINX-Plus [36] 2023-11
LLaVA-NeXT [38] 2024-01
SPHINX-MoE [21] 2024-01
InternLM-XComposer2 [16] 2024-01
-----
Table 5: Input Prompt of MLLMs for Response Generation. We adopt two different prompts
for the free-form and multiple-choice questions. Note that these prompts are used for five problem
versions except for the Vision-only version.
**Question** **Prompt**
Please first conduct reasoning, and then answer the
question and provide the final value, e.g., 1, 2.5, 300,
at the end.
– Question: {question}
Please first conduct reasoning, and then answer the
question and provide the correct option letter, e.g.,
A, B, C, D, at the end.
– Question: {question}
Free-form Question
Multiple-choice Question
Table 6: Input Prompt for Vision-only Problems. Especially for the Vision-only version without
textual input, we add “According to the question shown in the image” at the beginning of the prompt,
and remove the “Question:” at the end.
**Question** **Prompt**
According to the question shown in the image, please
first conduct reasoning, and then answer the question
and provide the final value, e.g., 1, 2.5, 300, at the
end.
According to the question shown in the image, please
first conduct reasoning, and then answer the question
and provide the correct option letter, e.g., A, B, C, D,
at the end.
Free-form Question
Multiple-choice Question
**Prompt for Response Generation.** We adopt two types of prompts respectively for the free-form
and multiple-choice questions, as shown in Table 5. We inspire the Chain-of-Thought (CoT) reasoning
capabilities of MLLMs by using the phrase “first conduct reasoning”. Especially for the Vision-only
problem version in Table 6, we add “According to the question shown in the image” at the beginning
to remind MLLMs to read the questions rendered within diagrams, where the textual input for
MLLMs only contains the prompt itself.
**Prompt for the CoT Evaluation.** Our proposed CoT evaluation contains two steps, i.e., key-step
extraction and multi-step scoring, which prompt GPT-4 [46] and GPT-4V [47], respectively. The
input configuration is listed in Table 7. We utilize the text-only GPT-4 in the first step to extract
multiple key steps within the model’s unstructured output, without feeding the question information.
In the second step, we input the extracted key-step reasoning and all the available content related
to the problem into GPT-4V, allowing for a holistic assessment, including diagram interpretation,
logical reasoning, and numerical computation. In Figure 11, we showcase the manual annotation for
critical information within functional diagrams, e.g., function expression and properties. This assists
GPT-4V in evaluating the visual perception accuracy of MLLMs for function graphs.
**Human Performance Assessment.** We recruit ten qualified college students specifically for the
evaluation of human performance on MATHVERSE. These individuals are kept separate from the
data curation stage, eliminating the possibility of them encountering the solutions beforehand. We
allocate to each student the questions from a specific problem version. This strategy is to prevent
them from gaining additional information from another version to answer questions, e.g., leveraging
the textual Implicit Property from the Text-lite version to solve Text-intensive problems. They are
asked to directly provide the final answer without detailed reasoning. Therefore, we do not report the
CoT evaluation results for human performance, alongside the ‘Random Chance’ baseline.
-----
Table 7: Configuration for the CoT Evaluation Strategy. We conduct two evaluation phases,
respectively by prompting the text-only GPT-4 [46] and GPT-4V [47]. The symbol ‘XXX’ denotes
the given one-shot sample, abbreviated for brevity. The ‘Annotation’ (represented in grey) in the
second phase is only required for function problems.
**Phase** **Input** **Prompt**
I will give you a detailed solving procedure or a
single answer for a math problem.
If it is a procedure, you need to extract the key
solution steps and list them accordingly in markdown syntax. If it is just a single answer, output
the answer directly.
Here are examples:
– Model output: XXX
– Extracted: 1. XXX 2. XXX 3. XXX
– Model output: 2.2
– Extracted: The single answer is 2.2
Here is what you need to extract:
– Model output: {model output}
– Extracted:
I will first give you a visual math problem, including the question, diagram, ground-truth answer,
and detailed annotation of the diagram, and then
give you a model output containing multiple key
solution steps.
Please think step by step and output the Average
score, along with the Final answer score in the end,
as described below:
– Average score: Evaluate, based on the given question, answer, diagram, and diagram annotation,
whether each solution step is correct in logical
reasoning, visual perception, and numerical computation, with an incorrect score of 0 and a correct
score of 1. Then, calculate the average score of
multiple steps.
– Final answer score: Match the model’s final answer with the ground truth answer, scoring 1 if it
matches and 0 if it doesn’t.
– If the model output only includes a single step or
answer, the Average score and Final answer score
are the same.
– Question: {question}
– Ground-truth answer: {answer}
– Diagram annotation: {annotation}
– Model output: {extracted steps}
– Average score:
– Final answer score:
**Key-step**
**Extraction** Model Output
(GPT-4)
Extracted Steps
Question
Diagram
Answer
Annotation
**Multi-step**
**Scoring**
(GPT-4V)
-----
**Annotation:**
A piecewise function 𝑓:
when 𝑥<= 3, 𝑓𝑥= −𝑥+ 1,
when 𝑥> 3, 𝑓(𝑥) = 𝑥−5.
**Annotation:** **Annotation:**
A quadratic function 𝑦= 𝑥−2 [!] −4,
goes through (0, 0), (4, 0),
opens upward with a vertex (2, 4).
A U-shape curve,
passes the origin (0, 0),
opens upward.
**Annotation:** **Annotation:**
A linear function 𝑦= 0.5𝑥+ 2,
passes (-4, 0), (0, 2) and (2, 3),
the shaded region is above the line.
Two quadratic functions going through (1, 0) and (7, 0),
𝑓(𝑥) = −(1/3)(𝑥−1)(𝑥−7) in blue with a vertex is (4, 3),
𝑔(𝑥) = −(2/3)(𝑥−1)(𝑥−7) in dashed red with a vertex is (4, 6).
Figure 11: Manual Annotations for Function Problems in MATHVERSE. We provide detailed
annotations, e.g., function expression and properties, for the diagrams of 534 function problems,
which benefits the accuracy of GPT-4V [47] for CoT evaluation.
**C** **More Dataset Details**
**C.1** **Data Curation**
This paper engages twelve expert annotators for data curation, consisting of senior undergraduate and
graduate students from across the globe with a strong background in science. In collaboration with
the authors, they are required to mainly complete five tasks concerning data collection, categorization,
quality review, problem version transformation, and function diagram annotation.
**Data Collection.** We comprehensively collect visual math problems from existing datasets [43, 9,
51] and public question repositories[1][,][2][,][3]. We specifically select high-quality plane geometric problems
from current benchmarks, which showcase various question types, moderate question length, diverse
diagram styles, and appropriate solving difficulty. For the manually collected problems of three
subjects (plane geometry, solid geometry, and functions), we apply the Mathpix tool[4] to accurately
extract the question texts, diagrams, explanations, and answers from the website. We strictly comply
with copyright and licensing rules, ensuring that we refrain from using data from sites that forbid
copying and redistribution. After the initial collection, we obtain around 3.5K visual math problems,
with 1.5K from existing datasets and 2K newly collected.
1https://homework.study.com
2https://www.ixl.com/math
3https://mathspace.co/us
4https://mathpix.com
-----
Descriptive Information Original Question: Text Dominant:
Add In ⊙G, CE and AB are both diameters.
∠AGD is a right angle and ∠AGC is 60°.
∠AGC is 60°. Find ∠BGE. Find ∠BGE.
Choices: A:60° B:120° C:240° D:360°. Choices: A:60° B:120° C:240° D:360°.
Add A polynomial function is graphed, which
goes through (-2, 1), (0, 2), and (3, 1).
Determine the least possible degree of Determine the least possible degree of
the polynomial function shown. the function.
Add The figure shows ¾ of a cylinder. The
height is 15 centimeters and the radius
Find the volume of the figure shown is 5 centimeters.
with a radius of 5 centimeters, correct Find the volume of the figure shown,
to two decimal places. correct to two decimal places.
Figure 12: Manual Annotations for Descriptive Information in MATHVERSE. For some collected
problems, we are required to supplement additional Descriptive Information (highlighted in red) to
distinguish the Text-dominant version.
**Data Categorization and Review.** We first ask the human annotators to categorize the problems
into three primary subjects, i.e., plane geometry, solid geometry, and functions. Within each subject,
according to the definitions in Section C.2, the math problems are further divided into twelve finegrained categories. At the same time, we meticulously review the collected dataset. We manually
rectify the problems with incorrect answers and discard the problems with multiple diagrams, visual
solutions, and too much similar content to others. Finally, 2,612 high-quality math problems with
paired diagrams are preserved for MATHVERSE, spanning diverse subjects and subfields.
**Transformation of Problem Versions.** Given the three types of textual information within questions, human annotators rigorously transform each problem into six different versions as discussed
in Section 2.2 of the main paper. We utilize Microsoft PowerPoint to annotate the diagrams in the
Vision-dominant version, and employ Matplotlib to render the questions onto the diagrams in the
Vision-only version. As illustrated in Figure 12, for problems with minimal Descriptive Information,
we manually enhance the question text with additional contextual description about the diagram to
differentiate the Text-dominant version. In the case of questions in Figure 13, where the Essential
_Condition has been fully depicted in the diagrams, we remove some of this content from the diagram_
and incorporate it into the text to mark the Vision-dominant version.
**C.2** **Subject and Subfield Definition**
The visual math problems within MATHVERSE encompass three primary subjects, plane geometry,
solid geometry, and functions, alongside twelve finer-grained subfields, which comprehensively
evaluate the diagram understanding and mathematical reasoning capabilities of MLLMs.
**Plane Geometry** is a fundamental area that explores the properties and relations of points, lines,
and surfaces in a two-dimensional plane. This subject delves into concepts such as angles, triangles,
circles, and polygons, offering a rich context for assessing the spatial comprehension and logical
deduction skills of MLLMs. We divide it into five subfields, as exemplified in Figure 14:
-----
Original Question: Essential Condition Text Dominant:
Delete x
Add
In the right triangle, an acute angle is 70
Essential
Condition degrees and the hypotenuse is x length.
Find x to the nearest hundredth. Find x to the nearest hundredth.
Choices: A:5.13 B:5.46 C:15.96 D:43.86. Choices: A:5.13 B:5.46 C:15.96 D:43.86.
Delete 6
Add
Essential In trapezoid JKLM, JM = 6, KL = 6, and
Find ∠K. Condition ∠JML is 80 degrees. Find ∠K.
Choices: A:6° B:60° C:100° D:180°. Choices: A:6° B:60° C:100° D:180°.
Figure 13: Manual Modification for Textual Essential Condition in MATHVERSE. For the
original problems shown, we transfer some of the Essential Condition from diagrams to question
texts (highlighted in green) to mark the Vision-dominant version.
- Length focuses on the measurement and comparison of distances between points. This
subfield includes understanding the properties of lines, segments, and their use in determining the perimeters of geometric shapes, which is foundational for MLLMs to solve plane
geometry problems.
- Area examines the size of two-dimensional surfaces. It encompasses calculating the areas
of various shapes, such as triangles, rectangles, circles, and more complex polygons, by
applying specific formulas and principles, which is crucial for comprehending the concept
of space within geometry.
- Angle involves the study of angles and their properties, including different types of angles
(acute, right, and obtuse), angle measurement, and the relationships between angles, particularly in polygons. This subfield demands the advanced spatial perception capacity of
MLLMs.
- Analytic Geometry, also known as coordinate geometry, merges algebra and geometry to
solve geometric problems using coordinate systems, exploring the calculation and reasoning
of equations for geometric shapes. MLLMs are evaluated on their coordinate identification
and algebraic capabilities.
- Applied Geometry relate to the application of geometric principles to solve real-world and
theoretical problems. It challenges MLLMs to first understand the background information
within questions, and apply their knowledge of lengths, areas, angles, and analytic geometry
for problem-solving.
**Solid Geometry** focuses on the study of three-dimensional objects that have depth, length, and
width, thereby offering a more complex and enriched exploration of spatial structures. This subject
investigates a variety of shapes such as cubes, cylinders, spheres, and pyramids, and assesses MLLMs
to tackle questions concerning the volume, surface area, and geometric properties of these solids.
This subject contains three subfields, as exemplified in Figure 15:
- Length, extending from the 2D counterpart, focuses on measuring the edges and curves
that define three-dimensional objects. It involves determining the linear distance between
-----
**Plane Geometry:**
**Length**
**Question:**
As shown in the figure,
passing point P to draw
the tangent PC of circle
O, if AO = OB = PB =
1.0, then the length of
PC is (). Choices: A:1
B:√3 C:2 D:√5.
**Answer: B:√3**
**Applied**
**Question:**
A fighter jet, flying at an
altitude of 2000 m is …
find the distance covered
by the jet in one minute.
Round your answer to
the nearest meter.
**Plane Geometry:Area**
**Length**
**Question:**
As shown in the figure,
circle O is tangent to AB
at point C, angle BCE =
60.0, DC = 6.0, DE = 4.0,Question:
As shown in the figure,then 𝑆!"#$ is ().
passing point P to drawChoices: A:6 √5 B:6 √3
the tangent PC of circleC:6√2 D:6.
O, if AO = OB = PB =Answer: B:6√3
1.0, then the length of
PC is (). Choices: A:1
**Examples of Five Subfields in Plane GeometryB:Answer:√3 C:2 D: B:√5√3.**
and Applied Geometry problems. We showcase the Text-lite version.
**Angle**
**Area**
**Question:**
Consider the points on
the diagram P, M, and S,
where S denotes the
southQuestion:cardinal point.
Using the diagram, find
As shown in the figure,the true bearing of P
circle O is tangent to ABfrom M.
at point C, angle BCE =
60.0, DC = 6.0, DE = 4.0,Answer: 230[∘]T
then 𝑆!"#$ is ().
Choices: A:6 √5 B:6 √3
C:6√2 D:6.
**Answer: B:6√3**
**Analytic**
**Angle**
**Question:**
What are the coordinates
of point A? Enter an
exact value or round to
**Question:**
the nearest hundredth.
Consider the points on
the diagram P, M, and S,
where S denotes the
**Answer:south** cardinal([√2]2 [, √2]2 point.[)]
Using the diagram, find
the true bearing of P
from M.
**Answer: 230[∘]T**
**Applied**
**Analytic**
**Question:**
A fighter jet, flying at an
altitude of 2000 m is …
find the distance covered
**Question:**
by the jet in one minute.
Round your answer to
the nearest meter.What are the coordinates
of point A? Enter an
exact value or round toAnswer: 1688
the nearest hundredth.
**Answer:** ([√2]2 [, √2]2 [)]
**Length** **Solid Geometry:Area** **Volume**
**Length** **Area** **Volume**
**Question:** **Question:** **Question:**
The perpendicularheight of the coneis 12 m. Hence, The length of CD sideis 12 cm. Calculate thearea of the triangular The length of the pipeis 13 cm. Calculatethe weight of the pipeif 1 𝑐𝑚['] of the metal
**Answer:find the length ofthe diameter of thecone's base. 10 m** divider correct to twodecimal places.Answer:Question:The perpendicularheight of the coneis 12 m. Hence, 124.85 𝑐𝑚[&] The length of CD sideis 12 cm. Calculate thearea of the triangularAnswer:Question:weighs 5.3 g, givingyour answer correct toone decimal place. 63.3 g **Question:The length of the pipeis 13 cm. Calculatethe weight of the pipeif 1 𝑐𝑚[']** of the metal
find the length ofthe diameter of thecone's base. divider correct to twodecimal places. weighs 5.3 g, givingyour answer correct to
one decimal place.
**Answer: 10 m** **Answer: 124.85 𝑐𝑚[&]** **Answer: 63.3 g**
**Functions:**
**Coordinate** **Property** **Expression** **Applied**
**Answer:**
1688
Figure 15:Coordinate
**Question:• Area**
10 ⋅𝑓7 + 9 ⋅𝑔(−1) =
performance.
**Answer:• Volume −1**
**Examples of Three Subfields in Solid GeometryProperty**
problems. We showcase the Text-lite version.Functions:
**Coordinate**
**Question:**
Is the discriminant
of 𝑓 positive, zero,
or negative?
**Question:Choices: A:Positive**
B:Zero C:Negative.
10 ⋅𝑓7 + 9 ⋅𝑔(−1) =Answer: B: Zero
**Answer: −1**
**Expression**
**Property**
points in space, the perimeters of bases of solids, and the height or depth of objects. This
measurement is a foundational element for MLLMs in analyzing geometric solids.
encompasses the calculation of the total area covered by the outer surfaces of solids.Question:
This normally requires MLLMs to break down complex shapes into several simpler components for area calculation in plane geometry, assessing their spatial and logical reasoningWrite the equation for g(x).
**Question:**
Is the discriminant
pertains to measuring the space enclosed within three-dimensional objects. ThisAnswer:ofor negative? 𝑓 positive, zero, 63.3 g
demands MLLMs to precisely identify the geometric solids and apply accurate formulasChoices: A:Positive
to calculate the volume, which evaluates their mathematical knowledge application andB:Zero C:Negative.
**Answer:** B: Zero
**Applied**
**Expression**
**Question:**
The function A models
the rectangle's area (in
square meters) ... Which
of these statements areQuestion:
true? Choices: A:Greater
width always relates …
Write the equation for g(x).Answer: D:When …
**Answer: 63.3 g**
**Functions** involve analyzing mathematical functions to understand the relationship between variables. These challenges range from simple tasks, like calculating a function value for a given input,
to more complex scenarios, such as exploring the behavior and representation of various function
types. We evaluate MLLMs by four types of function problems, exemplified in Figure 16:
- Function Coordinate focuses on interpreting and extracting coordinate-level information
from graphical representations of functions. It includes tasks such as identifying specific
coordinate values of points on the graph and observing intersection points between functions
and axes, which test the MLLM’s basic proficiency in functional visual perception.
-----
**Functions:**
**Property**
**Question:**
Is the discriminant
of 𝑓 positive, zero,
or negative?
Choices: A:Positive
B:Zero C:Negative.
**Expression**
**Question:**
Write the equation for g(x).
**Answer: 63.3 g**
**Applied**
**Question:**
The function A models
the rectangle's area (in
square meters) ... Which
of these statements are
true? Choices: A:Greater
width always relates …
**Answer: D:When …**
**Answer:**
B: Zero
**Coordinate**
**Question:**
10 ⋅𝑓7 + 9 ⋅𝑔(−1) =
**Answer: −1**
Figure 16: Examples of Four Subfields in Functions, spanning Function Coordinate, Property,
Expression, and Applied problems. We showcase the Text-lite version.
- Function Property emphasizes the model’s capacity to discern and deduce the inherent
properties of functions from their graphs, such as symmetry, asymptotes, extrema (maximum
and minimum points), and intervals of increase or decrease. These problems can reveal the
understanding of MLLMs for the deeper characteristics of functions.
- Function Expression refers to the direct analysis using the algebraic expressions of functions, widely including linear, quadratic, polynomial, exponential, logarithmic, and piecewise functions. It challenges MLLMs to extract specific function expressions and apply
transformations, bridging the gap between abstract mathematical reasoning and visual
interpretation.
- Applied Function, similar to the applied geometry problems, requires MLLMs to leverage
their functional knowledge and theorems in real-world scenarios, e.g., modeling economic
data, predicting physical phenomena, and calculating probabilities. This assesses the
MLLM’s capabilities to understand functions in both theoretical and practical contexts.
**C.3** **Detailed Statistics of MATHVERSE**
**More Data Statistics.** In Table 8, we provide a more detailed data statistics of MATHVERSE.
Therein, the 534 newly annotated questions refer to all the function problems, for which we meticulously annotate critical functional information, as depicted in Figure 11. The number of newly
annotated diagrams represents the 5,224 math problems in the Vision-dominant and Vision-only
versions. For these problems, we respectively integrate the Essential Condition and all textual content
with the diagrams. We also list the numbers of multiple-choice answers, where A, B, C, and D are
almost uniformly distributed.
**Problem Length Variance.** In Table 9, we highlight the variance in question and answer lengths
across the five problem versions in MATHVERSE, excluding the Vision-only category due to its
absence of text. For both word and character levels, as we remove the pre-defined textual elements
(Descriptive Information, Implicit Property, and Essential Condition), the maximum and average
lengths of questions decrease accordingly, while the answer lengths remain the same. In Figure 17,
we visualize the word-level variation of question length for the three problem versions: Text Dominant (blue), Text Lite (green), and Vision Dominant (red). By progressively omitting Descriptive
_Information and Essential Condition from the Text-dominant version, we observe a clear downward_
trajectory for the question length distribution and their average values.
-----
Table 8: Statistics of MATHVERSE.
**Statistic** **Number**
Total questions 2,612
- Subjects/subfields 3/12
- Multiple-choice questions 1,631 (62.4%)
- Free-form questions 981 (37.6%)
- Newly collected questions **1,236 (47.3%)**
- Existing-dataset questions 1,376 (52.7%)
- Questions with explanations **1,236 (47.3%)**
- Newly annotated questions **534 (20.4%)**
Multiple-choice question
- Proportion of answer A 585 (22.4%)
- Proportion of answer B 828 (31.7%)
- Proportion of answer C 703 (26.9%)
- Proportion of answer D 444 (17.0%)
- Proportion of answer E&F 52 (2.0%)
**Total test samples** **15,672**
- Newly annotated samples **10,448 (66.7%)**
- Newly annotated diagrams **5,224 (33.3%)**
- Samples of each version 2,612 (16.7%)
Number of unique images 2,420 (92.6%)
Number of unique questions 2,573 (98.5%)
Number of unique answers 847 (32.4%)
Table 9: Length of Different Problem Ver**sions in MATHVERSE.**
**Problem Version** **Word** **Character**
Text Dominant & Text Only
- Maximum question length 203 1,311
- Maximum answer length 17 102
- Average question length 35.7 204.8
- Average answer length 1.4 6.3
Text Lite
- Maximum question length 179 1,173
- Maximum answer length 17 102
- Average question length 22 133.8
- Average answer length 1.4 6.3
Vision Intensive
- Maximum question length 171 1,126
- Maximum answer length 17 102
- Average question length 18.8 116.8
- Average answer length 1.4 6.3
Vision Dominant
- Maximum question length 176 1,132
- Maximum answer length 17 102
- Average question length 17.6 123.5
- Average answer length 1.4 6.3
Distribution of Question Lengths (Word)
|Col1|Col2|Col3|Text Dominant Text Lite Vision Dominant Text Dominant Average Text Lite Average Vision Dominant Average|Text Dominant Text Lite Vision Dominant Text Dominant Average Text Lite Average Vision Dominant Average|
|---|---|---|---|---|
20 40 60 80 100
0.20
0.15
0.10
0.05
0.00
Text Dominant
Text Lite
Vision Dominant
Text Dominant Average
Text Lite Average
Vision Dominant Average
Figure 17: Distribution of Question Length for Three Problem Versions. We exclude the
_Descriptive Information and Essential Condition from the Text-dominant problems, respectively_
creating the Text-lite and Vision-dominant versions.
**D** **Comparison to Current Benchmarks**
In this section, we offer a detailed comparison between MATHVERSE and existing multi-modal
mathematical benchmarks, i.e., geometry-specific benchmarks [9, 5, 7, 42, 51], MathVista [41], and
MMMU [63], from the following four aspects:
**The Investigation of Diagram Interpretation Capacity.** As discussed in Figure 1 of the main
paper, the math problems in most existing datasets contain excessive redundant information in
textual content, which is repetitive to the visual elements in diagrams. This issue enables MLLMs to
potentially bypass the process of visual understanding, and thereby cannot determine whether and how
much MLLMs truly interpret the math diagram. In contrast, our MATHVERSE includes six problem
versions with different information content across text and vision. By comparing the performance
variance between different problem versions, we can thoroughly investigate the mathematical diagram
interpretation capabilities of MLLMs for the first time.
**Evaluation Approach.** Previous benchmarks adopt a simple True or False metric to score the
response from MLLMs, which lacks fine-grained information and intermediate reasoning assessment,
as analyzed in Figure 2 of the main paper. In contrast, MATHVERSE adopts a unique CoT evaluation
-----
**Table QA:**
**Textbook and Science QA:**
**Plot and Chart QA:**
**IQ Test and Synthetic QA:**
**General and Icon QA:**
Figure 18: Diagram Examples of Math-related Tasks in MathVista [41]. These tasks are not
strongly correlated to the mathematical reasoning skills of MLLMs, probably skewing the assessment
emphasis towards visual math problems.
-----
Required Nonlinear Alternating Lagrangian
Calculus
Knowledge: Programming Group Multipliers
Diagram:
Required Cayley Hopcroft Chromatic
Dijkstra Algorithm
Knowledge: Diagram Algorithm Index
Diagram:
Figure 19: Diagram Examples with Required Knowledge in MMMU [63]. These math problems
demand MLLMs to comprehend college-level domain knowledge, potentially hindering them from
fully exerting mathematical reasoning skills.
strategy by examining each crucial solution step within the model output. This approach not only
unveils the CoT reasoning quality of MLLMs, but also provides detailed error analysis, serving as
valuable guidance for future enhancement.
**The Depth and Width in Math Problems.** The geometry-specific benchmarks evaluate only a
limited dimension of mathematical skills in MLLMs. MathVista instead incorporates a variety of mathrelated question-answering tasks, e.g., textbook figures, tables, plots, charts, puzzles, and synthetic
scenes, as exemplified in Figure 18. However, the integration of these peripheral tasks (covering more
than 70%) might divert the focus from the specialized mathematical evaluation of MLLMs. In addition,
MMMU focuses on college-level complexity, requiring advanced domain-specific knowledge, as
depicted in Figure 19. Given this, the lack of profound mathematical theorems would restrict the
performance of MLLMs, biasing the evaluation of logical reasoning and visual perception proficiency.
Therefore, our MATHVERSE concentrates on specialized visual math problems (plane geometry,
solid geometry, and functions) with a moderate difficulty (high-school level), aiming to fully exert
the capabilities of MLLMs.
**Total Volume of Test Samples.** We summarize the size of test instances for different datasets
in Table 10. As demonstrated, our MATHVERSE offers a considerably larger number of samples
than others, nearly three times to MathVista and twenty times to GeoQA+, including meticulously
annotated six versions of visual math problems. This contributes to a comprehensive and robust
evaluation of visual mathematical reasoning capabilities.
Table 10: Number of Test Samples in Different Benchmarks.
MMMU**Benchmark** GEOS Geo3K GeoQA+ MathVista **MATHVERSE**
Math
**Test Samples** 119 601 755 6,141 540 **15,672**
-----
**E** **Limitation and Future Work**
While our MATHVERSE takes a step forward in the field of visual mathematical evaluation for
MLLMs, it is important to recognize several limitations as follows.
We have categorized the math problems in MATHVERSE by various criteria, including subjects,
subfields, and versions featuring differing degrees of multi-modal content. These categorization
approaches evaluate the capabilities of MLLMs from multiple dimensions. Nevertheless, it is also
meaningful to further divide the problems based on their difficulty levels, akin to MATH [26], a
text-only benchmark defining five levels of difficulty. This additional layer of differentiation can
provide deeper insights into the problem-solving abilities of MLLMs across a spectrum of challenges,
which we leave as future work.
The curated dataset in MATHVERSE focuses on math problems in the high school level with moderate
difficulty, which aims to fully demonstrate the mathematical reasoning skills within current MLLMs.
However, with the advancement of architecture and training methodologies, future MLLMs have the
potential to grasp more complex knowledge and theorems across a variety of domains. Therefore,
there is significant value in further augmenting MATHVERSE with problems spanning broader
complexity and disciplines, including those at the college level and within scientific fields. By
transforming the expanded problems into different versions, we can facilitate a more comprehensive
and robust evaluation of MLLMs for their diagram interpretation and reasoning capabilities.
Moreover, the problems in MATHVERSE and other current mathematical benchmarks are mainly
in English. Given that some multilingual MLLMs [24, 3] have been developed, existing evaluation
cannot reveal their full capabilities when confined to a single language. The incorporation of
multilingual visual math problems would not only extend the dataset’s global applicability, but also
enhance the assessment of MLLMs for linguistic diversity and understanding.
**F** **Qualitative Examples**
To ease the understanding, we offer a variety of qualitative examples in MATHVERSE. In Section F.1,
we showcase the meticulously transformed six versions of visual math problems. In Section F.2, we
compare the response of different MLLMs on Text-lite problems, including GPT-4V [47], LLaVANeXT [38], and SPHINX-MoE [21]. Specifically, we present the key-step extraction output by the
CoT evaluation, and mark the multi-step scoring results aside. In Section F.3, we provide the response
comparison of GPT-4V for three problem versions in MATHVERSE, i.e., Text Dominant, Text Lite,
and Vision Dominant.
**F.1** **Comparison of Six Problem Versions**
Please refer to Figures 13∼15.
**F.2** **Response of Different MLLMs**
Please refer to Figures 16∼21.
**F.3** **Response of Different Problem Versions**
Please refer to Figures 22∼27.
-----
Descriptive Information Implicit Property Essential Condition
📖 Text Input 🔍 Vision Input
Write an equation for
the transformation of
f(x)=|x| as shown in
the figure, which
passes (3, 3) and is
piecewise.
Write an equation for
the transformation of
f(x)=|x| as shown in
the figure.
Write an equation for
the transformation of
f(x)=|x| as shown in
the figure, which
passes (3, 3) and are
piecewise.
Write an equation for
the function.
Write an equation for
the transformation of
f(x)=|x| as shown in
the figure.
📖 Text Input 🔍 Vision Input
The logarithm function
f(x)=ln(x) passing (1,4)
and logarithm function
g(x) passing (1,0) are
shown below.
g(x) is a transformation
of f(x). Hence state the
equation of g(x).
The logarithm function
f(x)=ln(x) and logarithm
function g(x) are shown
below.
g(x) is a transformation
of f(x). Hence state the
equation of g(x).
The logarithm function
f(x)=ln(x) passing (1,4)
and logarithm function
g(x) passing (1,0) are
shown below.
g(x) is a transformation
of f(x). Hence state the
equation of g(x).
The function f(x)=ln(x)
and g(x) are shown,
hence state the equation
of g(x).
The logarithm function f(x)
and logarithm function g(x)
are shown below.
g(x) is a transformation of
f(x). Hence state the
equation of g(x).
Figure 20: Comparison of Six Problem Versions in MATHVERSE.
Text
Dominant
Text
Lite
Text
Only
Vision
Intensive
Vision
Dominant
Vision
Only
-----
Descriptive Information Implicit Property Essential Condition
📖 Text Input 🔍 Vision Input
A soft drink can has a
height of 13 cm and a
radius of 3 cm. Find
L, the length of the
longest straw that can
fit into the can (so
that the straw is not
bent and fits entirely
inside the can).
The radius is 3 cm.
Find L.
A soft drink can has a
height of 13 cm and a
radius of 3 cm. Find
L, the length of the
longest straw that can
fit into the can (so
that the straw is not
bent and fits entirely
inside the can).
The radius is 3 cm.
Find L.
Find L.
📖 Text Input 🔍 Vision Input
A square pyramid is
shown in the image.
The height of this
square pyramid is
5x+7. Length L is the
side of the base of the
pyramid. Write down
an expression for L, in
terms of the variable x.
A square pyramid is
shown in the inage.
The height of this
square pyramid is
5x+7. Write down an
expression for L, in
terms of the variable x.
A square pyramid is
shown in the image.
The height of this
square pyramid is
5x+7. Length L is the
side of the base of the
pyramid. Write down
an expression for L, in
terms of the variable x.
The height of this
solid is 5x+7. Write
down an expression
for L, in terms of the
variable x.
A square pyramid is
shown in the image.
Find an expression for
L, in terms of the
variable x.
Figure 21: Comparison of Six Problem Versions in MATHVERSE.
Text
Dominant
Text
Lite
Text
Only
Vision
Intensive
Vision
Dominant
Vision
Only
-----
Descriptive Information Implicit Property Essential Condition
📖 Text Input 🔍 Vision Input
As shown in figure,
in right triangle ABC,
it is known that angle
A = 90°, AC = 3.0,
AB = 4.0, then Sin B
is. Choices: …
As shown in figure,
in right triangle ABC,
AC = 3.0, AB = 4.0,
then Sin B is equal to.
Choices: …
As shown in figure,
in right triangle ABC,
it is known that angle
A = 90°, AC = 3.0,
AB = 4.0, then Sin B
is. Choices: …
As shown in figure,
AC = 3.0, AB = 4.0,
then Sin B is equal to.
Choices: …
As shown in figure,
in right triangle ABC,
then Sin B is equal to.
Choices: …
📖 Text Input 🔍 Vision Input
As shown in the
figure, it is known
that AB and AD are
the chords of circle O,
angle BOD = 50° then
the degree of angle
BAD is (). Choices: ..
As shown in the
figure, angle BOD =
50° then the degree of
angle BAD is ().
Choices: ..
As shown in the
figure, it is known
that AB and AD are
the chords of circle O,
angle BOD = 50° then
the degree of angle
BAD is (). Choices: ..
As shown in the
figure, angle BOD =
50° then the degree of
angle BAD is ().
Choices: ..
As shown in the
figure, the degree
of angle BAD is ().
Choices: ..
Figure 22: Comparison of Six Problem Versions in MATHVERSE.
Text
Dominant
Text
Lite
Text
Only
Vision
Intensive
Vision
Dominant
Vision
Only
-----
**Question:**
A square prism has a side GF of length 15 cm as shown. Now, we want to find y, the length
of the diagonal DF. Calculate y to two decimal places.
**Answer:** y = 21.61
**Subject:**
**Subfield:**
Solid Geometry
Length
**GPT-4V:**
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
1. Consider the square prism as a combination of two right-angled triangles. Triangle DEF is one of them, and triangle DCF is the other.
2. Calculate the length of the diagonal EF of the bottom square face EFGH using the Pythagorean theorem. Since EF is the diagonal of a
square of side length 15cm (side GF), we have: EF^2 = GF^2 + GE^2, EF^2 = 450. Now taking the square root to find EF: EF = √450 =
15√2 centimeters.
3. Find the diagonal DF which runs through the rectangular face DCEF. Triangle DEF is a right-angled triangle, hence we again apply
Pythagorean theorem, with EF (just calculated) and DE being the legs, and DF being the hypotenuse. Since DE is equivalent in length to
AB which is given as 11 cm, we have: DF^2 = DE^2 + EF^2 = 571.Taking the square root to find DF: DF = √571. Let's calculate the
numeric value to two decimal places: DF ≈23.89 centimeters.
4. Therefore, y, the length of the diagonal DF, is approximately 23.89 cm. Visual Perception Error
571. Let's calculate the
Multi-step Score: 0.5 Final CoT Evaluation Score: 0.35
**LLaVA-NeXT:**
1. Use the Pythagorean theorem to find the length of the diagonal. The theorem states that the square of the length of the
hypotenuse (the diagonal in this case) is equal to the sum of the squares of the lengths of the legs. In this case, the legs are the sides
of the square prism, which are of equal length.
2. Calculate the square of the lengths of the legs: y^2 = 11^2 + 11^2, y^2 = 121 + 121, y^2 = 242.
3. Take the square root of both sides to find the length of the diagonal: y = √242.
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
4. Round the square root to two decimal places and then multiply by itself to find the length of y to two decimal places:
y = √242 ≈ 15.5, y * y ≈ 25.25.
5. The length of the diagonal DF is approximately 15.5 cm. Numerical Calculation Error
Multi-step Score: 0.2 Final CoT Evaluation Score: 0.14
**SPHINX-MoE:**
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
1.Use the Pythagorean theorem to find the length of the diagonal DF.
2.The Pythagorean theorem states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right
angle) is equal to the sum of the squares of the lengths of the other two sides.
3.In this case, we have a right triangle with sides of length 11cm and 15cm. Let's call the length of the hypotenuse y.
.
4. Using the Pythagorean theorem, we have: y^2 = 11^2 + 15^2, y^2 = 121 + 225, y^2 = 346.
5. Taking the square root of both sides, we get: y = √346, y ≈ 18.60.
6. Therefore, the length of the diagonal DF is approximately 18.60 cm.
Multi-step Score: 0.33
Final CoT Evaluation Score: 0.23
Figure 23: Response Comparison of GPT-4V [47], LLaVA-NeXT [38], and SPHINX-MoE [21].
We adopt the Text-lite version of the problem, and highlight the key-step extraction and scoring by
the CoT evaluation strategy.
-----
**Question:**
Find the volume of the cone shown. The perpendicular height is 6 cm. Round your answer to
two decimal places.
**Answer:** Volume = 25.13 cm[3]
**Subject:**
**Subfield:**
Solid Geometry
Volume
**GPT-4V:**
1/1
1/1
1/1
1/1
1/1
1.Use the formula for the volume of a cone: Volume = 1/3 * 𝜋 - r^2 * h, where r is the radius and h is the height of the cone.
2. Substitute the given values into the formula: Volume = 1/3 * 𝜋 - (2 cm)^2 * 6 cm, Volume = 1/3 * 𝜋 - 4 cm[2] - 6 cm, Volume = 𝜋 - 4 cm[2]
- 2 cm, Volume = 8 𝜋 cm[3].
3. Compute the volume numerically using 𝜋≈ 3.14159: Volume ≈ 8 * 3.14159, Volume ≈ 25.13272 cm[3].
4. Round the result to two decimal places: Volume ≈ 25.13 cm[3].
5. Final value: 25.13 cm[3].
Multi-step Score: 1 Final CoT Evaluation Score: 1
**LLaVA-NeXT:**
Reasoning Error
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
1. Use the formula for the volume of a cone: V = (1/3) 𝜋 r^2 h, where r is the radius and h is the height of the cone.
2. Convert the height from centimeters to meters. There are 100 centimeters in a meter, so 6 centimeters is equal to 0.06 meters.
3. Plug the values into the formula: V = (1/3) 𝜋 (2^2) (0.06), V = (1/3) 𝜋 (4) (0.06), V = (1/3) 𝜋 (4) (0.06) m[3], V = 0.00064 m[3].
4. Round to two decimal places. The volume of the cone is approximately 0.00064 cubic meters.
Multi-step Score: 0.25 Final CoT Evaluation Score: 0.175
**SPHINX-MoE:**
Knowledge Error
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
✓ 1/1
1. Use the formula for the volume of a cone: V = (1/3) 𝜋 r^2 h, where V is the volume, r is the radius, and h is the height.
2. Given the height h = 6 cm but not the radius r. Use the fact that the radius is equal to the height of the triangle formed by the
cone's slant height and the height of the cone.
3. The slant height is the hypotenuse of a right triangle with legs of length 6 cm and 2 cm (the radius of the base of the cone). Use
the Pythagorean theorem to find the slant height: a^2 = b^2 + c^2, where a is the hypotenuse (slant height), b is the height of the
cone (6 cm), and c is the radius of the base of the cone.
4. Calculate the slant height: a^2 = 6^2 + 2^2, a^2 = 36 + 4, a^2 = 40, a = √40.
5. Use the Pythagorean theorem again to find the radius (r = 2 cm): r^2 = a^2 - b^2, r^2 = (√40)^2 - 6^2 = 40 - 36, r^2 = 4, r = √4.
6. Now that we have the radius (r = 2 cm), we can find the volume of the cone: V = (1/3) 𝜋 r^2 h, V = (1/3) 𝜋 (2)^2 (6), V = (1/3) 𝜋
(4)(6), V = (1/3) 𝜋 (24), V = 8𝜋.
7. The volume of the cone is approximately 25.13 cubic centimeters.
Multi-step Score: 0.29 Final CoT Evaluation Score: 0.5
Figure 24: Response Comparison of GPT-4V [47], LLaVA-NeXT [38], and SPHINX-MoE [21].
We adopt the Text-lite version of the problem, and highlight the key-step extraction and scoring by
the CoT evaluation strategy.
-----
**Question:**
There is an isosceles triangle. Find the area of the figure.
Choices: A:30 B:60 C:120 D:240
**Answer:** B
**Subject:**
**Subfield:**
Plane Geometry
Area
**GPT-4V:**
1/1
1/1
1/1
1/1
1/1
1/1
1. Use the formula for the area of a triangle, which is ( 1/2 * base * height).
2. Determine the height h of the isosceles triangle with two equal sides (each with length 13) and a base (length 10).
3. Drop a perpendicular from the vertex between the equal sides to the midpoint of the base to create two right-angled triangles. This
splits the base into two segments of length 5 each.
4. Use the Pythagorean theorem for one of these right-angled triangles: h^2 + 5^2 = 13^2, h^2 = 13^2 - 5^2, h^2 = 169 – 25, h^2 = 144.
So, h = √144. Therefore, h = 12.
5. Find the area of the isosceles triangle: Area = (1/2 * base * height), Area = (1/2 * 10 * 12), Area = 5 * 12, Area = 60.
6. The area of the isosceles triangle is 60. The correct option is B: 60.
Multi-step Score: 1 Final CoT Evaluation Score: 1
**LLaVA-NeXT:**
1. Calculate the area of an isosceles triangle using the formula: Area = (1/2) * Base * Height.
2. In the given problem, the base of the triangle is 10 units long, and the height is 13 units long.
3. Substitute the given values into the formula: Area = (1/2) * 10 * 13, Area = 13 * 7, Area = 91.
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
4. The final answer is 91.
Multi-step Score: 0.5 Final CoT Evaluation Score: 0.35
Numerical Calculation Error
**SPHINX-MoE:**
Visual Perception Error
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
1. The area of an isosceles triangle is given by the formula A = (1/2) * b * h, where b is the base length and h is the height.
2. In this case, the base length is 10 and the height is 13.
3. Plugging these values into the formula, we get A = (1/2) * 10 * 13 = 65.
4. Therefore, the area of the triangle is 65. The correct option is B: 60.
Multi-step Score: 0.25 Final CoT Evaluation Score: 0.475
Figure 25: Response Comparison of GPT-4V [47], LLaVA-NeXT [38], and SPHINX-MoE [21].
We adopt the Text-lite version of the problem, and highlight the key-step extraction and scoring by
the CoT evaluation strategy.
-----
**Question:**
Find the value of h to the nearest metre.
**Answer:** 9
**Subject:**
**Subfield:**
Plane Geometry
Applied
**GPT-4V:**
1. Use trigonometry, specifically the tangent function, to relate the opposite side of the angle in the triangle (which is h, the height of
the tower) to the adjacent side (which is the horizontal distance from the base of the tower).
2. For the smaller angle of 10 degrees, the tower with the known height creates the opposite side, and the 29-meter distance is the
adjacent side. However, we need to find the height of the taller tower, for which we have a 22-degree angle.
3. Use the tangent function for the 22-degree angle: tan(22°) = opposite/adjacent tan(22°) = h/29.
4. Solve for h: h = 29 * tan(22°).
5. Calculate this with a calculator: h ≈29 * 0.4040 (tan(22°) ≈0.4040) h ≈11.716 metres.
6. Round to the nearest metre: h ≈12 metres Final value: 12.
Multi-step Score: 1 Final CoT Evaluation Score: 1
1/1
1/1
1/1
1/1
1/1
1/1
**LLaVA-NeXT:**
✓ 1/1
**×** [0/1]
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
1. Determine the height of the tower relative to the person's height by measuring the distance between the person's shadow and the
tower.
2. Assume that the distance between the person and the tower is equal to the length of the shadow.
3. Use the formula to find the length of the shadow: Shadow length = (person's height) x tan(angle of light).
= 2.2 meters x 1 = 2.2 meters. Visual Perception Error
meters - 2.2 meters = 0.3 meters.
6. The value of h is approximately 0.3 meters, or 30 centimeters.
Multi-step Score: 0.3 Final CoT Evaluation Score: 0.2
Reasoning Error
**SPHINX-MoE:**
**×** [0/1]
1. The single answer is that the value of h is 30.
Multi-step Score: 0 Final CoT Evaluation Score: 0
Figure 26: Response Comparison of GPT-4V [47], LLaVA-NeXT [38], and SPHINX-MoE [21].
We adopt the Text-lite version of the problem, and highlight the key-step extraction and scoring by
the CoT evaluation strategy.
-----
**Question:**
A polynomial p is graphed. What could be the equation of p ?
Choices:
A: p(x)=(x+1)[2](2x+5)[2](x-3)[2]
B: p(x)=(x+1)[2](2x+5)(x-3)[2]
C: p(x)=(x+1)[2](2x+5)(x-3)
D: p(x)=(x+1)(2x+5)(x-3)[2]
**Answer: B** **Subject:** Functions **Subfield:** Expression
**GPT-4V:**
1. Analyze the graph for key features such as x-intercepts (roots) and the behavior of the graph at those intercepts.
2. Identify the x-intercepts: at x = -1, x = -2.5 (which is -5/2 when converted to a fraction), and x = 3. This tells us that the factors of
the polynomial must include (x + 1), (x + 5/2), and (x - 3).
3. Look at the behavior of the graph at each intercept to determine the multiplicity of each root. The graph touches the x-axis and
turns around at x = -1 and x = 3, which indicates that these roots are of even multiplicity. The graph crosses the x-axis at x = -2.5,
which indicates that this root is of odd multiplicity.
4. Match the graph to the correct equation by comparing the multiplicity of the roots with the given options.
5. The correct equation that matches the graph is: B: p(x)=(x+1)[2](2x+5)(x-3)[2].
Multi-step Score: 1 Final CoT Evaluation Score: 1
1/1
1/1
1/1
1/1
1/1
**LLaVA-NeXT:**
Visual Perception Error
✓ 1/1
**×** [0/1]
**× 0/1**
**×** [0/1]
**×** [0/1]
1. Analyze the graph and identify the degree of the terms in the polynomial.
2. Identify that the polynomial has a degree of 2 for the terms with the highest degree, which are the squares of the linear terms.
a degree of 2
3. Identify the highest degree term that is not squared is the term with the variable raised to the second power, which is (x+1)[2].
4. Identify the next highest degree term is (2x+5)[2], and the lowest degree term is (x-3)[2].
5. Conclude that the equation of the polynomial p(x) should be of the form: p(x) = (x+1)[2](2x+5)[2](x-3)[2] This is option A.
Multi-step Score: 0.2 Final CoT Evaluation Score: 0.14
**SPHINX-MoE:**
**×** [0/1]
**×** [0/1]
**×** [0/1]
✓ 1/1
**×** [0/1]
**×** [0/1]
1. The graph of p has a vertical tangent at x=-1.
2.The graph has a horizontal tangent at x=3.
3. The graph has a vertical tangent at x=-5/2.
4.The graph of p also passes through the point (-1,0).
5. The equation of p is p(x)=(x+1)[2](2 x+5)(x-3)[2].
6. The correct option is A.
Visual Perception Error
Multi-step Score: 0.17 Final CoT Evaluation Score: 0.12
Figure 27: Response Comparison of GPT-4V [47], LLaVA-NeXT [38], and SPHINX-MoE [21].
We adopt the Text-lite version of the problem, and highlight the key-step extraction and scoring by
the CoT evaluation strategy.
-----
**Question:**
The function A models the rectangle‘s area (in square meters) as a function of its width (in meters).
Which of these statements are true? Choices:
A: Greater width always relates to smaller area.
B: Greater width relates to smaller area as long as the width is less than 10 m.
C: When there is no width, the area is 20 m[2].
D: When there is no width, the area is 0 m[2].
**Answer:**
**Subject:** Functions
**GPT-4V:**
✓ 1/1 1.Analyze the graph of the function A, which models the area of a rectangle as a function of its width.
2. Evaluate each statement based on the graph:
- A: Greater width always relates to a smaller area.
- B: The area is always increasing with the width.
- C: When the width is 0, the area is not 0.
- D: When there is no width, the area is 0 m[2].
3. Determine that statement D is true because the graph starts at the origin (0,0), indicating that when there is no width, there is no
area, which is consistent with the properties of a rectangle.
4. The correct option is: D: When there is no width, the area is 0 m[2].
Multi-step Score: 1 Final CoT Evaluation Score: 1
1/1
1/1
1/1
**LLaVA-NeXT:**
Visual Perception Error
✓ 1/1
**×** [0/1]
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
✓ 1/1
✓ 1/1
**×** [0/1]
1. Analyze the given information and the graph of the quadratic function A.
2. Observe that the vertex of the quadratic function is at (10, 100), indicating that the function has a minimum point at this point.
3. Note that the quadratic function intersects the x-axis at (0,0) and (20,0), which means the area is zero when the width is zero.
4. Consider the given statements and determine their correctness based on the observations from the graph.
5. Statement A is incorrect as the area does not always decrease with increasing width.
6. Statement B is correct as the area decreases with increasing width up to the vertex, which is at 10 meters.
7. Statement C is incorrect as the area is zero when the width is zero, not 20 square meters.
8. Statement D is correct as the area is zero when the width is zero.
9. The correct statement is B: Greater width relates to smaller area as long as the width is less than 10 m.
Multi-step Score: 0.67 Final CoT Evaluation Score: 0.47
**SPHINX-MoE:**
1/1
1. The single answer is D.
Multi-step Score: 1 Final CoT Evaluation Score: 1
Figure 28: Response Comparison of GPT-4V [47], LLaVA-NeXT [38], and SPHINX-MoE [21].
We adopt the Text-lite version of the problem, and highlight the key-step extraction and scoring by
the CoT evaluation strategy.
-----
(Text Dominant)
**Question:**
There is a curve goes through (-4,1), (-1,3), and (0,0). Estimate the intervals on which the
polynomial function is increasing or decreasing.
**Answer:** The function is increasing on (-∞,-2.5) ∪ (1, ∞), and decreasing on (-2.5,1)
**Subject:**
**Subfield:**
Functions
Property
Text Dominant:
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
✓ 1/1
1. Determine the intervals on which the function is increasing or decreasing by looking at the slope of the curve on the graph.
2. The function starts off decreasing from the point (-5, something above 6) to the point (-4, 1).
3. The function then starts increasing at the point (-4, 1) and continues to increase until it reaches a peak somewhere between -2 and -1
on the x-axis.
4. After reaching the peak, the function starts to decrease again, and it continues to decrease past the point (-1, 3) until it reaches the
point (0, 0).
5. The function reaches its minimum at (0, 0) and then starts to increase again. It continues to increase past the point (2, something
below -3) and keeps increasing as it moves off to the right of the graph.
6. The intervals on which the function is increasing or decreasing are as follows:
- Decreasing: from the start of the graph to x = -4, and from just before x = -2 to x = 0.
- Increasing: from x = -4 to just before x = -2, and from x = 0 to the end of the graph.
7. More precisely:
- The function is increasing on the intervals (-4, -1.5) and (0, ∞).
- The function is decreasing on the intervals (-∞, -4) and (-1.5, 0).
8. Note that the exact turning points are estimated since the graph does not provide precise values, and the intervals are given in terms
of x-values. The exact values of the turning points cannot be determined without more information or a more precise graph.
Multi-step Score: 0.25 Final CoT Evaluation Score: 0.175
Text Lite:
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
1. Determine the slope of the function as it moves from left to right along the x-axis.
2. Identify the points where the function reaches its peaks and its lowest points.
3. The function is increasing from the leftmost part of the graph until it reaches its first peak, which is approximately at x = -3.
4. The function starts decreasing after this peak until it reaches its lowest point, which is approximately at x = 0.
5. The function then starts increasing again from x = 0 until it reaches its next peak, which is approximately at x = 2.
6. The function decreases again after x = 2 towards the rightmost part of the graph.
7. The intervals on which the function is increasing are from x = -5 to x = -3 and from x = 0 to x = 2. The intervals on which the
function is decreasing are from x = -3 to x = 0 and from x = 2 to x = 4.
8. Final values:
Increasing intervals: (-5, -3) and (0, 2),
Decreasing intervals: (-3, 0) and (2, 4).
Multi-step Score: 0.25 Final CoT Evaluation Score: 0.175
Vision Dominant:
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
1. Determine the intervals on which the function is increasing or decreasing by looking at the slope of the function as it moves from
left to right along the x-axis.
2. From the graph, identify that the function is increasing from the leftmost part of the graph until it reaches its first peak. This
occurs from approximately x = -5 to x = -2.5.
3. After reaching the first peak, the function starts to decrease until it reaches the bottom of the first trough. This occurs from
approximately x = -2.5 to x = 0.
4. The function then starts to increase again from the bottom of the trough until it reaches the second peak. This occurs from
approximately x = 0 to x = 1.
5. Finally, the function decreases again after the second peak towards the rightmost part of the graph. This occurs from
approximately x = 1 to x = 4.
6. Conclude that the function is increasing on the intervals: (-5, -2.5) and (0, 1)
7. And the function is decreasing on the intervals: (-2.5, 0) and (1, 4)
Multi-step Score: 0.14 Final CoT Evaluation Score: 0.1
Figure 29: Response of Different Problem Versions by GPT-4V [47]. By comparing the output
from three versions, we observe increasing reasoning errors when more textual content is visually
depicted in diagrams. We only showcase the Text-dominant version of the problem for brevity.
-----
(Text Dominant)
**Question:**
Sia is swinging from a chandelier. The horizontal distance between Sia and the wall, in meters, is
modeled by D(t) where t is the time in seconds. The function is graphed below, along with one
segment highlighted, a red double arrow from (3.75,3), (3.75,5). What is the meaning of the
highlighted segment?
Choices:
A:The chandelier is hung 2 meters from the wall.
B:The furthest Sia gets from the point where the chandelier hangs is 2 meters.
C:The chandelier is 2 meters wide.
D:Sia completes 2 swing cycles per second.
**Answer:** B **Subject:** Functions **Subfield:** Applied
Text Dominant:
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
✓ 1/1
✓ 1/1
✓ 1/1
1. The graph shows a function D(t) that represents the horizontal distance between Sia and the wall over time.
2. The highlighted segment with a red double arrow spans vertically from the point (3.75, 3) to the point (3.75, 5) on the graph.
3. This segment is at a constant time of 3.75 seconds and shows a change in the distance D(t) from 3 meters to 5 meters.
4. The segment does not provide information about the width of the chandelier or the number of swing cycles per second. Instead, it
indicates a change in distance at a specific moment in time.
5. Since the distance changes from 3 to 5 meters, this represents a 2-meter difference in the horizontal distance from the wall during
the swing.
6. Therefore, the meaning of the highlighted segment is that the furthest Sia gets from the point where the chandelier hangs is 2
meters.
7. The correct option is: B: The furthest Sia gets from the point where the chandelier hangs is 2 meters.
Multi-step Score: 0.71 Final CoT Evaluation Score: 0.8
Text Lite:
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
1. Analyze the graph which shows a periodic function, representing the back-and-forth motion of Sia swinging from the chandelier. The
horizontal axis is labeled with time t in seconds, and the vertical axis is labeled with distance D in meters. The highlighted segment on
the graph indicates a change in the distance from one point to another over time.
2. Evaluate each choice:
- A: The chandelier is hung 2 meters from the wall. This statement would imply a fixed distance from the wall, which the graph does
not provide information about.
- B: The furthest Sia gets from the point where the chandelier hangs is 2 meters. This statement seems to be about the amplitude of
the swing, but the highlighted segment does not indicate the amplitude; it indicates a change in distance over time.
- C: The chandelier is 2 meters wide. The width of the chandelier is not something that can be determined from this graph, which
shows distance from the wall over time.
- D: Sia completes 2 swing cycles per second. The highlighted segment shows a change in distance over time, but it does not provide
information about the number of cycles per second.
3. Conclude that none of the options A, B, C, or D directly describe the meaning of the highlighted segment in terms of the graph's
representation of distance over time.
4. Given the options provided and the information from the graph, none of the choices accurately describe the meaning of the
highlighted segment. Therefore, based on the information provided and the reasoning above, there is no correct option to choose from
A, B, C, or D.
Multi-step Score: 0.63 Final CoT Evaluation Score: 0.44
Vision Dominant:
✓ 1/1
**×** [0/1]
**× 0/1**
**×** [0/1]
1. The graph shows a periodic function, representing the motion of a pendulum or a swing. The horizontal axis represents time (t), and
the vertical axis represents the horizontal distance (D) from the wall. The highlighted segment indicates a change in the distance D
over time.
2. The highlighted segment is a vertical line, showing a change in distance at a particular moment in time, from a lower value to a
higher value on the D(t) function.
3. None of the options A, B, C, or D directly relate to the meaning of a single vertical segment on the graph. The closest interpretation
of the highlighted segment would be the change in distance from the wall at a specific moment, which is not explicitly described in any
of the options provided.
4. Given the information and the graph, none of the options A, B, C, or D correctly describe the meaning of the highlighted segment.
Multi-step Score: 0.25 Final CoT Evaluation Score: 0.175
Figure 30: Response of Different Problem Versions by GPT-4V [47]. By comparing the output
from three versions, we observe increasing reasoning errors when more textual content is visually
depicted in diagrams. We only showcase the Text-dominant version of the problem for brevity.
-----
(Text Dominant)
**Question:**
As shown in the figure, Xiaoming designed two right angles to measure the width of the river BC,
he measured AB = 2.0, BD = 7/3, CE = 9.0, then the width of the river BC is ()
Choices:
A:7 / 2
B:17 / 2
C:40 / 7
D:11
**Answer:** C **Subject:** Plane Geometry **Subfield:**
Applied
Text Dominant:
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1. Identify that the triangles ABD and CEB are similar because they both have a right angle, and they share the angle at point B.
2. Use the property of similar triangles that the ratios of their corresponding sides are equal:AB/BD = BC/CE.
3. Substitute the given values into the ratio: 2.0 / (7.0/3.0) = BC / 9.0.
4. Cross-multiply to solve for BC: 2.0 * 9.0 = BC * (7.0/3.0).
5. Solve for BC by dividing both sides by (7.0/3.0): BC = 18.0 / (7.0/3.0).
6. Simplify the equation: BC = 18.0 * (3.0/7.0).
7. Calculate the final value: BC = 54.0 / 7.0.
8. Round the result to the closest answer: BC = 7.71428571...
9. The correct option is: C: 40 / 7 meters.
Multi-step Score: 1 Final CoT Evaluation Score: 1
Text Lite:
✓ 1/1
✓ 1/1
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
1. Identify that the triangles ABD and EBC are similar because they both have a right angle, and they share the angle at point B.
2. Find the ratio between the sides of the triangles ABD and EBC using the given lengths AB and BD. The ratio of AB to BD is 2.0 /
(7.0/3.0) = 6.0/7.0.
3. Set up a proportion using the sides of the similar triangles: AB/BD = BC/CE.
4. Substitute the known values and the ratio into the proportion: (6.0/7.0) = BC/9.0.
5. Solve for BC: BC = (6.0/7.0) * 9.0, BC = 54.0/7.0, BC = 7.71428571...
6. Compare the calculated value with the provided options. The closest value to this result among the choices given is 40/7 meters,
which is approximately 5.71428571 meters. However, none of the options exactly match the calculated value. There might be a mistake
in the calculation or in the provided options. Given the options, the closest one is C: 40/7 m.
Multi-step Score: 0.83 Final CoT Evaluation Score: 0.58
Vision Dominant:
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
1. Identify the similar triangles in the figure (Triangle ABD and triangle EBC).
2. Note the lengths of the known sides in each triangle.
3. Since the triangles are similar, the corresponding sides are proportional. Write the proportions as:
AB/BD = CE/BC.
4. Plug in the known values and solve for BC: BC = 9.0 * (7.0/3.0) / 2.0 BC = 10.5.
5. The correct answer is not directly listed among the choices. However, since 10.5 meters can be written as 21/2 meters, which is
identical to 7/2 times 3 meters, it seems that there might have been a miscalculation in the options or I'm misinterpreting the figure
based on the given variables. Please verify the options provided or the figure details to ensure an accurate answer can be given.
Assuming the options provided contain a typo and closest to 10.5 (or 21/2) meters is the fraction that also involves 7, the most
sensible choice if we round 10.5 to the nearest half is:
A: 7/2 meters (which is 3.5 meters and not the correct answer).
6. Given the absence of an accurate option and assuming the calculations are correct, there should be an option that reflects 21/2
meters, which is not present in the given choices. If we are to choose from the closest available option, it would be:
B: 17/2 meters.
7. However, 17/2 meters equals 8.5 meters, which is not the result we have calculated. Therefore, there seems to be an error in the
options or in the figures provided which needs to be addressed for an accurate selection. Please review the question or the choices for
any discrepancies.
Multi-step Score: 0.43 Final CoT Evaluation Score: 0.3
Figure 31: Response of Different Problem Versions by GPT-4V [47]. By comparing the output
from three versions, we observe increasing reasoning errors when more textual content is visually
depicted in diagrams. We only showcase the Text-dominant version of the problem for brevity.
-----
(Text Dominant)
**Question:**
As shown in the figure, triangle ABC is inscribed in circle O, OC perpendicular OB, OD
perpendicular AB intersects AC at point E. Knowing that the radius of circle O is 1.0, then what is
the value of AE[2] + CE[2]?
Choices:
A:1
B:2
C:3
D:4
**Answer:** B **Subject:** Plane Geometry **Subfield:**
Length
Text Dominant:
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
✓ 1/1
1. Identify that OC is perpendicular to OB, making triangle OCB a right triangle with OC and OB as its legs. Since OC and OB are radii of
the circle and the radius is given as 1.0, OC = OB = 1.0. Therefore, triangle OCB is an isosceles right triangle.
2. Identify that OD is perpendicular to AB, which means that OD is the altitude of triangle AOB from point O to side AB. Since triangle
AOB is also isosceles (OA = OB = 1.0), OD bisects AB. This means that AD = DB.
3. Consider triangle AOD and triangle BOD. Since OD is perpendicular to AB, both triangles AOD and BOD are right triangles. Also,
since AD = DB, triangle AOD is congruent to triangle BOD. Therefore, AE = EB.
4. Since AE = EB and AC = AB (because triangle ABC is inscribed in the circle and AC and AB are both radii), we can say that CE = EB.
5. To find AE^2 + CE^2, since AE = CE, we can say that AE^2 + CE^2 = 2 * AE^2.
6. To find AE, use the Pythagorean theorem in triangle AOD (or BOD, since they are congruent). We have:
AO^2 = AD^2 + OD^2.
Since AO = 1.0 (radius of the circle) and AD = 1/2 * AB (because AD = DB and AB is the diameter of the circle, which is twice the
radius), we have:
1.0^2 = (1/2 * AB)^2 + OD^2.
AB is the diameter of the circle, so AB = 2 * radius = 2 * 1.0 = 2.0. Therefore: 1.0^2 = (1/2 * 2.0)^2 + OD^2 OD^2 = 0
Since OD^2 = 0, OD must be 0, which means that point D coincides with point O, and AE = EB = 1.0 (since AE = EB and AB = 2 * radius =
2.0). Therefore, AE^2 + CE^2 = 2 * AE^2 = 2 * 1.0^2 = 2.
7. The correct answer is: B: 2
Multi-step Score: 0.57 Final CoT Evaluation Score: 0.7
Text Lite:
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
✓ 1/1
✓ 1/1
1. Use the properties of a circle and the Pythagorean theorem.
2. Since OC is perpendicular to OB and both are radii of the circle, triangle OCB is an isosceles right triangle. Therefore, the length of
BC can be found using the Pythagorean theorem: BC = √2
3. Similarly, since OD is perpendicular to AB, triangle OAD is also an isosceles right triangle. Therefore, AD = OD = 1.
4. Using the Pythagorean theorem again, AO = √2. Since AO is the diameter of the circle, AC is also the diameter, and thus AC = AO =
√2.
5. We have AC = √2 and BC = √2. Since E is a point on AC, AE + EC = AC.
6. We want to find AE^2 + CE^2. Let's denote AE as x and CE as (√2 - x). Then we can express AE^2 + CE^2 as: 2x^2 - 2√2x + 2.
7. To minimize this expression, we can take the derivative with respect to x and set it to zero to find the minimum value. However,
since we are given that AE and CE are segments of a line, and we know that the sum of their squares in a right triangle is constant, we
can infer that the sum of the squares of AE and CE will be equal to the square of AC.
8. Therefore, AE^2 + CE^2 = AC^2 = (√2)^2 = 2.
9. The correct answer is: B: 2
Multi-step Score: 0.56 Final CoT Evaluation Score: 0.69
Vision Dominant:
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
1. Given that the radius of circle O is 1, this means that OA, OE, and OC are all radii of the circle, and thus they all have a length of 1.
2. Use the Pythagorean theorem (a^2 + b^2 = c^2) on triangle AOE, where AE is the hypotenuse, and OA and OE are the two legs:
AE^2 = OA^2 + OE^2, AE^2 = 2.
3. By the same logic, apply the Pythagorean theorem to triangle COE, where CE is the hypotenuse:
CE^2 = OC^2 + OE^2, CE^2 = 2.
4. Now, according to the question, find the value of AE^2 + CE^2:
AE^2 + CE^2 = 2 + 2 AE^2 + CE^2 = 4.
5. Therefore, the value of AE^2 + CE^2 is 4. The correct option is D: 4.
Multi-step Score: 0.6 Final CoT Evaluation Score: 0.42
Figure 32: Response of Different Problem Versions by GPT-4V [47]. By comparing the output
from three versions, we observe increasing reasoning errors when more textual content is visually
depicted in diagrams. We only showcase the Text-dominant version of the problem for brevity.
-----
(Text Dominant)
**Question:**
Consider the solid pictured and answer the following: Hence what is the total surface area?
The outer radius is 11 cm, inner radius is 10 cm and length 24 cm. Give your answer to the
nearest two decimal places.
**Answer:** SA = 3298.67 cm[2]
**Subject:**
**Subfield:**
Solid Geometry
Area
Text Dominant
✓ 1/1
✓ 1/1
✓ 1/1
**×** [0/1]
✓ 1/1
**×** [0/1]
1. Calculate the surface area of the outer side of the cylinder using the formula for the lateral surface area of a cylinder: 2𝜋 rh, where
r is the outer radius and h is the length (height) of the cylinder. Outer surface area = 2 𝜋 - 11 * 24.
2. Calculate the surface area of the inner side of the cylinder using the same formula, but with the inner radius. Inner surface area = 2
𝜋 - 10 * 24.
3. Calculate the surface area of both circular ring-shaped ends. This is the area of the outer circle minus the area of the inner circle,
multiplied by 2 (since there are two ends). Area for one end = Area of outer circle - Area of inner circle = 𝜋router^2 - 𝜋rinner^2 = 𝜋11^2 𝜋10^2. Total area for both ends = 2 * (𝜋(11^2 - 10^2)).
4. Add all these together to get the total surface area: Total surface area = 528𝜋 + 480𝜋 + 42𝜋 = 1050𝜋.
5. Use the approximate value of π, which is 3.14, to calculate the total surface area in decimal form: Total surface area ≈1050 * 3.14.
6. Round the result to the nearest two decimal places: 3297.00.
**×** [0/1] 7. The total surface area of the solid, to the nearest two decimal places, is approximately 3297.00 square centimeters.
Multi-step Score: 0.57 Final CoT Evaluation Score: 0.4
Text Lite
1. Calculate the area A of a circle using the formula A = πr^2, where r is the radius. In this case, the radius r is 5 cm. Therefore, the
area of one circular base is π(5^2) or 25π.
2. Since a cylinder has two bases (top and bottom), multiply the area of one base by 2 to find the total area of both bases: 2 × 25π =
50π cm².
3. To find the area of the curved surface of the cylinder, use the formula for the lateral surface area of a cylinder, which is A = 2πrh,
where h is the height or length of the cylinder. In this case, the height h is 24 cm. The lateral surface area would therefore be 2πrh =
2π(5)(24) = 240π cm².
4. Add both calculated areas together to find the total surface area: Total surface area = area of both bases + lateral surface area.
Total surface area = 50π + 240π = 290π cm².
5. Use the approximation π ≈3.14159 to compute the numerical answer: Total surface area ≈290 × 3.14159. Total surface area ≈
910.8611 cm².
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1] 6. Rounded to the nearest two decimal places, the total surface area is 910.86 cm² .
Multi-step Score: 0 Final CoT Evaluation Score: 0
Vision Dominant
1. Find the area of one circular base using the formula A = πr², where r is the radius of the base. In this case, the radius is 5 cm, so the
area of one base is π * 25 cm² = 78.54 cm² (to two decimal places).
2. Since there are two circular bases, double the result to get the total area of the bases: 2 * 78.54 cm² = 157.08 cm².
3. Calculate the rectangular surface area, which is the curved surface that wraps around the cylinder. The length of the rectangle is the
same as the circumference of the base circle, which can be found with the formula C = 2πr. In this case, the circumference is 2 * π * 5
cm = 31.42 cm (to two decimal places).
4. The width of the rectangle is the same as the height (length) of the cylinder, which is given as 24 cm. So the area of the rectangle is
31.42 cm * 24 cm = 753.96 cm² (to two decimal places).
5. Add the areas of both circles and the rectangular wrap to get the total surface area: 157.08 cm² + 753.96 cm² = 911.04 cm².
6. The total surface area of the cylinder to the nearest two decimal places is 911.04 cm² .
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
**×** [0/1]
Multi-step Score: 0
Final CoT Evaluation Score: 0
Figure 33: Response of Different Problem Versions by GPT-4V [47]. By comparing the output
from three versions, we observe increasing reasoning errors when more textual content is visually
depicted in diagrams. We only showcase the Text-dominant version of the problem for brevity.
-----
(Text Dominant)
**Question:**
Find the volume of the cone pictured. The perpendicular height is 6 and radius is 3. (Give
your answer correct to 2 decimal places.)
**Answer:** 56.55
**Subject:**
**Subfield:**
Solid Geometry
Volume
Text Dominant
1. Use the formula for the volume of a cone: V = 1/3𝜋r^2h, where V is the volume, r is the radius of the base, and h is the height of
the cone.
2. Given the radius r = 3 and the height h = 6, plug these values into the formula: V = 1/3𝜋(3)^2 (6), V = 1/3 𝜋∗ 9 * 6, V = 18 𝜋.
3. Calculate the numerical value using the approximation 𝜋≈ 3.14159: V ≈ 18 * 3.14159, V ≈ 56.54862 𝜋.
4. Round to two decimal places, the volume of the cone is approximately: V ≈ 56.55.
5. Final Value: 56.55.
Multi-step Score: 1 Final CoT Evaluation Score: 1
1/1
1/1
1/1
1/1
1/1
Text Lite
1. Use the formula for the volume of a cone: Volume = 1/3 𝜋 r^2 h, where r is the radius of the base of the cone, and h is the height.
2. Assume the "3" refers to the radius of the base of the cone, and the perpendicular height provided in the question text is "6". These
values are the radius and height respectively.
3. Substitute r = 3 and h = 6 into the formula: Volume = 1/3 𝜋 (3)^2 (6), Volume = 1/3 𝜋 (9) (6), Volume = 1/3 𝜋 (54), Volume = 18 𝜋.
4. Calculate the numeric value using the approximation 𝜋≈ 3.14159: Volume = 18 * 3.14159, Volume ≈ 56.54862.
5. Round to two decimal places, the volume of the cone is: Volume ≈ 56.55.
6. Final value: 56.55.
Multi-step Score: 1 Final CoT Evaluation Score: 1
1/1
1/1
1/1
1/1
1/1
1/1
Vision Dominant
✓ 1/1
✓ 1/1
**×** [0/1]
**×** [0/1]
✓ 1/1
**×** [0/1]
1. Identify the shape of the solid figure. In this case, it is a cone.
2. Use the formula for the volume of a cone: V = 1/3𝜋r^2 h, where V is the volume, r is the radius of the base, h is the height of the
cone, and \pi is the constant approximately equal to 3.14159.
3. Identify the values for r and h from the image. Here, h = 6 and the diameter of the base is 3, which means the radius r = 3/2 = 1.5.
4. Substitute the values into the formula: V = 1/3𝜋(1.5)^2 * 6, V = 1/3𝜋* 2.25 * 6, V = 1/3𝜋* 13.5, V = 𝜋 - 4.5, V ≈ 3.14159 * 4.5, V ≈
14.13765.
5. Round the final answer to two decimal places. The volume is 14.14 cubic units.
6. Final answer: 14.14.
Multi-step Score: 0.5
Final CoT Evaluation Score: 0.35
Figure 34: Response of Different Problem Versions by GPT-4V [47]. By comparing the output
from three versions, we observe increasing reasoning errors when more textual content is visually
depicted in diagrams. We only showcase the Text-dominant version of the problem for brevity.
-----
| [
"Pan, Lu",
"Aojun, Zhou",
"Dongzhi, Jiang",
"Yichi, Zhang",
"Haokun, Lin",
"Kai-Wei, Chang",
"Peng, Gao",
"Hongsheng, Li",
"Ziyu, Guo",
"Pengshuo, Qiu",
"Renrui, Zhang"
] | 2024-03-21T00:00:00 | null | false | 55 | 0 | null | https://arxiv.org/abs/2403.14624v2 | https://arxiv.org/abs/2403.14624 | https://www.semanticscholar.org/paper/6d017adda6b2b1ea627dde2f0e85401ebb9fe566 |
Ape210K: A Large-Scale and Template-Rich Dataset of Math Word Problems | Automatic math word problem solving has attracted growing attention in recent years. The evaluation datasets used by previous works have serious limitations in terms of scale and diversity. In this paper, we release a new large-scale and template-rich math word problem dataset named Ape210K. It consists of 210K Chinese elementary school-level math problems, which is 9 times the size of the largest public dataset Math23K. Each problem contains both the gold answer and the equations needed to derive the answer. Ape210K is also of greater diversity with 56K templates, which is 25 times more than Math23K. Our analysis shows that solving Ape210K requires not only natural language understanding but also commonsense knowledge. We expect Ape210K to be a benchmark for math word problem solving systems. Experiments indicate that state-of-the-art models on the Math23K dataset perform poorly on Ape210K. We propose a copy-augmented and feature-enriched sequence to sequence (seq2seq) model, which outperforms existing models by 3.2% on the Math23K dataset and serves as a strong baseline of the Ape210K dataset. The gap is still significant between human and our baseline model, calling for further research efforts. We make Ape210K dataset publicly available at https://github.com/yuantiku/ape210k | A copy-augmented and feature-enriched sequence to sequence (seq2seq) model is proposed, which outperforms existing models by 3.2% on the Math23K dataset and serves as a strong baseline of the Ape210K dataset. | [
"Wei, Zhao",
"Yang, Liu",
"Mingyue, Shang",
"Liang, Wang",
"Jingming, Liu"
] | 2020-10-08T00:00:00 | null | false | 54 | 4 | null | http://arxiv.org/abs/2009.11506 | https://arxiv.org/abs/2009.11506 | https://www.semanticscholar.org/paper/44507bde6e9caf60b41c60d703e7972b520d48a6 |
|
ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs | Large Language Models (LLMs) still struggle with natural language reasoning tasks. Motivated by the society of minds (Minsky, 1988), we propose ReConcile, a multi-model multi-agent framework designed as a round table conference among diverse LLM agents. ReConcile enhances collaborative reasoning between LLM agents via multiple rounds of discussion, learning to convince other agents to improve their answers, and employing a confidence-weighted voting mechanism that leads to a better consensus. In each round, ReConcile initiates discussion between agents via a ‘discussion prompt’ that consists of (a) grouped answers and explanations generated by each agent in the previous round, (b) their confidence scores, and (c) demonstrations of answer-rectifying human explanations, used for convincing other agents. Experiments on seven benchmarks demonstrate that ReConcile significantly improves LLMs’ reasoning – both individually and as a team – surpassing prior single-agent and multi-agent baselines by up to 11.4% and even outperforming GPT-4 on three datasets. ReConcile also flexibly incorporates different combinations of agents, including API-based, open-source, and domain-specific models, leading to an 8% improvement on MATH. Finally, we analyze the individual components of ReConcile, demonstrating that the diversity originating from different models is critical to its superior performance. | Experiments demonstrate that ReConcile significantly improves LLMs' reasoning -- both individually and as a team -- surpassing prior single-agent and multi-agent baselines by up to 11.4% and even outperforming GPT-4 on three datasets. | # RECONCILE: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs
**Justin Chih-Yao Chen** **Swarnadeep Saha** **Mohit Bansal**
UNC Chapel Hill
{cychen,swarna,mbansal}@cs.unc.edu
**Abstract**
Large Language Models (LLMs) still struggle
with natural language reasoning tasks. Motivated by the society of minds (Minsky, 1988),
we propose RECONCILE, a multi-model multiagent framework designed as a round table conference among diverse LLM agents. RECON
CILE enhances collaborative reasoning between
LLM agents via multiple rounds of discussion,
learning to convince other agents to improve
their answers, and employing a confidenceweighted voting mechanism that leads to a better consensus. In each round, RECONCILE
initiates discussion between agents via a ‘discussion prompt’ that consists of (a) grouped
answers and explanations generated by each
agent in the previous round, (b) their confidence scores, and (c) demonstrations of answerrectifying human explanations, used for convincing other agents. Experiments on seven
benchmarks demonstrate that RECONCILE significantly improves LLMs’ reasoning – both
individually and as a team – surpassing prior
single-agent and multi-agent baselines by up
to 11.4% and even outperforming GPT-4 on
three datasets. RECONCILE also flexibly incorporates different combinations of agents, including API-based, open-source, and domainspecific models, leading to an 8% improvement
on MATH. Finally, we analyze the individual components of RECONCILE, demonstrating that the diversity originating from different
models is critical to its superior performance.[1]
**1** **Introduction**
A large body of recent work has focused on improving the reasoning capabilities of Large Language Models (LLMs) by imitating various human
cognitive processes (Wang and Zhao, 2023; Park
et al., 2023; Sumers et al., 2023; Ye et al., 2023).
These include phenomena like reflecting on and
critiquing one’s own predictions, being receptive
to feedback, and learning from feedback. Of note,
[1Code: https://github.com/dinobby/ReConcile](https://github.com/dinobby/ReConcile)
self-reflection is an introspective process that allows the model to improve its outputs by generating feedback from the model itself (Madaan et al.,
2023; Shinn et al., 2023). However, self-reflection
suffers from Degeneration-of-Thought – when the
model is overly confident in its answer, it is unable to generate novel thoughts even after multiple
rounds of feedback (Liang et al., 2023).
To promote more diverse thoughts, past work
has drawn inspiration from the concept of society
_of minds in multi-agent systems (Minsky, 1988;_
Zhuge et al., 2023). It highlights the importance of
communication and collaboration between multiple
agents for complex decision-making tasks. While
such collaborative frameworks like multi-agent debate (Liang et al., 2023; Du et al., 2023) increase
the reasoning diversity through the process of a
debate, multiple agents have typically been limited to different instances of the same underlying
model like ChatGPT (OpenAI, 2022).[2] This results
in an inherent model bias, a restricted knowledge
scope, and a lack of external feedback from other
models due to identical pre-training data and model
architectures across all agents. In general, when
multiple agents propose solutions to a problem, the
success of such a multi-agent system is fundamentally reliant on (a) the diversity of the solutions,
(b) the ability to estimate each agent’s confidence,
and (c) accordingly, convince other agents (with explanations) to reach a better consensus. This puts
forward the question: if multiple diverse LLMs
collaboratively solve a task, are they capable of
discussing their solutions with each other to reach
a better consensus?
We aim to solve reasoning problems by learning
from diverse insights and external feedback, originating from agents that belong to different model
2In this work, we refer to multi-agent as multiple instances
of the same underlying model (e.g., ChatGPT), whereas multimodel model-agent refers to different models (e.g., ChatGPT,
Bard and Claude2) as agents.
7066
-----
**_ReConcile_**
**_(Group-Discuss-and-Convince)_**
**_Self-Refine_**
**_Multi-Agent_**
**_Debate (MAD)_**
**_yes_** **_no_**
**_MAD+Judge_**
**_yes_** **_no_**
**_no_**
**_no_**
**Convincing Samples**
**Question (Q):**
Is an ammonia fighting cleaner
good for pet owners?
**Human Explanation (Exp):**
Ammonia is a component in pet
urine. It has an unpleasant odor.
**Gold Answer: Yes**
_with 40%_
_confidence_
**_ChatGPT_**
**_Bard_**
**_Claude_**
**_Yes, with 95%_**
_confidence_
**_No, with 50%_**
_confidence_
**Confidence Estimation**
_(40% no, 50% no,_ _95% yes)_
**_no_**
Figure 1: An illustration of the main differences between RECONCILE and prior works. While most current selfrefine and debating techniques rely on multiple instances of a single model (e.g., ChatGPT), our method incorporates
models from different families (e.g., ChatGPT, Bard, and Claude2). Our approach also emphasizes critical elements
of effective discussion, including convincing another agent to improve their answers and incorporating the estimated
confidence of all agents. For illustrative simplicity, we depict only one agent contemplating how to convince the
other two agents.
|Col1|𝑓(𝑝)|
|---|---|
|||
families. Collaborative processes such as brainstorming, group meetings, and discussions play a
pivotal role in reaching a consensus and arriving
at more refined solutions to complex problems (Li
et al., 2022b). Effective discussion also entails
the selection of stances, voting, convincing, exchange of information, and a diversity of opinions.
Thus, we propose RECONCILE, a framework of
round-table conference for obtaining better consensus among diverse LLM agents. RECONCILE
consists of multiple discussion rounds between diverse LLM agents who try to convince[3] each other
to either rectify their answers or become more con_fident of their initial correct answers (see Fig. 1 for_
a broad overview).
Given a reasoning problem, RECONCILE begins
with each agent first generating an answer, its uncertainty, and a corresponding explanation (as a Chainof-Thought (Wei et al., 2022)) for the answer. Then
all agents enter a multi-round discussion phase.
Each discussion round consists of all agents generating a revised explanation and answer based on all
other agents’ explanations and answers from the
previous round. In particular, RECONCILE initiates
a discussion by designing a discussion prompt for
each agent, that lets it condition on (1) grouped
answers from all agents, (2) corresponding explanations generated in the previous round, and (3)
demonstrations of answer-rectifying human expla
3When we say that an agent tries to convince another agent,
we mean that it learns (based on corrective explanations) to
defend or argue for its stance while still being receptive to the
other agent’s argument.
nations for convincing other agents. We leverageQ
them in an in-context learning framework to teachExp
**Q** **( , )** **X**
**Q**
**+** **( , )** **V**
**Exp**
models to generate their own convincing explanations (see Fig. 3). Even in cases where an agent
initially offers an incorrect answer and explanation,
it can consider another agent’s convincing explanation and amend its response accordingly. In each
discussion round, we estimate an agent’s uncertainty via a confidence-estimation prompt (Tian
et al., 2023; Xiong et al., 2023a). Once all agents
converge to the same answer (i.e., a consensus has
been reached), we employ these confidences to
compute a weighted vote as the team answer.
We primarily develop RECONCILE with three
state-of-the-art LLMs: ChatGPT (OpenAI, 2022),
Bard (Anil et al., 2023), and Claude2 (Anthropic,
2023). We also demonstrate the flexibility of
RECONCILE with variants that employ a much
stronger GPT-4 (OpenAI, 2023), an open-source
LLaMA-2-70B (Touvron et al., 2023), or a domainspecific DeepSeekMATH (Shao et al., 2024) model
as an agent. Across seven benchmarks spanning
commonsense reasoning, mathematical reasoning,
logical reasoning, and Natural Language Inference (NLI), RECONCILE outperforms prior singleagent (e.g., Self-Refine (Madaan et al., 2023) and
Self-consistency (Wang et al., 2023b)) and multiagent baselines (Debate (Du et al., 2023) and
Judge (Liang et al., 2023)) that are built on top
of the same underlying models. For example, REC
ONCILE, (1) on a date understanding task, outperforms the leading multi-agent debate baseline by
7067
-----
Refine Ensemble Multi-Agent Multi-Model Convincingness Confidence
Self-Refine (SR)
Self-Consistency (SC)
SR + SC
Debate -
Judge
RECONCILE (Ours)
Table 1: Summary of the main differences between prior work, including Self-Refine (SR, Madaan et al. (2023));
Self-Consistency (SC, Wang et al. (2023b)); Debate (Du et al., 2023) and Judge (Liang et al., 2023). means
supported and means not supported. RECONCILE supports multi-model multi-agent discussion with confidence
estimation and convincingness. * = Du et al. (2023) primarily experiment with multiple instances of ChatGPT as
different agents and conduct an initial investigation with 20 samples using ChatGPT and Bard as the two agents.
11.4%, (2) on StrategyQA, also outperforms GPT4 by 3.4%, and (3) on MATH, outperforms both
GPT-4 and a specialized DeepSeekMath model
by 8%. Moreover, detailed analyses of the individual components of RECONCILE demonstrate
that leveraging diverse LLM agents leads to maximum gains, and we further validate their higher
response diversity via a BERTScore-based diversity metric (Zhang et al., 2019). Finally, we show
that RECONCILE not only leads to better team performance but also enables each agent to improve
individually via the discussion process.
In summary, our primary contributions are:
- We propose RECONCILE, a reasoning framework
involving diverse Large Language Models in a
Round Table Conference.
- We conduct extensive experiments on seven
benchmarks to show that RECONCILE outperforms strong baselines (including GPT-4 on some
benchmarks) and also generalizes to different
combinations of agents.
- We study the role of diversity, confidence estimation, and an agent’s ability to convince others (by
learning from corrective explanations) in multiagent discussion systems.
**2** **Related Work**
**Reasoning with LLMs. Progress in LLMs has**
led to the development of advanced prompting
and fine-tuning techniques for solving reasoning problems. Representative methods include
Chain-of-Thought (CoT) (Kojima et al., 2022;
Wei et al., 2022; Wang et al., 2023a) and Treeof-Thought prompting (Yao et al., 2023a), selfconsistency (Wang et al., 2023b), meta-reasoning
over multiple paths (Yoran et al., 2023), use
of scratchpads (Nye et al., 2021), training veri
fiers (Cobbe et al., 2021), self-collaboration (Wang
et al., 2023c; Schick et al., 2022; Li et al., 2023a;
Feng et al., 2024), self-reflection (Shinn et al.,
2023; Madaan et al., 2023; Wang and Zhao, 2023;
Yao et al., 2023b), improved math reasoning (Yue
et al., 2023; Luo et al., 2023) and fine-tuning
via bootstrapping models (Zelikman et al., 2022;
Lewkowycz et al., 2022; Li et al., 2023b). Eliciting
reasoning from a single agent, while promising, is
fundamentally limited by a lack of diverse insights.
**Reasoning in Multi-Agent Systems. A recent line**
of work has explored student-teacher frameworks
with the goal of distilling reasoning capabilities
from a stronger teacher to a weaker student (Magister et al., 2023; Fu et al., 2023; Ho et al., 2023;
Saha et al., 2023; Mukherjee et al., 2023). As opposed to a teacher teaching weaker agents, we seek
to develop a multi-agent system where different
LLM agents have their unique strengths and try
to collaboratively improve performance by reaching a better consensus. Notable prior works include multi-agent debating frameworks (Du et al.,
2023; Liang et al., 2023; Chan et al., 2023; Xiong
et al., 2023a; Khan et al., 2024) but such efforts
are still largely limited to multiple instances of
the same underlying language model. We argue
that relying on a single model limits the potential
of complementary benefits from different model
families and the advantage of ensemble learning.
Moreover, estimating the confidence of each agent
and being able to defend or improve one’s opinions become more prominent components in such
multi-model multi-agent systems because of the individual differences. Overall, Table 1 summarizes
RECONCILE’s key differences compared to prior
single-agent and multi-agent reasoning methods.
**Ensembling Large Pretrained Models. Large**
pre-trained models, by virtue of being trained on
7068
-----
**Phase3:**
**Team Answer Generation**
**Question: Is August a winter month for part of the world?**
**Previous Response**
(𝒂(𝒓−𝟏)𝒋, 𝒆(𝒓−𝟏)𝒋 )
**Confidence Estimation 𝒑(𝒓−𝟏)𝒋**
**95%** **0.8**
**80%** **0.5**
**70%** **0.3**
**Grouping**
**Yes** **No**
**Convincing Samples 𝑪𝒋≠𝒊**
**Question: Can a prime**
number be represented by the
number of days in a week?
**Explanation: There are 7 days**
in a week. 7 is a prime number.
**Phase1: Initial Response Generation** Round 0
**_Initial Prompt_** **_Initial Prompt_** **_Initial Prompt_** **Weighted Vote**
**Yes, parts of the world in** **Yes, in countries like** **No, August is a summer** **Yes x 0.1 Yes x 0.3 No x 0.3**
Southern Hemisphere… Australia… month… Ans: Yes
I am 60% confident ... I am 70% confident … I am 70% confident ... yes no
**Phase2: Multi-Round Discussion** Round 1
𝑫(𝟏)𝟏 𝑫(𝟏)𝟐 𝑫(𝟏)𝟑 **Yes x 0.5Weighted Vote Yes x 0.5 No x 0.1**
**Yes, August in Southern** **Yes, after reviewing, I** **No, in Northern** Ans: Yes
Hemisphere is actually… still think the answer… Hemisphere… yes no
I am 80% confident… I am 80% confident … I am 50% confident ...
Round 2
𝑫(𝟐)𝟏 𝑫(𝟐)𝟐 𝑫(𝟐)𝟑 **Weighted Vote**
**Yes x 0.8** **Yes x 1.0** **Yes x 0.3**
**Yes, August is a winter** **Yes, I agree that August** **Yes, I think in Southern**
month for part of the…I am 95% confident… is a winter month… I am 100% confident … Hemisphere…I am 60% confident ... yes no Ans: Yes
Figure 2: Overview of RECONCILE with ChatGPT, Bard, and Claude2, consisting of three phases: (1) Initial
Response Generation: Each agent generates an initial answer and explanation. (2) Multi-Round Discussion: Each
model is presented with a discussion prompt (as illustrated on the left) and subsequently generates an updated
answer and explanation. (3) Team answer generation: The team answer is determined by a weighted vote at the
end of each round. The left part of the figure shows the discussion prompt for an agent, consisting of (a) grouped
answers and explanations of all agents from the previous round, (b) estimated confidence, and (c) demonstrations of
convincing samples.
of convincing samples Ci = _c[(]j[i][)]_ _j=1[. Each con-]_
_{_ _[}][k]_
vincing sample c[(]j[i][)] = (qj[(][i][)][, a]j[(][i][)][, e]j[(][i][)][)][ for an agent]
_Ai is an instance of a question qj[(][i][)][, gold answer]_
_a[(]j[i][)][, and a human explanation][ e]j[(][i][)]_ that helps rectify
an agent’s initial incorrect answer (see more details
in Sec 4). The objective of RECONCILE is to
improve the team performance on a given task by
holding multiple rounds of discussion between the
agents, quantifying the uncertainty associated with
each agent, and convincing other agents to reach
a better consensus. Note that convincing samples
serve as an additional performance enhancer;
even when the dataset lacks human explanations,
our method can still yield performance gains
independent of this (more details below).
**4** **RECONCILE: A Collaborative**
**Discussion Framework**
RECONCILE operates in three phases: initial response generation, multi-round discussion, and
team answer generation. The overview of our
method is demonstrated in Fig. 2 and Algorithm 1.
**Phase 1: Initial Response Generation.** REC
ONCILE operates with each agent Ai initially generating an answer a[(0)]i [, an explanation][ e]i[(0)][, and an]
associated confidence p[(0)]i [0, 1] for the gener_∈_
ated answer. Each agent conditions on a zero-shot
different data and with architectural variations,
exhibit distinct capabilities. This has led to the development of ensembles (Sagi and Rokach, 2018)
in multimodal learning (Zeng et al., 2023; Li et al.,
2022a). Mixture of Experts, a popular ensemble
learning technique, trains multiple smaller specialized models to improve robustness and overall
accuracy (Jacobs et al., 1991; Shazeer et al., 2017;
Du et al., 2022). Specific to language models,
Self-Consistency (Wang et al., 2023b) generates
diverse reasoning paths using CoT and chooses the
most consistent answer as the final output. Jiang
et al. (2023) propose LLM-Blender, a method to
rank and fuse generations from different models.
Different from these, we study communication
via explanations between distinct LLM agents and
their ability to discuss and convince each other in
order to improve collective reasoning.
**3** **Problem Setup**
We assume that we are given a test problem Q
and there are n agents A = {Ai}i[n]=1 [participating]
in a round table discussion. Each agent is a
distinct LLM, potentially trained with different
pre-training data and model architectures. All
agents are capable of generating an answer and a
corresponding Chain-of-Thought explanation (Wei
et al., 2022) for the test problem. For each agent
_Ai, we utilize a small number of k demonstrations_
7069
-----
for all other agents Aj≠ _i.[4]_ When an agent tries to
reassess its reasoning in light of the reasoning provided by other agents, we hypothesize that it should
benefit from conditioning on demonstrations that
can convince other agents. In order to obtain such
convincing samples for an agent Aj, we select a
small number of samples (4 in our experiments) for
which the agent’s initial answer is wrong but conditioning on the corresponding human explanation,
rectifies the answer (see Fig. 3). For datasets that
_do not come with human explanations (e.g., the_
date understanding task in our experiments), we develop RECONCILE without using any convincing
sample in the discussion prompt and still obtain
large improvements (see §6.2 for details).
We now define the discussion prompt _i_ =
_D[(][r][)]_
_a[(]j[r][−][1)], e[(]j[r][−][1)], p[(]j[r][−][1)], Cj=i_ _j=1_ [for each agent]
_{_ _̸_ _}[n]_
_Ai in round r, based on the above three compo-_
nents. The agent conditions on it to generate an updated answer a[(]i[r][)][, explanation][ e]i[(][r][)][, and confidence]
_p[(]i[r][)][, to be used in the next round. Demonstrations]_
of convincing explanations enable the agent to generate explanations that are more likely to convince
other agents to reach a better consensus.
**Phase 3: Team Answer Generation.** RECON
CILE continues the discussion for a maximum of
_R rounds or terminates it as soon as a consensus_
is reached (i.e., all agents agree on the same answer). At the end of any round r, RECONCILE
generates the team answer ˆa[(][r][)] for that round using
a weighted voting scheme (see the right side of
Fig. 2). In particular, we recalibrate each agent’s
confidence using a function f (·) and then use these
as weights to compute the team answer, as follows:
**1** **_Convincing Sample (CS) Collection_** **CS of ChatGPT**
**Question:**
Is an ammonia fighting cleaner
good for pet owners?
_Question_ **Human Explanation:**
Ammonia is a component in pet
ChatGPT _Human_ urine. It has an unpleasant odor.
_explanation_ **Gold Answer: Yes**
_Question_ **CS of Bard**
**Question:…**
Bard _explanationHuman_ **Human Explanation:Gold Answer: Yes / No …**
_Question_ **CS of Claude**
**Question:…**
Claude2 _explanationHuman_ **Human Explanation:…**
**Gold Answer: Yes / No**
**2** **_In-context Learning for each model_**
**2**
**CS of Bard**
**CS of Claude**
**CS of ChatGPT**
**CS of Claude**
**CS of ChatGPT**
**CS of Bard**
Figure 3: Method for choosing convincing samples for
each agent. A convincing sample for ChatGPT consists
of a question, a gold answer, and a ‘corrective’ human
explanation that can rectify its initial incorrect answer.
Then Bard and Claude2 use it in-context during discussion to convince ChatGPT.
prompt that instructs it to reason about the problem ‘step-by-step’. See ‘Phase 1’ in Fig. 2 and the
prompt is shown in Fig. 5 in Appendix A.2.
**Phase 2: Multi-round Discussion.** RECONCILE
then enters a discussion phase, consisting of R
rounds (see ‘Phase 2’ in Fig. 2). In discussion
round r, for each agent Ai, RECONCILE develops a discussion prompt _i_ (as shown in Fig. 5),
_D[(][r][)]_
consisting of the following three components.
**(a) Grouped responses of all agents from the**
**previous round.** _i_ consists of the answers
_D[(][r][)]_
_{a[(]j[r][−][1)]}j[n]=1_ [and explanations][ {][e]j[(][r][−][1)]}j[n]=1 [of all]
agents from round (r − 1). To foster better discussions, RECONCILE summarizes this information
by grouping the answers into distinct categories
and appends all plausible explanations for each answer, as shown in our discussion prompt (Appendix
Fig. 5) and on the left side of Fig. 2.
**(b) Confidence associated with the answers. All**
agents are not equally confident in their answers.
Hence, an effective discussion should also consider
each agent’s uncertainty. For all black-box models, we estimate its confidence p[(]i[r][)] in round r by
directly prompting the agent to verbally quantify
its uncertainty, which in past work has been shown
to be effective (Xiong et al., 2023b). See Appendix
Fig. 5 for the usage of confidence in discussion.
**(c) Convincing samples from all other agents. Fi-**
nally, the prompt contains convincing samples Cj
_aˆ[(][r][)]_ = arg max
_f_ (p[(]i[r][)][)][1][(ˆ]a[(]i[r][)] = a)
where a is a distinct answer generated by any of the
agents, p[(]i[r][)] is the original confidence of agent Ai
in round r and f (p[(]i[r][)][)][ is the corresponding recali-]
brated confidence. While an unweighted majority
vote and uncalibrated confidence-weighted vote
also work well in practice, we use the calibrated
weighted vote because it not only obtains slightly
better results but the same recalibration strategy
also works out-of-the-box for all seven tasks that
4We did not include an agent’s own convincing samples
in the prompt because an agent is expected to specifically
convince other agents. We also verify this empirically – additionally including self-convincing samples in the prompt leads
to comparable performance.
7070
-----
we experiment with (see Appendix B.5 for more
details of our recalibration function f (·)).
**5** **Experimental Setup**
**Agents in RECONCILE. We primarily implement**
RECONCILE with ChatGPT, Bard, and Claude2
as the three agents, engaging them in up to three
rounds of discussion. Later in §6.1, we also show
the generalizability of our RECONCILE framework
with different choices of agents, including APIbased (GPT-4), open-source (LLaMA-2-70B), and
domain-specific (DeepSeekMath) agents.
**Datasets.** We evaluate RECONCILE on seven
benchmarks, including two commonsense, three
math, one logical reasoning, and one NLI task.
These are: (1) StrategyQA (Geva et al., 2021),
(2) CommonsenseQA (CSQA; (Aggarwal et al.,
2021; Talmor et al., 2019)), (3) GSM8K (Cobbe
et al., 2021), (4) AQuA (Ling et al., 2017), (5)
MATH (Hendrycks et al., 2021), (6) Date Understanding (BIG-bench collaboration, 2023), and (7)
ANLI (Nie et al., 2020).
**Baselines.** We compare RECONCILE to prior
works in three categories:
- Vanilla single-agent methods. In this category,
we experiment with (1) zero-shot CoT prompting (Kojima et al., 2022) with one of the interacting LLMs, and (2) eight-shot CoT with Claude2
where the number eight matches the number of
convincing samples used in RECONCILE.
- Advanced single-agent methods. Next, we compare with (1) Self-Refine (SR) that iteratively
generates feedback and refines the output leveraging the model itself (Madaan et al., 2023), (2)
Self-Consistency (SC) that samples multiple reasoning paths and generates the most consistent
answer (Wang et al., 2023b), and (3) their combination, SR+SC, that first conducts multiple iterations of refinement, followed by a majority
vote. Note that in RECONCILE, the number of
LLM calls per instance can vary between 3, 6,
and 9 based on the number of discussion rounds.
Hence, for a fair comparison, we implement SC
with the same average number of LLM calls as in
RECONCILE. Later in Appendix B.3, we show
that RECONCILE even outperforms 9-way SC
(that equates to the worst-case LLM calls in REC
ONCILE).
- Multi-agent methods with a single backbone
**model. Our final baselines are two multi-agent**
debating methods: a multi-agent debate between
multiple ChatGPT instances (Du et al., 2023) and
a debate with judge method (Liang et al., 2023).
These methods use multiple instances of the same
underlying model (ChatGPT) as different agents.
**Implementation Details. Owing to the cost associ-**
ated with API-based models and the limit imposed
on the number of API calls, we follow many prior
works (Du et al., 2023; Bian et al., 2023; Besta
et al., 2023; Yao et al., 2023a) to experiment with a
subset of 100 samples (from the validation set for
StrategyQA and the test set for all other datasets).
Later in Appendix B.1, we also experiment on the
full test sets of StrategyQA and Date understanding and find similar trends. We report accuracy and
its standard deviation. For each experiment, we
conduct at least three runs on the same test samples with the same prompts, primarily accounting
for the variance caused by the decoding strategy.
Other implementation details can be found in Appendix A.1.
**6** **Results**
**6.1** **Main Results**
**RECONCILE outperforms single-agent and**
**multi-agent baselines.** We first evaluate the overall reasoning capabilities of RECONCILE in Table 2
with ChatGPT, Bard, and Claude2 as the three
agents. For fair comparisons, all iterative methods go through 3 rounds of iteration and all singlemodel multi-agent baselines are implemented with
three agents with a sufficiently high temperature
of 1.0 for maximizing diversity. Across all five
datasets, RECONCILE outperforms all single-agent
and multi-agent baselines that are built on top of the
same models (see last row). Notably, without using
GPT-4 as an agent, our method outperforms GPT-4
on commonsense tasks like StrategyQA and CSQA
and obtains comparable performance to GPT-4 on
most other tasks. GPT-4’s especially strong results on GSM8K could be attributed in part to
the inclusion of some of GSM8K’s training samples in GPT-4’s pre-training data (OpenAI, 2023).
While multi-agent debate with ChatGPT (Du et al.,
2023) improves results on math benchmarks, debate with multiple Bard or Claude2 instances is
not effective, possibly because the responses (generated from the same model) are not sufficiently
diverse. When they team up with ChatGPT in a
multi-round discussion, RECONCILE outperforms
debate frameworks. It obtains maximum gains of
7071
-----
Method Category Method Agent StrategyQA CSQA GSM8K AQuA Date
Zero-shot CoT GPT-4 _75.6_ 4.7 _73.3_ 0.4 _90.7_ 1.7 _65.7_ 4.6 _89.0_ 2.2
_±_ _±_ _±_ _±_ _±_
Zero-shot CoT ChatGPT 67.3 3.6 66.0 1.8 73.7 3.1 44.7 0.5 67.7 1.2
Vanilla _±_ _±_ _±_ _±_ _±_
Single-agent Zero-shot CoT Bard 69.3±4.4 56.8±2.7 58.7±2.6 33.7±1.2 50.2±2.2
Zero-shot CoT Claude2 73.7 3.1 66.7 2.1 79.3 3.6 60.3 1.2 78.7 2.1
_±_ _±_ _±_ _±_ _±_
Eight-shot CoT Claude2 74.3 0.8 68.3 1.7 84.7 0.9 64.7 1.2 78.7 1.7
_±_ _±_ _±_ _±_ _±_
Self-Refine (SR) ChatGPT 66.7 2.7 68.1 1.8 74.3 2.5 45.3 2.2 66.3 2.1
Advanced _±_ _±_ _±_ _±_ _±_
Self-Consistency (SC) ChatGPT 73.3 0.5 73.0 0.8 82.7 0.5 60.3 1.2 69.3 0.4
Single-agent _±_ _±_ _±_ _±_ _±_
SR + SC ChatGPT 72.2 1.9 71.9 2.1 81.3 1.7 58.3 3.7 68.7 1.2
_±_ _±_ _±_ _±_ _±_
Debate 3 66.7 3.1 62.7 1.2 83.0 2.2 65.3 3.1 68.0 1.6
_×_ _±_ _±_ _±_ _±_ _±_
Single-model Debate 3 65.3 2.5 66.3 2.1 56.3 1.2 29.3 4.2 46.0 2.2
Multi-agent Debate _×3_ 71.3±2.2 68.3±1.7 70.7±4.8 62.7±2.6 75.3±3.3
_×_ _±_ _±_ _±_ _±_ _±_
Debate+Judge 3 69.7 2.1 63.7 2.5 74.3 2.9 57.3 2.1 67.7 0.5
_×_ _±_ _±_ _±_ _±_ _±_
Table 2: Comparison of RECONCILE (using ChatGPT, Bard, Claude2) with vanilla and advanced single-agent
methods and multi-agent debating frameworks. Across all reasoning benchmarks, RECONCILE outperforms all
prior single-agent and multi-agent methods. On commonsense tasks (StrategyQA and CSQA), RECONCILE also
outperforms GPT-4. All results are on a random subset of 100 samples. The agents are GPT-4, ChatGPT,
Method Accuracy
Best Single-agent (zero-shot) 75.6 ( ) 73.7 ( )
Best Multi-agent (Debate) 83.7 ( _×3)_ 71.3 ( _×3)_
RECONCILE **87.7 (**,, ) **78.0 (**,, )
73.7 (
_×_
,
Table 3: Comparison of the best single-agent, best multiagent, and RECONCILE on StrategyQA for a given combination of three agents. RECONCILE flexibly incorporates agents with varying strengths, such as a stronger
model like GPT-4, or an open-source model like
LLaMA2-70B.
11.4% (75.3% → 86.7%) on date understanding
and 7.7% (71.3% → 79.0%) on StrategyQA when
compared to the strongest baseline (multi-agent
debate with Claude2). Improvements in the math
reasoning tasks are relatively moderate, because of
ChatGPT’s initial strong performance. However,
as demonstrated later in Table 4, integrating a specialized math reasoning model into RECONCILE
significantly boosts team performance.
**RECONCILE generalizes to agents of varying**
**strengths.** Next, we vary the agents in RECON
CILE to study its generalization as a multi-agent
framework. In particular, we either include (a)
a stronger GPT-4 model, or (b) an open-source
LLaMA-2-70B-chat model in the discussion. As
shown in Table 3, in both these scenarios, RECON
CILE outperforms the best single-agent and multiagent baselines, notably even outperforming the
zero-shot GPT-4 performance by 12.1% (75.6% →
87.7%) on StrategyQA. This highlights the potential of a stronger agent to also obtain useful external
feedback from comparatively weaker agents.
Method Accuracy
GPT-4 (zero-shot) 44.0 ( )
Best Single-agent (zero-shot) 50.5 ( )
Best Multi-agent (Debate) 48.7 ( _×3)_
RECONCILE **58.3 (**,, )
_×_
Table 4: RECONCILE generalizes to specialized models
like DeepSeekMath and improves on a challenging
mathematical reasoning benchmark, MATH.
**RECONCILE generalizes to domain-specific**
**agents.** So far, we have experimented with REC
ONCILE variants that employed general-purpose
models like ChatGPT as agents. Our next result
in Table 4 shows that even for tasks that require
substantial domain knowledge (e.g., the MATH
benchmark (Hendrycks et al., 2021)), RECON
CILE is flexible enough to utilize and improve
upon specialized, domain-specific models. Recently, Shao et al. (2024) proposed DeepSeekMath,
a 7B model pre-trained on a large number of mathrelated web corpus and improving over GPT-4.
Notably, RECONCILE with GPT-4, Claude2, and
DeepSeekMath as agents significantly outperforms
zero-shot DeepSeekMath and GPT4-based Debate
by 7.8% and 9.6% respectively. In summary, REC
ONCILE shows consistent improvements across a
wide range of agent combinations (involving APIbased, open-source, and domain-specific models).
**RECONCILE also improves Natural Language**
**Inference.** While all our previous results were
with reasoning tasks, we also demonstrate REC
ONCILE’s effectiveness on ANLI (Nie et al., 2020),
7072
-----
Metric Method Accuracy D (A1, A2) D (A1, A3) D (A2, A3) D (A1, A2, A3)
RECONCILE ( Paraphrased) 72.2 0.9364 0.9376 0.9453 0.9398
BERTScore RECONCILE ( _×3)_ 72.2 0.9077 0.9181 0.9049 0.9102
RECONCILE (,, ) **79.0** **0.8891** **0.8833** **0.8493** **0.8739**
Table 5: Comparison of diversity between (a) paraphrased responses (first row) and (b) responses from multiple
instances of the same ChatGPT model (second row). RECONCILE with a multi-model component also leads to
higher accuracy. Responses from different models in RECONCILE (last row) are most diverse (i.e., less similar).
Method Accuracy
Best Single-agent (zero-shot) 51.3 ( )
Best Multi-agent (Debate) 48.3 ( _×3)_
RECONCILE **57.7 (**,, )
_×_
Table 6: RECONCILE improves a challenging NLI
benchmark (ANLI), outperforming Debate by 9.4%.
a challenging Natural Language Inference benchmark. Table 6 shows that RECONCILE on ANLI
outperforms Debate by a significant 9.4%, pointing
to its widespread applicability.
**6.2** **Ablations and Analysis of RECONCILE**
**Each component of RECONCILE improves rea-**
**soning. In Table 7, we evaluate individual compo-**
nents of RECONCILE on StrategyQA. In particular,
we compare four variants: (1) w/o Multiple Mod**els: We use ChatGPT as the backbone for all three**
agents, (2) w/o Grouping: We simply concatenate
the responses from different agents without grouping their answers, (3) w/o Convincingness: We
remove convincing samples from all prompts, and
(4) w/o Confidence Estimation: We do not use
any confidence estimates during the discussion and
compute majority vote as the team answer. We
show that each component has a positive impact on
RECONCILE with varying capacities. The effect of
different models as agents is particularly significant
and we observe a 6.8% improvement compared to
only using ChatGPT as all three agents. This reinforces our hypothesis (and further verified below
in ‘Diversity Analysis’) that diverse LLMs have
complementary strengths and when put together
in a round table discussion, they can learn from
diverse external feedback from other agents and
refine their responses to reach a better consensus.
Notably, convincing samples lead to a 4.5% improvement in accuracy. In Appendix B.2, we study
the role of convincing samples to show that (1) they
also improve other interaction frameworks, and (2)
even in the absence of such examples, RECONCILE
outperforms debate baselines.
Method Accuracy
RECONCILE **79.0±1.6**
w/o Multiple Models 72.2 2.1
_±_
w/o Grouping 76.7 2.5
_±_
w/o Convincingness 74.5 1.7
_±_
w/o Conf Estimation 77.7 1.3
_±_
Table 7: Ablations of RECONCILE on StrategyQA.
**Different models enhance response diversity.**
As was shown in Table 7, RECONCILE obtains the
most improvements via its multi-model component.
This surpasses RECONCILE with multiple ChatGPT instances, even when the generations sampled
from these instances are encouraged to exhibit high
diversity with a sufficiently high temperature. To
further validate the importance of having multiple
models and the diversity brought about by them,
we develop a diversity metric. We hypothesize
that if explanations from different models are indeed more diverse than those generated from multiple instances of the same model (e.g., in Multiagent Debate), then our diversity metric should
capture that. With that goal, we define diversity between multiple agents as the summation of the pairwise diversity between agents: D(A1, A2, A3) =
_D(A1, A2) + D(A1, A3) + D(A2, A3), where A1,_
_A2, and A3 are the three agents’ initial responses_
(either belonging to the same underlying model or
different models). We then measure pairwise diversity by computing the cosine similarity between
the response embeddings with BERTScore (Zhang
et al., 2019). Note that lower similarity scores will
mean greater diversity. With the diversity metric
defined, we compute this metric for three variants:
(a) paraphrased responses of a single ChatGPT to
serve as a baseline, (b) responses from RECON
CILE using three instances of a single ChatGPT
model, and (c) responses from RECONCILE with
ChatGPT, Bard, and Claude2 as agents. In Table 5,
we show that responses from different models exhibit the highest diversity (yielding the lowest similarity score of 0.8739) and also the highest accuracy (79.0%), followed by the single-model variant
7073
-----
(a)
(b)
(c)
ReConcile Debate (ChatGPT) Debate (Bard) ReConcile Debate (ChatGPT) Debate (Bard) w/o Discussion w/ Discussion
82
80 100 0.9
78 95 0.85
76
74 90 0.8
727068 8580 0.750.7
66 75
Accuracy 64 70 Accuracy 0.65
6260 Consensus (%) 65 0.6
58 60 0.55
56 55
0 1 2 3 4 0 1 2 3 4 0 20 40 60 80 100
Discussion Round Discussion Round Consensus (%)
Figure 4: RECONCILE achieves better and faster consensus. (a) Comparison of RECONCILE with Debate baselines
showing the accuracy after each round. (b) Fraction of samples for which a consensus is reached after each round.
(c) Accuracy as a function of consensus.
of samples for which consensus has been reached;
and in Fig. 4(c), we analyze accuracy as a function of consensus. From the first plot, we make
two important observations: (1) RECONCILE improves accuracy for two rounds, following which
the accuracy saturates, (2) Compared to the debate
baselines, RECONCILE is not only superior after
every round but also peaks at a highest accuracy
of 79.0% (vs 71.3% for the baselines). Next, from
Fig. 4(b), our observations are also two-fold: (1)
In the initial rounds (0 and 1), RECONCILE’s consensus percentage is lower because the discussion
takes place between diverse LLMs. Diverse agents
lead to more differences in opinions initially. (2)
However, as the discussion proceeds, RECONCILE
establishes consensus for all samples by round 3,
while in the baseline, 13% of the samples do not
converge even after round 4. Finally, Fig. 4(c)
shows that for the samples that enter the discussion phase (i.e., their initial answers did not have a
consensus), accuracy is positively correlated with
consensus. In other words, as a greater number of
samples reach a consensus, accuracy proportionally
improves. In summary, RECONCILE reaches faster
and better consensus compared to baselines.
**7** **Conclusion**
We presented RECONCILE, a multi-agent framework for reasoning with diverse LLM agents, engaged in multiple rounds of discussion via confidence estimation and generating explanations that
can correctively convince other agents. RECON
CILE demonstrated strong results on multiple reasoning benchmarks, consistently outperforming
prior single-agent and multi-agent baselines and
even improving upon GPT-4 on some benchmarks.
Round ChatGPT Bard Claude2 Team
0 71.0 2.1 71.7 0.9 73.7 1.7 74.3 1.2
_±_ _±_ _±_ _±_
1 71.3 0.9 77.7 1.2 75.3 0.8 77.0 0.9
_±_ _±_ _±_ _±_
2 76.7±0.8 **77.3±1.4** **77.7±0.9** **79.0±0.5**
3 **77.0±0.9** 76.7±0.8 77.0±1.2 78.7±1.2
Table 8: The round-wise accuracy of ChatGPT,
Bard, and Claude2 and their team performance (using
weighted vote) on StrategyQA.
(with a similarity score of 0.9102) and the paraphrased variant (with a similarity score of 0.9398).
Thus, the higher diversity of (multi-model) REC
ONCILE means that agents have access to alternate
solutions and external feedback, leading to better discussion and reasoning accuracy. We also
present a case study in Appendix C.5 to illustrate
that the debate baseline sometimes struggles with
echo chambers, stemming from a lack of external
feedback, supporting the need for external feedback
for improving LLMs (Huang et al., 2023).
**RECONCILE improves all agents individually.**
We showed that the team performance of the agents
improves through discussion. Next, in Table 8, we
also present the accuracy of each agent after every
round, as well as the overall team accuracy for
StrategyQA. Evidently, the individual performance
of each agent also improves alongside the team’s
performance.
**RECONCILE Reaches Faster and Better Consen-**
**sus.** RECONCILE terminates the discussion when
a consensus is reached. More discussion rounds
are costlier due to the increased API calls. Hence,
achieving faster consensus while maintaining comparable accuracy gains is more efficient. To study
this, in Fig. 4(a), we plot the accuracy trends after each round; in Fig. 4(b), we plot the fraction
7074
-----
**Limitations**
For the API-based models used in RECONCILE,
we note that we lack complete knowledge of the
data that these models have been exposed to, and
their scales in terms of parameters. Moreover, due
to the API access, we do not possess complete control over their behavior. Depending on API-based
models also necessitates the need to prompt these
models to estimate their confidence. While this
approach proves effective as evidenced by our results, we note that these estimates remain post-hoc
in nature. Nevertheless, it is worth highlighting
that these limitations could potentially be mitigated
in the future should more open-sourced models
emerge and demonstrate robust capabilities in adhering to long instructions.
**Acknowledgments**
We thank Peter Hase and Elias Stengel-Eskin for
useful feedback and suggestions regarding experiments. This work was supported by NSF-CAREER
Award 1846185, NSF-AI Engage Institute DRL2112635, DARPA MCS Grant N66001-19-2-4031,
Accelerate Foundation Models Research program,
and a Google PhD Fellowship. The views contained in this article are those of the authors and
not of the funding agency.
**References**
Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet
Agrawal, Dinesh Khandelwal, Parag Singla, and Di[nesh Garg. 2021. Explanations for commonsenseqa:](https://aclanthology.org/2021.acl-long.238)
[New dataset and models. In Proceedings of the 59th](https://aclanthology.org/2021.acl-long.238)
_Annual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Confer-_
_ence on Natural Language Processing (Volume 1:_
_Long Papers), pages 3050–3065._
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, Eric Chu, Jonathan H. Clark, Laurent El
Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin
Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao,
Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez
Abrego, Junwhan Ahn, Jacob Austin, Paul Barham,
Jan Botha, James Bradbury, Siddhartha Brahma,
Kevin Brooks, Michele Catasta, Yong Cheng, Colin
Cherry, Christopher A. Choquette-Choo, Aakanksha
Chowdhery, Clément Crepy, Shachi Dave, Mostafa
Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz,
Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu
Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur
Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua
Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun,
Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li,
Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu,
Frederick Liu, Marcello Maggioni, Aroma Mahendru,
Joshua Maynez, Vedant Misra, Maysam Moussalem,
Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek,
Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif,
Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee
Shelby, Ambrose Slone, Daniel Smilkov, David R.
So, Daniel Sohn, Simon Tokumine, Dasha Valter,
Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang,
Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting
Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven
Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav
[Petrov, and Yonghui Wu. 2023. Palm 2 technical](http://arxiv.org/abs/2305.10403)
[report.](http://arxiv.org/abs/2305.10403)
[Anthropic. 2023. model card and evaluations for claude](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf)
[models.](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf)
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz
Lehmann, Michal Podstawski, Hubert Niewiadom[ski, Piotr Nyczyk, and Torsten Hoefler. 2023. Graph](http://arxiv.org/abs/2308.09687)
[of thoughts: Solving elaborate problems with large](http://arxiv.org/abs/2308.09687)
[language models. arXiv preprint arXiv:2308.09687.](http://arxiv.org/abs/2308.09687)
Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie
[Lu, and Ben He. 2023. Chatgpt is a knowledgeable](http://arxiv.org/abs/2303.16421)
[but inexperienced solver: An investigation of com-](http://arxiv.org/abs/2303.16421)
[monsense problem in large language models. arXiv](http://arxiv.org/abs/2303.16421)
_preprint arXiv:2303.16421._
[BIG-bench collaboration. 2023. Beyond the imitation](https://openreview.net/forum?id=uyTL5Bvosj)
[game: Quantifying and extrapolating the capabili-](https://openreview.net/forum?id=uyTL5Bvosj)
[ties of language models. Transactions on Machine](https://openreview.net/forum?id=uyTL5Bvosj)
_Learning Research._
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu,
Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan
[Liu. 2023. Chateval: Towards better llm-based eval-](https://arxiv.org/abs/2308.07201)
[uators through multi-agent debate. arXiv preprint](https://arxiv.org/abs/2308.07201)
_arXiv:2308.07201._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021. Training verifiers to solve math](https://arxiv.org/abs/2110.14168)
[word problems. arXiv preprint arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
Nan Du, Yanping Huang, Andrew M Dai, Simon Tong,
Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun,
Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. 2022.
[Glam: Efficient scaling of language models with](https://proceedings.mlr.press/v162/du22c/du22c.pdf)
[mixture-of-experts. In International Conference on](https://proceedings.mlr.press/v162/du22c/du22c.pdf)
_Machine Learning, pages 5547–5569. PMLR._
Yilun Du, Shuang Li, Antonio Torralba, Joshua B.
[Tenenbaum, and Igor Mordatch. 2023. Improving](http://arxiv.org/abs/2305.14325)
7075
-----
[factuality and reasoning in language models through](http://arxiv.org/abs/2305.14325)
[multiagent debate.](http://arxiv.org/abs/2305.14325)
Elias Stengel-Eskin and Benjamin Van Durme. 2023.
[Calibrated interpretation: Confidence estimation in](https://arxiv.org/abs/2211.07443)
[semantic parsing. In TACL.](https://arxiv.org/abs/2211.07443)
Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding,
Vidhisha Balachandran, and Yulia Tsvetkov. 2024.
[Don’t hallucinate, abstain: Identifying llm knowl-](http://arxiv.org/abs/2402.00367)
[edge gaps via multi-llm collaboration. arXiv preprint](http://arxiv.org/abs/2402.00367)
_arXiv:2402.00367._
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
[Tushar Khot. 2023. Specializing smaller language](https://arxiv.org/abs/2301.12726)
[models towards multi-step reasoning. In ICML.](https://arxiv.org/abs/2301.12726)
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://arxiv.org/abs/2101.02235)
[use a laptop? a question answering benchmark with](https://arxiv.org/abs/2101.02235)
[implicit reasoning strategies. Transactions of the](https://arxiv.org/abs/2101.02235)
_Association for Computational Linguistics, 9:346–_
361.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Wein[berger. 2017. On calibration of modern neural net-](https://arxiv.org/abs/1706.04599)
[works. In International conference on machine learn-](https://arxiv.org/abs/1706.04599)
_ing, pages 1321–1330. PMLR._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021. Measuring mathematical](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper-round2.pdf)
[problem solving with the math dataset. NeurIPS.](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper-round2.pdf)
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers. In](https://doi.org/10.18653/v1/2023.acl-long.830)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 14852–14882, Toronto, Canada._
Association for Computational Linguistics.
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xiny[ing Song, and Denny Zhou. 2023. Large language](http://arxiv.org/abs/2310.01798)
[models cannot self-correct reasoning yet.](http://arxiv.org/abs/2310.01798) _arXiv_
_preprint arXiv:2310.01798._
Robert A Jacobs, Michael I Jordan, Steven J Nowlan,
[and Geoffrey E Hinton. 1991. Adaptive mixtures of](https://ieeexplore.ieee.org/document/6797059)
[local experts. Neural computation, 3(1):79–87.](https://ieeexplore.ieee.org/document/6797059)
Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. 2023.
[LLM-blender: Ensembling large language models](https://aclanthology.org/2023.acl-long.792)
[with pairwise ranking and generative fusion. In Pro-](https://aclanthology.org/2023.acl-long.792)
_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 1: Long_
_Papers), pages 14165–14178, Toronto, Canada. As-_
sociation for Computational Linguistics.
Akbir Khan, John Hughes, Dan Valentine, Laura
Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward
Grefenstette, Samuel R. Bowman, Tim Rocktäschel,
[and Ethan Perez. 2024. Debating with more per-](http://arxiv.org/abs/2402.06782)
[suasive llms leads to more truthful answers. arXiv](http://arxiv.org/abs/2402.06782)
_preprint arXiv:2402.06782._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://arxiv.org/abs/2205.11916)
[guage models are zero-shot reasoners. Advances in](https://arxiv.org/abs/2205.11916)
_neural information processing systems, 35:22199–_
22213.
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
[Gutman-Solo, et al. 2022. Solving quantitative rea-](https://arxiv.org/abs/2206.14858)
[soning problems with language models. Advances](https://arxiv.org/abs/2206.14858)
_in Neural Information Processing Systems, 35:3843–_
3857.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani
Itani, Dmitrii Khizbullin, and Bernard Ghanem.
[2023a. Camel: Communicative agents for "mind"](http://arxiv.org/abs/2303.17760)
[exploration of large scale language model society.](http://arxiv.org/abs/2303.17760)
_arXiv preprint arXiv:2303.17760._
Shuang Li, Yilun Du, Joshua B Tenenbaum, Antonio
[Torralba, and Igor Mordatch. 2022a. Composing en-](https://arxiv.org/abs/2210.11522)
[sembles of pre-trained models via iterative consensus.](https://arxiv.org/abs/2210.11522)
_International Conference on Learning Representa-_
_tions (ICLR)._
Yanhong Li, Gang Kou, Guangxu Li, and Yi Peng.
[2022b. Consensus reaching process in large-scale](https://doi.org/https://doi.org/10.1016/j.ejor.2022.03.040)
[group decision making based on bounded confidence](https://doi.org/https://doi.org/10.1016/j.ejor.2022.03.040)
[and social network. European Journal of Opera-](https://doi.org/https://doi.org/10.1016/j.ejor.2022.03.040)
_tional Research, 303(2):790–802._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023b. Making](https://doi.org/10.18653/v1/2023.acl-long.291)
[language models better reasoners with step-aware](https://doi.org/10.18653/v1/2023.acl-long.291)
[verifier. In Proceedings of the 61st Annual Meet-](https://doi.org/10.18653/v1/2023.acl-long.291)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 5315–5333, Toronto,_
Canada. Association for Computational Linguistics.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang,
Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and
[Shuming Shi. 2023. Encouraging divergent thinking](http://arxiv.org/abs/2305.19118)
[in large language models through multi-agent debate.](http://arxiv.org/abs/2305.19118)
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://aclanthology.org/P17-1015/)
[tion: Learning to solve and explain algebraic word](https://aclanthology.org/P17-1015/)
[problems. ACL.](https://aclanthology.org/P17-1015/)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](http://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](http://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](http://arxiv.org/abs/2308.09583)
_arXiv preprint arXiv:2308.09583._
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Shashank Gupta, Bodhisattwa Prasad Majumder,
Katherine Hermann, Sean Welleck, Amir Yazdan[bakhsh, and Peter Clark. 2023. Self-refine: Iterative](http://arxiv.org/abs/2303.17651)
[refinement with self-feedback.](http://arxiv.org/abs/2303.17651)
7076
-----
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason. In Pro-](https://doi.org/10.18653/v1/2023.acl-short.151)
_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 2: Short_
_Papers), pages 1773–1781, Toronto, Canada. Associ-_
ation for Computational Linguistics.
Sabrina J. Mielke, Arthur Szlam, Emily Dinan, and Y[Lan Boureau. 2022. Reducing conversational agents’](https://doi.org/10.1162/tacl_a_00494)
[overconfidence through linguistic calibration. Trans-](https://doi.org/10.1162/tacl_a_00494)
_actions of the Association for Computational Linguis-_
_tics, 10:857–872._
Marvin Minsky. 1988. Society Of Mind. Simon and
Schuster.
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed
[Awadallah. 2023. Orca: Progressive learning from](https://arxiv.org/abs/2306.02707)
[complex explanation traces of gpt-4. arXiv preprint](https://arxiv.org/abs/2306.02707)
_arXiv:2306.02707._
Mahdi Pakdaman Naeini, Gregory Cooper, and Milos
[Hauskrecht. 2015. Obtaining well calibrated proba-](https://ojs.aaai.org/index.php/AAAI/article/view/9602)
[bilities using bayesian binning. In Proceedings of the](https://ojs.aaai.org/index.php/AAAI/article/view/9602)
_AAAI conference on artificial intelligence, volume 29._
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal,
[Jason Weston, and Douwe Kiela. 2020. Adversarial](https://doi.org/10.18653/v1/2020.acl-main.441)
[NLI: A new benchmark for natural language under-](https://doi.org/10.18653/v1/2020.acl-main.441)
[standing. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/v1/2020.acl-main.441)
_ing of the Association for Computational Linguistics,_
pages 4885–4901, Online. Association for Computational Linguistics.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
[David Luan, et al. 2021. Show your work: Scratch-](https://arxiv.org/abs/2112.00114)
[pads for intermediate computation with language](https://arxiv.org/abs/2112.00114)
[models. arXiv preprint arXiv:2112.00114.](https://arxiv.org/abs/2112.00114)
[OpenAI. 2022. Chatgpt: Optimizing language models](https://openai.com/blog/chatgpt/)
[for dialogue.](https://openai.com/blog/chatgpt/)
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai,
Meredith Ringel Morris, Percy Liang, and Michael S.
Bernstein. 2023. [Generative agents: Interactive](http://arxiv.org/abs/2304.03442)
[simulacra of human behavior.](http://arxiv.org/abs/2304.03442) _arXiv preprint_
_arXiv:2304.03442._
John Platt et al. 1999. Probabilistic outputs for support
vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers,
10(3):61–74.
[Omer Sagi and Lior Rokach. 2018. Ensemble learning:](https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1249)
[A survey. Wiley Interdisciplinary Reviews: Data](https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1249)
_Mining and Knowledge Discovery, 8(4):e1249._
Swarnadeep Saha, Peter Hase, and Mohit Bansal. 2023.
[Can language models teach weaker agents? teacher](https://arxiv.org/abs/2306.09299)
[explanations improve students via theory of mind. In](https://arxiv.org/abs/2306.09299)
_NeurIPS._
Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio
Petroni, Patrick Lewis, Gautier Izacard, Qingfei You,
Christoforos Nalmpantis, Edouard Grave, and Sebas[tian Riedel. 2022. Peer: A collaborative language](http://arxiv.org/abs/2208.11663)
[model. arXiv preprint arXiv:2208.11663.](http://arxiv.org/abs/2208.11663)
Zhihong Shao, Peiyi Wang, Runxin Xu Qihao Zhu,
Junxiao Song, Mingchuan Zhang, Y.K. Li, Y. Wu,
[and Daya Guo. 2024. Deepseekmath: Pushing the](https://arxiv.org/abs/2402.03300)
[limits of mathematical reasoning in open language](https://arxiv.org/abs/2402.03300)
[models.](https://arxiv.org/abs/2402.03300)
Noam Shazeer, *Azalia Mirhoseini, *Krzysztof
Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,
[and Jeff Dean. 2017. Outrageously large neural net-](https://openreview.net/forum?id=B1ckMDqlg)
[works: The sparsely-gated mixture-of-experts layer.](https://openreview.net/forum?id=B1ckMDqlg)
In International Conference on Learning Representa_tions._
Noah Shinn, Federico Cassano, Beck Labash, Ashwin
Gopinath, Karthik Narasimhan, and Shunyu Yao.
[2023. Reflexion: Language agents with verbal rein-](http://arxiv.org/abs/2303.11366)
[forcement learning.](http://arxiv.org/abs/2303.11366)
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan,
and Thomas L. Griffiths. 2023. [Cognitive ar-](http://arxiv.org/abs/2309.02427)
[chitectures for language agents.](http://arxiv.org/abs/2309.02427) _arXiv preprint_
_arXiv:2309.02427._
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. CommonsenseQA: A ques-](https://doi.org/10.18653/v1/N19-1421)
[tion answering challenge targeting commonsense](https://doi.org/10.18653/v1/N19-1421)
[knowledge. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1421)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4149–4158, Minneapolis, Minnesota. Association for
Computational Linguistics.
Katherine Tian, Eric Mitchell, Allan Zhou, Archit
Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn,
[and Christopher D Manning. 2023. Just ask for cali-](https://arxiv.org/abs/2305.14975)
[bration: Strategies for eliciting calibrated confidence](https://arxiv.org/abs/2305.14975)
[scores from language models fine-tuned with human](https://arxiv.org/abs/2305.14975)
[feedback. arXiv preprint arXiv:2305.14975.](https://arxiv.org/abs/2305.14975)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
7077
-----
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](http://arxiv.org/abs/2307.09288)
[tuned chat models. arXiv preprint arXiv:2307.09288.](http://arxiv.org/abs/2307.09288)
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu,
Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
[2023a. Plan-and-solve prompting: Improving zero-](https://aclanthology.org/2023.acl-long.147)
[shot chain-of-thought reasoning by large language](https://aclanthology.org/2023.acl-long.147)
[models. In Proceedings of the 61st Annual Meet-](https://aclanthology.org/2023.acl-long.147)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 2609–2634, Toronto,_
Canada. Association for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023b. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_The Eleventh International Conference on Learning_
_Representations._
Yuqing Wang and Yun Zhao. 2023. [Metacognitive](https://arxiv.org/pdf/2308.05342.pdf)
[prompting improves understanding in large language](https://arxiv.org/pdf/2308.05342.pdf)
[models. arXiv preprint arXiv:2308.05342.](https://arxiv.org/pdf/2308.05342.pdf)
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu,
[Tao Ge, Furu Wei, and Heng Ji. 2023c. Unleash-](http://arxiv.org/abs/2307.05300)
[ing cognitive synergy in large language models:](http://arxiv.org/abs/2307.05300)
[A task-solving agent through multi-persona self-](http://arxiv.org/abs/2307.05300)
[collaboration. arXiv preprint arXiv:2307.05300.](http://arxiv.org/abs/2307.05300)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
[et al. 2022. Chain-of-thought prompting elicits rea-](https://arxiv.org/abs/2201.11903)
[soning in large language models. Advances in Neural](https://arxiv.org/abs/2201.11903)
_Information Processing Systems, 35:24824–24837._
Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, and Bing
[Qin. 2023a. Examining the inter-consistency of large](http://arxiv.org/abs/2305.11595)
[language models: An in-depth analysis via debate.](http://arxiv.org/abs/2305.11595)
Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie
[Fu, Junxian He, and Bryan Hooi. 2023b. Can llms](https://arxiv.org/abs/2306.13063)
[express their uncertainty? an empirical evaluation](https://arxiv.org/abs/2306.13063)
[of confidence elicitation in llms.](https://arxiv.org/abs/2306.13063) _arXiv preprint_
_arXiv:2306.13063._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
[Narasimhan. 2023a. Tree of thoughts: Deliberate](http://arxiv.org/abs/2305.10601)
[problem solving with large language models. arXiv](http://arxiv.org/abs/2305.10601)
_preprint arXiv:2305.10601._
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2023b.
[React: Synergizing reasoning and acting in language](http://arxiv.org/abs/2210.03629)
[models. arXiv preprint arXiv:2210.03629.](http://arxiv.org/abs/2210.03629)
Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, and
[Weiqiang Jia. 2023. Cognitive mirage: A review](http://arxiv.org/abs/2309.06794)
[of hallucinations in large language models. arXiv](http://arxiv.org/abs/2309.06794)
_preprint arXiv:2309.06794._
Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel
Deutch, and Jonathan Berant. 2023. [Answering](https://arxiv.org/abs/2304.13007)
[questions by meta-reasoning over multiple chains](https://arxiv.org/abs/2304.13007)
[of thought. arXiv preprint arXiv:2304.13007.](https://arxiv.org/abs/2304.13007)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
[2023. Mammoth: Building math generalist models](http://arxiv.org/abs/2309.05653)
[through hybrid instruction tuning. arXiv preprint](http://arxiv.org/abs/2309.05653)
_arXiv:2309.05653._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Good[man. 2022. Star: Bootstrapping reasoning with rea-](https://arxiv.org/abs/2203.14465)
[soning. Advances in Neural Information Processing](https://arxiv.org/abs/2203.14465)
_Systems, 35:15476–15488._
Andy Zeng, Maria Attarian, brian ichter,
Krzysztof Marcin Choromanski, Adrian Wong,
Stefan Welker, Federico Tombari, Aveek Purohit,
Michael S Ryoo, Vikas Sindhwani, Johnny Lee, Vin[cent Vanhoucke, and Pete Florence. 2023. Socratic](https://openreview.net/forum?id=G2Q2Mh3avow)
[models: Composing zero-shot multimodal reasoning](https://openreview.net/forum?id=G2Q2Mh3avow)
[with language.](https://openreview.net/forum?id=G2Q2Mh3avow) In The Eleventh International
_Conference on Learning Representations._
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating
text generation with bert. In International Confer_ence on Learning Representations._
Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, et al. 2023.
[Mindstorms in natural language-based societies of](https://arxiv.org/abs/2305.17066)
[mind. arXiv preprint arXiv:2305.17066.](https://arxiv.org/abs/2305.17066)
7078
-----
**A** **Additional Details of RECONCILE**
**A.1** **Implementation Details**
We provide more implementation details of REC
ONCILE in this section. During decoding, we set
the temperature to 0.7 for ChatGPT and Bard and
use the default setting for Claude2. All implementations involving ChatGPT are using gpt-3.5_turbo-0613 from Azure OpenAI.[5]_ We retrieve results from Claude2 by posting requests to their
webpage[6], and for Bard, we use chat-bison-001
from PaLM2 API[7]. For each agent, we use four
demonstrations of convincing samples. In addition, we provide the workflow of RECONCILE in
Algorithm 1. Required input contains a test problem Q, maximum number of discussion rounds R,
_n agents A = {Ai}i[n]=1[, and convincing samples]_
_C = {Ci}i[n]=1_ [for each agent. The output would be]
the team answer ˆa[(][r][)]. For the open-source models LLaMA2-70B and DeepSeekMath, we use four
RTX A6000 GPUs, each with 48GB memory to
generate output from them.
**Initial Prompt**
**{convincing_samples}**
Q: {test_question}
Please answer the question with step-by-step reasoning. Also,
evaluate your confidence level (between 0.0 and 1.0) to indicate the
possibility of your answer being right.
**Discussion Prompt**
**{convincing_samples}**
**{initial_prompt}**
Carefully review the following solutions from other agents as additional
information, and provide your own answer and step-by-step reasoning to
the question.
Clearly state which point of view you agree or disagree with and why.
There are {majority_num} agents think the answer is {majority_ans}.
One agent solution: {agent_reasoning} {agent_ans} {agent_confidence}
One agent solution: {agent_reasoning} {agent_ans} {agent_confidence}
There are {minority_num} agents think the answer is {minority_ans}.
One agent solution: {agent_reasoning} {agent_ans} {agent_confidence}
Figure 5: The prompts used in RECONCILE consist of
an initial prompt and a discussion prompt.
**A.2** **Initial Prompt and Discussion Prompt**
We show the prompts used in RECONCILE in Fig. 5.
The initial prompt encompasses (1) the convincing
samples that demonstrate how to convince other
agents, (2) the test question, and (3) a requirement
for ‘step-by-step’ reasoning. The prompt also instructs the agent to express their confidence level,
5https://oai.azure.com/
6https://claude.ai/chats
7https://developers.generativeai.google/products/palm
**Model** **StrategyQA** **Date**
ChatGPT 68.1 69.3
Bard 70.6 52.8
Claude2 72.7 77.9
Multi-agent Debate 71.4 72.4
ReConcile **78.4** **84.5**
Table 9: Comparison of RECONCILE with baselines on
the full test sets of StrategyQA and Date Understanding.
Method Accuracy
Debate (Du et al., 2023) 66.7 3.1
_±_
RC (w/o Convincing Expl) 74.5 1.7
_±_
RC (w/ Random Expl) 75.0 2.5
_±_
RC (w/ Convincing Expl) 79.0 1.6
_±_
Debate (w/ Random Expl) 68.7 2.2
_±_
Debate (w/ Convincing Expl) 69.5 1.7
_±_
Table 10: Evaluation of the role of convincing samples
on StrategyQA. RECONCILE (RC) without convincing
samples outperforms multi-agent debate and with it obtains further gains. Convincing samples also boost the
debate baseline.
ranging from 0.0 to 1.0, indicating the likelihood of
their answer being correct. The discussion prompt
is an extension of the initial prompt, instructing the
agent to review and express agreement or disagreement with other agents’ solutions. To facilitate discussions, we design a grouping scheme that aggregates information based on the current opinions at
the table. For instance, if two agents affirm that the
answer to a given question is ‘yes’ while the third
agent disagrees with a ‘no’, the designed grouping
mechanism in the discussion prompt consolidates
this information rather than simply concatenating
all responses.
**B** **Additional Results**
**B.1** **Results on Full Test Sets**
In Table 2, we reported results with 100 test samples following several previous works and due to
budget constraints. Upon experimenting on the
full test sets of StrategyQA and Date Understanding, we confirm similar trends. Specifically, in
Table 9, we compare RECONCILE to all of our major baselines and show that RECONCILE continues
to outperform all baselines.
**B.2** **Convincing Samples Improve Both**
**RECONCILE and Multi-agent Debate**
Recall that RECONCILE selects a sample as convincing if the corresponding human explanation
7079
-----
**Algorithm 1 RECONCILE: A Group-Discuss-And-Convince Framework**
**Require: Test Problem Q, Discussion Rounds R, Agents** = _Ai_ _i=1[, Convincing Samples][ C][ =][ {][C][i][}][n]i=1_
_A_ _{_ _}[n]_
**function RECONCILE(Q, R, A, C)**
_r ←_ 0
**while r** _R and not CONSENSUS(Q,_ _a[(]i[r][−][1)]_ _i=1[)][ do]_
_≤_ _{_ _}[n]_
_S ←_ [], P ← []
**for eachif r = 0 Ai then ∈A do**
_PI ←_ (Q, C) _▷_ Initial prompt consists of question and convincing samples
_a[(0)]i_ _, e[(0)]i_ _, p[(0)]i_ _Ai(PI_ ) _▷_ Generate initial answer, explanation, and confidence
_←_
**else**
_PD_ (Q, a[(]i[r][−][1)], e[(]i[r][−][1)], p[(]i[r][−][1)], ) _▷_ Discussion prompt
_←_ _C_
_a[(]i[r][)], e[(]i[r][)], p[(]i[r][)]_ _Ai(PD)_
_←_
**end if**
_S_ _S + [a[(]i[r][)]], P_ _P + [p[(]i[r][)]]_ _▷_ Append each agent’s answer and confidence
_←_ _←_
**end for**
_aˆ[(][r][)]_ _←_ WEIGHTEDVOTE(S, P ) _▷_ Get team answer through a confidence weighted vote
**end while**
**return ˆa[(][r][)]**
**end function**
rectifies an agent’s incorrect answer. Based on this,
Table 7 showed that by collecting only four human
explanations, we can obtain significant improvements (‘w/o Convincingness’ row). Next, we consider a scenario where no human explanations are
present. Table 10 shows that even then, RECON
CILE outperforms the debate baseline by absolute
7.8 points (second row). If random (i.e., general human explanations that may not necessarily ensure
answer rectification) are available (third row), we
obtain some small improvements; but our convincing samples that are selected based on our novel
answer-rectification criterion (fourth row) improve
the results substantially. See Sections C.3 and C.4
for illustrative examples. Being able to convince
another agent is also a generic concept that can
be applied to other multi-agent systems, as demonstrated by improvements in the debate baseline (last
row).
**B.3** **Comparison with Other Methods**
In Table 11, we compare RECONCILE to two
other single-agent variants. While in our main
Table 2, we experimented with a random 8-shot
Claude2 baseline, here we replace the in-context
samples with our convincing samples. Even then,
RECONCILE exhibits superior performance on all
datasets except for GSM8K, again highlighting
the importance of collaboration between diverse
models. Next, we also report results for 9-way
Self-Consistency which in terms of LLM calls represents the worst-case scenario of RECONCILE –
even for a more open-ended dataset like GSM8K,
9 LLM calls (i.e., 3 discussion rounds) happen in
only 12% of the samples and an even lesser 9% on
multiple-choice QA dataset like Date understanding. That said, RECONCILE continues to outperform 9-way SC by a large margin on most datasets.
**B.4** **Recalibration Strategy of RECONCILE**
Directly using confidence scores as the voting
weights is less effective due to the overconfidence
problem of LLMs (Xiong et al., 2023b; Tian et al.,
2023; Mielke et al., 2022). Specifically, LLMs
tend to produce consistently high confidence scores,
which can make it challenging to discern subtle
distinctions in confidence levels across different
outputs. To address this, we employ a simple yet
effective rescaling technique, facilitating better differentiation of confidence levels. This is expressed
as:
1.0, if p[(]i[r][)] = 1.0
0.8, if 0.9 _p[(]i[r][)]_ _< 1.0_
_≤_
0.5, if 0.8 _p[(]i[r][)]_ _< 0.9_
_≤_
0.3, if 0.6 < p[(]i[r][)] _< 0.8_
0.1, otherwise
_f_ (p[(]i[r][)][) =]
where p[(]i[r][)] is the original confidence of agent Ai in
round r and f (p[(]i[r][)][)][ is the corresponding adjusted]
score. To decide the optimal weights, we compare
with a variety of settings including the majority
vote and the uncalibrated confidence-weighted vote.
The results are summarized in Table 13. We denote the weight we used in our main experiment
as w[∗] = [1.0, 0.8, 0.5, 0.3, 0.1] where each value
corresponds to the recalibrated confidence score.
We further compare with other settings:
7080
-----
**Model** **StrategyQA** **CSQA** **GSM8K** **AQuA** **Date**
Claude2 (w/ 8-shot convincing samples) 74.0 0.0 69.7 1.2 85.3 0.5 64.3 1.2 81.3 0.5
_±_ _±_ _±_ _±_ _±_
Self-Consistency w/ ChatGPT (9-way) 74.7±0.8 73.3±1.2 **85.7±0.4** 62.7±1.2 70.3±0.9
RECONCILE **79.0±1.6** **74.7±0.4** 85.3±2.2 **66.0±0.8** **86.7±1.2**
Table 11: Comparison of RECONCILE with Claude2 using 8-shot convincing samples and 9-way Self-Consistency.
Max Conf Majority Vote Weighted Vote
Accuracy 74.7±2.1 77.1±1.3 **79.0±0.5**
Table 12: Performance comparison of different voting
strategies on StrategyQA. Weighted vote performs the
best compared to simple majority vote and choosing the
agent’s answer with highest confidence.
Voting weight StrategyQA GSM8K
_w1_ 0.77 0.84
_w2_ 0.79 0.83
_w3_ 0.78 0.82
_w4_ 0.77 0.83
Majority 0.76 0.83
Uncalibrated 0.78 0.84
_w[∗]_ (Ours) **0.79** **0.85**
Table 13: The robustness of the recalibation weight. We
use the same weights w[∗] across all datasets.
- w1 = [1.0, 0.9, 0.7, 0.5, 0.3]
- w2 = [1.0, 0.9, 0.5, 0.3, 0.1]
- w3 = [1.0, 0.8, 0.6, 0.4, 0.2]
- w4 = [1.0, 0.75, 0.5, 0.25, 0.0]
and the results show that our w[∗] works the best
across datasets. In our main experiment, we fix the
weight using w[∗] and it is constantly outperforming
majority vote across all seven datasets. In addition, Fig. 9 shows that it helps reduce the Expected
Calibration Error (ECE), a popular calibration metric (Naeini et al., 2015). While we note that recalibration can also be achieved through a learned
model (e.g., Platt Scaling (Platt et al., 1999)), we
refrain from using such models because RECON
CILE is primarily designed as a few-shot method,
and developing a recalibration model would necessitate access to a substantial number of annotated
samples. Therefore, we use f (p[(]i[r][)][)][ to perform a]
weighted vote to generate the team answer.
**B.5** **Comparison of Different Voting Strategies**
At the end of any round r, every agent in REC
ONCILE generates its answer. Here we explore
three voting strategies: (1) maximum confidence
vote, where the agent’s answer with the maximum
confidence score would be the final team answer,
**Dataset** **License**
StrategyQA [MIT License (License)](https://github.com/eladsegal/strategyqa/blob/main/LICENSE)
CommonsenseQA [MIT License (License)](https://github.com/jonathanherzig/commonsenseqa/issues/5)
GSM8K [MIT License (License)](https://github.com/openai/grade-school-math/blob/master/LICENSE)
AQuA [Apache 2.0 (License)](https://github.com/google-deepmind/AQuA/blob/master/LICENSE)
MATH [MIT License (License)](https://github.com/hendrycks/math/blob/main/LICENSE)
Date [Apache 2.0 (License)](https://github.com/google/BIG-bench/blob/main/LICENSE)
ANLI [CC BY-NC 4.0 (License)](https://github.com/facebookresearch/anli/blob/main/LICENSE)
Table 14: Dataset licenses
(2) unweighted majority vote, where each vote carries equal weight, irrespective of the confidence
score, and (3) weighted vote, where we use the recalibrated confidence scores as the voting weights.
As shown in Table 12, weighted vote is the most
effective way to aggregate the team answer.
**C** **Qualitative Examples**
**C.1** **Convincing Samples for Each Agent**
Table 15 shows examples of convincing samples
on StrategyQA for each agent.
**C.2** **Effect of Convincing Samples**
Here, we provide qualitative examples of how convincing samples change the way each agent responds to the question. We compare the initial
responses (of each agent) with and without convincing samples in Table 16.
**C.3** **RECONCILE w/o Convincing Samples**
We notice that when RECONCILE operates in the
absence of convincing samples, the agents tend
to maintain their initial opinions more often. As
depicted in Fig. 6, all three agents adhere to their
original stances throughout the entire discussion
and hence never converge to the correct answer.
**C.4** **RECONCILE with Convincing Samples**
On the contrary, when convincing samples are
present, we show how the explanations of all agents
change during the course of a discussion (see
Fig. 7). Initially, Bard and Claude2 provide incorrect answers, but as the discussion unfolds, both
7081
-----
|Was Bruce Lee absent from the 1964 University of Washington graduation ceremony?|Col2|Col3|
|---|---|---|
||Round 0||
|No, Bruce Lee graduated from high school in 1958. No, Bruce Lee graduated from the University of He then studied philosophy at the University of Yes, Bruce Lee graduated from the University of Washington in 1963. The 1964 graduation Washington from 1961-1964. Bruce Lee was very Washington in 1964 with a degree in philosophy. ceremony took place after Bruce Lee had already proud of his academic achievements, making it However, he was not present at the graduation graduated, so he would not have been absent unlikely he would intentionally miss his college ceremony because he was filming the movie The from it. Confidence level: 0.90 graduation. Confidence level: 1.0 Green Hornet in Hong Kong. Confidence level: 0.90|||
||Round 1||
|N a 1 O s c ao eg t9 no tru6e ee e, r4n n mcB a t deUa s g o .s,s n e n Ce i iit n invd o eti e d s ns o r m i fcsl an ii di ak e rt etet ey n ih l nnt iy oe i m cgo f t e i n phn W h e la of i eso a dt r v tsr aB eatm h bhr lni :u sn a a t 0ect tg e .i e n 8o t t v co h 0Ln een ee n,p r e tg e ar s do nr aa tdv i r hd di e d a gu n te n ra o ad o mtt di o b c oa un ry st ae t ttct e d ih se on i tbe r nd ue l e dmt eh o ne n t sy . N a c L G no eg oro e enr t, e ec bma e l nu ef iwt d gs He se ii or nt e dh rr d fn e t it e lv thh mhi teee e ii w nsa r 1 e gni ln 9 ia s k ug 6s w eo n4 lt e yn th cr i i l e ien ni 1s rgo ce 9n ot m 6fh o rr 5ro oe . e .mr T n c C h ya t o, t eg d nh a e c u fe sn il e da tt t his ew tm' o a n os t c o t f ea mih ll g mu a lo eet t ii v vn no B i et egn r ls u : s dTw c 0, iheI .dh 9 e o 5 Y W H c G Cee oroa r es nw es, e fh mB e n idi vr n o eHu eg n nc r ot, ye c o rh e nbnL e eee l ei tw ce n va i a eng 1 us l r 9 s : Ha n e6 0d oo 4 .h 9u nt e 5w a gp t i wre t Kehd a os s a nef r n gfdo it l . e mm ag i t nrt eh t ghee e t iU h n g en rp i a mv h de i ol ur o vs asi itt eo iy o p To nh hf y e .|||
||Round 2||
|No, After considering the reasoning provided by No, After reviewing the other agents' solutions, I the agents, it is apparent that there is a consensus agree with the reasoning from the two agents who Yes, Bruce Lee graduated from the University of among the agents that the answer is no. Taking into concluded the answer is no. Bruce Lee graduated Washington in 1964 with a degree in philosophy. account the agreement among the agents and the from the University of Washington in 1963, so he However, he was not present at the graduation reasoning provided, it is unlikely that Bruce Lee was would not have been scheduled to participate in the ceremony because he was filming the movie The absent from the 1964 University of Washington 1964 graduation ceremony. Confidence level: 0.95 Green Hornet in Hong Kong. Confidence level: 1.0 graduation ceremony. Confidence level: 1.0|||
**Was Bruce Lee absent from the 1964 University of Washington graduation ceremony?**
**Gold Answer: Yes**
**No, Bruce Lee graduated from the University of**
Washington in 1963. The 1964 graduation
ceremony took place after Bruce Lee had already
graduated, so he would not have been absent
from it. Confidence level: 0.90
**No, Bruce Lee graduated from high school in 1958.**
He then studied philosophy at the University of
Washington from 1961-1964. Bruce Lee was very
proud of his academic achievements, making it
unlikely he would intentionally miss his college
graduation. Confidence level: 1.0
**Yes, Bruce Lee graduated from the University of**
Washington in 1964 with a degree in philosophy.
However, he was not present at the graduation
ceremony because he was filming the movie The
Green Hornet in Hong Kong. Confidence level: 0.90
**No, Based on the information provided by the**
agents, it is likely that Bruce Lee did not attend the
1964 University of Washington graduation ceremony.
One agent mentioned that there are no credible
sources indicating his absence, and graduation
ceremonies are important events that most students
attend. Confidence level: 0.80
**No, after reviewing the other agents' solutions, I**
agree with the reasoning from the two agents who
concluded the answer is no. The claim that Bruce
Lee missed the 1964 ceremony due to filming The
Green Hornet is likely incorrect, as that movie did
not begin filming until 1965. Confidence level: 0.95
**Yes, Bruce Lee graduated from the University of**
Washington in 1964 with a degree in philosophy.
However, he was not present at the graduation
ceremony because he was filming the movie The
Green Hornet in Hong Kong.
Confidence level: 0.95
**No, After considering the reasoning provided by**
the agents, it is apparent that there is a consensus
among the agents that the answer is no. Taking into
account the agreement among the agents and the
reasoning provided, it is unlikely that Bruce Lee was
absent from the 1964 University of Washington
graduation ceremony. Confidence level: 1.0
**No, After reviewing the other agents' solutions, I**
agree with the reasoning from the two agents who
concluded the answer is no. Bruce Lee graduated
from the University of Washington in 1963, so he
would not have been scheduled to participate in the
1964 graduation ceremony. Confidence level: 0.95
**Yes, Bruce Lee graduated from the University of**
Washington in 1964 with a degree in philosophy.
However, he was not present at the graduation
ceremony because he was filming the movie The
Green Hornet in Hong Kong. Confidence level: 1.0
Figure 6: An illustrative example showing that without demonstrations of convincing samples, agents’ opinions do
not change much across the discussion rounds. All agents maintain their original stances and hence, never converge
to the correct answer.
|Question: Is it dangerous to consume chlorine when mixed with sodium?|Col2|Col3|
|---|---|---|
||Round 0||
|No, When chlorine and sodium combine, they Yes, Chlorine and sodium are both reactive Yes, Chlorine is a highly reactive element that reacts form table salt or sodium chloride. Consuming elements. When mixed together, they can react violently with sodium to produce sodium chloride chlorine gas directly can be extremely hazardous violently to produce dangerous chlorine gas. This (table salt) and hydrogen gas. The reaction is for human health. However, when chlorine is exothermic reaction gives off heat and toxic fumes, exothermic, meaning that it releases heat. The heat mixed with sodium to form sodium chloride, it is making it very hazardous. generated by the reaction can cause the mixture to no longer dangerous to consume. Confidence level: 0.90 catch fire or explode. Confidence level: 0.95 Confidence level: 0.90|||
||Round 1||
|Y r t s s f Cue h t oe oaa mi ds n ntc, ii k e ft unC i s dmgv th,h ei l o a teo nch nl r ce aaa di ennn tn t re ls t pl e e y hw ri l v s ee oew e a da r lri s : et u ih eh 0s a c i . c e hg y 8s t eo e 0h i dos adl ay n ti p n .u r r b ge m o ea ev. tc r i wT odti h ue ev e se es n ct ie mw hl ce lo i h olm a l ra or ie ng rrn i ee e nt n a e gt st h as aoa snnw,t di th nc o oa g xn, i c N W c o r ceh owo ah l mo n c, e tr pI n c i i oda o acg ne unh,r n wle bwo de e ir .h t i hw Cn dic e oai sht on nh a i g d f snt i e i dh d t ura ee mos b n uoa l ce tg sd eu e i ts ru lon a n em lt vst i n e.w ir ltgWe :h e ia 0o nh sc .t i 9tts l o,e, 5 a tt ai hchd h e he t ly ao ch rhre f mio e na r m le emn sis g c sw s aa oe ls d r o ii u ns m n ito s . Y W p r h Cee ar oaohs z nc, e d a ftC n r iu i ddo h cm eonelo nu i gdxr csi e i ea .n v d n Ie e l e ag sta vg o e on errg f e lod f :e e u 0t hs sh .o we 9 e cd a i 0r th ti, hu lt ao m h tnr he i d n ea y e r t fce o iga r xb sn a i tco s r t.t fe wh uTa omhcre i t s ae a v gsec i, eo t x mi nolv e tte an sh k 'te e il al ny re nm gm t s o i wice t en v rt e ss r .. y|||
||Round 2||
|No, I agree with the agents who say the answer is No, I agree with the agents who said the answer is No, Chlorine and sodium, when combined, react no. Chlorine and sodium react to form sodium no. When chlorine and sodium react, they form to form sodium chloride, which is table salt. While chloride, which is table salt. The end product is safe sodium chloride, which is table salt. The reasoning chlorine gas on its own can be dangerous to for consumption. I disagree with the agent who says that the mixture remains dangerous after the ingest, the chemical reaction with sodium turns it the mixture remains dangerous after the reaction, reaction is not convincing. The agents saying no into a harmless compound. as table salt is commonly used for seasoning food. provide more compelling explanations. Confidence level: 0.90 Confidence level: 0.95 Confidence level: 0.95|||
**Question: Is it dangerous to consume chlorine when mixed with sodium?**
**Gold Answer: No**
**No, When chlorine and sodium combine, they**
form table salt or sodium chloride. Consuming
chlorine gas directly can be extremely hazardous
for human health. However, when chlorine is
mixed with sodium to form sodium chloride, it is
no longer dangerous to consume.
Confidence level: 0.95
**Yes, Chlorine and sodium are both reactive**
elements. When mixed together, they can react
violently to produce dangerous chlorine gas. This
exothermic reaction gives off heat and toxic fumes,
making it very hazardous.
Confidence level: 0.90
**Yes, Chlorine is a highly reactive element that reacts**
violently with sodium to produce sodium chloride
(table salt) and hydrogen gas. The reaction is
exothermic, meaning that it releases heat. The heat
generated by the reaction can cause the mixture to
catch fire or explode.
Confidence level: 0.90
**Yes, Chlorine is a highly reactive element that can**
react violently with sodium. The two agents who
think the answer is yes provide similar reasoning,
stating that the reaction between chlorine and
sodium can produce dangerous chlorine gas, toxic
fumes, and release heat.
Confidence level: 0.80
**No, I agree with the agent who said the answer is no.**
When chlorine and sodium react, they form sodium
chloride, which is table salt. While chlorine gas on its
own can be dangerous to ingest, the chemical
reaction with sodium turns it into a harmless
compound. Confidence level: 0.95
**Yes, Chlorine and sodium are both reactive elements.**
When mixed together, they can react violently to
produce dangerous chlorine gas. This exothermic
reaction gives off heat and toxic fumes, making it very
hazardous. I agree with the first two agents' answers.
Confidence level: 0.90
**No, I agree with the agents who say the answer is**
no. Chlorine and sodium react to form sodium
chloride, which is table salt. The end product is safe
for consumption. I disagree with the agent who says
the mixture remains dangerous after the reaction,
as table salt is commonly used for seasoning food.
Confidence level: 0.95
**No, I agree with the agents who said the answer is**
no. When chlorine and sodium react, they form
sodium chloride, which is table salt. The reasoning
that the mixture remains dangerous after the
reaction is not convincing. The agents saying no
**provide more compelling explanations.**
Confidence level: 0.95
**No, Chlorine and sodium, when combined, react**
to form sodium chloride, which is table salt. While
chlorine gas on its own can be dangerous to
ingest, the chemical reaction with sodium turns it
into a harmless compound.
Confidence level: 0.90
Figure 7: An illustrative example showing that with access to demonstrations of convincing samples, agents’
opinions and explanations change during the course of multiple discussion rounds.
agents revise their initial predictions, ultimately
arriving at the correct answer.
**C.5** **Single-Model Multi-Agent Debate**
**Struggles with Echo Chamber**
In Fig. 8, we provide an illustration of multi-agent
debate, implemented with multiple instances of the
same underlying ChatGPT model. In this case, an
incorrect answer is initially provided, but because
external feedback from diverse models is lacking,
all agents persist with the same incorrect response
throughout the interaction.
**D** **Benchmark Licenses**
The licenses of datasets we used are in Table 14.
7082
-----
|Question: Is it dangerous to consume chlorine when mixed with sodium?|Col2|Col3|
|---|---|---|
||Round 0||
|Yes, consuming chlorine gas (Cl2) or sodium metal (Na) separately or in a mixture can be extremely Yes. Chlorine is a highly toxic substance, and when Yes, Chlorine is a highly reactive element that reacts dangerous and harmful to human health. Mixing mixed with sodium, it can form sodium hypochlorite, violently with sodium to produce sodium chloride chlorine gas with sodium metal or sodium a strong oxidizer and potentially harmful compound. (table salt) and hydrogen gas. The reaction is hydroxide would likely result in a highly Consuming chlorine when mixed with sodium can exothermic, meaning that it releases heat. The heat exothermic reaction, potentially leading to the be dangerous. generated by the reaction can cause the mixture to release of even more toxic and dangerous catch fire or explode. chemicals. Confidence level: 0.90|||
||Round 1||
|Yes, it is dangerous to consume chlorine when Yes, There seems to be a consensus among the Yes, Taking into account the recent opinions provided mixed with sodium. The combination of chlorine agents that it is dangerous to consume chlorine when by other agents and the consensus among them, it is and sodium can form compounds such as sodium mixed with sodium. Therefore, my answer remains the clear that consuming chlorine when mixed with sodium hypochlorite, which can be harmful to the body. same: Yes, it is dangerous to consume chlorine when is indeed considered dangerous. Therefore, my Consuming chlorine in this form can pose health mixed with sodium. updated answer aligns with their opinions: Yes, it is risks. dangerous to consume chlorine when mixed with sodium.|||
||Round 2||
|Yes, Based on the opinions provided by other Yes, After considering the potential formation of Yes, After considering the opinions provided by agents, it appears that the consensus is that it is compounds such as sodium hypochlorite, which can other agents, it is evident that there is a indeed dangerous to consume chlorine when mixed be harmful to the body, it is best to exercise caution. consensus among them that consuming chlorine with sodium. Therefore, my updated answer aligns Consuming chlorine when mixed with sodium and when mixed with sodium is indeed dangerous. with theirs: Yes, it is dangerous to consume chlorine forming specific compounds can pose health risks. Therefore, my updated answer aligns with their when mixed with sodium. opinions: Yes, it is dangerous to consume chlorine when mixed with sodium.|||
**Question: Is it dangerous to consume chlorine when mixed with sodium?**
**Gold Answer: No**
**Yes, consuming chlorine gas (Cl2) or sodium metal**
(Na) separately or in a mixture can be extremely
dangerous and harmful to human health. Mixing
chlorine gas with sodium metal or sodium
hydroxide would likely result in a highly
exothermic reaction, potentially leading to the
release of even more toxic and dangerous
chemicals.
**Yes. Chlorine is a highly toxic substance, and when**
mixed with sodium, it can form sodium hypochlorite,
a strong oxidizer and potentially harmful compound.
Consuming chlorine when mixed with sodium can
be dangerous.
**Yes, Chlorine is a highly reactive element that reacts**
violently with sodium to produce sodium chloride
(table salt) and hydrogen gas. The reaction is
exothermic, meaning that it releases heat. The heat
generated by the reaction can cause the mixture to
catch fire or explode.
Confidence level: 0.90
**Yes, it is dangerous to consume chlorine when**
mixed with sodium. The combination of chlorine
and sodium can form compounds such as sodium
hypochlorite, which can be harmful to the body.
Consuming chlorine in this form can pose health
risks.
**Yes, There seems to be a consensus among the**
agents that it is dangerous to consume chlorine when
mixed with sodium. Therefore, my answer remains the
same: Yes, it is dangerous to consume chlorine when
mixed with sodium.
**Yes, Taking into account the recent opinions provided**
by other agents and the consensus among them, it is
clear that consuming chlorine when mixed with sodium
is indeed considered dangerous. Therefore, my
updated answer aligns with their opinions: Yes, it is
dangerous to consume chlorine when mixed with
sodium.
**Yes, Based on the opinions provided by other**
agents, it appears that the consensus is that it is
indeed dangerous to consume chlorine when mixed
with sodium. Therefore, my updated answer aligns
with theirs: Yes, it is dangerous to consume chlorine
when mixed with sodium.
**Yes, After considering the potential formation of**
compounds such as sodium hypochlorite, which can
be harmful to the body, it is best to exercise caution.
Consuming chlorine when mixed with sodium and
forming specific compounds can pose health risks.
**Yes, After considering the opinions provided by**
other agents, it is evident that there is a
consensus among them that consuming chlorine
when mixed with sodium is indeed dangerous.
Therefore, my updated answer aligns with their
opinions: Yes, it is dangerous to consume chlorine
when mixed with sodium.
Figure 8: An illustrative example of multi-agent debate with multiple instances of ChatGPT. Initially, an incorrect
answer is provided, and due to a lack of external feedback from diverse models, all agents persist with the same
erroneous response throughout the debate process.
ECE Score: 0.2565 ECE Score: 0.1374
1.0 1.0
0.8 0.8
0.6 0.6
0.4 0.4
Accuracy within Bins 0.2 Accuracy within Bins 0.2
0.0 0.0
0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0
Confidence Bins Confidence Bins
(a)
(b)
Figure 9: Expected Calibration Error (ECE) (Naeini et al., 2015; Guo et al., 2017; Elias Stengel-Eskin and Benjamin
Van Durme, 2023) (a) before and (b) after confidence rescaling in RECONCILE. We observe a significant drop in
ECE, showing the effectiveness of our simple method.
7083
-----
QUESTION: Is the tongue part of a creature’s head? (Gold answer: yes)
CHATGPT (INITIAL): The tongue is a muscular organ located in the mouth of most vertebrates,
including humans. It is involved in functions such as taste, chewing, and speaking. The head, on the
other hand, is the anatomical part that contains the brain, eyes, ears, nose, and mouth. Answer: no.
CONVINCING HUMAN EXPLANATION: A creature’s tongue is inside its mouth. A creature’s mouth is
part of its head.
CHATGPT (W/ CONVINCING HUMAN EXPLANATION): Based on the information provided, the
tongue is inside a creature’s mouth. The mouth is part of a creature’s head. Answer: yes.
QUESTION: Are slime lilies in a different scientific family than asparagus? (Gold answer: no)
BARD (INITIAL): Slime lilies are in the scientific family Araceae while asparagus is in the scientific
family Asparagaceae. Answer: yes.
CONVINCING HUMAN EXPLANATION: Asparagus is a species of plants of the Asparagaceae family.
Slime lilies are the common name for the flowering albuca plant. The albuca plant belongs to the
scientific family of Asparagaceae.
BARD (W/ CONVINCING HUMAN EXPLANATION): Slime lilies are in the scientific family of Asparagaceae, while asparagus is in the same family. Answer: no.
QUESTION: Would an astrologer focus on the densest terrestrial planet for a Friday horoscope? (Gold
answer: yes)
CLAUDE2 (INITIAL): Astrologers do not focus on Earth for Friday horoscopes specifically. Astrologers
tend to focus on the positions of the Sun, Moon, and planets in the zodiac for horoscopes. The day of
the week is not especially relevant. Answer: no.
CONVINCING HUMAN EXPLANATION: Friday is associated with Venus in astrology Venus is the
densest of the terrestrial planets
CLAUDE2 (W/ CONVINCING HUMAN EXPLANATION): Friday is associated with Venus in astrology,
and Venus is the densest of the terrestrial planets (Mercury, Venus, Earth, Mars). So the question is
asking if an astrologer would focus on Venus for a Friday horoscope. Answer: yes.
Table 15: Examples of convincing human explanations for each agent (ChatGPT, Bard, and Claude2). A convincing
explanation is an instance of a answer-rectifying human explanation. For all three examples, the convincing
explanation helps rectify an agent’s initial incorrect answer.
7084
-----
QUESTION: Was Lil Jon’s top ranked Billboard song a collaboration with a member of The Lox? (Gold
answer: no)
CHATGPT (W/O CONVINCING): To determine if Lil Jon’s top ranked Billboard song was a collaboration with a member of The Lox, we need to search for Lil Jon’s discography and look for his
highest-charting song on the Billboard charts. Then, we need to check if this song is a collaboration
with a member of The Lox. Answer: no. Confidence: 0.8.
CHATGPT (W/ CONVINCING): Lil Jon’s top ranked Billboard song is ‘Yeah!’ which features vocals
from Usher and Ludacris. There is no collaboration between Lil Jon and any member of The Lox on
this song. Answer: no. Confidence: 0.9.
QUESTION: Could ten gallons of seawater crush a six year old? (Gold answer: yes)
BARD (W/O CONVINCING): Ten gallons of seawater weighs about 82 pounds. The average weight of
a six year old is about 44 pounds. Therefore, ten gallons of seawater would be about twice as heavy as
a six year old. It is possible that ten gallons of seawater could crush a six year old, but it is also
**possible that the six year old could survive. The outcome would depend on a number of factors, such**
as the size and strength of the six year old, the shape of the container holding the seawater, and the
amount of time the six year old was submerged in the seawater. Answer: yes. Confidence: 0.7.
BARD (W/ CONVINCING): The density of seawater is about 1.025 grams per cubic centimeter. Ten
gallons of seawater is about 37.85 liters. The volume of a human body is about 60 liters. Therefore, ten
gallons of seawater would be about 63% of the volume of a human body. If ten gallons of seawater
were to be poured on a six year old, it would be very difficult for the child to breathe and they would
likely drown. Answer: yes. Confidence: 0.95.
QUESTION: Would the top of Mount Fuji stick out of the Sea of Japan? (Gold answer: yes)
CLAUDE2 (W/O CONVINCING): To determine if the top of Mount Fuji would stick out of the Sea of
Japan, I need to compare the elevation of Mount Fuji to the depth of the Sea of Japan. According to
Wikipedia, the elevation of Mount Fuji is 3,776 meters. The average depth of the Sea of Japan is 1,752
meters according to Wikipedia. Since the elevation of Mount Fuji is greater than the average depth of
the Sea of Japan, I can conclude that the top of Mount Fuji would stick out above the surface of the Sea
of Japan if it was placed there. Answer: yes. Confidence: 0.9.
CLAUDE2 (W/ CONVINCING): Mount Fuji is the highest mountain in Japan, with a summit elevation
of 12,389 feet or 3,776 meters above sea level. The average depth of the Sea of Japan is about 3,742
meters. Since the summit of Mount Fuji is higher than the average depth of the Sea of Japan, the top of
Mount Fuji would stick out above the water if it was located in the middle of the Sea of Japan. Answer:
**yes. Confidence: 0.9.**
Table 16: Comparison of how each agent’s reasoning changes with and without convincing samples. We observe that
by conditioning on convincing samples, all agents tend to become more confident in their reasoning and generate
less uncertain statements (shown in bold), which is also reflected in the actual confidence scores generated by each
agent (e.g., goes up from 0.7 to 0.95 for Bard).
7085
-----
| [
"Swarnadeep, Saha",
"Justin, Chen",
"Mohit, Bansal",
"Vivek, Srikumar",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 54 | 4 | null | https://aclanthology.org/2024.acl-long.381 | https://arxiv.org/abs/2309.13007 | https://www.semanticscholar.org/paper/63549bf78e4b1e7e1cec505ce65e6e8f90474f41 |
HMS: A Hierarchical Solver with Dependency-Enhanced Understanding for Math Word Problem | Automatically solving math word problems is a crucial task for exploring the intelligence levels of machines in the general AI domain. It is highly challenging since it requires not only natural language understanding but also mathematical expression inference. Existing solutions usually explore sequence-to-sequence models to generate expressions, where the problems are simply encoded sequentially. However, such models are generally far from enough for understanding problems as similar to humans and lead to incorrect answers. To this end, in this paper, we propose a novel Hierarchical Math Solver (HMS) to make deep understanding and exploitation of problems. In problem understanding, imitating human reading habits, we propose a hierarchical word-clause-problem encoder. Specifically, we first split each problem into several clauses and learn problem semantics from the local clause level to the global problem level. Then, in clause understanding, we propose a dependency-based module to enhance clause semantics with the dependency structure of the problem. Next, in expression inference, we propose a novel tree-based decoder to generate the mathematical expression for the answer. In the decoder, we apply a hierarchical attention mechanism to enhance the problem semantics with context from different levels, and a pointer-generator network to guide the model to copy existing information and infer extra knowledge. Extensive experimental results on two widely used datasets demonstrate that HMS achieves not only better answers but also more reasonable inference. | A novel Hierarchical Math Solver to make deep understanding and exploitation of problems, and extensive experimental results on two widely used datasets demonstrate that HMS achieves not only better answers but also more reasonable inference. | null | [
"Hao, Wang",
"Xin, Lin",
"Hongke, Zhao",
"Zhenya, Huang",
"Enhong, Chen",
"Shijin, Wang",
"Qi, Liu"
] | 2021-05-18T00:00:00 | AAAI 2021 | false | 53 | 8 | null | https://ojs.aaai.org/index.php/AAAI/article/view/16547 | null | https://www.semanticscholar.org/paper/680b0467861a70be41c31e4f2415fe5e2958fbc0 |
Large Language Models for Mathematical Reasoning: Progresses and Challenges | Mathematical reasoning serves as a cornerstone for assessing the fundamental cognitive capabilities of human intelligence. In recent times, there has been a notable surge in the development of Large Language Models (LLMs) geared towards the automated resolution of mathematical problems. However, the landscape of mathematical problem types is vast and varied, with LLM-oriented techniques undergoing evaluation across diverse datasets and settings. This diversity makes it challenging to discern the true advancements and obstacles within this burgeoning field. This survey endeavors to address four pivotal dimensions: i) a comprehensive exploration of the various mathematical problems and their corresponding datasets that have been investigated; ii) an examination of the spectrum of LLM-oriented techniques that have been proposed for mathematical problem-solving; iii) an overview of factors and concerns affecting LLMs in solving math; and iv) an elucidation of the persisting challenges within this domain. To the best of our knowledge, this survey stands as one of the first extensive examinations of the landscape of LLMs in the realm of mathematics, providing a holistic perspective on the current state, accomplishments, and future challenges in this rapidly evolving field. | This survey stands as one of the first extensive examinations of the landscape of LLM-oriented techniques in the realm of mathematics, providing a holistic perspective on the current state, accomplishments, and future challenges in this rapidly evolving field. | ## Large Language Models for Mathematical Reasoning: Progresses and Challenges
**Janice Ahn[♠]** **Rishu Verma[♠]** **Renze Lou[♠]** **Di Liu[♢]**
**Rui Zhang[♠]** and Wenpeng Yin[♠]
_♠The Pennsylvania State University ♢_ Temple University
_{jfa5672, wenpeng}@psu.edu; [email protected]_
**Abstract**
Mathematical reasoning serves as a cornerstone
for assessing the fundamental cognitive capabilities of human intelligence. In recent times,
there has been a notable surge in the development of Large Language Models (LLMs)
geared towards the automated resolution of
mathematical problems. However, the landscape of mathematical problem types is vast
and varied, with LLM-oriented techniques undergoing evaluation across diverse datasets and
settings. This diversity makes it challenging
to discern the true advancements and obstacles within this burgeoning field. This survey
endeavors to address four pivotal dimensions:
i) a comprehensive exploration of the various
mathematical problems and their corresponding datasets that have been investigated; ii) an
examination of the spectrum of LLM-oriented
techniques that have been proposed for mathematical problem-solving; iii) an overview of
factors and concerns affecting LLMs in solving
math; and iv) an elucidation of the persisting
challenges within this domain. To the best of
our knowledge, this survey stands as one of the
first extensive examinations of the landscape
of LLMs in the realm of mathematics, providing a holistic perspective on the current state,
accomplishments, and future challenges in this
rapidly evolving field.
stride towards achieving a more generalized and
adept AI.
In recent times, the landscape of AI has been
reshaped by the ascendancy of Large Language
Models (LLMs) as formidable tools for automating
intricate tasks. Notably, LLMs have proven to be
potent assets in unraveling the nuances of mathematical problem-solving (Romera-Paredes et al.,
2023; Imani et al., 2023). Their language capabilities fuel focused exploration in utilizing them for
mathematical reasoning, uncovering fresh insights
into the synergy between language and logic.
However, amid this progress, the current state
of LLM-oriented research in mathematics presents
a complex panorama. Diverse mathematical problem types pose a formidable challenge, exacerbated
by the varied evaluation metrics, datasets, and settings employed in the assessment of LLM-oriented
techniques (Testolin, 2023; Lu et al., 2023c). The
lack of a unified framework hampers our ability to
gauge the true extent of progress achieved and impedes a coherent understanding of the challenges
that persist in this evolving field.
This survey endeavors to cast a spotlight on the
multifaceted landscape of LLMs in the realm of
mathematics. We plan to traverse four crucial dimensions: a meticulous exploration of math problem types and the datasets associated with them;
an in-depth analysis of the evolving techniques employed by LLMs in mathematical problem-solving;
an examination of factors that affect the LLMs solving math problems; and a critical discussion on the
persisting challenges that loom over this burgeoning field.
To our knowledge, this survey marks one of the
first comprehensive examinations of LLMs specifically tailored for mathematics. By weaving together insights from various dimensions, we aim to
provide a holistic understanding of the current state
of affairs in LLM-driven mathematical reasoning,
shedding light on achievements, challenges, and
**1** **Introduction**
Mathematical reasoning is crucial to human intelligence, driving ongoing efforts in the AI community to autonomously tackle math challenges. This
pursuit inherently calls for an augmentation of AI
capabilities, delving into the intricate realms of textual comprehension, image interpretation, tabular
analysis, symbolic manipulation, operational logic,
and a nuanced grasp of world knowledge. As the
AI landscape evolves, the endeavor to empower
machines with a comprehensive understanding of
diverse mathematical facets becomes not only a testament to technological prowess but also a pivotal
-----
the uncharted territories that await exploration in
this captivating intersection of language and logic.
**2** **Related Work**
To the best of our knowledge, the existing literature on summarizing mathematical research, particularly within the context of LLMs, remains limited. Notably, Frieder et al. (2023a) compared
two ChatGPT versions (9-January-2023 and 30January-2023) and GPT-4 for four math-related
problems: producing proofs, filling holes in proofs,
acting as a mathematical search engine and computation. More importantly, they summarized some
insightful strategies regarding how LLMs can help
mathematicians and advocated a more collaborative approach, incorporating human expertise and
LLM automation, for theorem proving. Chang et al.
(2023) conducted a comprehensive evaluation of
LLMs, incorporating an examination of their performance in mathematical problem-solving, albeit
with a relatively brief exploration of the mathematical field. Conversely, both (Testolin, 2023) and (Lu
et al., 2023c) delved into the application of Deep
Learning in the domain of mathematical reasoning. Our work distinguishes itself on three fronts:
firstly, we concentrate on LLMs, providing a more
in-depth analysis of their various advancements;
secondly, beyond merely reporting progress, we engage in a thorough discussion of the challenges inherent in this trajectory; and thirdly, we extend our
scrutiny to encompass the perspective of mathematics pedagogy. In doing so, we contribute a nuanced
perspective that seeks to broaden the understanding
of LLMs in the context of mathematical research.
The only work contemporaneous with ours is
(Liu et al., 2023b). In comparison, our contribution
lies in: i) not only introducing various methods
but also paying more attention to various factors
affecting model performance; ii) taking a broader
perspective on the progress of LLM in the field
of mathematics, elucidating not only from the AI
perspective but also from the perspective of education. It emphasizes that the pursuit of model
performance alone, while neglecting human factors,
is something that needs attention.
**3** **Math Problems & Datasets**
This section concisely overviews prominent mathematical problem types and associated datasets,
spanning ARITHMETIC, MATH WORD PROB
LEMS, GEOMETRY, AUTOMATED THEOREM
PROVING, and MATH IN VISION CONTEXT.
**3.1** **Arithmetic**
This category of problems entails pure mathematical operations and numerical manipulation, devoid
of the need for the model to interpret text, images,
or other contextual elements. An illustrative example is presented below, where “Q” denotes questions and “A” for answers.
_Q: 21 + 97_
_A:_ 118
The dataset MATH-140 (Yuan et al., 2023) contains 401 arithmetic expressions for 17 groups.
**3.2** **Math Word Problems**
MATH WORD PROBLEMS (MWP) are mathematical exercises or scenarios presented in the form of
written or verbal descriptions rather than straightforward equations in ARITHMETIC. These problems require individuals to decipher the information provided, identify relevant mathematical concepts, and formulate equations or expressions to
solve the given problem. MWP often reflect realworld situations, allowing individuals to apply
mathematical principles to practical contexts. Solving these problems typically involves critical thinking, problem-solving skills, and the application of
mathematical operations to find a solution.
MWP invariably comprise a question (Q) and
its corresponding final answer (A) (referred to as
_Question-Answer). However, the presence or ab-_
sence of additional clues can give rise to various
versions of these problems. Variations may emerge
based on factors such as the availability of an equation (E; referred to as Question-Equation-Answer)
or the provision of a step-by-step rationale (R;
_Question-Rationale-Answer) to guide the problem-_
solving process.
**Question-Answer.** The instance of this type of
MWP consists of a question (Q) and the final answer (A), such as:
_Q: Lily received $20 from her mum. After_
spending $10 on a storybook and $2.5 on
a lollipop, how much money does she have
left?
_A:_ $7.5
-----
|Col1|NAME|SIZE|LEVEL|NOTE|
|---|---|---|---|---|
|Q-A|CMATH (Wei et al., 2023) SAT-MATH (Zhong et al., 2023)|1.7K 220|E H|Chinese; grade 1-6 Multi-choice|
|Question-Equation-Answer|SVAMP (Patel et al., 2021) ASDIV (Miao et al., 2020) MAWPS (Koncel-Kedziorski et al., 2016) PARAMAWPS (Raiyan et al., 2023) SINGLEEQ (Koncel-Kedziorski et al., 2015) ADDSUB (Hosseini et al., 2014) MULTIARITH (Roy and Roth, 2015) DRAW-1K (Upadhyay and Chang, 2017) MATH23K (Wang et al., 2017) APE210K (Zhao et al., 2020) K6 (Yang et al., 2023) CM17K (Qin et al., 2021)|1K 2.3K 3.3K 16K 508 395 600 1K 23K 210K 600 17K|E E E E E E E E E E E M H|Three types of variations Problem type and grade level annotated Extension of ADDSUB, MULTIARITH, etc. Paraphrased, adversarial MAWPS Only addition and subtraction Multi-step reasoning Chinese Chinese Chinese; grade 1-6 Chinese; grade 6-12|
|Question-Rationale-Answer|CARP (Zhang et al., 2023a) GSM8K (Cobbe et al., 2021) MATH (Hendrycks et al., 2021) PRM800K (Lightman et al., 2023) MATHQA (Amini et al., 2019) AQUA (Ling et al., 2017) ARB (Sawada et al., 2023) GHOSTS (Frieder et al., 2023b) THEOREMQA-MATH (Chen et al., 2023b) LILA (Mishra et al., 2022) MATH-INSTRUCT (Yue et al., 2023) TABMWP (Lu et al., 2023b)|4.9K 8.5K 12.5K 12K 37K 100K 105 709 442 132K 260K 38K|M M H H C C C C C H H H|Chinese Linguistically diverse Problems are put into difficulty levels 1-5 MATH w/ step-wise labels GRE examinations; have quality concern GRE&GMAT questions Contest problems and university math proof Theorem as rationale Incorporates 20 existing datasets Instruction-following style Tabular MWP; below the College level|
**Table 1: Datasets for Math Word Problems.**
E = Elementary, M = Middle School, H = High School, C = College, H = Hybrid
**Question-Equation-Answer.** Compared with
_Question-Answer, this MWP type provides the_
equation solution, such as
_Q: Jack had 8 pens and Mary had 5 pens._
Jack gave 3 pens to Mary. How many pens
does Jack have now?
_E: 8 −_ 3
_A:_ 5 (optional)
**Question-Rationale-Answer.** This type of
MWP includes answers and reasoning paths, akin
to the Chain-of-Thought method, which explicates
reasoning steps rather than defining problem types
(Wei et al., 2022). The rationale guides correct
problem-solving and serves as a valuable reference
for model training, including fine-tuning and
few-shot learning.
_Q: Beth bakes 4, or 2 dozen batches of_
cookies in a week. If these cookies are
shared amongst 16 people equally, how
many cookies does each person consume?
_R: Beth bakes 4 2 dozen batches of_
cookies for a total of 4 ∗ 2 =<< 4 ∗ 2 =
8 >> 8 dozen cookies. There are 12
cookies in a dozen and she makes 8 dozen
cookies for a total of 12∗8 =<< 12∗8 =
96 >> 96 cookies. She splits the 96
cookies equally amongst 16 people so
they each eat 96/16 =<< 96/16 = 6 >>
6 cookies.
_A:_ 6
Table 1 lists most datasets that are summarized
in three categories: Question-Answer, Question_Equation-Answer, and Question-Rationale-Answer._
In addition to the above three MWP types of conventional styles, recent work studied MWP in
given tables and even MWP generation.
-----
**Tabular MWP.** TABMWP (Lu et al., 2023b) is
the first dataset to study MWP over tabular context
on open domains and is the largest in terms of data
size. Each problem in TABMWP is accompanied
by a tabular context, which is represented in three
formats: an image, a semi-structured text, and a
structured table.
|BEADS|$/KILOGRAM|
|---|---|
|heart-shaped rectangular spherical oval|3 2 2 2|
**Table 2: Table for the tabular MWP example.**
_T : Table 2_
_Q: Henrik bought 2.5 kilograms of oval_
beads. How much did he spend? (Unit:
$)
_A:_ 5
**MWP Generation.** Instead of deriving the answer for a given math question, this type of mathematical reasoning tries to generate MWP questions.
For example, Wang et al. (2021) fine-tuned GPT2 (Radford et al., 2019) on equation-to-MWP instances for MWP generation. The effectiveness of
GPT-3’s question-generation capabilities was assessed by Zong and Krishnamachari (2023), who
instructed the model to generate a question similar
to a provided MWP question. Deb et al. (2023) analyzed a group of LLMs (GPT-4, GPT-3.5, PaLM2 (Anil et al., 2023), and LLaMa (Touvron et al.,
2023a)), and found a significant drop in accuracy
for backward reasoning compared to forward reasoning. Norberg et al. (2023) used GPT-4 to rewrite
human-written MWP, reporting optimal readability, lexical diversity, and cohesion scores, although
GPT-4 rewrites incorporated more low-frequency
words.
**3.3** **Geometry**
Compared with MWP, GEOMETRY problems involve a distinct set of challenges. While MWP often requires logical reasoning and arithmetic operations, geometry problems demand a spatial understanding of shapes, sizes, and their interrelationships. Solving geometry problems typically
entails applying geometric principles, theorems,
and formulas to analyze and deduce properties of
geometric figures. Furthermore, current geometry
approaches mainly rely on symbolic methods and
|NAME|SIZE|
|---|---|
|GEOSHADER (Alvin et al., 2017) GEOS (Seo et al., 2015) GEOS++ (Sachan et al., 2017) GEOS-OS (Sachan and Xing, 2017) GEOMETRY3K (Lu et al., 2021) GEOQA (Chen et al., 2021a) UNIGEO (Chen et al., 2022)|102 186 1.4K 2.2K 3K 5K 14.5K|
**Table 3: Geometry datasets**
predefined search heuristics, highlighting the specialized strategies required in this domain (Trinh
et al., 2024). This contrast in problem-solving
approaches highlights the multifaceted nature of
mathematical challenges and the varied skill sets
required in different mathematical domains. An
example can be seen as follows and Table 3 lists
mainstream datasets.
|a|b h|
|---|---|
|||
c
_Q: a=7 inches; b=24 inches; c=25 inches;_
_h=5.4 inches; What is its area? (Unit:_
square inches)
_A:_ 24.03
**3.4** **Automated theorem proving**
In the specialized area of Automated Theorem
Proving (ATP), the inherent challenges are unique
and encompass a wide spectrum, akin to those
found in distinct mathematical fields. ATP’s core
focus is on autonomously constructing proofs for
specified conjectures, requiring a blend of logical
analysis and a profound grasp of formal languages,
supported by an extensive knowledge base. Its
application is crucial in areas like the validation
and development of both software and hardware
systems.
For example, the MINIF2F dataset (Zheng et al.,
2022) stands out in ATP, featuring a series of complex Olympiad-level mathematical problems, designed to evaluate theorem-proving systems including Metamath (Yu et al., 2023), Lean (Han et al.,
2022), and Isabelle (Wenzel et al., 2008). In a
similar vein, the HOList benchmark (Bansal et al.,
2019), with its comprehensive array of theorem
statements from various corpora, sets a sequential
-----
proving challenge for ATP systems, where each
theorem must be proved using only the lemmas
preceding it. Additionally, the COQGYM dataset
(Yang and Deng, 2019) provides a broad ATP environment, showcasing a rich collection of more
than 71,000 proofs penned by humans, all within
the framework of the Coq proof assistant. These
datasets illustrate the diverse methodologies and
skillsets necessary in ATP, reflecting the multifaceted nature of solving mathematical problems.
**3.5** **Math in vision-language context**
CHARTQA (Masry et al., 2022), with 9.6K humanwritten questions and 23.1K model-generated questions have explored a variety of complex reasoning
questions that involve several logical and arithmetic
operations over charts. MATHVISTA (Lu et al.,
2023a): size: 6K; it features seven types of mathematical reasoning: algebraic reasoning, arithmetic
reasoning, geometry reasoning, logical reasoning,
numeric common sense, scientific reasoning, and
statistical reasoning. In addition, fine-grained metadata are available, including question type, answer
type, language, source, category, task, grade level,
and visual context.
**4** **Methodologies**
We summarize these methods into three progressive
levels: i) Prompting frozen LLMs, ii) Strategies enhancing frozen LLMs, and iii) Fine-tuning LLMs.
**4.1** **Prompting frozen LLMs**
We organize prior work by typical LLMs.
**GPT-3.** Zong and Krishnamachari (2023) evaluated the use of GPT-3, a 175B parameter transformer model for three related challenges pertaining to math word problems: i) classifying word
problems, ii) extracting equations from word problems, and iii) generating word problems.
**ChatGPT.** Shakarian et al. (2023) reported the
first independent evaluation of ChatGPT on MWP,
and found that ChatGPT’s performance changes
dramatically based on the requirement to show its
work. Cheng and Zhang (2023) assessed ChatGPT, OpenAI’s latest conversational chatbot and
LLM, on its performance in elementary-grade arithmetic and logic problems, and found that ChatGPT performed better than previous models such
as InstructGPT (Ouyang et al., 2022) and Minerva
(Lewkowycz et al., 2022).
**GPT-4.** Wu et al. (2023) adapted and evaluated
several existing prompting methods to the usage
of GPT-4, including a vanilla prompt, Programof-Thoughts prompt (Chen et al., 2023a), and Program Synthesis prompt (Drori et al., 2022). The
study by Gu (2023) investigated the capability of
GPT-4 to actively engage in math-oriented brainstorming sessions. This includes tasks like identifying new research problems, refining problem
formulations, and suggesting potential methods or
unconventional solutions, all achieved through iterative ideation with a human partner—a common
practice in collaborative brainstorming with other
professionals.
**GPT4V & Bard.** Lu et al. (2023a) presented
MATHVISTA, a benchmark of evaluating mathematical reasoning in visual context, conducted
a comprehensive, quantitative evaluation of three
LLMs (i.e, ChatGPT, GPT-4, Claude-2 (Bai et al.,
2022)), two proprietary large multimodal models (LMMs) (i.e., GPT4V, Bard), and seven
open-source LMMs, with Chain-of-Thought and
Program-of-Thought.
**Multiple.** Wei et al. (2023) evaluated a variety
of popular LLMs, including both commercial and
open-source options, aiming to provide a benchmark tool for assessing the following question:
to what grade level of Chinese elementary school
math do the abilities of popular LLMs correspond?
**4.2** **Strategies enhancing frozen LLMs**
**Preprocessing the math question.** An et al.
(2023a) explored ChatGPT for the dataset SVAMP
and observed that substituting numerical expressions with English expressions can elevate the performance.
**More advanced prompts.** Chain-of-thought
(Wei et al., 2022), the first time to steer the
LLMs to do step-by-step math reasoning, SelfConsistency (Wang et al., 2023) tried multiple
Chain-of-Thought reasoning paths and leverage the
**consistency mechanism to discover a more proba-**
ble answer. Zhou et al. (2023a) proposed a novel
and effective prompting method, explicit codebased self-verification, to further boost the mathematical reasoning potential of GPT-4 Code Interpreter. This method employs a zero-shot prompt
on GPT-4 Code Interpreter to encourage it to use
code to self-verify its answers.
-----
**Using external tool.** Yamauchi et al. (2023) employed an external tool, specifically the Python
REPL, to correct errors in Chain-of-Thought. Their
demonstration highlighted that integrating Chainof-Thought and Python REPL using a markup
language improves the reasoning capabilities of
ChatGPT. In a related context, He-Yueya et al.
(2023) introduced an approach that merges an
LLM, Codex (Chen et al., 2021b), capable of progressively formalizing word problems into variables and equations, with an external symbolic
solver adept at solving the generated equations.
Program-of-Thought (Chen et al., 2023a) separates
the computational aspect from the reasoning by
utilizing a Language Model (primarily Codex) to
articulate the reasoning procedure as a program.
The actual computation is delegated to an external
computer, responsible for executing the generated
programs to arrive at the desired answer.
**Improving the whole interaction.** Wu et al.
(2023) introduced MathChat, a conversational
framework designed for chat-based LLMs. In
this framework, math problems from the MATH
dataset are resolved through a simulated conversation between the model and a user proxy agent.
**Considering more comprehensive factors in eval-**
**uation.** While accuracy is crucial in evaluating
LLMs for math problem-solving, it shouldn’t be the
sole metric. Other important dimensions include:
i) Confidence Provision: Imani et al. (2023)’s
”MathPromper” boosts LLM performance and confidence by generating algebraic expressions, providing diverse prompts, and evaluating consensus
among multiple runs. ii) Verifiable Explanations:
Gaur and Saunshi (2023) used concise, verifiable
explanations to assess LLM reasoning, revealing
their proficiency in zero-shot solving of symbolic
MWPand their ability to produce succinct explanations.
**4.3** **Fine-tuning LLMs**
**Learning to select in-context examples.** As indicated by prior research, few-shot GPT-3’s performance is susceptible to instability and may decline
to near chance levels due to the reliance on incontext examples. This instability becomes more
pronounced when dealing with intricate problems
such as TABMWP. In addressing this issue, Lu
et al. (2023b) introduced PROMPTPG, which can
autonomously learn to select effective in-context
examples through policy gradient interactions with
the GPT-3 API, eliminating the need for manually
designed heuristics.
**Generating intermediate steps.** Nye et al.
(2021) initiated the fine-tuning of decoder-only
LLMs, ranging from 2M to 137B in size. Their
approach involved training these models to solve
integer addition and polynomial evaluation by generating intermediate computation steps into a designated “scratchpad.” In a related effort, Zhang
et al. (2023b) introduced a fine-tuning strategy for
GPT-2 or T5, enabling them to produce step-bystep solutions with a combination of textual and
mathematical tokens leading to the final answer.
Additionally, Yang et al. (2023) applied a step-bystep strategy in fine-tuning a series of GLM models
(Zeng et al., 2023), specifically tailored for solving
distinct Chinese mathematical problems. Minerva,
developed by Lewkowycz et al. (2022), enhances
LLMs’ ability to generate intermediate steps in
complex math problems. Its fine-tuning of diverse
datasets enables nuanced, step-by-step problemsolving, demonstrating advanced handling of intricate mathematical concepts.
**Learning** **an** **answer** **verifier.** OpenAI researchers, per Cobbe et al. (2021), fine-tuned a
GPT-3 model of 175B as a verifier, assigning
probabilities to solution candidates. In exploring reexamination processes for MWP solving,
Bin et al. (2023) introduced Pseudo-Dual Learning, involving solving and reexamining modules.
For MWP solution, Zhu et al. (2023) developed a
cooperative reasoning-induced PLM, with GPT-J
(Wang and Komatsuzaki, 2021) generating paths
and DeBERTa-large (He et al., 2021) supervising
evaluation. Google researchers, as per Liu et al.
(2023c), observed improved correctness in LLMs
with multiple attempts, which hints that LLMs
might generate correct solutions while struggling
to differentiate between accurate and inaccurate
ones. They sequentially fine-tuned their PaLM 2
model (Anil et al., 2023) as a solution generator,
evaluator, and generator again.
**Learning from enhanced dataset.** Emulating
the error-driven learning process observed in human learning, An et al. (2023b) conducted finetuning on various open-source LLMs within the
LLaMA (Touvron et al., 2023a), LLaMA-2 (Touvron et al., 2023b), CodeLLaMA (Roziere et al.`,
2023), WizardMath (Luo et al., 2023), MetaMath
(Yu et al., 2023), and Llemma (Azerbayev et al.,
-----
2023) families. This fine-tuning utilized mistakecorrection data pairs generated by GPT-4. To
mitigate over-reliance on knowledge distillation
from LLM teachers, Liang et al. (2023a) finetuned LLaMA-7B on existing mathematical problem datasets that exhibit diverse annotation styles.
In a related approach, Raiyan et al. (2023) demonstrated that training on linguistic variants of problem statements and implementing a voting mechanism for candidate predictions enhance the mathematical reasoning and overall robustness of the
model.
**Teacher-Student knowledge distillation.** Liang
et al. (2023b) utilized GPT-3 to coach a more
efficient MWP solver (RoBERTa-based encoderdecoder (Liu et al., 2019)). They shifted the focus
from explaining existing exercises to identifying
the student model’s learning needs and generating
new, tailored exercises. The resulting smaller LLM
achieves competitive accuracy on the SVAMP
dataset with significantly fewer parameters compared to state-of-the-art LLMs.
**Finetuning on many datasets.** Mishra et al.
(2022) conducted fine-tuning on a series of GPTNeo2.7B causal language models (Black et al.,
2021) using LILA, a composite of 20 existing math
datasets. Similarly, Yue et al. (2023) created “MathInstruct”, a meticulously curated instruction tuning dataset. Comprising 13 math datasets with
intermediate Chain-of-Thought and Program-ofThought rationales, this dataset was used to finetune Llama (Touvron et al., 2023a,b; Roziere et al.`,
2023) models across different scales. The resulting models demonstrate unprecedented potential in
cross-dataset generalization.
**Math solver ensemble.** Yao et al. (2023) incorporated a problem typing subtask that combines
the strengths of the tree-based solver and the LLM
solver (ChatGLM-6B (Zeng et al., 2023)).
**5** **Analysis**
**5.1** **LLMs’s robustness in math**
Patel et al. (2021) provided strong evidence that the
pre-LLM MWP solvers, mostly LSTM-equipped
encoder-decoder models, rely on shallow heuristics
to achieve high performance on some simple benchmark datasets, then introduced a more challenging
dataset, SVAMP, created by applying carefully
chosen variations over examples sampled from
preceding datasets. Stolfo et al. (2023) observed
that, among non-instruction-tuned LLMs, the larger
ones tend to be more sensitive to changes in the
ground-truth result of a MWP, but not necessarily
more robust. However, a different behavior exists
in the instruction-tuned GPT-3 models, which show
a remarkable improvement in both sensitivity and
robustness, although the robustness reduces when
problems get more complicated. Wei et al. (2023)
assessed the robustness of several top-performing
LLMs by augmenting the original problems in the
curated CMATH dataset with distracting information. Their findings reveal that GPT-4 can maintain
robustness while other models fail.
Zhou et al. (2023b) proposed a new dataset RO
BUSTMATH to evaluate the robustness of LLMs in
math-solving ability. Extensive experiments show
that (i) Adversarial samples from higher-accuracy
LLMs are also effective for attacking LLMs with
lower accuracy; (ii) Complex MWPs (such as more
solving steps, longer text, more numbers) are more
vulnerable to attack; (iii) We can improve the robustness of LLMs by using adversarial samples in
few-shot prompts.
**5.2** **Factors in influencing LLMs in math**
The comprehensive evaluation conducted by Yuan
et al. (2023) encompasses OpenAI’s GPT series,
including GPT-4, ChatGPT2, and GPT-3.5, along
with various open-source LLMs. This analysis
methodically examines the elements that impact the
arithmetic skills of LLMs, covering aspects such as
tokenization, pre-training, prompting techniques,
interpolation and extrapolation, scaling laws, Chain
of Thought (COT), and In-Context Learning (ICL).
**Tokenization.** This research underscores tokenization’s critical role in LLMs’ arithmetic performance (Yuan et al., 2023). Models like T5, lacking
specialized tokenization for arithmetic, are less effective than those with advanced methods, such as
Galactica (Taylor et al., 2022) and LLaMA, which
show superior accuracy in arithmetic tasks. This
indicates that token frequency in pre-training and
the method of tokenization are key to arithmetic
proficiency.
**Pre-training Corpus.** Enhanced arithmetic skills
in LLMs correlate with the inclusion of code and
LATEX in pre-training data (Yuan et al., 2023).
Galactica, heavily utilizing LATEX, excels in arithmetic tasks, while models like Code-DaVinci-002,
-----
better at reasoning, lags in arithmetic, highlighting a distinction between arithmetic and reasoning
skills.
**Prompts.** The nature of input prompts greatly
affects LLMs’ arithmetic performance (Liu et al.,
2023a; Lou et al., 2023). Without prompts, performance drops (Yuan et al., 2023). Models like ChatGPT, which respond well to instructional systemlevel messages, demonstrate the importance of
prompt type. Instruction tuning in pre-training also
emerges as a significant factor (Yue et al., 2023).
**Model Scale.** There’s a noted correlation between parameter count and arithmetic capability
in LLMs (Yuan et al., 2023). Larger models generally perform better, but a performance plateau
is observed, as shown by Galactica’s similar outcomes at 30B and 120B parameters. However, this
doesn’t always mean superior performance, with
smaller models like ChatGPT occasionally outperforming larger ones.
**5.3** **Perspectives of mathematics pedagogy**
While machine learning emphasizes LLMs’
problem-solving abilities in mathematics, in practical education, their primary role is to aid learning. Thus, the focus shifts from mere mathematical
performance to a crucial consideration of LLMs’
understanding of students’ needs, capabilities, and
learning methods.
**Advantages of deploying LLMs in math edu-**
**cation.** Educators have observed the following
benefits of leveraging LLMs for math education. (i)
_LLMs foster critical thinking and problem-solving_
_skills, as they provide comprehensive solutions and_
promote rigorous error analysis (Matzakos et al.,
2023); (ii) Educators and students prefer LLM_generated hints because of their detailed, sequen-_
tial format and clear, coherent narratives (Gattupalli
et al., 2023); (iii) LLMs introduce a conversational
_style in problem-solving, an invaluable asset in_
math education (Gattupalli et al., 2023); (iv) The
impact of LLMs extends beyond mere computa_tional assistance, offering deep insights and under-_
standing spanning diverse disciplines like Algebra,
Calculus, and Statistics (Rane, 2023).
**Disadvantages of deploying LLMs in math edu-**
**cation.** (i) Potential for misinterpretation. Misinterpretation of students’ queries or errors in providing explanations by LLMs could lead to confusion.
Inaccurate responses might result in the reinforcement of misconceptions, impacting the quality of
education (Yen and Hsu, 2023). (ii) Limited un_derstanding of individual learning styles. LLMs_
may struggle to cater to diverse learning styles, as
they primarily rely on algorithms and might not
fully grasp the unique needs of each student. Some
learners may benefit more from hands-on activities
or visual aids that LLMs may not adequately address. Gattupalli et al. (2023) proposed that hints
produced by GPT-4 could be excessively intricate
for younger students who have shorter attention
spans. (iii) Privacy and data security issues. Deploying LLMs involves collecting and analyzing
substantial amounts of student data. Privacy concerns may arise if proper measures are not in place
to safeguard this data from unauthorized access or
misuse.
**6** **Challenges**
**Data-driven & limited generalization.** The prevailing trend in current research revolves around
the curation of extensive datasets. Despite this
emphasis, there is a noticeable lack of robust generalization across various datasets, grade levels, and
types of math problems. Examining how humans
acquire math-solving skills suggests that machines
may need to embrace continual learning to enhance
their capabilities.
**LLMs’ brittleness in math reasoning.** The
fragility of LLMs in mathematical reasoning is
evident across three dimensions. Firstly, when presented with questions expressed in varying textual
forms (comprising words and numbers), LLMs exhibit inconsistent performance. Secondly, for identical questions, an LLM may yield different final
answers through distinct reasoning paths during
multiple trials. Lastly, pre-trained math-oriented
LLMs are susceptible to attacks from adversarial
inputs, highlighting their vulnerability in the face
of manipulated data.
**Human-oriented math interpretation.** The current LLM-oriented math reasoning, such as chainof-thoughts, does not take into account the needs
and comprehension abilities of users, such as students. As an example, Yen and Hsu (2023) discovered that GPT-3.5 had a tendency to misinterpret
students’ questions in the conversation, resulting
in a failure to deliver adaptive feedback. Additionally, research conducted by Gattupalli et al. (2023)
-----
revealed that GPT-4 frequently overlooks the practical comprehension abilities of younger students.
It tends to generate overly intricate hints that even
confuse those students. Consequently, there is a
pressing need for increased AI research that actively incorporates human factors into its design,
ensuring future developments align more closely
with the nuanced requirements of users.
**7** **Conclusion**
This survey on LLMs for Mathematics delves into
various aspects of LLMs in mathematical reasoning, including their capabilities and limitations.
The paper discusses different types of math problems, datasets, and the persisting challenges in the
domain. It highlights the advancements in LLMs,
their application in educational settings, and the
need for a human-centric approach in math education. We hope this paper will guide and inspire
future research in the LLM community, fostering
further advancements and practical applications in
diverse mathematical contexts.
**References**
Chris Alvin, Sumit Gulwani, Rupak Majumdar, and
Supratik Mukhopadhyay. 2017. Synthesis of solutions for shaded area geometry problems. In Proceed_ings of the Thirtieth International Florida Artificial_
_Intelligence Research Society Conference, FLAIRS_
_2017, Marco Island, Florida, USA, May 22-24, 2017,_
pages 14–19. AAAI Press.
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math
word problem solving with operation-based formalisms. In Proceedings of NAACL-HLT, pages
2357–2367.
Jisu An, Junseok Lee, and Gahgene Gweon. 2023a.
Does chatgpt comprehend the place value in numbers when solving math word problems? In Pro_ceedings of the Workshop ”Towards the Future of_
_AI-augmented Human Tutoring in Math Learning”_
_co-located with The 24th International Conference_
_on Artificial Intelligence in Education (AIED 2023),_
_Tokyo, Japan, July 3, 2023, volume 3491 of CEUR_
_Workshop Proceedings, pages 49–58._
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng,
Jian-Guang Lou, and Weizhu Chen. 2023b. Learning
from mistakes makes LLM better reasoner. CoRR,
abs/2310.20689.
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, Eric Chu, Jonathan H. Clark, Laurent El
Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin
Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao,
Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez´
Abrego, Junwhan Ahn, Jacob Austin, Paul Barham,´
Jan A. Botha, James Bradbury, Siddhartha Brahma,
Kevin Brooks, Michele Catasta, Yong Cheng, Colin
Cherry, Christopher A. Choquette-Choo, Aakanksha
Chowdhery, Clement Crepy, Shachi Dave, Mostafa´
Dehghani, Sunipa Dev, Jacob Devlin, Mark D´ıaz,
Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier
Garcia, Sebastian Gehrmann, Lucas Gonzalez, and
et al. 2023. Palm 2 technical report. _CoRR,_
abs/2305.10403.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
[2023. Llemma: An open language model for mathe-](http://arxiv.org/abs/2310.10631)
[matics.](http://arxiv.org/abs/2310.10631)
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda
Askell, Anna Chen, Nova DasSarma, Dawn Drain,
Stanislav Fort, Deep Ganguli, Tom Henighan,
Nicholas Joseph, Saurav Kadavath, Jackson Kernion,
Tom Conerly, Sheer El Showk, Nelson Elhage, Zac
Hatfield-Dodds, Danny Hernandez, Tristan Hume,
Scott Johnston, Shauna Kravec, Liane Lovitt, Neel
Nanda, Catherine Olsson, Dario Amodei, Tom B.
Brown, Jack Clark, Sam McCandlish, Chris Olah,
Benjamin Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. CoRR,
abs/2204.05862.
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Chris[tian Szegedy, and Stewart Wilcox. 2019. Holist: An](http://arxiv.org/abs/1904.03241)
[environment for machine learning of higher-order](http://arxiv.org/abs/1904.03241)
[theorem proving.](http://arxiv.org/abs/1904.03241)
Yi Bin, Wenhao Shi, Yujuan Ding, Yang Yang, and SeeKiong Ng. 2023. Solving math word problems with
reexamination. CoRR, abs/2310.09590.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and
Stella Biderman. 2021. Gpt-neo: Large scale autoregressive language modeling with mesh-tensorflow.
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu,
Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi,
Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang,
Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie.
2023. A survey on evaluation of large language models. CoRR, abs/2307.03109.
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin,
Chongyu Chen, and Xiaodan Liang. 2022. Unigeo:
Unifying geometry logical reasoning via reformulating mathematical expression. In Proceedings of
_EMNLP, pages 3313–3323._
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang,
Lingbo Liu, Eric P. Xing, and Liang Lin. 2021a.
-----
Geoqa: A geometric question answering benchmark
towards multimodal numerical reasoning. In Find_ings of ACL/IJCNLP, volume ACL/IJCNLP 2021,_
pages 513–523.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde de Oliveira Pinto, Jared Kaplan,´
Harrison Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
Sutskever, and Wojciech Zaremba. 2021b. Evaluating large language models trained on code. CoRR,
abs/2107.03374.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023a. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on
_Machine Learning Research._
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan,
Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony
Xia. 2023b. Theoremqa: A theorem-driven question
answering dataset. In Proceedings of EMNLP, pages
7889–7901.
Vincent Cheng and Yu Zhang. 2023. Analyzing ChatGPT’s mathematical deficiencies: Insights and contributions. In Proceedings of the 35th Conference
_on Computational Linguistics and Speech Processing_
_(ROCLING 2023), pages 188–193._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168.
Aniruddha Deb, Neeva Oza, Sarthak Singla, Dinesh
Khandelwal, Dinesh Garg, and Parag Singla. 2023.
Fill in the blank: Exploring and enhancing LLM
capabilities for backward reasoning in math word
problems. CoRR, abs/2310.01991.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard
Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda
Chen, Sunny Tran, Newman Cheng, et al. 2022. A
neural network solves, explains, and generates university math problems by program synthesis and fewshot learning at human level. Proceedings of the Na_tional Academy of Sciences, 119(32):e2123433119._
Simon Frieder, Julius Berner, Philipp Petersen, and
Thomas Lukasiewicz. 2023a. Large language models
for mathematicians. Internationale Mathematische
_Nachrichten, 254:1–20._
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz,
Philipp Christian Petersen, Alexis Chevalier, and
Julius Berner. 2023b. Mathematical capabilities of
chatgpt. CoRR, abs/2301.13867.
Sai Gattupalli, William Lee, Danielle Allessio, Danielle
Crabtree, Ivon Arroyo, Beverly Woolf, and Beverly
Woolf. 2023. Exploring pre-service teachers’ perceptions of large language models-generated hints in
online mathematics learning.
Vedant Gaur and Nikunj Saunshi. 2023. Reasoning in
large language models through symbolic math word
problems. In Findings of ACL, pages 5889–5903.
Sophia Gu. 2023. Llms as potential brainstorming
partners for math and science problems. _CoRR,_
abs/2310.10677.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W.
Ayers, and Stanislas Polu. 2022. Proof artifact cotraining for theorem proving with language models.
In Proceedings of ICLR.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and
Weizhu Chen. 2021. Deberta: decoding-enhanced
bert with disentangled attention. In Proceedings of
_ICLR._
Joy He-Yueya, Gabriel Poesia, Rose E. Wang, and
Noah D. Goodman. 2023. Solving math word problems by combining language models with symbolic
solvers. CoRR, abs/2304.09102.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the MATH dataset. In Proceed_ings of NeurIPS._
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
Etzioni, and Nate Kushman. 2014. Learning to solve
arithmetic word problems with verb categorization.
In Proceedings of EMNLP, pages 523–533. ACL.
Shima Imani, Liang Du, and Harsh Shrivastava. 2023.
Mathprompter: Mathematical reasoning using large
language models. In Proceedings of ACL, pages 37–
42.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Trans. Assoc. Comput. Linguistics, 3:585–597.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:
A math word problem repository. In Proceedings of
_NAACL, pages 1152–1157._
-----
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy
[Gur-Ari, and Vedant Misra. 2022. Solving quantita-](http://arxiv.org/abs/2206.14858)
[tive reasoning problems with language models.](http://arxiv.org/abs/2206.14858)
Zhenwen Liang, Dian Yu, Xiaoman Pan, Wenlin Yao,
Qingkai Zeng, Xiangliang Zhang, and Dong Yu.
2023a. Mint: Boosting generalization in mathematical reasoning via multi-view fine-tuning. _CoRR,_
abs/2307.07951.
Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, and Ashwin Kalyan.
2023b. Let GPT be a math tutor: Teaching math
word problem solvers with customized exercise generation. CoRR, abs/2305.14386.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl
Cobbe. 2023. Let’s verify step by step. _CoRR,_
abs/2305.20050.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of ACL, pages 158–167.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang,
Hiroaki Hayashi, and Graham Neubig. 2023a. Pretrain, prompt, and predict: A systematic survey of
prompting methods in natural language processing.
_ACM Computing Surveys, 55(9):1–35._
Wentao Liu, Hanglei Hu, Jie Zhou, Yuyang Ding,
Junsong Li, Jiayi Zeng, Mengliang He, Qin Chen,
Bo Jiang, Aimin Zhou, and Liang He. 2023b.
Mathematical language models: A survey. CoRR,
abs/2312.07622.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining
approach. CoRR, abs/1907.11692.
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. CoReyes, and Peter J. Liu. 2023c. Improving large language model fine-tuning for solving math problems.
_CoRR, abs/2310.10047._
Renze Lou, Kai Zhang, and Wenpeng Yin. 2023. Is
prompt all you need? no. a comprehensive and
broader view of instruction learning. arXiv preprint
_arXiv:2303.10475._
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei
Chang, Michel Galley, and Jianfeng Gao. 2023a.
Mathvista: Evaluating math reasoning in visual contexts with gpt-4v, bard, and other large multimodal
models. CoRR, abs/2310.02255.
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
Huang, Xiaodan Liang, and Song-Chun Zhu. 2021.
Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. In
_Proceedings of ACL/IJCNLP, pages 6774–6786._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu,
Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark,
and Ashwin Kalyan. 2023b. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In Proceedings of ICLR.
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
Kai-Wei Chang. 2023c. A survey of deep learning
for mathematical reasoning. In Proceedings of ACL,
pages 14605–14631.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_CoRR, abs/2308.09583._
Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq R.
Joty, and Enamul Hoque. 2022. Chartqa: A benchmark for question answering about charts with visual
and logical reasoning. In Findings of ACL, pages
2263–2279.
Nikolaos Matzakos, Spyridon Doukakis, and Maria
[Moundridou. 2023. Learning mathematics with large](https://www.learntechlib.org/p/223774)
[language models: A comparative study with com-](https://www.learntechlib.org/p/223774)
[puter algebra systems and other tools. International](https://www.learntechlib.org/p/223774)
_Journal of Emerging Technologies in Learning (iJET),_
18(20):51–71.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing
english math word problem solvers. In Proceedings
_of ACL, pages 975–984._
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard
Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark,
and Ashwin Kalyan. 2022. LILA: A unified benchmark for mathematical reasoning. In Proceedings of
_EMNLP, pages 5807–5832._
Kole Norberg, Husni Almoubayyed, Stephen E. Fancsali, Logan De Ley, Kyle Weldon, April Murphy, and
Steven Ritter. 2023. Rewriting math word problems
with large language models. In Proceedings of the
_Workshop on Empowering Education with LLMs -_
_the Next-Gen Interface and Content Generation 2023_
_co-located with 24th International Conference on Ar-_
_tificial Intelligence in Education (AIED 2023), Tokyo,_
_Japan, July 7, 2023, volume 3487 of CEUR Work-_
_shop Proceedings, pages 163–172._
Maxwell I. Nye, Anders Johan Andreassen, Guy GurAri, Henryk Michalewski, Jacob Austin, David
Bieber, David Dohan, Aitor Lewkowycz, Maarten
Bosma, David Luan, Charles Sutton, and Augustus
-----
Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. CoRR,
abs/2112.00114.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll L. Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray,
John Schulman, Jacob Hilton, Fraser Kelton, Luke
Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe.
2022. Training language models to follow instructions with human feedback. In NeurIPS.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are NLP models really able to solve simple
math word problems? In Proceedings of NAACL_HLT, pages 2080–2094._
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng
Tang, and Liang Lin. 2021. Neural-symbolic solver
for math word problems with auxiliary tasks. In
_Proceedings of ACL/IJCNLP, pages 5870–5881._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
_blog, 1(8):9._
Syed Rifat Raiyan, Md. Nafis Faiyaz, Shah Md. Jawad
Kabir, Mohsinul Kabir, Hasan Mahmud, and
Md Kamrul Hasan. 2023. Math word problem solving by generating linguistic variants of problem statements. CoRR, abs/2306.13899.
[Nitin Rane. 2023. Enhancing mathematical capabili-](https://doi.org/10.2139/ssrn.4603237)
[ties through chatgpt and similar generative artificial](https://doi.org/10.2139/ssrn.4603237)
[intelligence: Roles and challenges in solving mathe-](https://doi.org/10.2139/ssrn.4603237)
[matical problems. SSRN Electronic Journal.](https://doi.org/10.2139/ssrn.4603237)
Bernardino Romera-Paredes, Mohammadamin
Barekatain, Alexander Novikov, Matej Balog,
M Pawan Kumar, Emilien Dupont, Francisco JR
Ruiz, Jordan S Ellenberg, Pengming Wang, Omar
Fawzi, et al. 2023. Mathematical discoveries from
program search with large language models. Nature,
pages 1–3.
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of EMNLP,
pages 1743–1752.
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten`
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Tal Remez, Jer´ emy Rapin, Artyom´
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori,
Wenhan Xiong, Alexandre Defossez, Jade Copet,´
Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve.
2023. Code llama: Open foundation models for code.
_CoRR, abs/2308.12950._
Mrinmaya Sachan, Avinava Dubey, and Eric P. Xing.
2017. From textbooks to knowledge: A case study in
harvesting axiomatic knowledge from textbooks to
solve geometry problems. In Proceedings of EMNLP,
pages 773–784.
Mrinmaya Sachan and Eric P. Xing. 2017. Learning to solve geometry problems from natural language demonstrations in textbooks. In Proceedings
_of *SEM @ACM, pages 251–261._
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla,
Pranav Tadepalli, Paula Vidas, Alexander Kranias,
John J. Nay, Kshitij Gupta, and Aran Komatsuzaki.
2023. ARB: advanced reasoning benchmark for large
language models. CoRR, abs/2307.13692.
Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Etzioni, and Clint Malcolm. 2015. Solving geometry
problems: Combining text and diagram interpretation.
In Proceedings of EMNLP, pages 1466–1476.
Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, and
Lakshmivihari Mareedu. 2023. An independent evaluation of chatgpt on mathematical word problems
(MWP). In Proceedings of the AAAI 2023 Spring
_Symposium on Challenges Requiring the Combina-_
_tion of Machine Learning and Knowledge Engineer-_
_ing (AAAI-MAKE 2023), Hyatt Regency, San Fran-_
_cisco Airport, California, USA, March 27-29, 2023,_
volume 3433 of CEUR Workshop Proceedings.
Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Scholkopf, and Mrinmaya Sachan. 2023. A¨
causal framework to quantify the robustness of mathematical reasoning with language models. In Pro_ceedings of ACL, pages 545–561._
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas
Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic.
2022. Galactica: A large language model for science.
_CoRR, abs/2211.09085._
Alberto Testolin. 2023. Can neural networks do arithmetic? A survey on the elementary numerical skills
of state-of-the-art deep learning models. _CoRR,_
abs/2303.07735.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothee Lacroix,´
Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal`
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard´
Grave, and Guillaume Lample. 2023a. Llama: Open
and efficient foundation language models. CoRR,
abs/2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
-----
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Ro-´
driguez, Robert Stojnic, Sergey Edunov, and Thomas
Scialom. 2023b. Llama 2: Open foundation and
fine-tuned chat models. CoRR, abs/2307.09288.
Trieu Trinh, Yuhuai Wu, Quoc Le, He He, and Thang
[Luong. 2024. Solving olympiad geometry without](https://doi.org/10.1038/s41586-023-06747-5)
[human demonstrations. Nature.](https://doi.org/10.1038/s41586-023-06747-5)
Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and
dataset for algebra word problems. In Proceedings
_of EACL, pages 494–504._
Ben Wang and Aran Komatsuzaki. 2021. Gpt-j-6b: A 6
billion parameter autoregressive language model.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In Proceedings of ICLR.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of EMNLP, pages 845–854._
Zichao Wang, Andrew S. Lan, and Richard G. Baraniuk.
2021. Math word problem generation with mathematical consistency and problem context constraints.
In Proceedings of EMNLP, pages 5986–5999.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le,
and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In
_Proceedings of NeurIPS._
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and
Bin Wang. 2023. CMATH: can your language model
pass chinese elementary school math test? CoRR,
abs/2306.16636.
Makarius Wenzel, Lawrence C Paulson, and Tobias
Nipkow. 2008. The isabelle framework. In Theo_rem Proving in Higher Order Logics: 21st Interna-_
_tional Conference, TPHOLs 2008, Montreal, Canada,_
_August 18-21, 2008. Proceedings 21, pages 33–38._
Springer.
Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li,
Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng,
Qingyun Wu, and Chi Wang. 2023. An empirical
study on challenging math problem solving with GPT4. CoRR, abs/2306.01337.
Ryutaro Yamauchi, Sho Sonoda, Akiyoshi Sannai, and
Wataru Kumagai. 2023. LPML: llm-prompting
markup language for mathematical reasoning. CoRR,
abs/2309.13078.
[Kaiyu Yang and Jia Deng. 2019. Learning to prove](http://arxiv.org/abs/1905.09381)
[theorems via interacting with proof assistants.](http://arxiv.org/abs/1905.09381)
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang,
Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. 2023.
GPT can solve mathematical problems without a calculator. CoRR, abs/2309.03241.
Jie Yao, Zihao Zhou, and Qiufeng Wang. 2023. Solving
math word problem with problem type classification.
In Proceedings of NLPCC, volume 14304, pages 123–
134.
An-Zi Yen and Wei-Ling Hsu. 2023. Three questions
concerning the use of large language models to facilitate mathematics learning. CoRR, abs/2310.13615.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions
for large language models. CoRR, abs/2309.12284.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang,
and Songfang Huang. 2023. How well do large language models perform in arithmetic tasks? CoRR,
abs/2304.02015.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.
Mammoth: Building math generalist models through
hybrid instruction tuning. CoRR, abs/2309.05653.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma,
Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan
Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023.
GLM-130B: an open bilingual pre-trained model. In
_Proceedings of ICLR._
Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin
Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen.
2023a. Evaluating and improving tool-augmented
computation-intensive math reasoning. _arXiv_
_preprint arXiv:2306.02408._
Mengxue Zhang, Zichao Wang, Zhichao Yang, Weiqi
Feng, and Andrew S. Lan. 2023b. Interpretable math
word problem solution generation via step-by-step
planning. In Proceedings of ACL, pages 6858–6877.
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and
[Jingming Liu. 2020. Ape210k: A large-scale and](http://arxiv.org/abs/2009.11506)
[template-rich dataset of math word problems.](http://arxiv.org/abs/2009.11506)
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu.
[2022. Minif2f: a cross-system benchmark for formal](http://arxiv.org/abs/2109.00110)
[olympiad-level mathematics.](http://arxiv.org/abs/2109.00110)
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang,
Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,
and Nan Duan. 2023. Agieval: A human-centric
benchmark for evaluating foundation models. CoRR,
abs/2304.06364.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun
Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song,
Mingjie Zhan, and Hongsheng Li. 2023a. Solving
-----
challenging math word problems using GPT-4 code
interpreter with code-based self-verification. CoRR,
abs/2308.07921.
Zihao Zhou, Qiufeng Wang, Mingyu Jin, Jie Yao, Jianan
Ye, Wei Liu, Wei Wang, Xiaowei Huang, and Kaizhu
Huang. 2023b. Mathattack: Attacking large language models towards math solving ability. CoRR,
abs/2309.01686.
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang,
Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. 2023. Solving math word problems via
cooperative reasoning induced language models. In
_Proceedings of ACL, pages 4471–4485._
Mingyu Zong and Bhaskar Krishnamachari. 2023. Solving math word problems concerning systems of equations with GPT-3. In Proceedings of AAAI, pages
15972–15979.
-----
| [
"Janice, Ahn",
"Rishu, Verma",
"Rui, Zhang",
"Renze, Lou",
"Di, Liu",
"Wenpeng, Yin"
] | 2024-04-05T00:00:00 | EACL 2024 student research workshop | false | 53 | 1 | null | http://arxiv.org/abs/2402.00157 | https://arxiv.org/abs/2402.00157 | https://www.semanticscholar.org/paper/42445823fb0156afddc8c72eaa5ee81dded5b965 |
Learning by Fixing: Solving Math Word Problems with Weak Supervision | Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions. In this paper, we address this issue by introducing a weakly-supervised paradigm for learning MWPs. Our method only requires the annotations of the final answers and can generate various solutions for a single problem. To boost weakly-supervised learning, we propose a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning. Specifically, for an incorrect solution tree generated by the neural network, the fixing mechanism propagates the error from the root node to the leaf nodes and infers the most probable fix that can be executed to get the desired answer. To generate more diverse solutions, tree regularization is applied to guide the efficient shrinkage and exploration of the solution space, and a memory buffer is designed to track and save the discovered various fixes for each problem. Experimental results on the Math23K dataset show the proposed LBF framework significantly outperforms reinforcement learning baselines in weakly-supervised learning. Furthermore, it achieves comparable top-1 and much better top-3/5 answer accuracies than fully-supervised methods, demonstrating its strength in producing diverse solutions. | This paper proposes a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning and achieves comparable top-1 and much better top-3/5 answer accuracies than fully-supervised methods. | ### Learning by Fixing: Solving Math Word Problems with Weak Supervision
**Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, Song-Chun Zhu**
University of California, Los Angeles, USA.
[email protected], {liqing, danielciao, huangsiyuan}@ucla.edu, [email protected]
**Abstract**
Previous neural solvers of math word problems (MWPs) are
learned with full supervision and fail to generate diverse solutions. In this paper, we address this issue by introducing a
_weakly-supervised paradigm for learning MWPs. Our method_
only requires the annotations of the final answers and can generate various solutions for a single problem. To boost weaklysupervised learning, we propose a novel learning-by-fixing
(LBF) framework, which corrects the misperceptions of the
neural network via symbolic reasoning. Specifically, for an
incorrect solution tree generated by the neural network, the fix_ing mechanism propagates the error from the root node to the_
leaf nodes and infers the most probable fix that can be executed
to get the desired answer. To generate more diverse solutions,
_tree regularization is applied to guide the efficient shrinkage_
and exploration of the solution space, and a memory buffer
is designed to track and save the discovered various fixes for
each problem. Experimental results on the Math23K dataset
show the proposed LBF framework significantly outperforms
reinforcement learning baselines in weakly-supervised learning. Furthermore, it achieves comparable top-1 and much better top-3/5 answer accuracies than fully-supervised methods,
demonstrating its strength in producing diverse solutions.
**Introduction**
Solving math word problems (MWPs) poses unique challenges for understanding natural-language problems and performing arithmetic reasoning over quantities with commonsense knowledge. As shown in Figure 1, a typical MWP
consists of a short narrative describing a situation in the
world and asking a question about an unknown quantity. To
solve the MWP in Figure 1, a machine needs to extract key
quantities from the text, such as “100 kilometers” and “2
hours”, and understand the relationships between them. General mathematical knowledge like “distance = velocity ×
time” is then used to calculate the solution.
Researchers have recently focused on solving MWPs using
neural-symbolic models (Ling et al. 2017; Wang, Liu, and
Shi 2017; Huang et al. 2018; Wang et al. 2018; Xie and Sun
2019). These models usually consist of a neural perception
module (i.e., Seq2Seq or Seq2Tree) that maps the problem
text into a solution expression or tree, and a symbolic module
Copyright © 2021, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
**Problem: A truck travels 100 kilometers in 2 hours. At this**
speed, if it travels for another 3.5 hours, how many kilometers
will it complete for the entire journey? Answer: 275
**Solution1: 100/2 ×(2 + 3.5)**
Total Distance
Velocity × Total Time
Distance 1 / Time 1 Time 1 + Time 2
100 2 2 3.5
**Solution2: 100 + 100/2×3.5**
Total Distance
Distance 1 + Distance 2
100 Velocity × Time 2
Distance 1 / Time 1 3.5
100 2
Figure 1: Exemplar MWP with multiple solutions.
which executes the expression and generates the final answer.
Training these models requires the full supervision of the
solution expressions.
However, these fully-supervised approaches have three
drawbacks. First, current MWP datasets only provide one
solution for each problem, while there naturally exist multiple solutions that give different paths of solving the same
problem. For instance, the problem in Figure 1 can be solved
by “(100/2) × (2 + 3.5)” if we first calculate the speed and
then multiply it by the total time; alternatively, we can solve
it using “100 + 100/2 × 3.5” by summing the distances of
the first and second parts of the journey. The models trained
with full supervision on current datasets are forced to fit the
given solution and cannot generate diverse solutions. Second,
annotating the expressions for MWPs is time-consuming.
However, a large amount of MWPs with their final answers
can be mined effortlessly from the internet (e.g., online forums). How to efficiently utilize these partially-labeled data
without the supervision of expressions remains an open problem. Third, current supervised learning approaches suffer
from the train-test discrepancy. The fully-supervised learning
-----
methods optimize expression accuracy rather than answer
accuracy. However, the model is evaluated by the answer
accuracy on the test set, causing a natural performance gap.
To address these issues, we propose to solve the MWPs
with weak supervision, where only the problem texts and the
final answers are required. By directly optimizing the answer
accuracy rather than the expression accuracy, learning with
weak supervision naturally addresses the train-test discrepancy. Our model consists of a tree-structured neural model
similar to Xie and Sun (2019) to generate the solution tree
and a symbolic execution module to calculate the answer.
However, the symbolic execution module for arithmetic expressions is non-differentiable with respect to the answer
accuracy, making it infeasible to use back-propagation to
compute gradients. A straightforward approach is to employ
policy gradient methods like REINFORCE (Williams 1992)
to train the neural model. The policy gradient methods explore the solution space and update the policy based on generated solutions that happen to hit the correct answer. Since the
solution space is large and incorrect solutions are abandoned
with zero reward, these methods usually converge slowly or
fail to converge.
To improve the efficiency of weakly-supervised learning,
we propose a novel fixing mechanism to learn from incorrect
predictions, which is inspired by the human ability to learn
from failures via abductive reasoning (Magnani 2009; Zhou
2019a). The fixing mechanism propagates the error from the
root node to the leaf nodes in the solution tree and finds the
most probable fix that can generate the desired answer. The
fixed solution tree is further used as a pseudo label to train
the neural model. Figure 2 shows how the fixing mechanism
corrects the wrong solution tree by tracing the error in a
top-down manner.
Furthermore, we design two practical techniques to traverse the solution space and discover possible solutions efficiently. First, we observe a positive correlation between the
number of quantities in the text and the size of the solution
tree (the number of leaf nodes in the tree), and propose a tree
_regularization technique based on this observation to limit_
the range of possible tree sizes and shrink the solution space.
Second, we adopt a memory buffer to track and save the discovered fixes for each problem with the fixing mechanism.
All memory buffer solutions are used as pseudo labels to train
the model, encouraging the model to generate more diverse
solutions for a single problem.
In summary, by combining the fixing mechanism and
the above two techniques, the proposed learning-by-fixing
(LBF) method contains an exploring stage and a learning
stage in each iteration, as shown in Figure 2. We utilize the
fixing mechanism and tree regularization to correct wrong
answers in the exploring stage and generate fixed expressions
as pseudo labels. In the learning stage, we train the neural
model using these pseudo labels.
We conduct comprehensive experiments on the Math23K
dataset (Wang, Liu, and Shi 2017). The proposed LBF
method significantly outperforms the reinforcement learning baselines in weakly-supervised learning and achieves
comparable performance with several fully-supervised methods. Furthermore, our proposed method achieves significantly
better answer accuracies of all the top-3/5 answers than fullysupervised methods, illustrating its advantage in generating
diverse solutions. The ablative experiments also demonstrate
the efficacy of the designed algorithms, including the fixing
mechanism, tree regularization, and memory buffer.
**Related Work**
**Math Word Problems**
Recently, there emerges various question-answering tasks
that require human-like reasoning abilities (Qi et al. 2015;
Tu et al. 2014; Zhang et al. 2019; Dua et al. 2019; Hong et al.
2019; Zhu et al. 2020; Zhang et al. 2020b; Li et al. 2020b;
Yu et al. 2020). Among them, solving mathematical word
problems (MWPs) is a fundamental and challenging task.
Previous studies of MWPs range from traditional rulebased methods (Fletcher 1985; Bakman 2007; Yu-hui et al.
2010), statistical learning methods (Kushman et al. 2014;
Zhou, Dai, and Chen 2015; Mitra and Baral 2016; Roy
and Roth 2017; Huang et al. 2016), semantic-parsing methods (Shi et al. 2015; Koncel-Kedziorski et al. 2015; Huang
et al. 2017) to recent deep learning methods (Ling et al.
2017; Wang, Liu, and Shi 2017; Huang et al. 2018; Robaidek,
Koncel-Kedziorski, and Hajishirzi 2018; Wang et al. 2018,
2019; Chiang and Chen 2019; Xie and Sun 2019; Zhang et al.
2020a).
In particular, Deep Neural Solver (DNS) (Wang, Liu, and
Shi 2017) is a pioneering work that designs a Seq2seq model
to solve MWPs and achieves promising results. Xie and Sun
(2019) propose a tree-structured neural solver to generate the
solution tree in a goal-driven manner. All these neural solvers
learn the model with full supervision, where the ground-truth
intermediate representations (e.g., expressions, programs) are
given during training. To learn the solver with less supervision, Koncel-Kedziorski et al. (2015) use a discriminative
model to solve MWPs in a weakly-supervised way. They utilize separate modules to extract features, construct expression
trees, and score the likelihood, which is different from the current end-to-end neural solvers. Upadhyay et al. (2016), Zhou,
Dai, and Chen (2015), and Kushman et al. (2014) use mixed
supervision, where one dataset has only annotated equations,
and the other has only final answers. However, for the set
with final answers, they also depend on pre-defined equation
templates. Chen et al. (2020) apply a neural-symbolic reader
on MathQA(Amini et al. 2019), which is a large-scale dataset
with fully-specified operational programs. They have access
to the ground truth programs for a small fraction of training
samples at the first iterations of training.
Unlike these methods, the proposed LBF method requires
only the supervision of the final answer and generates diverse
solutions by keeping a memory buffer. Notably, it addresses
the sparse reward problem in policy gradient methods using a
fixing mechanism that propagates error down a solution tree
and finds the most probable fix.
**Neural-Symbolic Learning for NLP**
Neural-symbolic learning has been applied to solve NLP
tasks with weak supervision, such as semantic parsing and
program synthesis (Liang et al. 2016a; Guu et al. 2017; Liang
-----
_V_ _[op]_ = {+, −, ×, ÷, ∧}, and numeric values V _[num]_ from
the problem text P . Therefore, the target vocabulary of P is
denoted as Σ = V _[op]_ _∪_ _V_ _[con]_ _∪_ _V_ _[num]_ and it varies between
problems due to different V _[num]._
To generate the solution tree, we adopt the goal-driven treestructured neural model (GTS) (Xie and Sun 2019), which
first encodes the problem text into its goal and then recursively decomposes it into sub-goals in a top-down manner.
**Problem Encoding. Each word of the problem text is en-**
coded into a contextual representation. Specifically, for a
problem P = w1w2...wn, each word wi is first converted to
a word embedding wi. Then the sequence of embeddings is
inputted to a bi-directional GRU (Cho et al. 2014) to produce
a contextual word representation: hi = _→[−]hi +[←]h−i_ _, where_ _→[−]hi_ _,_ _[←]h−i_
are the hidden states of the forward and backward GRUs at
position i, respectively.
**Solution Tree Generation. The tree generation process is**
designed as a preorder tree traversal (root-left-right). The
root node of the solution tree is initialized with a goal vector
**q0 =** **h→n +** **h−0.**
_[−]_ _[←]_
For a node with goal q, we first derive a context vector c by
an attention mechanism to summarize relevant information
from the problem:
_ai = softmax(v[⊤]a_ [tanh][(][W][a][[][q][,][ h][i][]))] (1)
et al. 2018; Agarwal et al. 2019; Li et al. 2020b). Similar to
MWP, they generate intermediate symbolic representations
with a neural network and execute the intermediate representation with a symbolic reasoning module to get the final
result. Typical approaches for such neural-symbolic models
use policy gradient methods like REINFORCE since the symbolic execution module is non-differentiable. For example,
Neural Symbolic Machines (Liang et al. 2016b) combines
REINFORCE with a maximum-likelihood training process
to find good programs. Guu et al. (2017) augment reinforcement learning with the maximum marginal likelihood so that
probability is distributed evenly across consistent programs.
Memory Augmented Policy Optimization (MAPO) (Liang
et al. 2018) formulates its learning objective as an expectation
over a memory buffer of high-reward samples and a separate expectation outside the buffer, which helps accelerate
and stabilize policy gradient training. Meta Reward Learning (Agarwal et al. 2019) uses an auxiliary reward function
to provide feedback beyond a binary success or failure. Since
these methods can only learn from sparse successful samples, they suffer from cold start and inefficient exploration
of large search spaces. Recently, Dai and Zhou (2017), Dai
et al. (2019), and Zhou (2019b) introduce abductive learning,
which states that human misperceptions can be corrected via
abductive reasoning. In this paper, we follow the abductive
learning method (Li et al. 2020a) and propose a novel fixing mechanism to learn from negative samples, significantly
accelerating and stabilizing the weakly-supervised learning
process. We further design the tree regularization and memory buffer techniques to efficiently shrink and explore the
solution space.
**Weakly-Supervised MWPs**
In this section, we define the weakly-supervised math word
problems and describe the goal-driven tree model originated
from Xie and Sun (2019). Then we introduce the proposed
learning-by-fixing method, as also shown in Figure 2.
**Problem Definition**
A math word problem is represented by an input problem text
_P_ . The machine learning model with parameters θ requires
to translate P into an intermediate expression T, which is
executed to compute the final answer y. In fully-supervised
learning, we learn from the ground truth expression T and
the final answer y. The learning objective is to maximize
the data likelihood p(T, y _P_ ; θ) = pθ(T _P_ )p(y _T_ ), where
_|_ _|_ _|_
computing y given T is a deterministic process. In contrast,
in the weakly-supervised setting, only P and y are observed,
while T is hidden. In other words, the model is required to
generate an unknown expression from the problem text. The
expression is then executed to get the final answer.
**Goal-driven Tree-Structured Model**
A problem text P consists of words and numeric values. The
model takes in problem text P and generates a solution tree
_T_ . Let V _[num]_ denote the ordered list of numeric values in P
according to their order in the problem text. Generally, T may
contain constants V _[con]_ = {1, 2, π}, mathematical operators
**c =**
_aihi_ (2)
where va and Wa are trainable parameters. Then the goal q
and the context c are used to predict the token of this node
from the target vocabulary Σ. The probability of token t is
defined as:
_s(t|q, c) = w[⊤]n_ [tanh][(][W][s][[][q][,][ c][,][ e][(][t][)])] (3)
_p(t|q, c) = softmax(s(t|q, c))_ (4)
where e(t) is the embedding of token t:
**Mop(t)** if t ∈ _V_ _[op]_
**e(t) =** **Mcon(t)** if t _V_ _[con]_ (5)
_∈_
hloc(t,P ) if t _V_ _[num]_
_∈_
where Mop and Mcon are two trainable embeddings for operators and constants, respectively. For a number token, its
embedding is the corresponding hidden state hloc(t,P ) from
the encoder, where loc(t, P ) is the index of t in the problem
_P_ . The predicted token _t[ˆ] is:_
_tˆ = arg max_ (6)
_t_ Σ _[p][(][t][|][q][,][ c][)]_
_∈_
If the predicted token is a number token or constant, the node
is terminated and its goal is realized by the predicted token;
otherwise, the predicted token is an operator and the current
goal is decomposed into left and right sub-goals combined
by the operator. Please refer to the supplementary material
for more details about the goal decomposition process.
**Answer Calculation. The generated solution tree is trans-**
formed into a reasoning tree _T[ˆ] by creating auxiliary non-_
terminal nodes in place of the operator nodes to store the
-----
**Goal-Driven Tree Model** **Fixing** **Memory Buffer**
**GC:: Total Distance travels 100 kilometers in 2 hours.** + 200 **275** + ×
_travels for another 3.5 hours_ 100 × / +
**G: Distance 2** / 3.5 100 2 2 3.5
**GC:: Distance 1 travels 100 kilometers** 100 × **C: At this speed, travels** 100 + 100 **175**
_for another 3.5 hours_ 100 2
×
+
**GC:: Velocity travels 100** / 3.5 **GC: : Time 23.5 hours** 28.57 **50** × 3.5 × 100 100 /
_kilometers in 2 hours_
+ 2
/ 3.5
**GC:: Distance 1 travels 100 kilometers** 100 3.5 **GC:: Time 2 3.5 hours** 100 / 3.5 **2** 100 2 2 3.5
**G: Goal** **Exploring** **Bottom-up** **Top-down**
**C: Context** **Learning** **reasoning** **fixing**
Figure 2: Overview of our proposed learning-by-fixing (LBF) method. It shows the process for learning the example in Figure 1.
LBF works by iteratively exploring the solution space and learning the MWP solver. Exploring: the problem first goes through
the GTS module and produces a tentative solution using tree regularization. Then the fixing mechanism diagnoses this solution
by propagating the correct answer in a top-down manner. The fixed solution is then added to the memory buffer. Learning: all
solutions in the memory buffer are used as pseudo labels to train the GTS module using a cross-entropy loss function.
intermediate results, and the original operator nodes are attached as child nodes to the corresponding auxiliary nodes.
Then the final answer ˆy is calculated by executing _T[ˆ] to the_
value of the root node in a bottom-up manner.
**Learning-by-Fixing**
**Fixing Mechanism** Drawing inspiration from humans’
ability to correct and learn from failures, we propose a fixing
mechanism to correct the wrong solution trees via abductive
reasoning following Li et al. (2020a) and use the fixed solution trees as pseudo labels for training. Specifically, we
find the most probable fix for the wrong prediction by backtracking the reasoning tree and propagating the error from
the root node into the leaf nodes in a top-down manner.
The key ingredient in the fixing mechanism is the 1-step
fix (1-FIX) algorithm which assumes that only one symbol in
the reasoning tree can be substituted. As shown by the 1-FIX
function in Algorithm 1, the 1-step fix starts from the root
node of the reasoning tree and gradually searches down to
find a fix that makes the final output equal to the ground-truth.
The search process is implemented with a priority queue,
where each element is defined as a fix-tuple (A, αA, p):
- A is the current visiting node.
- αA is the expected value on this node, which means if
the value of A is changed to αA, _T[ˆ] will execute to the_
ground-truth answer y.
- p is the visiting priority, which reflects the probability of
changing the value of A.
In 1-FIX, error propagation through the solution tree is
achieved by a solve function, which aims at computing the
expected value of a child node from its parent’s expected
value. Supposing B is A’s child node and αA is the expected
value of A, the solve(B, A, αA) function works as following:
- If B is A’s left or right child, we directly solve the equation
_αB_ _childR(A) = αA or childL(A)_ _αB = αA to get_
_B’s expected value αB, where_ denotes the operator.
- If BL is an operator node, we try to replace B with
L
all other operators and check whether the new ex
[L]
pression can generate the correct answer. That is,
_childL(A) αB childR(A) = αA where αB is now an_
operator. If there is no αB satisfying this equation, the
solve function returns none.
Please refer to the supplementary material for the definition
of the visiting priority as well as the illustrative example of
the 1-FIX process.
To search the neighbors of _T[ˆ] within multi-step distance,_
we extend the 1-step fix to multi-step by incorporating a
RANDOMWALK function. As shown in Algorithm 1, if we
find a fix by 1-FIX, we return this fix; otherwise, we randomly change one leaf node in the reasoning tree to another
symbol within the same set (e.g., operators V _[op]) based on_
the probability in Equation 4. This process will be repeated
for certain iterations until it finds a fix for the solution.
**Solution Space Exploration**
**Tree Regularization While Li et al. (2020a) assumes the**
length of the intermediate representation is given, the expression length is unknown in weakly-supervised learning. Thus,
the original solution space is infinite since the predicted token
decides whether to continue the generation or stop. Therefore, it is critical to shrink the solution space, i.e., control the
size of the generated solution trees. If the size of the generated solution tree varies a lot from the target size, it would
be challenging for the solution or its fix to hit the correct
answer. Although the target size is unknown, we observe a
positive correlation between the target size and the number
of quantities in text. Regarding this observation as a tree size
-----
𝑉[012] = 100, 2, 3.5
𝑉[-.] = +, −,×,÷,∧
𝑉[3-0] = {1, 2, 𝜋}
**Target size 𝒍= 𝟓**
× ② 𝑉[-.]
**Algorithm 1 Fixing Mechanism**
1: Input: reasoning tree _T,[ˆ]_ ground-truth answer y
2: T [(0)] = T[ˆ]
3:4: forT i ←[∗] = 1-F0 to mIX do(T [(][i][)], y)
5: **if T** _[∗]_ ≠ ∅ **then**
6: **return T** _[∗]_
7: **else**
8: _T_ [(][i][+1)] = RANDOMWALK(T [(][i][)])
9: return ∅
10:
11: function 1-FIX(T, y)
12: q = PriorityQueue(), S = the root node of T
13: q.push(S, y, 1)
14: while (A, αA, p) = q.pop() do
15: **if A ∈** Σ then
16: _T_ _[∗]_ = T[ˆ](A → _αA)_
17: **return T** _[∗]_
18: **for B ∈** _child(A) do_
19: _αB = solve(B, A, αA)_
20: **if not (B ∈** Σ and αB /∈ Σ) then
21: _q.push(B, αB, p(B →_ _αB))_
22: return ∅
**Target size 𝒍= 𝟕**
× ②
÷ N/A
𝑉[-.]
𝑉[-.] ∪𝑉[012] ∪𝑉[3-0]
𝑉[-.] ∪𝑉[012] ∪𝑉[3-0]
𝑉[-.] ∪𝑉[012] ∪𝑉[3-0]
𝑉[-.]
𝑉[012] ∪𝑉[3-0]
𝑉[012] ∪𝑉[3-0]
100
2
+
2
3.5
N/A
N/A
𝑉[-.] ∪𝑉[012] ∪𝑉[3-0]
𝑉[012] ∪𝑉[3-0]
𝑉[012] ∪𝑉[3-0]
𝑉[012] ∪𝑉[3-0]
N/A
100
2
3.5
Prefix: × ÷ 100 2 + 2 3.5
Prefix: × ÷ 100 2 3.5
Figure 3: Tree regularization for the problem in Figure 1
given different target sizes. The three columns are the generated tokens, the effective rules, and the target vocabularies
shrunk by the rules, respectively.
and its buffer β, the learning objective is to minimize the
negative log-likelihood of all fixed expressions in the buffer:
log p(T _[∗]|P_ ) (8)
_TX[∗]∈β_
_J(P, β) = −_
**Learning-by-Fixing Framework**
The complete learning-by-fixing method is described in
Algorithm 2. In the exploring state, we use the fixing
mechanism and tree regularization to discover possible fixes
for the wrong trees generated by the neural network, and put
them into a buffer. In the learning stage, we train the model
with all the solutions in the memory buffer by minimizing
the loss function in Equation 8.
**Algorithm 2 Learning-by-Fixing**
1: Input: training set = (Pi, yi) _i=1_
_D_ _{_ _}[N]_
2: memory buffer B = {βi}i[N]=1[, the GTS model][ θ]
3:4: for Pi, yi, βi ∈ (D, B) do _▷Exploring_
5: _Tˆi = GTS (P_ ; θ)
6: _Ti[∗]_ [=][ m][-FIX][( ˆ]Ti, yi)
7:8:9: **if Tβi[∗]i ←[̸][=][ ∅]βi[and] ∪{[ T]T[ ∗]ii[∗][}]∈[/]** _βi then_ _▷Learning_
10: _θ = θ_ _θJ(Pi, βi)_
_−∇_
prior, we design a tree regularization algorithm to generate a
solution tree with a target size and regularize the size in an
empirical range. Denote the size of a solution tree Size(T )
as the number of leaf nodes including quantities, constants,
and operators. The prior range of Size(T ) given the length of
the numeric value list len(V _[num]) is defined as:_
Size(T ) ∈ [minSize(T ), maxSize(T )]
minSize(T ) = aminlen(V _[num]) + bmin_
maxSize(T ) = amaxlen(V _[num]) + bmax_
(7)
where amin, bmin, amax, bmax are the hyperparameters. The
effect of these hyperparameters will be discussed in Table 2.
We further propose a tree regularization algorithm to decode a solution tree with a given size. To generate a tree of
a given size l, we design two rules to produce a prefix-order
expression during the preorder tree decoding:
1. The number of operators cannot be greater than ⌊l/2⌋.
2. Except the l-th position, the number of numeric values
(quantities and constants) cannot be greater than the number of operators.
These two rules are inspired by the syntax of prefix notation
(a.k.a, normal Polish notation) for mathematical expressions.
The rules shrink the target vocabulary Σ in Equation 6 so
that the tree generation can be stopped when it reaches the
target size. Figure 3 shows illustrative examples of the tree
regularization algorithm.
With tree regularization, we can search the possible fixes
within a given range of tree size [minSize(T ), maxSize(T )]
for each problem.
**Memory Buffer. We adopt a memory buffer to track and save**
the discovered fixes for each problem. The memory buffer
enables us to seek multiple solutions for a single problem
and use all of them as pseudo labels for training, which
encourages diverse solutions. Formally, given a problem P
**Experimental Results**
**Experimental Setup**
**Dataset. We evaluate our proposed method on the Math23K**
dataset (Wang, Liu, and Shi 2017). It contains 23,161 math
word problems annotated with solution expressions and answers. For the weakly-supervised setting, we only use the
problems and final answers and discard the expressions. We
do cross-validation following the setting of Xie and Sun
(2019).
**Evaluation Metric. We evaluate the model performance by**
answer accuracy, where the generated solution is considered
correct if it executes to the ground-truth answer. Specifically,
we report answer accuracies of all the top-1/3/5 predictions
-----
using beam search. It evaluates the model’s ability to generate
multiple possible solutions.
**Models. We conduct experiments by comparing our meth-**
ods with variants of weakly-supervised learning methods. Specifically, we experiment with two inference models: Seq2Seq with bidirectional Long Short Memory network (BiLSTM) (Wu et al. 2016) and GTS (Xie and Sun
2019), and train with four learning strategies: REINFORCE,
MAPO (Liang et al. 2018), LBF, LBF-w/o-M (without memory buffer). MAPO is a state-of-the-art method in semantic
parsing task that extends the REINFORCE with augmented
memory. Both models are also trained with the tree regularization algorithm. We also compare with the fully-supervised
learning methods to demonstrate our superiority in generating diverse solutions. In the ablative studies, we analyze the
effect of the proposed tree regularization and the length of
search steps in fixing mechanism.
**Comparisons with State-of-the-art**
Table 1 summarizes the answer accuracy of different weaklysupervised learning methods and the state-of-the-art fullysupervised approaches. The proposed learning-by-fixing
framework significantly outperforms the policy gradient baselines like REINFORCE and MAPO, on both the Seq2seq and
the GTS models. It demonstrates the strength of our proposed
LBF method in weakly-supervised learning. The GTS-LBFfully model is trained by initializing the memory buffer with
all the ground-truth expressions. It demonstrates that by extending to the fully-supervised setting, our model maintains
the top-1 accuracy while significantly improving solutions’
diversity. We believe that learning MWPs with weak supervision is a promising direction. It requires fewer annotations
and allows us to build larger datasets with less cost.
_Fully-Supervised_
proposed LBF method converges significantly faster and
achieves higher accuracy compared with other methods. Both
the REINFORCE and MAPO take a long time to start improving, which indicates the policy gradient methods suffer
from the cold-start and need time to accumulate rewarding
samples.
0.6
0.5
0.2
Testing Result Accuracy 0.1
0.0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||LBF-wo-|Memory|
|||||||||||
|||||||||LBF MAPO||
|||||||||||
|||||||||REINFOR|CE|
|||||||||||
|||||||||||
|||||||||||
0k 20k 40k 60k 80k 100k 120k 140k
LBF-wo-Memory
LBF
MAPO
REINFORCE
Iteration
Figure 4: The learning curves of the GTS model using different weakly-supervised learning methods.
**Diverse Solutions with Memory Buffer**
To evaluate the ability to generate diverse solutions, we report
the answer accuracies of all the top-1/3/5 solutions on the
test set using beam search, denoted as Acc@1/3/5, as shown
in Table 2. In the weakly-supervised scenario, GTS-LBF
achieves slightly better Acc@1 accuracy and much better
Acc@3/5 accuracy than GTS-LBF-w/o-M. In the fully supervised scenario, GTS-LBF-fully achieves comparable Acc@1
accuracy and much better Acc@3/5 accuracy than the original GTS model. Particularly, GTS-LBF-fully outperforms
GTS by 21% and 26% in terms of Acc@3/5 accuracy. It
reveals the efficacy of the memory buffer in encouraging
diverse solutions in both weakly-supervised learning and
fully-supervised learning.
|Model|Accuracy(%)|
|---|---|
_Fully Supervised_
|Model|Tree Size|Acc@1 Acc@3 Acc@5|
|---|---|---|
|GTS|74.3 42.2 30.0|
|---|---|
|GTS-LBF-fully|74.1 63.4 56.3|
|GTS-LB|BF-fully Weakly|74.1 63.4 56.3 Supervised|
|---|---|---|
|GTS-LBF- w/o-M|[1,+∞) [2n-1,2n+1] [2n-1,2n+3] [2n-3,2n+5]|∼0 ∼0 ∼0 55.3 26.2 19.3 58.3 27.7 20.3 56.7 27.7 20.6|
|GTS-LBF|[1,+∞) [2n-1,2n+1] [2n-1,2n+3] [2n-3,2n+5]|∼0 ∼0 ∼0 56.7 45.3 39.1 59.4 49.6 45.2 57.6 49.3 45.2|
Table 2: Answer accuracies of all the top-1/3/5 solutions
decoded using beam search, denoted as Acc@1/3/5.
**Qualitative Analysis**
We visualize several examples of the top-5 predictions of
GTS-LBF in Figure 5. In the first example, the first solution
generated by our model is to sum up the prices of a table
and a chair first, and then multiply it by the number of pairs
of tables and chairs. Our model can also produce another
|Retrieval (Robaidek, Koncel-Kedziorski, and Hajishirzi 2018) Classification (Robaidek, Koncel-Kedziorski, and Hajishirzi 2018) LSTM (Robaidek, Koncel-Kedziorski, and Hajishirzi 2018) CNN (Robaidek, Koncel-Kedziorski, and Hajishirzi 2018) DNS (Wang, Liu, and Shi 2017) Seq2seqET (Wang et al. 2018) Stack-Decoder (Chiang and Chen 2019) T-RNN (Wang et al. 2019) GTS (Xie and Sun 2019) Graph2Tree (Zhang et al. 2020a) GTS-LBF-fully|47.2 57.9 51.9 42.3 58.1 66.7 65.8 66.9 74.3 74.8 1 74.1|
|---|---|
|Col1|Graph2Tree (Zhang et al. 2020a) GTS-LBF-fully Weakly-Supervised|74.8 1 74.1|
|---|---|---|
|Seq2seq|REINFORCE MAPO LBF-w/o-M LBF|1.2 10.7 44.7 43.6|
|GTS|REINFORCE MAPO LBF-w/o-M LBF|15.8 20.8 58.3 59.4|
Table 1: Answer accuracy on the Math23K dataset. We compare variants of models with our LBF method.
**Convergence Speed**
Figure 4 shows the learning curves of different weaklysupervised learning methods for the GTS model. The
1We run the code using the same setting as GTS for three times
and compute the average accuracy.
-----
**Problem** **p**
The school purchased 85 sets of × × ✔ × ✔ × ✔ × ✔ + ✔
tables and chairs for 67 dollars
+ 85 85 + + 85 85 + + 85 × +
per table and 23 dollars per chair.
How much did the school spend 67 23 67 23 67 23 23 67 23 67 85 67 85 23
buying these tables and chairs?
There are 1200 students in a × × ✔ × ✔ í ✔ í ✔ ✘
×
school, and 65% are girls. 1200 í 1200 í í 1200 1200 × 1200 ×
How many boys are there? 1200 65%
1 65% 1 65% 1 65% 1200 65% 65% 1200
The fruit store shipped 240 ✔ + ☒ + ✘ í ✘ + ✘
í í
kilograms of raw pears. The
× í 240 + 240 + 240 í
apples shipped were 60 × 60 × 60
kilograms less than twice the 240 2 240 + 240 í 60 × 240 í
240 2 240 2
weight of raw pears. How many
240 60 240 60 240 2 60 240
kilograms of apples are shipped?
í
The cafeteria has 260kg of flour and 6 bags of rice, 25kg í í ✘ í ✔ í ✔ í ✘ 260 × ☒
260 ×
per bag. How many more × 260 260 × 260 × × × 6 í
kilograms of flour are there 25 6 25 6 6 25 25 6 6 25 260 1 6 í
than rice?
6 25
Expression Right, Expression Wrong, Expression Wrong,
✔ ✘ ☒
Answer Right Answer Wrong Answer Right (Spurious)
Figure 5: Qualitative results on the Math23K dataset. We visualize the solution trees generated by our method.
|Col1|Right Wrong Spurious|
|---|---|
|Acc@1 Acc@3 Acc@5|58.6 40.6 0.56 49.3 50.4 0.27 44.9 54.8 0.32|
Table 3: Human evaluation on the generated solutions (%).
reasonable solution (the fifth column) by deriving the prices
of tables and chairs separately and then summing them up.
One caveat for the multiple solutions is that some solutions
have different solution trees but are equivalent by switching
the order of numeric values or subtrees, as shown in the first
four solutions of the first problem in Figure 5. In particular, multiplication and addition are commutative, and our
model learns and exploits this property to generate equivalent
solutions with different tree structures.
The first solution to the fourth problem in Figure 5 is a
typical error case of our model due to the wrong prediction
of the problem goal. Another failure type is the spurious
solutions, which are correct but not meaningful answers, such
as the second solution of the third problem in Figure 5. To
test how frequent the spurious solutions appear, we randomly
select 500 examples from the test set, and ask three human
annotators to determine whether each generated expression
is right, wrong, or spurious. Table 3 provides the human
evaluation results, and it shows that spurious solutions are
rare in our model.
**Ablative Analyses**
**Tree Regularization.** We test different choices of the hyperparameters defined by Equation 7 in tree regularization.
As shown in Table 2, the model without tree regularization, i.e., tree size ∈ [1, +∞), fails to converge and gets
nearly 0 accuracy. The best range for the solution tree size
is [2n − 1, 2n + 3], where n = len(V _[num]). We provide an_
intuitive interpretation of this range: for a problem with n
quantities, (1) n − 1 operators are needed to connect n quantities, which leads to the lower bound of tree size to 2n − 1;
(2) in certain cases, the constants or quantities are used more
than once, leading to a rough upper bound of 2n + 3. Therefore, we use [2n − 1, 2n + 3] as the default range in our
implementations. Empirically, this range covers 88% of the
lengths of the given ground-truth expressions in the Math23K
dataset, providing an efficient prior for tree size.
**Number of Search Steps** Table 4 shows the comparison
of various step lengths in the m-FIX algorithm. In most cases,
increasing the step length improves the chances of correcting
wrong solutions, thus improving the performance.
|Steps Models|1 10 50 (default) 100|
|---|---|
|Seq2seq-LBF-w/o-M Seq2seq-LBF|41.9 43.4 44.7 47.8 43.9 45.7 43.6 44.6|
|GTS-LBF-w/o-M GTS-LBF|51.2 54.6 58.3 57.8 52.5 55.8 59.4 59.6|
Table 4: Accuracy (%) using various search steps.
-----
**Conclusion**
In this work, we propose a weakly-supervised paradigm for
learning MWPs and a novel learning-by-fixing framework
to boost the learning. Our method endows the MWP learner
with the capability of learning from wrong solutions, thus
significantly improving the answer accuracy and learning
efficiency. One future direction of the proposed model is to
prevent generating equivalent or spurious solutions during
training, possibly by making the generated solution trees
more interpretable with semantic constraints.
**Ethical Impact**
The presented work should be categorized as research in
the field of weakly-supervised learning and abductive reasoning. It can help teachers in school get various solutions
of a math word problem. This work may also inspire new
algorithmic, theoretical, and experimental investigation in
neural-symbolic methods and NLP tasks.
**Acknowledgement**
This work reported herein is supported by ARO
W911NF1810296, DARPA XAI N66001-17-2-4029, and
ONR MURI N00014-16-1-2007.
**References**
Agarwal, R.; Liang, C.; Schuurmans, D.; and Norouzi, M.
2019. Learning to Generalize from Sparse and Underspecified Rewards. In ICML.
Amini, A.; Gabriel, S.; Lin, S.; Koncel-Kedziorski, R.; Choi,
Y.; and Hajishirzi, H. 2019. MathQA: Towards Interpretable
Math Word Problem Solving with Operation-Based Formalisms. In NAACL-HLT.
Bakman, Y. 2007. Robust Understanding of Word Problems
with Extraneous Information.
Chen, X.; Liang, C.; Yu, A. W.; Zhou, D.; Song, D.; and Le,
Q. V. 2020. Neural Symbolic Reader: Scalable Integration
of Distributed and Symbolic Representations for Reading
Comprehension. In ICLR.
Chiang, T.-R.; and Chen, Y.-N. 2019. Semantically-Aligned
Equation Generation for Solving and Reasoning Math Word
Problems. ArXiv abs/1811.00720.
Cho, K.; Van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.;¨
Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning phrase representations using RNN encoder-decoder for
statistical machine translation. EMNLP .
Dai, W.-Z.; Xu, Q.; Yu, Y.; and Zhou, Z.-H. 2019. Bridging Machine Learning and Logical Reasoning by Abductive
Learning. In Advances in Neural Information Processing
_Systems, 2811–2822._
Dai, W.-Z.; and Zhou, Z.-H. 2017. Combining logical abduction and statistical induction: Discovering written primitives
with human knowledge. In Thirty-First AAAI Conference on
_Artificial Intelligence._
Dua, D.; Wang, Y.; Dasigi, P.; Stanovsky, G.; Singh, S.; and
Gardner, M. 2019. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. In
_NAACL-HLT._
Fletcher, C. R. 1985. Understanding and solving arithmetic
word problems: A computer simulation. Behavior Research
_Methods, Instruments, & Computers 17: 565–571._
Guu, K.; Pasupat, P.; Liu, E. Z.; and Liang, P. 2017. From
Language to Programs: Bridging Reinforcement Learning
and Maximum Marginal Likelihood. In ACL.
Hong, Y.; Wang, J.; Jia, Y.; Zhang, W.; and Wang, X. 2019.
Academic Reader: An Interactive Question Answering System on Academic Literatures. Thirty-Third AAAI Conference
_on Artificial Intelligence ._
Huang, D.; Liu, J.; Lin, C.-Y.; and Yin, J. 2018. Neural
Math Word Problem Solver with Reinforcement Learning. In
_COLING._
Huang, D.; Shi, S.; Lin, C.-Y.; and Yin, J. 2017. Learning
Fine-Grained Expressions to Solve Math Word Problems. In
_EMNLP._
Huang, D.; Shi, S.; Lin, C.-Y.; Yin, J.; and Ma, W.-Y. 2016.
How well do Computers Solve Math Word Problems? LargeScale Dataset Construction and Evaluation. In ACL.
Koncel-Kedziorski, R.; Hajishirzi, H.; Sabharwal, A.; Etzioni,
O.; and Ang, S. D. 2015. Parsing Algebraic Word Problems
into Equations. Transactions of the Association for Compu_tational Linguistics 3: 585–597._
Kushman, N.; Zettlemoyer, L.; Barzilay, R.; and Artzi, Y.
2014. Learning to Automatically Solve Algebra Word Problems. In ACL.
Li, Q.; Huang, S.; Hong, Y.; Chen, Y.; Wu, Y. N.; and Zhu,
S.-C. 2020a. Closed Loop Neural-Symbolic Learning via
Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning. In International Conference on Machine
_Learning (ICML)._
Li, Q.; Huang, S.; Hong, Y.; and Zhu, S.-C. 2020b. A
Competence-aware Curriculum for Visual Concepts Learning via Question Answering. In European Conference on
_Computer Vision. Springer._
Liang, C.; Berant, J.; Le, Q.; Forbus, K. D.; and Lao,
N. 2016a. Neural symbolic machines: Learning semantic
parsers on freebase with weak supervision. arXiv preprint
_arXiv:1611.00020 ._
Liang, C.; Berant, J.; Le, Q. V.; Forbus, K. D.; and Lao,
N. 2016b. Neural Symbolic Machines: Learning Semantic
Parsers on Freebase with Weak Supervision. In ACL.
Liang, C.; Norouzi, M.; Berant, J.; Le, Q. V.; and Lao, N.
2018. Memory Augmented Policy Optimization for Program
Synthesis and Semantic Parsing. In NeurIPS.
Ling, W.; Yogatama, D.; Dyer, C.; and Blunsom, P. 2017.
Program Induction by Rationale Generation: Learning to
Solve and Explain Algebraic Word Problems. _ArXiv_
abs/1705.04146.
-----
Magnani, L. 2009. Abductive cognition: The epistemologi_cal and eco-cognitive dimensions of hypothetical reasoning,_
volume 3. Springer Science & Business Media.
Mitra, A.; and Baral, C. 2016. Learning To Use Formulas To
Solve Simple Arithmetic Problems. In ACL.
Qi, H.; Wu, T.; Lee, M.; and Zhu, S. 2015. A Restricted
Visual Turing Test for Deep Scene and Event Understanding.
_ArXiv abs/1512.01715._
Robaidek, B.; Koncel-Kedziorski, R.; and Hajishirzi, H. 2018.
Data-Driven Methods for Solving Algebra Word Problems.
_ArXiv abs/1804.10718._
Roy, S.; and Roth, D. 2017. Unit Dependency Graph and Its
Application to Arithmetic Word Problem Solving. In AAAI.
Shi, S.; Wang, Y.; Lin, C.-Y.; Liu, X.; and Rui, Y. 2015.
Automatically Solving Number Word Problems by Semantic
Parsing and Reasoning. In EMNLP.
Tu, K.; Meng, M.; Lee, M.; Choe, T. E.; and Zhu, S. 2014.
Joint Video and Text Parsing for Understanding Events and
Answering Queries. IEEE MultiMedia 21: 42–70.
Upadhyay, S.; Chang, M.-W.; Chang, K.-W.; and tau Yih,
W. 2016. Learning from Explicit and Implicit Supervision
Jointly For Algebra Word Problems. In EMNLP.
Wang, L.; Wang, Y.; Cai, D.; Zhang, D.; and Liu, X. 2018.
Translating Math Word Problem to Expression Tree. In
_EMNLP._
Wang, L.; Zhang, D.; Zhang, J.; Xu, X.; Gao, L.; Dai, B. T.;
and Shen, H. T. 2019. Template-Based Math Word Problem Solvers with Recursive Neural Networks. Proceedings
_of the AAAI Conference on Artificial Intelligence 33(01):_
7144–7151.
Wang, Y.; Liu, X.; and Shi, S. 2017. Deep Neural Solver
for Math Word Problems. 845–854. Copenhagen, Denmark:
Association for Computational Linguistics.
Williams, R. J. 1992. Simple statistical gradient-following
algorithms for connectionist reinforcement learning. Machine
_learning 8(3-4): 229–256._
Wu, Y.; Schuster, M.; Chen, Z.; Le, Q. V.; Norouzi, M.;
Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K.;
Klingner, J.; Shah, A.; Johnson, M.; Liu, X.; Kaiser, L.;
Gouws, S.; Kato, Y.; Kudo, T.; Kazawa, H.; Stevens, K.;
Kurian, G.; Patil, N.; Wang, W.; Young, C.; Smith, J.; Riesa,
J.; Rudnick, A.; Vinyals, O.; Corrado, G. S.; Hughes, M.; and
Dean, J. 2016. Google’s Neural Machine Translation System:
Bridging the Gap between Human and Machine Translation.
_ArXiv abs/1609.08144._
Xie, Z.; and Sun, S. 2019. A Goal-Driven Tree-Structured
Neural Model for Math Word Problems. In IJCAI.
Yu, W.; Jiang, Z.; Dong, Y.; and Feng, J. 2020. ReClor: A
Reading Comprehension Dataset Requiring Logical Reasoning. ArXiv abs/2002.04326.
Yu-hui, M.; Ying, Z.; Guang-zuo, C.; Yun, R.; and Ronghuai, H. 2010. Frame-Based Calculus of Solving Arithmetic
Multi-Step Addition and Subtraction Word Problems. 2010
_Second International Workshop on Education Technology_
_and Computer Science 2: 476–479._
Zhang, C.; Gao, F.; Jia, B.; Zhu, Y.; and Zhu, S. 2019.
RAVEN: A Dataset for Relational and Analogical Visual
REasoNing. 2019 IEEE/CVF Conference on Computer Vi_sion and Pattern Recognition (CVPR) 5312–5322._
Zhang, J.; Wang, L.; Lee, R. K.-W.; Bin, Y.; Shao, J.; and
Lim, E.-P. 2020a. Graph-to-Tree Learning for Solving Math
Word Problems. ACL 2020 .
Zhang, W.; Zhang, C.; Zhu, Y.; and Zhu, S. 2020b. Machine
Number Sense: A Dataset of Visual Arithmetic Problems for
Abstract and Relational Reasoning. ArXiv abs/2004.12193.
Zhou, L.; Dai, S.; and Chen, L. 2015. Learn to Solve Algebra
Word Problems Using Quadratic Programming. In EMNLP.
Zhou, Z.-H. 2019a. Abductive learning: towards bridging
machine learning and logical reasoning. Science China Infor_mation Sciences 62: 1–3._
Zhou, Z.-H. 2019b. Abductive learning: towards bridging
machine learning and logical reasoning. Science China Infor_mation Sciences 62: 1–3._
Zhu, Y.; Gao, T.; Fan, L.; Huang, S.; Edmonds, M.; Liu, H.;
Gao, F.; Zhang, C.; Qi, S.; Wu, N. Y.; Tenenbaum, B. J.; and
Zhu, S.-C. 2020. Dark, Beyond Deep: A Paradigm Shift to
Cognitive AI with Humanlike Common Sense. Engineering .
-----
| [
"Qing, Li",
"Siyuan, Huang",
"Yining, Hong",
"Daniel, Ciao",
"Song-Chun, Zhu"
] | 2021-08-03T00:00:00 | AAAI 2021 Neuro-Symbolic AI | false | 53 | 5 | null | http://arxiv.org/abs/2012.10582 | https://arxiv.org/abs/2012.10582 | https://www.semanticscholar.org/paper/edaafec59b651d503ac7c8a86f8e2335273e1f7a |
Mathematical Reasoning via Self-supervised Skip-tree Training | We demonstrate that self-supervised language modeling applied to mathematical formulas enables logical reasoning. To measure the logical reasoning abilities of language models, we formulate several evaluation (downstream) tasks, such as inferring types, suggesting missing assumptions and completing equalities. For training language models for formal mathematics, we propose a novel skip-tree task. We find that models trained on the skip-tree task show surprisingly strong mathematical reasoning abilities, and outperform models trained on standard skip-sequence tasks. We also analyze the models' ability to formulate new conjectures by measuring how often the predictions are provable and useful in other proofs. | It is found that models trained on the skip-tree task show surprisingly strong mathematical reasoning abilities, and outperform modelstrained on standard skip-sequence tasks. | ## Mathematical Reasoning via Self-supervised Skip-tree Training
**Markus N. Rabe**
Google Research
```
[email protected]
```
**Dennis Lee**
Google Research
```
[email protected]
```
**Kshitij Bansal**
Google Research
```
[email protected]
```
**Christian Szegedy**
Google Research
```
[email protected]
```
**Abstract**
We examine whether self-supervised language modeling applied to mathematical
formulas enables logical reasoning. We suggest several logical reasoning tasks that
can be used to evaluate language models trained on formal mathematical statements,
such as type inference, suggesting missing assumptions and completing equalities.
To train language models for formal mathematics, we propose a novel skip-tree
task. We find that models trained on the skip-tree task show surprisingly strong
mathematical reasoning abilities, and outperform models trained on standard skipsequence tasks. We also analyze the models’ ability to formulate new conjectures
by measuring how often the predictions are provable and useful in other proofs.
**1** **Introduction**
Language modeling using Transformers [Vaswani et al., 2017] has been hugely successful for
applications like translation and text generation. Models like GPT-2 are able to generate impressive
news articles and stories given just an abstract [Radford et al., 2018]. These models are usually first
trained on a proxy task, such as predicting missing words in the case of BERT [Devlin et al., 2019],
before fine tuning the models on more specific (downstream) tasks such as machine translation and
question-answering. The proxy tasks are not reliant on labeled data, and thus can be trained on large
corpora of unlabeled data. Even the models trained on the proxy tasks alone, have shown impressive
language understanding [Brown et al., 2020].
Prior work in deep learning for mathematics has focused on learning directly on logical reasoning
tasks, such as predicting the proof steps or premises or assignments. These approaches require labeled
data, which is hard to come by and typically very limited in size. In this work, we apply the paradigms
of language modeling to formal mathematics and define proxy tasks on unlabeled mathematical
expressions that allows us to use much more data. We start with the HOList dataset [Bansal et al.,
2019], which spans a wide range of mathematical topics, including topology, multivariate calculus,
real and complex analysis, geometric algebra, and measure theory, formalized in the HOL Light proof
assistant [Harrison, 1996]. We find that training a language model on all mathematical expressions in
this dataset leads to surprisingly strong mathematical reasoning capabilities.
For training language models on formal mathematics, we propose a novel skip-tree task. The skip-tree
tasks is a specialization of the skip-sequence task that respects the tree structure of expressions. We
show that models trained on the skip-tree task significantly outperform those trained on the standard
skip-sequence task.
Reasoning can refer to a wide range of abilities, and thus we measure the mathematical reasoning
abilities of language models on a variety of tasks, including mechanical derivations, such as type
inference, and also creative tasks, such as predicting under which assumptions a statement is true. In
contrast to most works in natural language modeling, we do not fine-tune the models to the evaluation
Preprint. Under review.
-----
(downstream) tasks, as we want to study what reasoning capabilities can be acquired just through
language modeling proxy tasks.
An advantage of formal language compared to natural language is that we can attempt to automatically
evaluate statements. That is, even if the language models fail to predict the ground truth, the statements
they predicted might still be true and useful. We evaluate these conjectures by attempting to prove
them and checking if they are can be used in the context of other proofs.
Our contributions are as follows:
1. We introduce several evaluation tasks that test logical reasoning abilities.
2. We introduce a new skip-tree language modeling task that outperforms skip-sequence
approaches in our evaluation on the logical reasoning tasks.
3. We show that language modeling on mathematical formulas results in surprisingly strong
logical reasoning capabilities.
4. We suggest a way to create and evaluate mathematical conjectures with language models.
The remainder of this paper is structured as follows: First, we review related work on language
modeling and deep learning for mathematics in Section 2. Then, in Section 3 we discuss the source
corpus of formal mathematical statements from which we generate our training data. In Section 4, we
present our novel language modeling task for formal languages, as well as several variations that we
used in our ablation studies. We present the evaluation tasks in Section 5, present our experimental
findings in Section 6, and conclude in Section 7.
**2** **Related work**
Recently, we have seen a series of rapid improvements in language modeling stemming from better
pretraining tasks [Devlin et al., 2019, Zhang et al., 2019, Song et al., 2019, Dong et al., 2019, Raffel
et al., 2019, Conneau and Lample, 2019]. BERT [Devlin et al., 2019] is a pretraining task for
Transformers [Vaswani et al., 2017], which masks out a certain fraction of the input tokens that the
model then has to predict. UniLM uses multiple pretraining tasks [Dong et al., 2019]. One of them
is a sequence-to-sequence task; to predict the next sentence from the previous sentence. MASS
and SpanBERT consider a generalized sequence-to-sequence pretraining task, which is to predict a
masked out subsequence of the input [Song et al., 2019, Joshi et al., 2020]. However, both MASS
and SpanBERT reveal the length of the sequence to predict as they replace it by a number of mask
tokens equal to the length of the sequence.
T5 introduced a generalization of sequence-to-sequence pretraining tasks that is crucial to our
work [Raffel et al., 2019]. They replace the subsequence (or multiple subsequences) to be predicted
by a single token (not a number of mask tokens equal to the length of the subsequence, as in MASS).
[Zhang et al., 2019] additionally exploit the sentence structure of natural language. They suggest the
pretraining task Pegasus, which masks out entire sentences of a given text, and additionally masks
out randomly selected tokens in the remaining text (or alternatively replace them by other tokens).
In a similar way Pegasus’ exploitation of the sentence structure of natural language, our skip-tree
task exploits the tree structure of formal expressions. [Zhang et al., 2019] also suggest sampling the
sentences to be masked with the help of ROUGE1-F1 [Lin, 2004].
We work with the HOList dataset by Bansal et al. [2019], which is closely related to the Flyspeck
dataset by Kaliszyk and Urban [2014]. There are other datasets which might be suitable for our
approach as well, including proofs extracted from HOL4 [Gauthier et al., 2017], and from Coq [Huang
et al., 2019, Yang and Deng, 2019, Sanchez-Stern et al., 2019].
Most previous works that apply sequence-to-sequence models to logics have focused on specific
logical tasks in supervised training settings. In contrast, we train language models on an unsupervised
proxy task that does not require labeled data and can thus be applied to almost any source of mathematical expressions. Lample and Charton [2020] use a Transformer model for symbolic integration. They
train their model directly on the reasoning task they want to learn, and their approach requires that the
inverse of the prediction task can be computed effectively with classical algorithms. Finkbeiner et al.
[2020] explore the generalization properties of the Transformer architecture predicting the solutions
to SAT formulas and temporal logic, but require a data generator that can solve formulas, which
-----
Training Validation Testing
Theorem database
Proofs
Figure 1: We use the theorems and proofs of the training split, marked in green, for training. For
our evaluation tasks, we only use the theorems of the validation set, marked in red, to ensure that the
model has never seen the statements from which the evaluation tasks are derived.
is currently not feasible for higher-order logic. Piotrowski et al. [2019] train RNNs on individual
logical reasoning steps, such as substitutions, using a dataset of rewrites on polynomials extracted
from Prover9. Wang et al. [2018] translate between synthetic descriptions in natural language and
formal mathematics on a dataset generated with Mizar.
Self-supervised training techniques for formal mathematics have received much less attention. Wang
et al. [2020] apply recent unsupervised translation techniques by Lample et al. [2018] to align formal
and informal techniques. They that they perform considerably worse than supervised techniques.
Very recently, Urban and Jakub˚uv [2020] presented initial experiments on applying self-supervised
language modeling to formal mathematics in order to produce conjectures. Earlier statistical approaches to produce conjectures include Gauthier et al. [2016]. Also, very recently, Li et al. [2020]
applied language modeling to proofs of formal mathematics.
Transformer models for program understanding have focused on providing inductive biases in the
architecture [Shiv and Quirk, 2019, Hellendoorn et al., 2020], whereas this work suggests to use a
modified language modeling proxy task.
Applying natural language techniques to formal mathematics has a long history. Already in 2004,
Cairns [2004] applied information retrieval based on latent semantics to improve over search for
keywords, and Urban [2004] formulated the intention to learn from large amounts of data in formalized
mathematics.
**3** **Dataset**
We start from the HOList dataset introduced by Bansal et al. [2019]. The complete dataset includes
29465 theorems and their proofs. We here consider only the “core” and “complex” datasets which
comprise 18943 theorems, 637 definitions and 603,950 proof steps. These proof steps were extracted
from the human proof logs. The theorems and proofs were written (by humans) using the HOL Light
proof assistant, and span various areas of mathematics such as set theory, arithmetic, linear algebra,
topology, and multivariate complex analysis. The proofs contain a lot of intermediate goals which are
the result of applying “tactics” on previous proof goals. For example, one of the tactics is to rewrite
the current proof goal with a set of equations selected from the theorem database.
From this dataset we extract all theorem statements as well as all intermediate proof goals. We use
S-expressions to represent all statements. For example, (v bool x) represents a boolean variable
named x, and (a (v (fun (bool) (bool)) f) (v bool x) represents the function application
_f_ (x) where f is a function from bool to bool. The S-expression syntax is thus very verbose, which
can cause some expressions to not fit into the size constraints of our Transformer model.
We use the same split into training/validation/testing data as defined in HOList. The split is defined on
the theorems, and the entire proof of each theorem is assigned to the same split as the theorem. This
means that we have used the proof of 11,655 theorems in the training split of the core and complex
libraries. This avoids partially revealing the proofs of theorems in the validation and test sets during
training. We derive all training data from the theorems and proofs in the training set, and use only
the theorems (not the proofs) for the evaluation tasks. This addresses the possibility that some proof
-----
( fun ( bool ) ( bool ) ) <END>
Encoder Decoder
( <MASK> ( fun ( bool ) <PREDICT>) = ) <START> ( fun ( bool ) ( bool ) ) <END>
Original formula: ( c ( fun ( bool ) ( fun ( bool ) ( bool ) ) ) = )
Figure 2: The skip-tree training task for the example of the equality operator on boolean constants
(original formula). In this example we assume that a part of the type was sampled to be the
subexpression to be predicted, and that subexpression c was sampled to be masked out additionally.
Note the input to the decoder is shifted to the right, such that the next token prediction task yields the
target sequence.
steps for training theorems and for validation theorems might be shared. In Figure 1 we depict our
choice of training and evaluation data.
**4** **Skip-tree Training**
In this section we define the skip-tree training task. We parse a given mathematical statement into a
tree of subexpressions, and replace one of the subexpressions by a <PREDICT> token. The task is to
predict the subexpression replaced by <PREDICT>. See Figure 2 for an example.
For training, the trees are converted back to a sequence of tokens; the target sequence is extended
by a <START> token in the front and an <END> token in the back. We exclude training examples
where the output sequence is longer than the length of the decoder (512 tokens), and we cut off input
sequences when they exceed the length of the encoder (1024 tokens).
**Additional masked subexpressions.** In addition to the subexpression to be masked out by
<PREDICT>, we select k = 2 subexpressions to be masked out by a different mask token <MASK>.
In contrast to the <PREDICT> token, we replace all occurrences of these subexpressions by the
<MASK> token. Note that it can happen that the subexpressions we want to replace by the <MASK>
tokens overlap with each other or with the subexpression replaced by the <PREDICT> token. In this
case, we give the highest preference to the <PREDICT> token, and then in decreasing order of size
for the expression to be replaced by the <MASK> tokens.
The subexpressions masked by <MASK> do not have to be predicted. They are only hidden to make
the task harder and to make the model tolerant to having partial information. A beneficial side effect
of replacing some expressions by a <MASK> token is that the input sequences get substantially
shorter and more mathematical expressions fit in the size constraints of the Transformer architecture.
**Distributions of subexpressions.** Sampling subexpressions uniformly at random results in very
short sequences to be predicted: since our trees are mostly ternary, two thirds of the subexpressions
are leaves. Besides picking subexpressions uniformly at random, we thus experiment with weighting
the subexpressions by the number of tokens they contain. We refer to these variants as “uniform” and
“weighted”. This results in a much more diverse set of expressions to be sampled.
**Multiple samples per statement.** Since we started with a data source that is small compared
datasets in natural language modeling, we use each mathematical statement from the training set to
generate n = 100 training examples. Our initial data consists of about 360K intermediate statements
from the proofs of 10K statements in the training split of the core and complex library of the HOList
corpus. To avoid duplicates, we sample the subexpressions that are replaced by a <PREDICT> token
for each original formula without replacement.
-----
**4.1** **Ablations**
To verify the design choices of the skip-tree training task we generated multiple variants of the
training task and trained a model on each of them.
**No mask tokens.** To answer the question of whether it helps to mask out subexpressions besides
the one to predict, we generated a dataset with k = 0, called “skip-tree (no <MASK>)”.
**Fewer samples per statement.** Instead of sampling many training examples from each formula,
we could train on a fewer training examples for more epochs. We generated a smaller version with
_n = 20 of the skip-tree training data, which we call “skip-tree (small)”._
**Skip-sequence.** MASS [Song et al., 2019], SpanBERT [Joshi et al., 2020], and T5 [Raffel et al.,
2019] pretrain their sequence-to-sequence natural language models by predicting subsequences of the
tokens. The skip-tree task is similar, but exploits our ability to parse the formulas as trees. To examine
if this makes a difference, we consider a “skip-sequence” task that samples subsequences of the list
of tokens instead of sampling subexpressions. We generated three datasets for the skip-sequence task,
where we sample subsequences of different lengths (short/medium/long). For the task “skip-sequence
(long)”, we pick two positions in the token sequence at uniformly at random and select the sequence
that is between them. For the tasks “skip-sequence (medium)” and “skip-sequence (short)”, we limit
their distance to 100 and 50 tokens, respectively.
|Dataset|# examples|# tokens (input/output)|avg length (input/output)|
|---|---|---|---|
|Skip-tree (weighted) Skip-tree (uniform) Skip-tree (small) Skip-tree (no <MASK>) Skip-sequence (long) Skip-sequence (medium) Skip-sequence (short)|25.8M 25.7M 5.2M 25.8M 19.2M 26.0M 26.0M|17.4B/1.6B 18.8B/316M 3.5B/521M 19.4B/1.6B 11.9B/2.8B 19.4B/884M 19.6B/479M|675/61 732/12 673/100 750/61 620/146 744/34 752/18|
Dataset # examples # tokens (input/output) avg length (input/output)
Skip-tree (weighted) 25.8M 17.4B/1.6B 675/61
Skip-tree (uniform) 25.7M 18.8B/316M 732/12
Skip-tree (small) 5.2M 3.5B/521M 673/100
Skip-tree (no <MASK>) 25.8M 19.4B/1.6B 750/61
Skip-sequence (long) 19.2M 11.9B/2.8B 620/146
Skip-sequence (medium) 26.0M 19.4B/884M 744/34
Skip-sequence (short) 26.0M 19.6B/479M 752/18
Table 1: Basic statistics of the training splits of the data sets. Number of tokens in the training set
measured before padding.
**5** **Evaluation Tasks**
In this section we suggest several logical reasoning tasks on which our language models can be
evaluated. These tasks require different levels of logical reasoning, ranging from mostly mechanical
application of typing rules to conjecturing under which assumptions a statement might hold.
We intentionally define them to be out-of-distribution compared to the training data. Not only do we
generate the examples in a slightly different way, we also generate them from the validation set of the
theorem database. That is, the model has never seen the source data, nor has it seen the proofs of
these theorems. This makes the tasks more challenging, and also ensures that we force the models to
go beyond memorization. To give the interested reader a better impression of the evaluation tasks, we
provide a list of randomly selected examples in Appendix D.
**Type Inference.** We generate type inference problems similar to how we generated the skip-tree
training data, which we described in Section 4. However, we restrict the sampling of subexpressions
to subtrees that represent types of variables or constants (i.e. not fragments of other types).
We generated two variants of the type inference task: In the task we call “Type Inference,” we
replace only the selected type by the <PREDICT> token and do not mask out anything else. In
the second variant we name “Hard Type Inference,” we additionally replace all other types by the
<MASK> token. The two tasks loosely correspond to the deriving the first and the last type during
type inference.
For example, consider x = x, which in the s-expression syntax is represented as follows:
```
(a (a (c (fun (A) (fun (A) (bool))) =) (v A x)) (v A x))
```
-----
Each subexpression here is either a leaf or a triple. The first element of these triples indicates their
kind: a indicates function applications, c indicates constants (i.e. symbols that have been defined in
the formal system), v indicates a variable, and finally fun indicates a function type. The equality
operator “=” is represented by (c (fun (A) (fun (A) (bool))) =), which indicates that it is a
constant that has a function type taking two arguments of arbitrary type A and returns a bool. Since
functions are typically curried in this representation, we have two function applications, both times
with the variable x as the argument.
An example for the “Type Inference” evaluation task would be:
```
(a (a (c <PREDICT> =) (v A x)) (v A x))
```
The type of the equality operator is still uniquely defined, as we know what the equality is applied to
(two arguments of type A) and because top-level application always has to return a boolean value. In
this example the type could have been computed by a classical type inference algorithm.
For the “Hard Type Inference” evaluation task, the input would look as follows:
```
(a (a (c <PREDICT> =) (v <MASK> x)) (v <MASK> x))
```
Now, the type inference task is highly ambiguous. In fact, in this case, variable x could have any type,
and the equality operator would have to adapt to the type of its arguments accordingly. Further, note
that the hard type inference task masks out many more subtrees compared to the training data.
**Assumptions.** This evaluation task is to predict missing assumptions for theorems in the validation
set. We extract these tasks by searching for “top-level implications” and replacing their left operand
by the <PREDICT> token. We define an implication operator “⇒” in an expression to be a top-level
_implication if it is either the top-most operator of the expression, or occurs only under quantifiers,_
conjunctions, disjunctions, or on the right side of other top-level implications. This definition helps
us to avoid picking assumptions in negated parts of formulas.
Note that we can have multiple top-level implications per validation theorem. Consider the abstracted
example (a ⇒ _b) ∧_ (c ⇒ (d ⇒ _e)). In this case, a, c, and d are all considered to be assumptions of_
top-level implications.
An example from the theorem database is x = y ⇒ _a + x = a + y, for which the task is to predict_
_x = y given <PREDICT> ⇒_ _a + x = a + y. (We omit the presentation of this example as an_
s-expression for the sake of readability.) At first, the expression to predict in this case may seem
unique, but there are actually many ways to complete the task into a true statement; e.g. y = x or
_x = 0 ∧_ _y = 0. Still, most humans would likely guess x = y as it is simple and general, and because_
_x occurs before y in the alphabet. To make a correct prediction, our language models thus have to_
understand which statements are more general and also know about naming conventions.
Below we give some examples of this reasoning task that we selected for their simplicity. (For a
representative selection, see Appendix D.) While it is often easy to “see” that a given solution to such
a task is correct, it can be non-trivial to come up with a solution in the first place. We encourage the
reader to make their own predictions before looking up the ground truth in Appendix C:
_• <PREDICT> ⇒_ (x ⇔ ( b ∨ _x1) ∧_ (b ∨ _x0))_
_• <PREDICT> ⇒_ (g \ {s}) = g
_• <PREDICT> ⇒_ (x1/y1 = x2/y2 ⇔ _x1 ∗_ _y2 = x2 ∗_ _y1)_
**Equalities.** Similar to the task of predicting missing assumptions, we ask to predict one side of a
top-level equality in this task. Again, we define top-level equalities to be any equality that occurs as
the top-level operator of the formula or occurs inside quantifiers, conjunctions, disjunctions, or on
the right side of implications. For example, from the theorem ∀x.x = (x = True) we extract two
evaluation examples: ∀x. <PREDICT> = (x = True) and ∀x. x = <PREDICT>.
Again, we present some simple example tasks (in mathematical notation for the sake of readability)
and provide the ground truth as well as the model predictions in Appendix C:
_• ∀x, n ∈_ N : (x[n] = 1) = <PREDICT>
_• ∀m, n : n ≤_ _m ⇒_ _m −_ _n + n = <PREDICT>_
_• ∀l, m : <PREDICT> = APPEND(REVERSE(m), REVERSE(l))_
-----
**6** **Results and Discussion**
We trained a Transformer with the hyperparameters specified in the appendix on the skip-tree dataset
and each of the ablations for 1M steps with a batch size of 256.
In language modeling for natural language one of the key metrics is how often the next token in the
ground truth is correctly predicted. This is not an ideal measurement for formal mathematics as even
a single incorrect token can invalidate the entire statement. Also, the s-expression representation is
relatively lengthy and barely human-readable, so a token-level measurement does not allow us to
compare our models to the natural language models in any case. In the first part of our evaluation we
_therefore focus on exact matches of the entire predicted statement._
|Dataset|Type Inference|Hard Type Inference|Assumptions|Equalities|
|---|---|---|---|---|
|Skip-tree (uniform) Skip-tree (weighted) Skip-tree (small) Skip-tree (no <MASK>) Skip-sequence (long) Skip-sequence (medium) Skip-sequence (short)|96.21% 96.23% 95.89% 96.07% 9.44% 48.94% 77.25%%|94.40% 93.32% 90.42% 32.50% 0.06% 5.97% 3.21%|40.85% 40.86% 39.23% 38.38% 0.53% 3.32% 0.68%|46.57% 42.89% 40.91% 41.60% 0.56% 3.55% 2.06%|
Dataset Type Inference Hard Type Inference Assumptions Equalities
Skip-tree (uniform) 96.21% **94.40%** 40.85% **46.57%**
Skip-tree (weighted) **96.23%** 93.32% **40.86%** 42.89%
Skip-tree (small) 95.89% 90.42% 39.23% 40.91%
Skip-tree (no <MASK>) 96.07% 32.50% 38.38% 41.60%
Skip-sequence (long) 9.44% 0.06% 0.53% 0.56%
Skip-sequence (medium) 48.94% 5.97% 3.32% 3.55%
Skip-sequence (short) 77.25%% 3.21% 0.68% 2.06%
Table 2: Success rate of predicting the ground truth in a beam search of width 8 after training a model
on various datasets. Grayed out values indicate experiments where the training data did not include
the <MASK> token but the evaluation data did.
In Table 2 we present how well the Transformer model, trained on different datasets, can predict
the ground truth sequences. We can observe that for type inference, i.e. the more mechanical
reasoning tasks, the models achieve a high accuracy - even in the Hard Type Inference case where the
expression was stripped of all types. We see that the skip-tree task and its ablations clearly dominate
the skip-sequence language modeling task. There does not seem to be a major difference between the
“uniform” and “weighted” sampling strategies for the skip-tree model.
A closer inspection of the skip-sequence model shows that its predictions rarely parse or typecheck.
On manual inspection of the predictions, it seems that the skip-sequence models consistently add
surplus tokens at the end, or stop expressions too early; they appear to be unable to correctly identify
_the end of the expression to predict._
**6.1** **Conjecturing**
In the experiments above, we measured how often the models predicted the ground truth in the
evaluation tasks. We now change our point of view, and examine whether the models can be used
to generate new conjectures. We define conjectures as mathematical statements that differ from the
_ground truth and any expression the model has seen during training. Additionally, a meaningful_
conjecture should be syntactically correct, typecheck, be provable, and be useful in the context of
other proofs.
Since the training data is derived exclusively from true statements (i.e. human proof steps), the
language models are incentivized to complete partial statements in a way that makes them true.
Presented with one of the evaluation tasks, to predict missing assumptions or to predict the missing
side of an equation, the models may thus complete these statements in multiple ways that make them
true. The predictions that do not match the ground truth may still be true and useful statements. In
the following we describe experiments that help us estimate how often this is the case.
**Free-form conjecturing.** In addition to the “assumptions” and the “equalities” evaluation tasks, we
consider a third task for producing conjectures. In this task, which we call “free-form conjecturing”,
we query the model with a single prompt: (<theorem> <PREDICT>). This helps us to analyze what
the language models produce when given no context. The <theorem> tag indicates only that the
statement should be a theorem, and not an intermediate proof step, which would start with the <goal>
tag. For free-form conjecturing we want to produce a variety of different predictions, and thus use a
beam search with high beam width of 1024. We did not include the free-form conjecturing task in
Table 2, as there is no ground truth to match against.
-----
**How often are predictions true and new?** For this measurement, we replace the <PREDICT>
token with the predicted sequence and attempt to prove the resulting statement in the DeepHOL
theorem prover [Bansal et al., 2019]. Note that this can only give us a lower bound to the number
_of true statements, because of the limitations of the prover: The version of the DeepHOL theorem_
prover used here can prove around 58% of the validation theorems. So we expect the estimates here
to be considerably below the number of actually true statements.
In Table 3 we report two numbers for each evaluation task: The first number is the percentage of
generated statements known to be provable, including exact matches, statements from the training
set, and statements provable with DeepHOL. The second number is the percentage of generated
statements that are provable and new - excluding exact matches with the ground truth and statements
from the training set. The denominator for both numbers is the same: the set of all predictions from
the beam searches in Table 2.
We believe that these measurements show a significant bias towards true statements. While in some
tasks, less than half of the statements were provable, there are simply many more ways to write a
false statement than a true statement.
|Dataset|Assumptions|Equalities|Free-form Conjecturing|
|---|---|---|---|
|Skip-tree (uniform) Skip-tree (weighted)|32.19%/26.20% 32.41%/26.91%|19.61%/12.28% 17.96%/11.63%|82.32%/12.70% 97.75%/0.59%|
Dataset Assumptions Equalities Free-form Conjecturing
Skip-tree (uniform) 32.19%/26.20% 19.61%/12.28% 82.32%/12.70%
Skip-tree (weighted) 32.41%/26.91% 17.96%/11.63% 97.75%/0.59%
Table 3: Percentage of “provable statements”/“provable new statements”. The type inference tasks
are not included as we are only interested in the predictions that do not match the ground truth. For
the type inference tasks, these statements are either semantically equivalent to existing statements or
statements that do not type check.
**Are the conjectures useful?** For some evaluation tasks, the models could “cheat” on the truth
metric by making the statements trivially true. For example, the models can predict False as an
assumption, or complete the missing part of an equation by making it an identity (e.g. complete x =
```
<PREDICT> by predicting x). In fact, manual inspection revealed several such cases.
```
To make this measurable, we added the provable statements to the theorem database, and ran the
reinforcement learning experiments of the DeepHOL theorem prover [Bansal et al., 2019] to measure
how many of the statements were used as premises. In this experiment we also make sure that the new
theorems cannot be used in the proofs of their premises. In a “pruning” step DeepHOL minimizes
proofs by removing each individual premise in a proof and checking if the proof still holds. Only the
premises that survive this step are classified as useful. While this measurement is a relatively low bar,
it filters out statements that have no effect in any proof.
We ran three reinforcement learning experiments, one for each of the evaluation tasks. We then
measured how many of the theorems generated by each task are used as a premise in one of the
over 200,000 proofs found for each of the experiments. For the assumptions task, 3445 of the 3857
theorems were used at least once. For the equalities task and the free-form conjectures it was 979 out
of 3440 and 49 out of 130, respectively. We provide usage histograms in Appendix B.
While some of the most frequently used conjectures turned out to be alpha-equivalent variations of
existing theorems in the theorem database, we found some interesting examples among the most used
conjectures:
_• Assumptions task, 1728 usages: b = a + c ⇒_ _a = b −_ _c. Humans have used this theorem_
over vector arithmetic in many proofs. However, this theorem has always been defined as a
_local lemma and thus did not made it into the theorem database. This conjecture apparently_
filled a gap in the theorem database.
_• Free-form conjecturing task, 15 usages: COUNTABLE({s(n) | n ∈_ N}). In contrast to the
previous example, there are no occurrences of this statement (or an equivalent statement) in
the theorem database or any human proof, not even as a local lemma.
These results suggest that self-supervised language models show some ability to produce new, useful
conjectures, even without fine tuning or specialized training.
-----
**7** **Conclusion**
In this work, we applied the paradigms of self-supervised language modeling to formal mathematics
and show that this leads to surprisingly good reasoning capabilities. We introduced a novel selfsupervised training task for formal mathematics that outperforms existing training tasks used for
natural language. We also suggested several evaluation tasks for measuring mathematical reasoning
capabilities of language models for formal mathematics without the need of fine tuning. Finally, we
explored the ability of language models to produce new conjectures by measuring how many of the
new predictions are provable and useful for other proofs.
**Broader Impact**
Our ambition is to create strong automated reasoning systems. In the long run, such systems could be
used as a tool in mathematical research, engineering, and other sciences. Such systems could be used
as stand alone tools, but also as a component of other systems that utilizes mathematical reasoning,
such as in verification of software and hardware systems and physical modeling and exploration. This
should be very helpful for accelerating scientific progress.
In its current form, however, the methods presented in the paper are not applicable directly to solving
any particular scientific tasks. Therefore we do not anticipate any ethical or fairness issues arising
from direct applications of the technologies presented here. However if our methods are trained for
mimicking human reasoning in domains that argue over, for example, personal data, then it might
reinforce human biases present in the dataset. On the other hand, the abstraction capabilities of our
system might make those biases more explicit and interpretable and could help exposing them. As
part of a larger software verification system, some of the methods presented here might be used for
automated reverse engineering the internal working of other software systems, finding and exploiting
vulnerabilities in them.
**References**
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg,
Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors,
_Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information_
_Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008, 2017._
[URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.](http://papers.nips.cc/paper/7181-attention-is-all-you-need)
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya
Sutskever. Language models are unsupervised multitask learners. In _OpenAI_
_[Blog, 2018. URL https://d4mucfpksywv.cloudfront.net/better-language-models/](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)_
```
language_models_are_unsupervised_multitask_learners.pdf.
```
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep
bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of
_the North American Chapter of the Association for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019._
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler,
Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott
Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya
[Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. URL https:](https://arxiv.org/abs/2005.14165)
```
//arxiv.org/abs/2005.14165.
```
Kshitij Bansal, Sarah M Loos, Markus N Rabe, Christian Szegedy, and Stewart Wilcox. HOList:
An environment for machine learning of higher-order theorem proving. In Kamalika Chaudhuri
and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine
_Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings_
-----
_[of Machine Learning Research, pages 454–463. PMLR, 2019. URL http://proceedings.mlr.](http://proceedings.mlr.press/v97/bansal19a/bansal19a.pdf)_
```
press/v97/bansal19a/bansal19a.pdf.
```
John Harrison. HOL Light: A tutorial introduction. In Mandayam K. Srivas and Albert John
Camilleri, editors, Formal Methods in Computer-Aided Design, First International Conference,
_FMCAD ’96, Palo Alto, California, USA, November 6-8, 1996, Proceedings, volume 1166 of_
_[Lecture Notes in Computer Science, pages 265–269. Springer, 1996. URL https://doi.org/](https://doi.org/10.1007/BFb0031795)_
```
10.1007/BFb0031795.
```
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. PEGASUS: pre-training with
extracted gap-sentences for abstractive summarization. CoRR, abs/1912.08777, 2019. URL
```
http://arxiv.org/abs/1912.08777.
```
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: masked sequence to sequence pre-training for language generation. In Kamalika Chaudhuri and Ruslan Salakhutdinov,
editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019,
_9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learn-_
_[ing Research, pages 5926–5936. PMLR, 2019. URL http://proceedings.mlr.press/v97/](http://proceedings.mlr.press/v97/song19d.html)_
```
song19d.html.
```
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou,
and Hsiao-Wuen Hon. Unified language model pre-training for natural language understanding and
generation. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc,
Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems
_32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14_
_December 2019, Vancouver, BC, Canada, pages 13042–13054, 2019._
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text
[transformer. CoRR, abs/1910.10683, 2019. URL http://arxiv.org/abs/1910.10683.](http://arxiv.org/abs/1910.10683)
Alexis Conneau and Guillaume Lample. Cross-lingual language model pretraining. In Hanna M.
Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Con_ference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019,_
_[Vancouver, BC, Canada, pages 7057–7067, 2019. URL http://papers.nips.cc/paper/](http://papers.nips.cc/paper/8928-cross-lingual-language-model-pretraining)_
```
8928-cross-lingual-language-model-pretraining.
```
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. Spanbert:
Improving pre-training by representing and predicting spans. Transactions of the Association for
_[Computational Linguistics, 8:64–77, 2020. doi: 10.1162/tacl\_a\_00300. URL https://doi.](https://doi.org/10.1162/tacl_a_00300)_
```
org/10.1162/tacl_a_00300.
```
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summariza_tion Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational_
[Linguistics. URL https://www.aclweb.org/anthology/W04-1013.](https://www.aclweb.org/anthology/W04-1013)
Cezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with flyspeck. Journal of
_Automated Reasoning, 53(2):173–213, 2014._
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. TacticToe: Learning to reason with HOL4
tactics. In Thomas Eiter and David Sands, editors, LPAR-21, 21st International Conference
_on Logic for Programming, Artificial Intelligence and Reasoning, Maun, Botswana, May 7-_
_12, 2017, volume 46 of EPiC Series in Computing, pages 125–143. EasyChair, 2017. URL_
```
https://easychair.org/publications/volume/LPAR-21.
```
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A learning environment
for theorem proving. In 7th International Conference on Learning Representations, ICLR 2019,
_[New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.](https://openreview.net/forum?id=r1xwKoR9Y7)_
```
net/forum?id=r1xwKoR9Y7.
```
-----
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International
_Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA,_
volume 97 of Proceedings of Machine Learning Research, pages 6984–6994. PMLR, 2019. URL
```
http://proceedings.mlr.press/v97/yang19a/yang19a.pdf.
```
Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating correctness
[proofs with neural networks. CoRR, abs/1907.07794, 2019. URL http://arxiv.org/abs/](http://arxiv.org/abs/1907.07794)
```
1907.07794.
```
Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In 8th Interna_tional Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30,_
_[2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=Ske31kBtPr.](https://openreview.net/forum?id=Ske31kBtPr)_
Bernd Finkbeiner, Christopher Hahn, Markus N. Rabe, and Frederik Schmitt. Teaching temporal
[logics to neural networks. CoRR, abs/2003.04218, 2020. URL https://arxiv.org/abs/2003.](https://arxiv.org/abs/2003.04218)
```
04218.
```
Bartosz Piotrowski, Josef Urban, Chad E Brown, and Cezary Kaliszyk. Can neural networks learn
symbolic rewriting? In Conference on Artificial Intelligence and Theorem Proving, 2019.
Qingxiang Wang, Cezary Kaliszyk, and Josef Urban. First experiments with neural translation of
informal to formal mathematics. In International Conference on Intelligent Computer Mathematics,
pages 255–270, 2018.
Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine
translation in autoformalization of mathematics in Mizar. In Proceedings of the 9th ACM SIGPLAN
_International Conference on Certified Programs and Proofs, pages 85–98, 2020._
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. Unsupervised
machine translation using monolingual corpora only. In International Conference on Learning
_Representations, 2018._
Josef Urban and Jan Jakub˚uv. First neural conjecturing datasets and experiments. In Conference on
_Intelligent Computer Mathematics, 2020._
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Initial experiments with statistical conjecturing
over large formal corpora. In CICM 2016 - Work in Progress Proceedings 1785, pages 219–228,
2016.
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Modelling high-level mathematical
reasoning in mechanised declarative proofs. In arXiv 2006.09265, 2020.
Vighnesh Leonardo Shiv and Chris Quirk. Novel positional encodings to enable tree-based transformers. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B.
Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual
_Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019,_
_[Vancouver, BC, Canada, pages 12058–12068, 2019. URL http://papers.nips.cc/paper/](http://papers.nips.cc/paper/9376-novel-positional-encodings-to-enable-tree-based-transformers)_
```
9376-novel-positional-encodings-to-enable-tree-based-transformers.
```
Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. Global
relational models of source code. In 8th International Conference on Learning Representations,
_[ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https:](https://openreview.net/forum?id=B1lnbRNtwr)_
```
//openreview.net/forum?id=B1lnbRNtwr.
```
Paul Cairns. Informalising formal mathematics: Searching the mizar library with latent semantics. In
_International Conference on Mathematical Knowledge Management, pages 58–72. Springer, 2004._
Josef Urban. Mptp–motivation, implementation, first experiments. Journal of Automated Reasoning,
33(3-4):319–339, 2004.
-----
**A** **Hyperparameters**
We trained the Transformers with these hyperparameters:
_• vocabulary size: 1200_
_• embedding size: 128_
_• attention dropout: 0.1_
_• nonlinearity: gelu_
_• hidden layer dropout: 0.1_
_• hidden layer size: 512_
_• initializer range: 0.02_
_• intermediate size: 768_
_• number of attention heads: 8_
_• number of hidden layers in encoder: 2_
_• number of hidden layers in decoder: 4_
**B** **Usage Statistics of Conjectures**
Figure 3: Histograms of premise usage of the conjectures generated through the assumptions task
(left), the equality task (middle), and through free-form conjecturing (right). X-axes are the new
theorems, sorted by number of usages. Y-axes indicate the number of usages on a log scale.
**C** **A Close Look at Simple Example Tasks**
**Assumptions.** In Section 5 we presented the following three examples of the task to predict missing
assumptions. For the sake of readability we here discuss only the pretty printed versions. For
examples in s-expression syntax, please visit Appendix D.
_• <PREDICT> ⇒_ (x ⇔ ( b ∨ _x1) ∧_ (b ∨ _x0))_
_• <PREDICT> ⇒_ (g \ {s}) = g
_• <PREDICT> ⇒_ (x1/y1 = x2/y2 ⇔ _x1 ∗_ _y2 = x2 ∗_ _y1)_
The ground truth answers are as follows:
_• ((b ⇔_ `False) ⇒` (x ⇔ _x0)) ∧_ (b ⇔ `True) ⇒` (x ⇔ _x1)_
_• ¬(s ∈_ _g)_
_• 0 < y1 ∧_ 0 < y2, note that 0 ̸= y1 ∧ 0 ̸= y2 would be a more general assumption.
For the first and the third task, the language model “skip-tree (weighted)” makes a correct prediction
in the top 3 candidates in a beam search of width 8. For the seconds task, the language model mostly
produces incorrectly typed expressions: it appears to think that s is a set of the same type as g.
-----
**Equalities.** We presented these examples for the equality evaluation task:
_• ∀x, n ∈_ N : (x[n] = 1) = <PREDICT>
_• ∀m, n : n ≤_ _m ⇒_ _m −_ _n + n = <PREDICT>_
_• ∀l, m : <PREDICT> = APPEND(REVERSE(m), REVERSE(l))_
The ground truth for the tasks is:
_• x = 1 ∨_ _n = 0_
_• m_
_• REVERSE(APPEND(l, m))_
Examples two and three are predicted correctly in a beam search with beam width 8. For the first
example, the model almost gets it correct in two of the 8 attempts: x = 1 _∨_ _n = 1, and x = 0_ _∨_ _n = 1._
We find it surprising that the model apparently understands that there are two cases to consider, but
that the exact combination of constants (1 and 0) is a challenge.
**D** **Randomly Selected Example Tasks**
In the following, we provide a list of 5 examples for each of the evaluation tasks, sampled uniformly
at random.
**Type Inference.**
_• (<theorem> (a (c <PREDICT> !)_ `(l (v (fun (cart (real) ?1) (bool))`
```
t) (a (c (fun (fun (fun (cart (real) ?1) (bool)) (bool)) (bool)) !)
(l (v (fun (cart (real) ?1) (bool)) u) (a (a (c (fun (bool) (fun
(bool) (bool))) ==>) (a (a (c (fun (bool) (fun (bool) (bool))) ∧) (a
(c (fun (fun (cart (real) ?1) (bool)) (bool)) !) (l (v (cart (real)
?1) b) (a (a (c (fun (bool) (fun (bool) (bool))) ∨) (a (c (fun (fun
(cart (real) ?1) (bool)) (bool)) ?) (l (v (cart (real) ?1) w) (a (a
(c (fun (bool) (fun (bool) (bool))) ∧) (a (a (c (fun (cart (real)
?1) (fun (fun (cart (real) ?1) (bool)) (bool))) IN) (v (cart (real)
?1) w)) (v (fun (cart (real) ?1) (bool)) t))) (a (a (c (fun (cart
(real) ?1) (fun (fun (cart (real) ?1) (bool)) (bool))) IN) (v (cart
(real) ?1) w)) (a (c (fun (prod (cart (real) ?1) (real)) (fun (cart
(real) ?1) (bool))) ball) (a (a (c (fun (cart (real) ?1) (fun (real)
(prod (cart (real) ?1) (real)))),) (v (cart (real) ?1) b)) (a (c
(fun (num) (real)) real_of_num) (a (c (fun (num) (num)) NUMERAL) (a
(c (fun (num) (num)) BIT1) (c (num) _0))))))))))) (a (c (fun (fun
(cart (real) ?1) (bool)) (bool)) ?) (l (v (cart (real) ?1) w) (a (a
(c (fun (bool) (fun (bool) (bool))) ∧) (a (a (c (fun (cart (real)
?1) (fun (fun (cart (real) ?1) (bool)) (bool))) IN) (v (cart (real)
?1) w)) (v (fun (cart (real) ?1) (bool)) u))) (a (a (c (fun (cart
(real) ?1) (fun (fun (cart (real) ?1) (bool)) (bool))) IN) (v (cart
(real) ?1) w)) (a (c (fun (prod (cart (real) ?1) (real)) (fun (cart
(real) ?1) (bool))) ball) (a (a (c (fun (cart (real) ?1) (fun (real)
(prod (cart (real) ?1) (real)))),) (v (cart (real) ?1) b)) (a (c
(fun (num) (real)) real_of_num) (a (c (fun (num) (num)) NUMERAL) (a
(c (fun (num) (num)) BIT1) (c (num) _0)))))))))))))) (a (c (fun (fun
?0 (bool)) (bool)) !) (l (v ?0 x) (a (a (c (fun (bool) (fun (bool)
(bool))) ==>) (a (a (c (fun ?0 (fun (fun ?0 (bool)) (bool))) IN) (v
?0 x)) (v (fun ?0 (bool)) d))) (a (c (fun (bool) (bool)) ∼) (a (a
(c (fun (cart (real) ?1) (fun (fun (cart (real) ?1) (bool)) (bool)))
IN) (a (v (fun ?0 (cart (real) ?1)) g) (v ?0 x))) (a (a (c (fun (fun
(cart (real) ?1) (bool)) (fun (fun (cart (real) ?1) (bool)) (fun
(cart (real) ?1) (bool)))) UNION) (v (fun (cart (real) ?1) (bool))
t)) (v (fun (cart (real) ?1) (bool)) u))))))))) (a (c (fun (bool)
```
-----
```
(bool)) ∼) (a (c (fun (fun (cart (real) ?1) (bool)) (bool)) ?) (l
(v (cart (real) ?1) b) (a (a (c (fun (fun (cart (real) ?1) (bool))
(fun (fun (cart (real) ?1) (bool)) (bool))) SUBSET) (a (c (fun (prod
(cart (real) ?1) (real)) (fun (cart (real) ?1) (bool))) ball) (a (a
(c (fun (cart (real) ?1) (fun (real) (prod (cart (real) ?1) (real))))
,) (v (cart (real) ?1) b)) (a (c (fun (num) (real)) real_of_num)
(a (c (fun (num) (num)) NUMERAL) (a (c (fun (num) (num)) BIT1) (c
(num) _0))))))) (a (a (c (fun (fun ?0 (cart (real) ?1)) (fun (fun
?0 (bool)) (fun (cart (real) ?1) (bool)))) IMAGE) (v (fun ?0 (cart
(real) ?1)) g)) (v (fun ?0 (bool)) d))))))))))))
```
Ground truth: `<START> (fun (fun (fun (cart (real) ?1) (bool)) (bool))`
```
(bool)) <END>
```
_• (<theorem> (a (c <PREDICT> !)_ `(l (v (fun (cart (real) N) (bool))`
```
s) (a (a (c (fun (bool) (fun (bool) (bool))) =) (a (c (fun (fun
(cart (real) N) (bool)) (bool)) is_interval) (a (a (c (fun (fun
(cart (real) N) (cart (real) N)) (fun (fun (cart (real) N) (bool))
(fun (cart (real) N) (bool)))) IMAGE) (c (fun (cart (real) N) (cart
(real) N)) vector_neg)) (v (fun (cart (real) N) (bool)) s)))) (a (c
(fun (fun (cart (real) N) (bool)) (bool)) is_interval) (v (fun (cart
(real) N) (bool)) s))))))
```
Ground truth: `<START> (fun (fun (fun (cart (real) N) (bool)) (bool))`
```
(bool)) <END>
```
_• (<theorem> (a (c (fun (fun (real) (bool)) (bool)) !)_ `(l (v (real) x)`
```
(a (a (a (c (fun (fun (real) (real)) (fun (real) (fun (net (real))
(bool)))) has_real_derivative) (c (fun (real) (real)) atn)) (a
(c (fun (real) (real)) real_inv) (a (a (c (fun (real) (fun (real)
(real))) real_add) (a (c (fun (num) (real)) real_of_num) (a (c (fun
(num) (num)) NUMERAL) (a (c (fun (num) (num)) BIT1) (c (num) _0)))))
(a (a (c (fun (real) (fun (num) (real))) real_pow) (v <PREDICT> x))
(a (c (fun (num) (num)) NUMERAL) (a (c (fun (num) (num)) BIT0) (a (c
(fun (num) (num)) BIT1) (c (num) _0)))))))) (a (c (fun (real) (net
(real))) atreal) (v (real) x))))))
```
Ground truth: <START> (real) <END>
_• (<theorem> (a (a (c (fun (fun ?0 (bool)) (fun (fun ?0 (bool))_
```
(bool))) =) (a (a (c (fun (fun ?0 (bool)) (fun (fun ?0 (bool)) (fun
?0 (bool)))) INTER) (v (fun ?0 (bool)) s)) (a (a (c (fun (fun ?0
(bool)) (fun (fun ?0 (bool)) (fun ?0 (bool)))) UNION) (v (fun ?0
(bool)) t)) (v (fun ?0 (bool)) u)))) (a (a (c (fun (fun ?0 (bool))
(fun (fun ?0 (bool)) (fun ?0 (bool)))) UNION) (a (a (c <PREDICT>
INTER) (v (fun ?0 (bool)) s)) (v (fun ?0 (bool)) t))) (a (a (c (fun
(fun ?0 (bool)) (fun (fun ?0 (bool)) (fun ?0 (bool)))) INTER) (v
(fun ?0 (bool)) s)) (v (fun ?0 (bool)) u)))))
```
Ground truth: <START> (fun (fun ?0 (bool)) (fun (fun ?0 (bool)) (fun ?0
```
(bool)))) <END>
```
_• (<theorem> (a (a (c (fun (real) (fun (real) (bool))) =) (a (c (fun_
```
(cart (real) ?0) (real)) infnorm) (a (c (fun (num) (cart (real)
?0)) vec) (a (c (fun (num) (num)) NUMERAL) (c (num) _0))))) (a (c
(fun (num) (real)) real_of_num) (a (c (fun (num) (num)) NUMERAL) (c
<PREDICT> _0)))))
```
Ground truth: <START> (num) <END>
**Hard Type Inference.**
_• (<theorem> (a (c <MASK> !)_ `(l (v <MASK> s) (a (a (c <MASK> =)`
```
(a (c <MASK> INTERS) (v <MASK> s))) (a (a (c <PREDICT> DIFF) (c
<MASK> UNIV)) (a (c <MASK> UNIONS) (a (c <MASK> GSPEC) (l (v <MASK>
GEN%PVAR%0) (a (c <MASK> ?) (l (v <MASK> t) (a (a (a (c <MASK>
```
-----
```
SETSPEC) (v <MASK> GEN%PVAR%0)) (a (a (c <MASK> IN) (v <MASK> t))
(v <MASK> s))) (a (a (c <MASK> DIFF) (c <MASK> UNIV)) (v <MASK>
t)))))))))))))
```
Ground truth: <START> (fun (fun ?0 (bool)) (fun (fun ?0 (bool)) (fun ?0
```
(bool)))) <END>
```
_• (<theorem> (a (c <MASK> !)_ `(l (v <MASK> f) (a (c <MASK> !)` `(l (v`
```
<MASK> s) (a (a (c <MASK> =) (a (a (c <MASK> uniformly_continuous_on)
(v <MASK> f)) (v <MASK> s))) (a (c <MASK> !) (l (v <MASK> e) (a (a
(c <MASK> ==>) (a (a (c <MASK> real_lt) (a (c <MASK> real_of_num)
(a (c <MASK> NUMERAL) (c <MASK> _0)))) (v <MASK> e))) (a (c <MASK>
?) (l (v <MASK> d) (a (a (c <MASK> ∧) (a (a (c <MASK> real_lt)
(a (c <MASK> real_of_num) (a (c <MASK> NUMERAL) (c <MASK> _0))))
(v <MASK> d))) (a (c <MASK> !) (l (v <MASK> t) (a (c <MASK> !)
(l (v <MASK> t’) (a (a (c <MASK> ==>) (a (a (c <MASK> ∧) (a (a
(c <MASK> SUBSET) (v <MASK> t)) (v <MASK> s))) (a (a (c <MASK> ∧)
(a (a (c <MASK> SUBSET) (v <PREDICT> t’)) (v <MASK> s))) (a (a (c
<MASK> ∧) (a (c <MASK> bounded) (v <MASK> t))) (a (a (c <MASK> ∧)
(a (c <MASK> bounded) (v <MASK> t’))) (a (a (c <MASK> real_lt) (a (c
<MASK> hausdist) (a (a (c <MASK>,) (v <MASK> t’)) (v <MASK> t))))
(v <MASK> d))))))) (a (a (c <MASK> real_lt) (a (c <MASK> hausdist)
(a (a (c <MASK>,) (a (a (c <MASK> IMAGE) (v <MASK> f)) (v <MASK>
t’))) (a (a (c <MASK> IMAGE) (v <MASK> f)) (v <MASK> t))))) (v <MASK>
e)))))))))))))))))))
```
Ground truth: <START> (fun (cart (real) M) (bool)) <END>
_• (<theorem> (a (a (c <MASK> ==>) (a (a (c <PREDICT> IN) (v <MASK> a))_
```
(v <MASK> s))) (a (a (c <MASK> =) (a (a (c <MASK> DIFF) (a (a (c
<MASK> INSERT) (v <MASK> a)) (a (a (c <MASK> DELETE) (v <MASK> t)) (v
<MASK> b)))) (v <MASK> s))) (a (a (c <MASK> DELETE) (a (a (c <MASK>
DIFF) (v <MASK> t)) (v <MASK> s))) (v <MASK> b)))))
```
Ground truth: <START> (fun ?0 (fun (fun ?0 (bool)) (bool))) <END>
_• (<theorem> (a (c <MASK> !)_ `(l (v <PREDICT> b) (a (c <MASK> convex)`
```
(a (c <MASK> GSPEC) (l (v <MASK> GEN%PVAR%0) (a (c <MASK> ?) (l (v
<MASK> z) (a (a (a (c <MASK> SETSPEC) (v <MASK> GEN%PVAR%0)) (a (a
(c <MASK> real_gt) (a (c <MASK> Im) (v <MASK> z))) (v <MASK> b))) (v
<MASK> z))))))))))
```
Ground truth: <START> (real) <END>
_• (<theorem> (a (c <MASK> !)_ `(l (v <MASK> x) (a (a (c <MASK> ==>) (a`
```
(c <MASK> ∼) (a (a (c <MASK> nadd_eq) (v <MASK> x)) (a (c <MASK>
nadd_of_num) (a (c <MASK> NUMERAL) (c <MASK> _0)))))) (a (c <MASK>
?) (l (v <MASK> B) (a (c <MASK> ?) (l (v <MASK> N) (a (c <MASK> !)
(l (v <MASK> m) (a (c <MASK> !) (l (v <MASK> n) (a (a (c <MASK> ==>)
(a (a (c <MASK> ∧) (a (a (c <MASK> <=) (v <MASK> N)) (v <MASK> m)))
(a (a (c <MASK> <=) (v <MASK> N)) (v <MASK> n)))) (a (a (c <MASK> <=)
(a (a (c <MASK> *) (a (a (c <MASK> *) (a (a (c <MASK> dest_nadd) (v
<MASK> x)) (v <MASK> m))) (a (a (c <MASK> dest_nadd) (v <MASK> x)) (v
<MASK> n)))) (a (c <MASK> dist) (a (a (c <MASK>,) (a (a (c <MASK> *)
(v <MASK> m)) (a (a (c <MASK> nadd_rinv) (v <MASK> x)) (v <PREDICT>
n)))) (a (a (c <MASK> *) (v <MASK> n)) (a (a (c <MASK> nadd_rinv) (v
<MASK> x)) (v <MASK> m))))))) (a (a (c <MASK> *) (v <MASK> B)) (a (a
(c <MASK> *) (a (a (c <MASK> *) (v <MASK> m)) (v <MASK> n))) (a (a
(c <MASK> +) (v <MASK> m)) (v <MASK> n))))))))))))))))))
```
Ground truth: <START> (num) <END>
**Assumptions.**
_• Prompt:_ `(<theorem> (a (a (c (fun (bool) (fun (bool) (bool))) ==>) (a`
```
(a (c (fun (fun ?1 (bool)) (fun (fun ?1 (bool)) (bool))) =) (a (c
```
-----
```
(fun (fun ?1 (bool)) (fun ?1 (bool))) GSPEC) (l (v ?1 GEN%PVAR%0) (a
(c (fun (fun ?1 (bool)) (bool)) ?) (l (v ?1 x) (a (a (a (c (fun ?1
(fun (bool) (fun ?1 (bool)))) SETSPEC) (v ?1 GEN%PVAR%0)) (a (a (c
(fun (bool) (fun (bool) (bool))) ∧) (a (a (c (fun ?1 (fun (fun ?1
(bool)) (bool))) IN) (v ?1 x)) (v (fun ?1 (bool)) s))) (a (a (c (fun
?0 (fun ?0 (bool))) =) (a (v (fun ?1 ?0) f) (v ?1 x))) (v ?0 a))))
(v ?1 x))))))) (v (fun ?1 (bool)) t))) (a (a (c (fun (bool) (fun
(bool) (bool))) ==>) <PREDICT>) (a (c (fun (fun ?1 (bool)) (bool)) !)
(l (v ?1 x) (a (a (c (fun (bool) (fun (bool) (bool))) ==>) (a (a (c
(fun (bool) (fun (bool) (bool))) ∧) (a (v (fun ?1 (bool)) P) (v ?1
x))) (a (v (fun ?1 (bool)) Q) (v ?1 x)))) (a (c (fun (bool) (bool))
```
_∼) (a (a (c (fun ?0 (fun ?0 (bool))) =) (a (v (fun ?1 ?0) f) (v ?1_
```
x))) (v ?0 a)))))))))
```
Ground truth: `<START> (a (a (c (fun (bool) (fun (bool) (bool))) ∧) (a`
```
(c (fun (fun ?1 (bool)) (bool)) !) (l (v ?1 x) (a (a (c (fun (bool)
(fun (bool) (bool))) ==>) (a (v (fun ?1 (bool)) P) (v ?1 x))) (a (a
(c (fun ?1 (fun (fun ?1 (bool)) (bool))) IN) (v ?1 x)) (v (fun ?1
(bool)) s)))))) (a (c (fun (fun ?1 (bool)) (bool)) !) (l (v ?1 x)
(a (a (c (fun (bool) (fun (bool) (bool))) ==>) (a (a (c (fun (bool)
(fun (bool) (bool))) ∧) (a (v (fun ?1 (bool)) P) (v ?1 x))) (a (v
(fun ?1 (bool)) Q) (v ?1 x)))) (a (c (fun (bool) (bool)) ∼) (a (a
(c (fun ?1 (fun (fun ?1 (bool)) (bool))) IN) (v ?1 x)) (v (fun ?1
(bool)) t))))))) <END>
```
Source theorem pretty printed: {x | x IN s ∧ `f x = a} = t ==> (!x.` `P x ==>`
```
x IN s) ∧ (!x. P x ∧ Q x ==> ∼(x IN t)) ==> (!x. P x ∧ Q x ==>
```
_∼(f x = a))_
_• Prompt:_ `(<theorem> (a (c (fun (fun (fun (cart (real) N) (bool))`
```
(bool)) (bool)) !) (l (v (fun (cart (real) N) (bool)) s) (a (a (c
(fun (bool) (fun (bool) (bool))) ==>) <PREDICT>) (a (a (c (fun (fun
(cart (real) N) (bool)) (fun (fun (cart (real) N) (bool)) (bool))) =)
(a (c (fun (fun (cart (real) N) (bool)) (fun (cart (real) N) (bool)))
inside) (v (fun (cart (real) N) (bool)) s))) (c (fun (cart (real) N)
(bool)) EMPTY))))))
```
Ground truth: `<START> (a (a (c (fun (bool) (fun (bool) (bool))) ∧)`
```
(a (c (fun (fun (cart (real) N) (bool)) (bool)) connected) (a (a (c
(fun (fun (cart (real) N) (bool)) (fun (fun (cart (real) N) (bool))
(fun (cart (real) N) (bool)))) DIFF) (c (fun (cart (real) N) (bool))
UNIV)) (v (fun (cart (real) N) (bool)) s)))) (a (c (fun (bool)
(bool)) ∼) (a (c (fun (fun (cart (real) N) (bool)) (bool)) bounded)
(a (a (c (fun (fun (cart (real) N) (bool)) (fun (fun (cart (real) N)
(bool)) (fun (cart (real) N) (bool)))) DIFF) (c (fun (cart (real) N)
(bool)) UNIV)) (v (fun (cart (real) N) (bool)) s))))) <END>
```
Source theorem pretty printed: !s. `connected ((:realˆN) DIFF s) ∧∼bounded`
```
((:realˆN) DIFF s) ==> inside s = {}
```
_• Prompt:_ `(<theorem> (a (a (c (fun (bool) (fun (bool) (bool))) ==>) (a`
```
(a (c (fun (bool) (fun (bool) (bool))) ∧) (v (bool) q)) (a (c (fun
(bool) (bool)) ∼) (v (bool) p)))) (a (a (c (fun (bool) (fun (bool)
(bool))) ==>) <PREDICT>) (v (bool) r))))
```
Ground truth: `<START> (a (a (c (fun (bool) (fun (bool) (bool))) =) (v`
```
(bool) p)) (v (bool) q)) <END>
```
Source theorem pretty printed: q ∧∼p ==> (p <=> q) ==> r
_• Prompt:_ `(<theorem> (a (c (fun (fun (fun (cart (real) N) (real))`
```
(bool)) (bool)) !) (l (v (fun (cart (real) N) (real)) f) (a (c
(fun (fun (fun (real) (real)) (bool)) (bool)) !) (l (v (fun (real)
(real)) g) (a (c (fun (fun (cart (real) N) (bool)) (bool)) !) (l
(v (cart (real) N) x) (a (a (c (fun (bool) (fun (bool) (bool)))
==>) <PREDICT>) (a (a (c (fun (fun (cart (real) N) (real)) (fun
```
-----
```
(net (cart (real) N)) (bool))) real_continuous) (a (a (c (fun (fun
(real) (real)) (fun (fun (cart (real) N) (real)) (fun (cart (real)
N) (real)))) o) (v (fun (real) (real)) g)) (v (fun (cart (real) N)
(real)) f))) (a (c (fun (cart (real) N) (net (cart (real) N))) at) (v
(cart (real) N) x)))))))))))
```
Ground truth: `<START> (a (a (c (fun (bool) (fun (bool) (bool))) ∧) (a`
```
(a (c (fun (fun (cart (real) N) (real)) (fun (net (cart (real) N))
(bool))) real_continuous) (v (fun (cart (real) N) (real)) f)) (a (c
(fun (cart (real) N) (net (cart (real) N))) at) (v (cart (real) N)
x)))) (a (a (c (fun (fun (real) (real)) (fun (net (real)) (bool)))
real_continuous) (v (fun (real) (real)) g)) (a (a (c (fun (net
(real)) (fun (fun (real) (bool)) (net (real)))) within) (a (c (fun
(real) (net (real))) atreal) (a (v (fun (cart (real) N) (real)) f)
(v (cart (real) N) x)))) (a (a (c (fun (fun (cart (real) N) (real))
(fun (fun (cart (real) N) (bool)) (fun (real) (bool)))) IMAGE) (v
(fun (cart (real) N) (real)) f)) (c (fun (cart (real) N) (bool))
UNIV))))) <END>
```
Source theorem pretty printed: `!f g x.` `f real_continuous at x ∧` `g`
```
real_continuous atreal (f x) within IMAGE f (:realˆN) ==> g o f
real_continuous at x
```
_• Prompt:_ `(<theorem> (a (c (fun (fun (fun (cart (real) M) (cart (real)`
```
N)) (bool)) (bool)) !) (l (v (fun (cart (real) M) (cart (real) N))
f) (a (c (fun (fun (fun (cart (real) M) (cart (real) P)) (bool))
(bool)) !) (l (v (fun (cart (real) M) (cart (real) P)) g) (a (c
(fun (fun (fun (cart (real) M) (bool)) (bool)) (bool)) !) (l (v
(fun (cart (real) M) (bool)) s) (a (c (fun (fun (num) (bool)) (bool))
!) (l (v (num) n) (a (a (c (fun (bool) (fun (bool) (bool))) ==>)
<PREDICT>) (a (a (a (c (fun (num) (fun (fun (cart (real) M) (bool))
(fun (fun (cart (real) M) (cart (real) (finite_sum N P))) (bool))))
baire) (v (num) n)) (v (fun (cart (real) M) (bool)) s)) (l (v (cart
(real) M) x) (a (a (c (fun (cart (real) N) (fun (cart (real) P)
(cart (real) (finite_sum N P)))) pastecart) (a (v (fun (cart (real)
M) (cart (real) N)) f) (v (cart (real) M) x))) (a (v (fun (cart
(real) M) (cart (real) P)) g) (v (cart (real) M) x)))))))))))))))
```
Ground truth: `<START> (a (a (c (fun (bool) (fun (bool) (bool))) ∧)`
```
(a (a (a (c (fun (num) (fun (fun (cart (real) M) (bool)) (fun (fun
(cart (real) M) (cart (real) N)) (bool)))) baire) (v (num) n)) (v
(fun (cart (real) M) (bool)) s)) (v (fun (cart (real) M) (cart (real)
N)) f))) (a (a (a (c (fun (num) (fun (fun (cart (real) M) (bool))
(fun (fun (cart (real) M) (cart (real) P)) (bool)))) baire) (v (num)
n)) (v (fun (cart (real) M) (bool)) s)) (v (fun (cart (real) M)
(cart (real) P)) g))) <END>
```
Source theorem pretty printed: `!f g s n.` `baire n s f ∧` `baire n s g ==>`
```
baire n s (lambda x. pastecart (f x) (g x))
```
**Equalities.**
_• Prompt:_ `(<theorem> (a (c (fun (fun (fun ?0 (cart (real) (2))) (bool))`
```
(bool)) !) (l (v (fun ?0 (cart (real) (2))) f) (a (c (fun (fun (fun
?0 (cart (real) (2))) (bool)) (bool)) !) (l (v (fun ?0 (cart (real)
(2))) g) (a (c (fun (fun (fun ?0 (bool)) (bool)) (bool)) !) (l (v
(fun ?0 (bool)) s) (a (a (c (fun (bool) (fun (bool) (bool))) ==>)
(a (c (fun (fun ?0 (bool)) (bool)) FINITE) (v (fun ?0 (bool)) s)))
(a (a (c (fun (cart (real) (2)) (fun (cart (real) (2)) (bool))) =)
(a (a (c (fun (fun ?0 (bool)) (fun (fun ?0 (cart (real) (2))) (cart
(real) (2)))) cproduct) (v (fun ?0 (bool)) s)) (l (v ?0 x) (a (a (c
(fun (cart (real) (2)) (fun (cart (real) (2)) (cart (real) (2))))
```
-----
```
complex_mul) (a (v (fun ?0 (cart (real) (2))) f) (v ?0 x))) (a (v
(fun ?0 (cart (real) (2))) g) (v ?0 x)))))) <PREDICT>)))))))))
```
Ground truth: <START> (a (a (c (fun (cart (real) (2)) (fun (cart (real)
```
(2)) (cart (real) (2)))) complex_mul) (a (a (c (fun (fun ?0 (bool))
(fun (fun ?0 (cart (real) (2))) (cart (real) (2)))) cproduct) (v
(fun ?0 (bool)) s)) (v (fun ?0 (cart (real) (2))) f))) (a (a (c (fun
(fun ?0 (bool)) (fun (fun ?0 (cart (real) (2))) (cart (real) (2))))
cproduct) (v (fun ?0 (bool)) s)) (v (fun ?0 (cart (real) (2))) g)))
<END>
```
Source theorem pretty printed: !f g s. `FINITE s ==> cproduct s (\x.` `f x *`
```
g x) = cproduct s f * cproduct s g
```
_• Prompt:_ `(<theorem> (a (c (fun (fun (fun (cart (real) N) (bool))`
```
(bool)) (bool)) !) (l (v (fun (cart (real) N) (bool)) s) (a (c (fun
(fun (fun (cart (real) N) (bool)) (bool)) (bool)) !) (l (v (fun
(cart (real) N) (bool)) t) (a (a (c (fun (bool) (fun (bool) (bool)))
==>) (a (a (c (fun (bool) (fun (bool) (bool))) ∧) (a (c (fun (fun
(cart (real) N) (bool)) (bool)) convex) (v (fun (cart (real) N)
(bool)) s))) (a (a (c (fun (bool) (fun (bool) (bool))) ∧) (a (c (fun
(fun (cart (real) N) (bool)) (bool)) affine) (v (fun (cart (real) N)
(bool)) t))) (a (c (fun (bool) (bool)) ∼) (a (a (c (fun (fun (cart
(real) N) (bool)) (fun (fun (cart (real) N) (bool)) (bool))) =) (a
(a (c (fun (fun (cart (real) N) (bool)) (fun (fun (cart (real) N)
(bool)) (fun (cart (real) N) (bool)))) INTER) (a (c (fun (fun (cart
(real) N) (bool)) (fun (cart (real) N) (bool))) relative_interior)
(v (fun (cart (real) N) (bool)) s))) (v (fun (cart (real) N) (bool))
t))) (c (fun (cart (real) N) (bool)) EMPTY)))))) (a (a (c (fun (fun
(cart (real) N) (bool)) (fun (fun (cart (real) N) (bool)) (bool))) =)
<PREDICT>) (a (a (c (fun (fun (cart (real) N) (bool)) (fun (fun (cart
(real) N) (bool)) (fun (cart (real) N) (bool)))) INTER) (a (c (fun
(fun (cart (real) N) (bool)) (fun (cart (real) N) (bool))) closure)
(v (fun (cart (real) N) (bool)) s))) (v (fun (cart (real) N) (bool))
t)))))))))
```
Ground truth: `<START> (a (c (fun (fun (cart (real) N) (bool)) (fun`
```
(cart (real) N) (bool))) closure) (a (a (c (fun (fun (cart (real)
N) (bool)) (fun (fun (cart (real) N) (bool)) (fun (cart (real) N)
(bool)))) INTER) (v (fun (cart (real) N) (bool)) s)) (v (fun (cart
(real) N) (bool)) t))) <END>
```
Source theorem pretty printed: `!s t.` `convex s ∧` `affine t ∧`
_∼(relative_interior s INTER t = {}) ==> closure (s INTER t) =_
```
closure s INTER t
```
_• Prompt:_ `(<theorem> (a (a (c (fun (bool) (fun (bool) (bool))) ==>)`
```
(a (a (c (fun (fun ?0 (bool)) (fun (fun ?0 (bool)) (bool))) SUBSET)
(v (fun ?0 (bool)) t)) (a (a (c (fun (fun ?0 (bool)) (fun (fun ?0
(bool)) (fun ?0 (bool)))) DIFF) (c (fun ?0 (bool)) UNIV)) (v (fun
?0 (bool)) s)))) (a (a (c (fun (fun ?0 (bool)) (fun (fun ?0 (bool))
(bool))) =) <PREDICT>) (c (fun ?0 (bool)) EMPTY))))
```
Ground truth: <START> (a (a (c (fun (fun ?0 (bool)) (fun (fun ?0 (bool))
```
(fun ?0 (bool)))) INTER) (v (fun ?0 (bool)) s)) (v (fun ?0 (bool))
t)) <END>
```
Source theorem pretty printed: t SUBSET (:?0) DIFF s ==> s INTER t = {}
_• Prompt:_ `(<theorem> (a (c (fun (fun (real) (bool)) (bool)) !)` `(l (v`
```
(real) x) (a (a (c (fun (real) (fun (real) (bool))) =) <PREDICT>) (a
(c (fun (real) (real)) real_abs) (v (real) x))))))
```
Ground truth: <START> (a (a (c (fun (real) (fun (num) (real))) real_pow)
```
(a (c (fun (real) (real)) sqrt) (v (real) x))) (a (c (fun (num)
(num)) NUMERAL) (a (c (fun (num) (num)) BIT0) (a (c (fun (num) (num))
BIT1) (c (num) _0))))) <END>
```
-----
Source theorem pretty printed: !x. `sqrt x pow 2 = abs x`
_• Prompt:_ `(<theorem> (a (a (c (fun (fun A (bool)) (fun (fun A (bool))`
```
(bool))) =) <PREDICT>) (a (c (fun (fun A (bool)) (fun A (bool)))
GSPEC) (l (v A GEN%PVAR%0) (a (c (fun (fun A (bool)) (bool)) ?) (l
(v A y) (a (a (a (c (fun A (fun (bool) (fun A (bool)))) SETSPEC) (v
A GEN%PVAR%0)) (a (a (c (fun (bool) (fun (bool) (bool))) ∧) (a (a (c
(fun A (fun (fun A (bool)) (bool))) IN) (v A y)) (v (fun A (bool))
s))) (a (a (c (fun A (fun A (bool))) =) (v A y)) (v A x)))) (v A
y))))))))
```
Ground truth: `<START> (a (a (c (fun A (fun (fun A (bool)) (fun A`
```
(bool)))) INSERT) (v A x)) (v (fun A (bool)) s)) <END>
```
Source theorem pretty printed: x INSERT s = {y | y IN s ∧ `y = x}`
-----
| [
"Dennis, Lee",
"Markus N., Rabe",
"Kshitij, Bansal",
"Christian, Szegedy"
] | 2020-01-01T00:00:00 | ICLR 2021 | true | 53 | 15 | null | https://arxiv.org/abs/2006.04757 | https://arxiv.org/abs/2006.04757 | https://www.semanticscholar.org/paper/45bb43cdc35324fea4350ed335c500d4a5fd6ef5 |
IsarStep: a Benchmark for High-level Mathematical Reasoning | A well-defined benchmark is essential for measuring and accelerating research progress of machine learning models. In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models. We build a non-synthetic dataset from the largest repository of proofs written by human experts in a theorem prover. The dataset has a broad coverage of undergraduate and research-level mathematical and computer science theorems. In our defined task, a model is required to fill in a missing intermediate proposition given surrounding proofs. This task provides a starting point for the long-term goal of having machines generate human-readable proofs automatically. Our experiments and analysis reveal that while the task is challenging, neural models can capture non-trivial mathematical reasoning. We further design a hierarchical transformer that outperforms the transformer baseline. | A benchmark for high-level mathematical reasoning is presented and the reasoning capabilities of neural sequence-to-sequence models are studied and a hierarchical transformer is designed that outperforms the transformer baseline. | ## ISARSTEP: A BENCHMARK FOR HIGH-LEVEL MATHEMATICAL REASONING
**Wenda Li**
University of Cambridge
[email protected]
**Lawrence C. Paulson**
University of Cambridge
[email protected]
**Lei Yu**
DeepMind
[email protected]
**Yuhuai Wu**
University of Toronto, Vector Institute
[email protected]
ABSTRACT
A well-defined benchmark is essential for measuring and accelerating research
progress of machine learning models. In this paper, we present a benchmark
for high-level mathematical reasoning and study the reasoning capabilities of
neural sequence-to-sequence models. We build a non-synthetic dataset from the
largest repository of proofs written by human experts in a theorem prover. The
dataset has a broad coverage of undergraduate and research-level mathematical
and computer science theorems. In our defined task, a model is required to fill in a
missing intermediate proposition given surrounding proofs. This task provides a
starting point for the long-term goal of having machines generate human-readable
proofs automatically. Our experiments and analysis reveal that while the task
is challenging, neural models can capture non-trivial mathematical reasoning.
We further design a hierarchical transformer that outperforms the transformer
[baseline. The dataset and models are available from: https://github.com/](https://github.com/Wenda302/IsarStep)
[Wenda302/IsarStep.](https://github.com/Wenda302/IsarStep)
1 INTRODUCTION
Neural networks have achieved outstanding performance on a wide range of problems in natural
language processing, computer vision, and speech recognition. However, research investigating
their capacity of doing mathematical reasoning is still limited, with earlier attempts focusing on
simple arithmetic tasks like integer addition and multiplication (Zaremba & Sutskever, 2014; Kaiser
& Sutskever, 2016; Trask et al., 2018). More recently, there has been work on solving school-level
mathematical problems (Saxton et al., 2019), logical reasoning (Evans et al., 2018), and problems of
function integration, ordinary differential equations (Lample & Charton, 2020), and properties of
differential systems (Charton et al., 2020). While these are valuable contributions to the machine
learning community, they focused on generating answers to questions from a specific domain and
were carried out on synthetic datasets with small vocabulary (e.g. up to 100 unique tokens).
In this paper, we consider general undergraduate and research-level mathematical proofs as a target
for neural networks. When humans prove a theorem, a crucial step is to propose an intermediate
proposition to bridge the gap between the goal and the currently known facts. This step requires
complicated reasoning capabilities such as creative thinking, inference, understanding existing
conditions, and symbolic manipulation of rules. For example, consider the following proof of the
irrationality of _√2:_
_Proof of irrationality of_ _√2. Assume_ _√2 is rational. Then there exists a pair of coprime integers a_
and b such that _√2 = a/b, and it follows that 2 = a[2]/b[2]_ and then 2b[2] = a[2]. Hence a is even. Thus
there exists an integer c such that a = 2c, which combined with 2b[2] = a[2] yields 2c[2] = b[2]: hence b is
also even. So a and b are both even although they are coprime, contradiction.
-----
10
11
12
13
14
15
Figure 1: Full declarative proof the irrationality of
2 in Isabelle/HOL.
To derive ∃c ∈ Z. a = 2c from 2b[2] = a[2], the intermediate proposition “a is even” would reduce the
gap and lead to a successful proof. We would like to simulate the way humans prove theorems by
proposing an intermediate proposition synthesis task — IsarStep. Instead of having primitive steps
like 3 + 5 = 8, the proof steps in IsarStep are at a higher-level, with much bigger steps as basic.
Therefore it usually cannot be simply solved by pattern matching and rewriting. To succeed in this
task, a model is required to learn the meaning of important mathematical concepts (e.g. determinant
in linear algebra, residue in complex analysis), how they are related to each other through theorems,
and how they are utilised in proof derivations. Solving the IsaStep task will be potentially helpful for
improving the automation of theorem provers, because proposing a valid intermediate proposition
will help reduce their search space significantly. It is also a first step towards the long-term goal of
sketching complete human-readable proofs automatically.
We have built the IsarStep dataset by mining arguably the largest publicly-hosted repository of
mechanised proofs: the Achieve of Formal Proofs (AFP).[1] The AFP is checked by the Isabelle proof
assistant (Paulson, 1994) and contains 143K lemmas. Combining the AFP with the standard library of
Isabelle/HOL yields a dataset of 204K formally-proved lemmas. The dataset covers a broad spectrum
of subjects, including foundational logic (e.g. Gödel’s incompleteness theorems), advanced analysis
(e.g. the Prime Number Theorem), computer algebra, cryptographic frameworks, and various data
structures. A nice property of the mined formal proofs is that they are mostly declarative proofs, a
proof style very close to human prose proofs.[2] Fig. 1 illustrates the proof of irrationality of _√2 in_
Isabelle. We can see that the proof is actually legible (even to people who are not familiar with the
system) and and it captures high-level structures like those in human proofs.
We further explore the reasoning capabilities of neural models. We frame the proposed task as a
sequence-to-sequence (seq2seq) prediction problem. Beyond evaluating the existing neural seq2seq
model baselines—the seq2seq with attention (Bahdanau et al., 2015), the transformer (Vaswani et al.,
2017)—we also propose a new architecture, the hierarchical transformer (§4). The architecture is
motivated by the way humans reason about propositions; it consists of a set of local transformer
layers, modelling the representation of each proposition, and a set of global layers, modelling the
correlation across propositions. Experiments (§5) show that these neural models can solve 15–25%
of problems on the test set, and the hierarchical transformer achieves the best result. Further analysis
(§6) on the output of these models shows that while the proposition synthesis task is hard, the neural
models can indeed capture mathematical reasoning. We find that the embeddings of closely related
mathematical concepts are close in cosine space; models can reason about the relation between set,
subset, and member, and perform more complex multi-step reasoning that is even hard for humans.
Our contributions are summarised as follows:
1. We mine a large non-synthetic dataset of formal proofs and propose a task for evaluating neural models’ mathematical reasoning abilities. The dataset contains 820K training examples
with a vocabulary size of 30K.
[1https://www.isa-afp.org](https://www.isa-afp.org)
2A comparison of proofs in different systems is available in Wiedijk (2006). The declarative style proof is
also available in Mizar (Grabowski et al., 2010), where the style originates.
-----
2. We evaluate existing neural seq2seq models on this task.
3. We introduce a hierarchical transformer model, which outperforms the baseline models.
4. We provide a comprehensive analysis of what has been learned by the neural models.
5. We provide a test suite to check the correctness of types and the validity of the generated
propositions using automatic theorem provers.
2 THE ISARSTEP TASK
In this section, we define the task of intermediate proposition generation more concretely. We again
take the proof of irrationality of _√2 as an example. We will have the following derivation:_
2b[2] = a[2] _⇒_ _a is even_ _⇒∃c ∈_ Z. a = 2c _._
(1) (2) (3)
In our proposed task, we would like to generate| {z } | {z } (2)| given{z (1) and} (3). When humans prove a
theorem, they implicitly assume certain background knowledge, as lemmas. For example, in this
case we assume that we can trivially prove (1) ⇒ (2) based on the fact that the product of two
numbers are even iff at least one of them is even. In Isabelle (Paulson, 1994), these relevant lemmas
(e.g. even_mult_iff: even (?a * ?b) = (even ?a ∨ _even ?b) corresponding to line 10_
in Fig. 1) can be found automatically by its built-in automation Sledgehammer (Blanchette et al.,
2011). In our task, we optionally provide these lemmas as extra information in addition to (1) and
(3).
The derivation of (2) ⇒ (3) in the proof above is a simple step, because only (2) is needed to arrive
at (3). In most cases, multiple propositions have to be used together in order to infer a proposition, for
example P1, P2, P3 _P4. For these more general cases, we also include the additional propositions_
(e.g. P2 and P1) as part of the source propositions. ⇒
To summarize, each example in the IsarStep dataset is formed by five parts:
**F.1 a target proposition (e.g. a is even),**
**F.2 a set of used local propositions to derive F.1 (e.g. 2b[2]** = a[2]),
**F.3 a local proposition derived from the target proposition F.1 (∃c ∈** Z. a = 2c),
**F.4 other local propositions and (library) lemmas used to justify F.3,**
**F.5 a set of used (library) lemmas to justify F.1 (e.g. even_mult_iff: even (?a * ?b) =**
_(even ?a ∨_ _even ?b))._
We want to synthesise F.1 given F.2 – F.4 with F.5 optional: the named lemmas in F.5 are common
knowledge and can be used as additional hints. The propositions are generated as a sequence of
tokens and therefore the search space is Σ[∗]: search over 30K actions (§3.3, vocabulary size for
seq2seq models) at every timestep without a predefined maximum output length.
IsarStep can be considered as single step reasoning, which can be repeated to sketch more complex
proofs. Good performance on this task is a crucial step for designing models that can automatically
prove theorems with minimal human assistance.
3 DATASET PREPROCESSSING AND STATISTICS
The mined raw dataset has long propositions and a large number of unique tokens. To alleviate
the performance deterioration of machine learning models due to the aforementioned problems, we
propose tricks to preprocess the raw dataset, including free variable normalisation and removing
unnecessary parentheses. These tricks substantially reduce the sequence lengths and vocabulary size.
3.1 THE LOGIC AND TOKENS
The core logic of Isabelle/HOL is simply-typed λ-calculus with de Bruijn indices for bound variables
(Wenzel, 2020, Chapter 2.2). A local proposition or a (library) lemma/theorem is essentially a term
-----
in the calculus. As types can be inferred automatically, we drop types in terms (to reduce the size of
the vocabulary) and encode a term as a sequence of tokens that include lambda term constructors:
CONST, FREE, VAR, BOUND, ABS (function abstraction), and $ (function application). Additionally,
parentheses have been used in the sequence to represent the tree structure. To give an example, we
encode the proposition even a as the following sequence of tokens separated by a white space:
CONST HOL.Trueprop $ (CONST Parity.semiring_parity_class.even $ FREE <X0>)
where CONST HOL.Trueprop is a boilerplate function that converts from type bool to prop;
CONST Parity.semiring_parity_class.even is the even predicate; FREE <X0> encodes the Skolem constant a in even a. Since a is a user-introduced local constant that can be
arbitrary, we normalised it to the algorithmically generated name <X0> in order to reduce the
vocabulary size (see §3.2).
Overall, every local proposition and library lemma/theorem is encoded as a sequence of tokens, and
can be mostly decoded to the original term with type inference.
3.2 FREE VARIABLE NORMALISATION
Due to Isabelle’s use of de Bruijn indices, bound variables have already been normalised: ∀ _x. P x is_
no different from ∀ _y. P y, as both x and y are encoded as BOUND 0. However, arbitrary variable_
names can be introduced by the command fix in declarative proofs or unbounded variables in lemma
statements (e.g. False =⇒ _P and False =⇒_ _Q are semantically equivalent but with different free_
variables). To reduce the vocabulary size here, we normalised these free variables like the bound ones.
For example, False =⇒ _P would be normalised to False =⇒_ _<V0> as P is the first free variable_
in the proposition. Such normalisation reduced the vocabulary size by one third. The normalisation
preserves the semantics, and we can always parse a normalised term back under a proper context.
3.3 STATISTICS
We have mined a total of 1.2M data points for IsarStep. We removed examples in which the length of
the concatenation of the source propositions, i.e. F.2 – F.4 in §2, longer than 800 and the length of the
target propositions, i.e. F.1 in §2, longer than 200, which results in approximately 860K examples.
From these examples we randomly sampled 10K examples for validation and test. In the training
data, we removed duplicates, the examples whose target propositions exist in the held-out set, and
those that are from the same theorems as the propositions in the held-out set. The final dataset split
is 820K, 5000, 5000 for the training, validation, and test sets, respectively. The vocabulary size is
29,759.
4 MODEL
We define X = [x[1], x[2], . . ., x[I] ] as the sequence of I source propositions, and y = (y1, y2, . . ., yN )
as the target proposition containing N tokens. Let x[i] = (x[i]1[, x][i]2[, . . ., x][i]M [)][ represent the][ i][th proposi-]
tion in the set, consisting of M tokens. Each source proposition x[i] belongs to a category F.2 – F.4
defined in §2. We annotate the category corresponding to x[i] as Ci and therefore the sequence of
categories corresponding to X is = [ 1, 2, . . ., _I_ ]. The generation of a target proposition y is
**_C_** _C_ _C_ _C_
determined by finding the proposition ˆy, where p(ˆy | X, C) is optimal,
**_yˆ = arg max_** _p(y_ **_X,_** ). (1)
**_y_** _|_ **_C_**
We propose two approaches to parameterising the conditional probability p(y | X, C), which
differ in the way of modelling the sequence of source propositions. The first method is simply
appending a label to each source proposition indicating their category and then concatenating the
source propositions using a special token <SEP>, treating the resulting long sequence as the input
to a seq2seq model.
Our second approach models the encoding of source propositions hierarchically. As shown in
Fig. 2, the encoder has two types of layers. The local layers build the proposition representations by
modelling the correlations of tokens within each proposition; the global layers take the proposition
-----
Category Position Token Local Layers Global Layers
|Cat1 Pos1 x Cat1 Pos2 > Cat1 Pos3 y Cat2 Pos1 ¬ Cat2 Pos2 P|Col2|Col3|Col4|
|---|---|---|---|
|||||
|||||
Figure 2: Architecture of the encoder of the hierarchical transformer (HAT). There are two types of
layers, the local layers model the correlation between tokens within a proposition, and the global
layers model the correlation between propositions. The input to the network is the sum of the token
embedding, the positional information, and the embedding of the corresponding category.
Table 1: Test set accurarcies (exact match) and BLEU scores of different models on the IsarStep task.
Top-1 Acc. Top-10 Acc. BLEU
Model
Base +F.5 Base +F.5 Base +F.5
RNNSearch 13.0 16.7 26.2 32.2 42.3 52.2
Transformer 20.4 22.1 33.1 34.6 59.6 62.9
HAT **22.8** **24.3** **35.2** **37.2** **61.8** **65.7**
representations as input and model the correlations across propositions. Both the local layers and
global layers are transformer layers (Vaswani et al., 2017). Positional information is encoded
separately for different source propositions. That is, suppose x[1] has M tokens, then the position of
the first token in x[2] is not M +1 but 1. The embedding of a token x[i]m [is obtained by adding the token]
embedding, the positional information, and the embedding of the category that the proposition x[i]
belongs to. The category embedding is learnt together with the rest of the network. We call this model
the hierarchical transformer (HAT). Intuitively, HAT models the structure of the source propositions
more explicitly compared to the first approach and therefore should be better at capturing reasoning
between source and target propositions. We will validate our hypothesis in §5.
5 EXPERIMENTS
We benchmark three models on IsarStep (§2), namely the seq2seq model with attention (RNNSearch)
(Bahdanau et al., 2015; Wu et al., 2016), transformer (Vaswani et al., 2017), and hierarchical
transformer (HAT). The input to the RNNSearch and the transformer is a concatenation of source
propositions (the first parameterisation approach described in §4). We train these models with the
same training data and report their performance on test sets. See appendix for experimental setup.
5.1 EVALUATION
Widely used metrics for text generation are BLEU score (Papineni et al., 2002) and ROUGE score
(Lin, 2004) that measure n-gram overlap between hypotheses and references. Both metrics are
not ideal in mathematical proofs since a proposition can be invalid due to one or two incorrect
tokens. Therefore, in addition to BLEU score, we also consider exact match between hypotheses
and references as our evaluation metric. We report top-1 accuracy and top-10 accuracy. The top-1
accuracy is the percentage of the best output sequences that are correct in the given dataset. The
top-10 accuracy is the percentage of target sequences appearing in the top 10 generated sequences.
It is possible that models generate alternative valid propositions that are not exactly the same as
the references. We further implemented a test suite to bring the generated propositions back to the
Isabelle environment and check their correctness using automatic theorem provers (ATPs).
-----
5.2 RESULTS
**BLEU and Exact Match** Table 1 presents the results of different models for the IsarStep task.
Overall, the neural seq2seq models achieve around 13–25% top-1 accuracies and 26–38% top-10
accuracies, which indicates that this task is non-trivial and yet not too difficult for neural networks.
Of the three models, the transformer (Vaswani et al., 2017) outperforms the RNNSearch (Bahdanau
et al., 2015; Wu et al., 2016) significantly and our HAT performs best. As mentioned in §2, adding
**F.5 is optional and is conjectured for better performance due to exploiting used lemmas explicitly.**
We experimented with both cases and found that adding this extra information indeed leads to further
improvement. This is consistent with the scenario when humans prove theorems: if humans are told
that certain lemmas are relevant to the current proof, they will use these lemmas and have a better
chance of success.
**Alternative Valid Propositions** We consider an output proposition P as an alternative valid intermediate proposition if 1) P Table 2: Percentage of correct
is a well-formed proposition and does not match the ground truth propositions.
at the surface form; 2) P does not match any substring of the
source (to avoid it being a simple copy of F.3 or an assumption
Model Base +F.5
in F.2); 3) both F.2 ⇒ _P and P ⇒_ **F.3 can be automatically**
proved by ATPs.[3] Note that this will only give us a lower bound Transformer 25.2 26.8
to the number of alternative propositions, due to the limitation HAT 27.6 29.4
of ATPs’ automation. Table 2 presents the percentage of correct
propositions on the test set. Correct proposition is a proposition
that matches either the corresponding ground truth or one of the alternative valid propositions. We can
see that alternative propositions contribute 5 percentage point more correct propositions, compared to
top-1 accuracy in Table 1.
**Automation Improvement** In lots of cases, ATPs cannot infer from one step to another automatically (i.e. F.2 ⇒ **F.3) without the crucial intermediate steps proposed by humans. We found that**
there are about 3000 cases in our test set that F.2 ⇒ **F.3 cannot be proved automatically by ATPs.**
And within these 3000 cases, 61 cases can be proved automatically given the generated intermediate
propositions from our HAT model: F.2 ⇒ _P and P ⇒_ **F.3. This is not a big improvement. Fur-**
ther progress is needed to improve seq2seq models’ reasoning capability in order to improve the
automation of theorem provers significantly.
**Better Generalisation of HAT** Since the transformer and HAT
have different source sequence encoding, we explore how well these
two models perform on examples with various source sequence Transformer
lengths. We categorise the examples on the IsarStep test set into 5 25.0% HAT
buckets based on their source lengths and calculate the top-1 accura- 20.0%
cies for different buckets, as shown in Fig. 3. Interestingly, although
we did not train the models with source sequences longer than 512, Accuracy
they can still achieve reasonable accuracies on long sequences. In 10.0%
particular, HAT performs significantly better than the transformer 5.0%
on sequences longer than 480. Especially in the length bucket of 0.0%
640–800, HAT doubles the accuracy of the transformer. 0-160 160-320 320-480 480-640 640-800
Transformer
25.0% HAT
20.0%
15.0%
10.0%
5.0%
0.0%
0-160 160-320 320-480 480-640 640-800
Figure 3: Accuracy of differ
**Importance of Category Information** We subsequently investi- ent source sequence lengths.
gate the effect of incorporating the category information for source
propositions into the models by removing the category embedding
for the input to the HAT encoder (Fig. 2), i.e. we are now modelling
_p(y | X) instead of p(y | X, C). We see a dramatic drop in accuracy: 14.6 versus 22.8 obtained_
by the HAT with category embedding included, indicating the importance of category information.
This is in line with human proofs: without knowing the logical relations between propositions, we do
3If F.2 ⇒ **F.3 is directly provable via ATPs, a trivial proposition (e.g. 1 = 1) can be considered as an**
alternative. This is hard to detect automatically but could still serve as a fair comparison across seq2seq models
as long as they can propose a well-formed trivial proposition in such scenario.
-----
not know what proposition is missing. This indicates that the models are not simply doing pattern
matching but should have captured some reasoning information.
6 QUALITATIVE ANALYSIS
In this section, we present an analysis of what has been learnt by the neural network models. To
summarise our findings: 1) the seq2seq models can learn the syntax of propositions correctly; 2)
the learned token embeddings are comprehensive in that related mathematical concepts are close
in cosine space; 3) manual inspection of the generated propositions reveal that models can learn
non-trivial mathematical reasoning and even more complicated multi-step reasoning.
**Token Embeddings** To investigate whether the seq2seq models have learnt mathematical reasoning, we checked whether the learnt token embeddings were meaningful. We first projected the learnt
embeddings for all the tokens in the vocabulary into a three-dimensional space via principal component analysis and chose random tokens and checked their 50 nearest neighbours in cosine distance.
We found that the embeddings of related concepts in mathematics were close, indicating that the
models have managed to learn the relations between mathematical concepts — the basic step towards
reasoning mathematically. For example, in Fig. 4, the neighbours of ‘Borel measurable’ are mostly
measure theory related including ‘almost everywhere’, ‘integrable’, and ‘null set’, while ‘arrow’
is close to ‘isomorphism’ (EpiMonoIso.category.iso), ‘identity’(Category.partial_magma.ide), and
‘inverse arrow’(EpiMonoIso.category.inv), which are concepts in category theory. Additionally, vector
arithmetic also seems to connect related mathematical definitions: for example, the three closest
tokens next to ‘bounded’ + ‘closed’ are ‘bounded’,‘closed’, and ‘compact’, where compactness can
be alternatively defined as boundedness and closedness (on a Euclidean space).
**Attention Visulisations** We next investigate how reasoning has been learnt by visualising attentions
from transformer (Vaswani et al., 2017). We find that important and related tokens are likely to attend
to each other. For example, Fig. 5 illustrates the visulisation of the last layer of the transformer encoder
for the source propositionsfrom the model is x70 _x57 F.2. The interpretation of those source propositions is that combining with: F.3: x70 ∈_ _x39 F.4: x57 ⊆_ _x39. The target proposition generated_
(The transformer model gives the correct answerF.4) x57 ⊆ _x39 we would like to infer the intermediate step so that the goal ∈_ _x70 ∈_ _x57 which implicitly applied the lemma x70 ∈_ _x39 can be proved._
_x ∈_ _A, A ⊆_ _B ⊢_ _x ∈_ _B_ (2)
that relates ∈ and ⊆. On the last self-attention layer of the transformer encoder (Fig. 5), ∈ and ⊆
attend to each other. Interestingly, the above reasoning seems robust. If we swap x57 and x39 in
This totally makes sense since (2) no longer applies (despite thatF.4 (i.e., the source is now F.2: F.3: x70 ∈ _x39 F.4: x39 ⊆_ _x57), the answer becomes ∈_ and ⊆ still attend to each other x70 ∈ _x39._
similarly as in Fig. 5) and x70 ∈ _x39 can only be discharged by proving itself._
**Multi-Step Reasoning** By further inspecting the generated propositions, we find that the model
can implicitly invoke multiple theorems as humans normally do. While this property can be found in
quite a few examples, here we show one of them due to the limited space. We refer the readers to the
appendix for more examples. Given the source F.2: dim(span(x0)) ≤ card(x2) F.3: card(x2) =
dim(x0) F.4: card(x2) ≤ dim(x0), finite(x2), where dim, span and card refer to the dimensionality,
the span, and the cardinality of a set of vectors, respectively, and the model gives the correct answer
dim(x0) card(x2). Here, dim(x0) card(x2) is derived by dim(span(x0)) card(x2) only
_≤_ _≤_ _≤_
if the model has implicitly learned the following theorem ⊢ dim(span(S)) = dim(S), while
dim(x0) card(x2) yields card(x2) = dim(x0) (in conjunction of card(x2) dim(x0)) only if
_≤_ _≤_
the model has implicitly invoked the antisymmetry lemma x ≤ _y, y ≤_ _x ⊢_ _x = y._
**Failures** We observe that incorrect propositions are well-formed and plausible propositions but
they are usually a copy of parts of the source propositions.
7 RELATED WORK
There have been a series of work that evaluates mathematical reasoning abilities of seq2seq models.
The tasks that these works attempt to solve include school-level mathematical problems (Ling et al.,
-----
Figure 4: Nearest neighbours of the tokens ‘Borel measurable’ (left) and ‘arrow’ (right) in cosine
space. The 512-dimensional embeddings are projected into 3-dimensional embeddings. Neighbours
are found by picking the top 50 tokens whose embeddings are closest to the selected token.
Figure 5: Attention visualisation of the last layer of the transformer encoder for the source propositions
**F.2: F.3: x70 ∈** _x39 F.4: x57 ⊆_ _x39. The generated target proposition is x70 ∈_ _x57._
2017; Saxton et al., 2019), function integration and ordinary differential equations (Lample & Charton,
2020), properties of differential systems (Charton et al., 2020), SAT formulas and temporal logic
(Finkbeiner et al., 2020). Our task is different from the previous ones in the sense that ours is
non-synthetic, with realistic vocabulary size (i.e., 30K vs. less than 100) and has a broad coverage
topics in research-level mathematics and computer science that have no general algorithmic solutions.
Our work is closely related to the most recent work on applying language modelling to theorem
proving. Urban & Jakubuv (2020) present initial experiments on generating conjectures using GPT-2
(Radford et al., 2019). Polu & Sutskever (2020) show that the GPT-3 language model (Brown
et al., 2020b) additionally pretrained with mathematical equations mined from the web can generate
propositions that enable theorem provers to prove more theorems automatically. Rabe et al. (2020)
pretrain a masked language models on proofs mined from the HOList dataset (Bansal et al., 2019)
and apply the pretrained models to the downstream tasks of type inference and predicting conjectures.
While both their work and ours find that transformer models have strong mathematical reasoning
capabilities, they have different objectives from ours. Their objectives are to show the effectiveness
of pretraining on downstream tasks; by contrast we are building benchmarks to test models’ ability of
solving mathematical problems. In fact, we can pretrain seq2seq models following their proposed
methods and verify their effectiveness on our dataset. We will leave this for future work.
There exists a few benchmarks for theorem proving. Kaliszyk et al. (2017) propose a machine learning
benchmark for higher order logic reasoning. Alemi et al. (2016) use convolutional networks for
_premise selection. Both of these tasks are classification problems, whereas our proposition generation_
task is a generation problem with a countably infinite search space. In the benchmarks for tactic
_synthesis (Huang et al., 2019; Bansal et al., 2019; Yang & Deng, 2019; Sanchez-Stern et al., 2019;_
Paliwal et al., 2019; Gauthier et al., 2017), an agent is asked to propose a sequence of tactics to solve
the current goal. Our task is complementary: the model is required to conjecture a goal (intermediate
proposition) that is likely to be useful in a derivation. Wu et al. (2020) proposed a synthetic inequality
theorem proving benchmark that studies the out-of-distribution generalization abilities of models.
-----
Conjecturing literals in tableau proofs using recurrent neural networks has been investigated by
Piotrowski & Urban (2020).
Other related work includes analogy/syntax driven conjecturing (Gauthier et al., 2016; Nagashima
& Parsert, 2018; Wang & Deng, 2020), goal classification/ranking (Goertzel & Urban; Brown
et al., 2020a), proof method recommendations (Nagashima, 2020; Nagashima & He, 2018), and
autoformalisation of mathematics (Wang et al., 2020; Szegedy, 2020).
**Hierarchical Models** Hierarchical models have been proposed to solve natural language processing
tasks such as document representation (Yang et al., 2016) and document summarisation (Zhang et al.,
2019; Liu & Lapata, 2019). Both our hierarchical transformer (HAT) and those models share the
similar spirit of introducing local layers to encode local sentences (or propositions) and global layers
to capture cross sentence (or proposition) information. However, our HAT is different from their
hierarchical models in the way of representing sentences (or propositions): while their models encode
sentences into fixed size vectors, the representation of a proposition in our model is a matrix of
dynamic size. The model by Liu & Lapata (2019) has a more sophisticated architecture for capturing
sentence representations compared to those by Yang et al. (2016) and Zhang et al. (2019), where they
introduce multi-head pooling to encode sentences with different attention weights. Compare to Liu
& Lapata (2019)’s model, our model does not introduce additional parameters beyond the standard
transformers. Another subtle difference between our model and the existing models is the way of
doing positional encoding. Unlike documents where the order of sentences matters, propositions
within each category of our IsarStep task do not require an order. Therefore, we do not encode the
positional information of different propositions.
8 CONCLUSION
We mined a large corpus of formal proofs and defined a proposition generation task as a benchmark
for testing machine learning models’ mathematical reasoning capabilities. In our defined task, the gap
between adjacent proof steps is big and therefore it cannot be simply solved by pattern matching and
rewriting. We evaluated the RNN attention model and the transformer on this dataset and introduced
a hierarchical transformer that outperforms the existing seq2seq model baselines especially on long
source sequences. Our analysis shows that the neural seq2seq models can learn non-trivial logical
relations and mathematical concepts. We hope that our work will drive the development of models
that can learn to reason effectively and eventually build systems that can generate human-readable
proofs automatically.
ACKNOWLEDGMENTS
Both Li and Paulson are supported by the ERC Advanced Grant ALEXANDRIA (Project 742178),
funded by the European Research Council. We would like to thank Dani Yogatama, Adhiguna
Kuncoro, Phil Blunsom, and Christian Szegedy for helpful comments on an earlier draft of this paper.
We also thank the anonymous reviewers for their insightful suggestions.
REFERENCES
Alexander A. Alemi, François Chollet, Niklas Eén, Geoffrey Irving, Christian Szegedy, and Josef
Urban. DeepMath - Deep Sequence Models for Premise Selection. In Proceedings of NeurIPS,
2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. In Proceedings of ICLR, 2015.
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. HOList:
An Environment for Machine Learning of Higher Order Logic Theorem Proving. In Proceedings
_of ICML, 2019._
Jasmin Christian Blanchette, Sascha Böhme, and Lawrence C Paulson. Extending Sledgehammer
with SMT solvers. In Proceedings of International Conference on Automated Deduction, 2011.
-----
Chad E Brown, Bartosz Piotrowski, and Josef Urban. Learning to advise an equational prover. 2020a.
[URL http://aitp-conference.org/2020/abstract/paper_32.pdf.](http://aitp-conference.org/2020/abstract/paper_32.pdf)
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. arXiv preprint arXiv:2005.14165, 2020b.
François Charton, Amaury Hayat, and Guillaume Lample. Deep differential system stability - learning
advanced computations from examples. CoRR, abs/2006.06462, 2020.
Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. Can neural
networks understand logical entailment? In Proceedings of ICLR, 2018.
Bernd Finkbeiner, Christopher Hahn, Markus N Rabe, and Frederik Schmitt. Teaching temporal
logics to neural networks. arXiv preprint arXiv:2003.04218, 2020.
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Initial experiments with statistical conjecturing
over large formal corpora. In Joint Proceedings of the FM4M, MathUI, and ThEdu Workshops,
_Doctoral Program, and Work in Progress at the Conference on Intelligent Computer Mathematics,_
2016.
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. TacticToe: Learning to Reason with HOL4 Tactics. In Proceedings of International Conference on Logic for Programming, Artificial Intelligence
_and Reasoning, 2017._
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional
sequence to sequence learning. In Proceedings of ICML, 2017.
Zarathustra Goertzel and Josef Urban. Usefulness of lemmas via graph neural networks. URL
[http://aitp-conference.org/2019/abstract/AITP_2019_paper_32.pdf.](http://aitp-conference.org/2019/abstract/AITP_2019_paper_32.pdf)
Adam Grabowski, Artur Kornilowicz, and Adam Naumowicz. Mizar in a nutshell. Journal of
_Formalized Reasoning, 3(2):153–245, 2010._
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):
1735–1780, 1997.
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A Learning Environment for Theorem Proving. In Proceedings of ICLR, 2019.
Lukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. In Proceedings of ICLR, 2016.
Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A Machine Learning Dataset
for Higher-order Logic Theorem Proving. In Proceedings of ICLR, 2017.
Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In Proceedings
_of ICLR, 2020._
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization
_branches out, pp. 74–81, 2004._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of ACL, 2017.
Yang Liu and Mirella Lapata. Hierarchical transformers for multi-document summarization. In
_Proceedings of ACL, 2019._
Yutaka Nagashima. Simple dataset for proof method recommendation in isabelle/hol. In International
_Conference on Intelligent Computer Mathematics, 2020._
Yutaka Nagashima and Yilun He. Pamper: Proof method recommendation system for isabelle/hol.
_[CoRR, 2018. URL http://arxiv.org/abs/1806.07239.](http://arxiv.org/abs/1806.07239)_
Yutaka Nagashima and Julian Parsert. Goal-oriented conjecturing for isabelle/hol, 2018.
-----
Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, and Christian Szegedy. Graph Representa[tions for Higher-Order Logic and Theorem Proving, 2019. URL http://arxiv.org/abs/](http://arxiv.org/abs/1905.10006)
[1905.10006.](http://arxiv.org/abs/1905.10006)
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic
evaluation of machine translation. In Proceedings of ACL, 2002.
Lawrence C Paulson. Isabelle: A generic theorem prover, volume 828. Springer Science & Business
Media, 1994.
Bartosz Piotrowski and Josef Urban. Guiding inferences in connection tableau by recurrent neural networks. In International Conference on Intelligent Computer Mathematics, pp. 309–314.
Springer, 2020.
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
Markus N Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical reasoning via
self-supervised skip-tree training. arXiv preprint arXiv:2006.04757, 2020.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language
models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating Correctness
[Proofs with Neural Networks, 2019. URL http://arxiv.org/abs/1907.07794.](http://arxiv.org/abs/1907.07794)
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In Proceedings of ICLR, 2019.
Christian Szegedy. A promising path towards autoformalization and general artificial intelligence. In
Christoph Benzmüller and Bruce Miller (eds.), Intelligent Computer Mathematics, 2020.
Andrew Trask, Felix Hill, Scott E. Reed, Jack W. Rae, Chris Dyer, and Phil Blunsom. Neural
arithmetic logic units. In Proceedings of NeurIPS, 2018.
Josef Urban and Jan Jakubuv. First neural conjecturing datasets and experiments. In Christoph
Benzmüller and Bruce R. Miller (eds.), Intelligent Computer Mathematics, 2020.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of NeurIPS, 2017.
Mingzhe Wang and Jia Deng. Learning to Prove Theorems by Learning to Generate Theorems, 2020.
[URL https://arxiv.org/abs/2002.07019.](https://arxiv.org/abs/2002.07019)
Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine translation in autoformalization of mathematics in mizar. Proceedings of ACM SIGPLAN
_International Conference on Certified Programs and Proofs, 2020._
[Makarius Wenzel. The Isabelle/Isar Implementation. https://isabelle.in.tum.de/dist/](https://isabelle.in.tum.de/dist/Isabelle2020/doc/isar-ref.pdf)
[Isabelle2020/doc/isar-ref.pdf, 2020. [Online; accessed 31-May-2020].](https://isabelle.in.tum.de/dist/Isabelle2020/doc/isar-ref.pdf)
Freek Wiedijk. The seventeen provers of the world: Foreword by Dana S. Scott, volume 3600.
Springer, 2006.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey,
Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson,
Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith
Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex
Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural
machine translation system: Bridging the gap between human and machine translation. CoRR,
[2016. URL http://arxiv.org/abs/1609.08144.](http://arxiv.org/abs/1609.08144)
Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Grosse. INT: An Inequality Benchmark for Evaluating
Generalization in Theorem Proving. arXiv preprint arXiv:2007.02924, 2020.
-----
Table 3: Additional results.
Top-1 Acc. Top-10 Acc. BLEU
Model
Base +F.5 Base +F.5 Base +F.5
Conv2Conv 8.7 8.3 21.7 20.8 48.66 46.54
Kaiyu Yang and Jia Deng. Learning to Prove Theorems via Interacting with Proof Assistants. In
_Proceedings of ICML, 2019._
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy.
Hierarchical attention networks for document classification. In Kevin Knight, Ani Nenkova, and
Owen Rambow (eds.), Proceedings of NAACL HLT, 2016.
Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
Xingxing Zhang, Furu Wei, and Ming Zhou. HIBERT: document level pre-training of hierarchical
bidirectional transformers for document summarization. In Proceedings of ACL, 2019.
A EXPERIMENTAL SETUP
For RNNSearch[4] (Bahdanau et al., 2015; Wu et al., 2016), we use 2-layer LSTMs (Hochreiter &
Schmidhuber, 1997) with 512 hidden units and 0.2 dropout rate. The hyperparameters for training the
transformer[5] are the same as transformer base (Vaswani et al., 2017), i.e. 512 hidden size, 2048 filter
size, 8 attention heads, and 6 layers for both the encoder and decoder. The hyperparameters for HAT
are the same, except that the number of local context layers is 4 and global context layers is 2. We
share the source and target token embeddings for all the three models. We use beam search decoding
with beam size 5 (for top1 accuracies) and 10 (for top10 accuracies). The configurations for different
models are the best ones we found based on validation performance. We train these models for 100K
steps and pick the checkpoint with the best BLEU on the validation set to evaluate on the test set.
Training the transformer and HAT takes 72 hours on 4 Tesla-V100 GPUs.
B ADDITIONAL EXPERIMENTAL RESULTS
We report additional experimental results from convolutional seq2seq models (Gehring et al., 2017)[6]
in Table 3. We use the setup of fconv_iwslt_de_en to train the model.
C TEST SUITE
We use the default Sledgehammer (Blanchette et al., 2011) method
in Isabelle as our automatic theorem prover for checking deriva- Table 4: Percentage of welltions. To ensure fair and efficient comparisons, we shut off three of formed propositions
its options: "isar_proofs", "smt_proofs" and "learn". The timeout
for Sledgehammer is 30s. We run the test suite on a platform
Model Base +F.5
with Intel 9700K CPU and 32G RAM, and it takes about 44 hours
to evaluate on the test set. Due to some technical issues (e.g., Transformer 58.2 60.1
the sampled example appears before Sledgehammer is introduced HAT 58.2 58.9
when booting Isabelle/HOL), 92/5000 examples from the test set
are not supported by the current version of our test suite. We
present the percentage of well-formed propositions (i.e., outputs that type checks in Isabelle/HOL) of
Transformer and HAT in Table 4.
[4Codebase: https://github.com/tensorflow/nmt](https://github.com/tensorflow/nmt)
[5Codebase: https://github.com/THUNLP-MT/THUMT/tree/pytorch](https://github.com/THUNLP-MT/THUMT/tree/pytorch)
6Codebase: [https://github.com/pytorch/fairseq/tree/master/examples/conv_](https://github.com/pytorch/fairseq/tree/master/examples/conv_seq2seq)
[seq2seq](https://github.com/pytorch/fairseq/tree/master/examples/conv_seq2seq)
-----
With such settings, Sledgehammer can automatically prove the goal F.3 in 1984/4908 examples. By
incorporating the ground truth F.1, the derivations (i.e., to prove both F.2 ⇒ **F.1 and F.1 ⇒** **F.3) can**
be closed automatically in 2243 examples. Among the 1125 examples that HAT produces an exact
match, 466 of them have a ‘small’ gap: Sledgehammer discharges the goal F.3 directly; 61 of gaps
are ‘just right’: the introduced intermediate step F.1 can help Sledgehammer bridge the gap; most of
the remaining 598 examples have ‘large’ gaps (in either F.2 ⇒ **F.1 or F.1 ⇒** **F.3) that are beyond the**
capability of Sledgehammer. It appears that the insignificant amount of automation improvement can
be attributed to the limited number of ‘just right’ gaps that are within the reach of Sledgehammer.
D ALTERNATIVE STEPS
Many alternative steps are trivially equivalent to the ground truth (e.g. A = B given the ground
truth being B = A, and P ∧ 1 = 1 given the truth being P ). However, we still manage to find a few
non-trivial ones, and one of them (#954 in the test set) even identifies a redundant derivation in the
Isabelle standard library:
**F.1 :**
0 = x1(x9)2(2π)in(x3, x9) (3)
**F.2 :**
_x7 = {w | w ̸∈_ path_image(x3) ∧ _n(x3, w) = 0}_ (4)
_x9_ _x7_ (5)
_∈_
**F.3 :**
_dx_
= 0 (6)
_x3_ _x_ _x9_
I _−_
**F.4 :**
_dx_
_z_ path_image(x3). (7)
_∀_ _̸∈_ _x3_ _x_ _z_ [=][ n][(]2[x]π[3][, z]i [)]
I _−_
_x7 = {w | w ̸∈_ path_image(x3) ∧ _n(x3, w) = 0}_ (8)
_x9_ _x7_ (9)
_∈_
Here, i is the imaginary unit, path_image(x3) returns the image of the contour x3 on the interval
[0, 1], and n(x3, x9) is the winding number of x3 around the point x9. F.2 ⇒ **F.1: combining (4) and**
(5) leads to n(x3, x9) that proves (3). F.1, F.4 ⇒ **F.3: joining (8) and (9) yields**
_x9_ path_image(x3) (10)
_̸∈_
_n(x3, x9) = 0_ (11)
By further joining (10) with (7) we have
_dx_
= _[n][(][x][3][, x][9][)]_
_x_ _x9_ 2πi
_−_
_x3_
which leads to (6) considering n(x3, x9) = 0 (i.e., (11)). Note that our F.1 is not used in the derivation
above hence redundant. Instead of the redundant ground truth, HAT proposed (10) which is clearly a
much better intermediate step.
E EXAMPLES OF CORRECT SYNTHESISES
In this section, we present some correctly synthesised propositions which will be labelled as F.1 in
each case.
-----
#605 in the validation set:
**F.1 :**
additive( _x1_ _, µx0_ ) (12)
_A_
**F.2 :**
subalgebra(x0, x1) (13)
**F.3 :**
measure_space( _x0_ _,_ _x1_ _, µx0_ ) (14)
_X_ _A_
**F.4 :**
_σ(_ _x0_ _,_ _x1_ ) (15)
_X_ _A_
positive( _x1_ _, µx0_ ) (16)
_A_
measure_space(v1, v0, v2) = (σ(v1, v0) positive(v0, v2) additive(v0, v2)) (17)
_∧_ _∧_
Here, x0 and x1 are measure spaces. For a measure space y, Xy, Ay, and µy are the three components
of y (i.e., y = (Xy, Ay, µy)), where Xy is the carrier set, Ay is a collection of subsets on Xy, and µy
is a measure defined onso that µx1 in additive( AAyx.1 F.2, µx ⇒1 ) (i.e.,F.1: x µ1x being a subalgebra of1 is countably addictive on x0 (i.e., (13)) implies Ax1 which is implied by Ax1 ⊆A xx01,
being a measure space) can be substituted with µx0 which yields (12). F.1, F.4 ⇒ **F.3: deriving (14)**
requires unfolding the definition of measure spaces (i.e., (17)), which requires v0 is a sigma algebra
on v1, the measure v2 is non-negative on v0, and v2 is countably additive on v0. Two of the three
requirements have already been satisfied by (15) and (16) respectively, while (12) entails the last one
and eventually leads to (14).
#2903 in the validation set:
**F.1 :**
_x29_ path_image(x7) (18)
_̸∈_
**F.2 :**
_x29_ proots(x0) proots_within(x0, box(x1, x2)) (19)
_∈_ _−_
path_image(x7) proots(x0) = (20)
_∩_ _{}_
**F.3 :**
_x29_ cbox(x1, x2) (21)
_̸∈_
**F.4 :**
cbox(x1, x2) = box(x1, x2) path_image(x7) (22)
_∪_
_x29_ proots(x0) proots_within(x0, box(x1, x2)) (23)
_∈_ _−_
Here, path_image(x7) is the image of the path function x7 on the interval [0, 1]; proots(x0) and
proots_within(x0, S) are, respectively, the roots of a polynomial x0 and the roots (of x0) within
(bounded) boxes on an Euclidean space.a set S; box(x1, x2) = {x | x1 < x < x F.2 ⇒2} andF.1 cbox(: x29 is a root ofx1, x2) = { xx0 (by (19)) that does not | x1 ≤ _x ≤_ _x2} are_
intersect with the path of x7 (i.e., (20)). F.1, F.4 ⇒ **F.3: combining with (22), (21) is equivalent to**
_x29_ box(x1, x2) _x29_ path_image(x7), which follows from joining (23) with (18).
_̸∈_ _∧_ _̸∈_
#1514 in the validation set:
**F.1 :**
_x4(2x10)_ (24)
_x4(x10)_
_[≤]_ _[x][9]_
**F.2 :**
_x4(2y)_
_x9 = Max_ (25)
_x4(y)_
_[|][ y][ ≤]_ _[x][8]_
_x10_ _x8_ (26)
_≤_
**F.3 :**
_x4(2x10)_ _x9x4(x10)_ (27)
_≤_
**F.4 :**
0 < x4(x10) (28)
-----
**F.2 ⇒** **F.1: (26) implies**
_x4(2x10)_ _x4(2y)_
_,_
_x4(x10)_ _x4(y)_
_[∈]_ _[|][ y][ ≤]_ _[x][8]_
hence (24) by the definition of Max. F.1, F.4 ⇒ **F.3 by arithmetic and the positivity of the denominator**
(i.e., (28)).
#1222 in the validation set:
**F.1 :**
_|ix0 +_
**F.2 :**
1 − _x[2]0[|][ = 1]_ (29)
_ix0 +_ 1 _x[2]0[|][2][ = 1]_ (30)
_|_ _−_
**F.3 :** q
(arcsin(x0)) = 0 (31)
_ℑ_
**F.4 :**
_ℑ(arcsin(v0)) = −_ ln(|iv0 + 1 − _v0[2][|][)]_ (32)
_e[−][v][0]_ = 1/(e[v][0] ) q (33)
**F.2 ⇒** **F.1 by arithmetic. F.1, F.4 ⇒** **F.3:**
_ℑ(arcsin(v0)) = −_ ln(|iv0 +
#35 in the validation set:
1 − _v0[2][|][) =][ −]_ [ln 1 = 0][.]
**F.1 :**
_x4 = x10[x12]_ (34)
**F.2 :**
_∀x. x3 = x6[x] ∧_ _x < len(x6) −→_ _x4 = x10[x]_ (35)
_x3 = x6[x12]_ (36)
_x12 < len(x6)_ (37)
**F.3 :**
_x4 = x7[x11]_ (38)
**F.4 :**
_x7 = x9#x10_ (39)
_x12 < len[x6]_ (40)
_x11 = x12 + 1_ (41)
Here, x10[x12] refers to the x12[th] element in the list x10; len is the length function on a list; x9#x10
is a list where the element x9 is concatenated to the front of the list x10. F.2 ⇒ **F.1 by instantiating**
the quantified variable x in (35) to x12 and combining with (36 - 37). F.1, F.4 ⇒ **F.3:**
_x4 = x10[x12] = (x9#x10)(x12 + 1) = x7[x11]._
-----
| [
"Lawrence C, Paulson",
"Wenda, Li",
"Lei, Yu",
"Yuhuai, Wu"
] | 2020-01-01T00:00:00 | ICLR 2021 | true | 52 | 17 | [
"Isabelle"
] | https://arxiv.org/abs/2006.09265 | https://arxiv.org/abs/2006.09265 | https://www.semanticscholar.org/paper/593499b654360101682edec1dd711fa7c09f6971 |
Teaching Arithmetic to Small Transformers | Large language models like GPT-4 exhibit emergent capabilities across general-purpose tasks, such as basic arithmetic, when trained on extensive text data, even though these tasks are not explicitly encoded by the unsupervised, next-token prediction objective. This study investigates how even small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective. We first demonstrate that conventional training data is not the most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. This leads to sharp phase transitions as a function of training data scale, which, in some cases, can be explained through connections to low-rank matrix completion. Building on prior work, we then train on chain-of-thought style data that includes intermediate step results. Even in the complete absence of pretraining, this approach significantly and simultaneously improves accuracy, sample complexity, and convergence speed. We also study the interplay between arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and parameter scaling. Additionally, we discuss the challenges associated with length generalization. Our work highlights the importance of high-quality, instructive data that considers the particular characteristics of the next-word prediction loss for rapidly eliciting arithmetic capabilities. | This study investigates how small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective. | ## TEACHING ARITHMETIC TO SMALL TRANSFORMERS
**Nayoung Lee[∗]**
University of Wisconsin-Madison
[email protected]
**Jason D. Lee**
Princeton University
[email protected]
**Dimitris Papailiopoulos**
University of Wisconsin-Madison
[email protected]
**Kartik Sreenivasan[∗]**
University of Wisconsin-Madison
[email protected]
**Kangwook Lee**
University of Wisconsin-Madison
[email protected]
ABSTRACT
Large language models like GPT-4 exhibit emergent capabilities across generalpurpose tasks, such as basic arithmetic, when trained on extensive text data, even
though these tasks are not explicitly encoded by the unsupervised, next-token
prediction objective. This study investigates how even small transformers, trained
from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token
prediction objective. We first demonstrate that conventional training data is not the
most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. This leads to sharp transitions as a function of training
data scale, which, in some cases, can be explained through connections to low-rank
matrix completion. Building on prior work, we then train on chain-of-thought
style data that includes intermediate step results. Even in the complete absence
of pretraining, this approach significantly and simultaneously improves accuracy,
sample complexity, and convergence speed. We also study the interplay between
arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and parameter scaling. Additionally, we discuss the challenges
associated with length generalization. Our work highlights the importance of
high-quality, instructive data that considers the particular characteristics of the
next-word prediction loss for rapidly eliciting arithmetic capabilities.[1]
1 INTRODUCTION
Large language models like GPT-3/4, PaLM, LaMDA (Brown et al., 2020; Chowdhery et al., 2022;
Thoppilan et al., 2022) have demonstrated general-purpose properties, often referred to as emergent
_abilities (Wei et al., 2022a), for a wide range of downstream tasks like language and code translation,_
compositional reasoning, and basic arithmetic operations (Webb et al., 2022; Nye et al., 2021; Wei
et al., 2022b; Shi et al., 2022; Wang et al., 2022; Srivastava et al., 2022; Chen et al., 2023). What is
perhaps surprising, is that these tasks are not explicitly encoded in the model’s training objective,
which typically is an auto-regressive, next-token-prediction loss.
Prior research has delved into exploring these capabilities and how they emerge as the scale and
of training compute, type of data, and model size vary (Wei et al., 2022a; Chung et al., 2022; Tay
et al., 2022). Untangling the factors, however, remains challenging due to the data complexity and
the variety of tasks examined. Driven by the curiosity to understand the factors that elicit these
capabilities in next-token predictors, we set out to pinpoint the key contributors that accelerate the
_∗Authors contributed equally to this paper._
[1Our code is available at https://github.com/lee-ny/teaching_arithmetic](https://github.com/lee-ny/teaching_arithmetic)
-----
Figure 1: We investigate four data formatting approaches: (i) Plain: standard addition formatting (Section 4),
**(ii) Reverse: reversing the output (Section 4), (iii) Simplified Scratchpad: recording the digit-wise sum and**
carry-ons (Section 6), and (iv) Detailed Scratchpad: providing detailed intermediate steps (Section 6). We train
small decoder-only transformers from scratch on addition data in these formats. The results (right) highlight the
crucial role of data formatting in accuracy and sample efficiency. Plain never reaches 100% accuracy and the
sample complexity for the remaining methods steadily improves with the level of details in the data format.
emergence of such abilities. These contributors may include the format and scale of data, model
scale, the presence of pre-training, and the manner of prompting.
To provide a more precise examination of these factors, our study is conducted in a controlled setting:
we first focus on teaching arithmetic to small decoder-only transformer models, such as NanoGPT
and GPT-2, when trained from random initialization. Starting with a model of 10.6M parameters and
scaling up to 124M parameters, we use the standard autoregressive next-token prediction loss. Our
objective is to understand if and to what degree these models can efficiently learn basic arithmetic
operations like addition, subtraction, multiplication, square root, and sine, thereby providing a clearer
lens through which to view the elicitation of emergent abilities. Below, we summarize our findings.
**Data format and sampling plays a significant role. We first observe that teaching a model addition**
(or any other operation) using standard addition samples, i.e., ‘A3A2A1 + B3B2B1 = C3C2C1’, is
suboptimal, as it requires the model to evaluate the most significant digit C3 of the result first, which
depends globally on all the digits of the two summands. By training on samples with reversed results,
_i.e., ‘A3A2A1 + B3B2B1 = C1C2C3’, we enable the model to learn a simpler function, significantly_
improving sample complexity. Additionally, balanced sampling of different “variants” of addition,
based on the number of carries and digits involved, further enhances learning. Even in this simple
setting, we observe relatively sharp phase transitions from 0 to 100% accuracy as a function of the
size of the training data. Although this may seem surprising, we observe that learning an addition
map on n digits from random samples is equivalent to completing a low-rank matrix. This connection
allows us to offer a reasonable explanation for such phase transitions.
**Chain-of-thought data during training. Building on these findings, we then explore the potential**
benefits of chain-of-thought (CoT) data during training. This format includes step-by-step operations
and intermediate results, allowing the model to learn the individual components of compositional
tasks. This format is directly borrowed from related literature, e.g., Ling et al. (2017); Wei et al.
(2022b); Zhou et al. (2022a;b). We find that CoT-type training data significantly improved learning in
terms of both sample complexity and accuracy in agreement with CoT fine-tuning literature (Nye
et al., 2021; Chung et al., 2022), but even in the complete absence of pretraining. We conjecture
that this is because breaking down the required compositional function to be learned into individual
components allows the model to learn a higher-dimensional but easier-to-learn function map, in
agreement with recent theoretical findings (Li et al., 2023; Malach, 2023). In Figure 1, we provide
examples of the data formatting methods explored in our work.
**Training on text and arithmetic mixtures and the role of few-shot prompting. We also explore the**
interplay between arithmetic and text data during training, as LLMs are trained on massive amounts
of data scraped from the internet (Bubeck et al., 2023; Peterson et al., 2019), where it is impractical to
carefully separate different types of data. We observe how the model’s perplexity and accuracy vary
with the ratio of text to arithmetic data. We find that jointly training on all the arithmetic operations
discussed earlier can improve the individual performance of each task and that going from zero-shot
to 1-shot prompting (showing one arithmetic example) yields a large accuracy improvement, but
there is no significant improvement in accuracy by showing more examples.
**The role of pre-training and model scale. We further investigate the role of pretraining by fine-**
tuning pretrained models like GPT-2 and GPT-3 (davinci) and observe that while the zero-shot
-----
performance on arithmetic operations is poor, prior “skills” acquired during pretraining facilitate
quick learning of some basic arithmetic tasks, even with a small number of finetuning samples.
However, finetuning on non-standard data, such as those that result from reverse formatting, can
interfere with the model’s performance when pretrained, leading to decreased accuracy. We finally
share our observations on how performance in arithmetic changes with scale, and although we find
that scale does aid when finetuning for these tasks, it is not a necessary trait.
**Compositional and length generalization. One might question if our trained models truly grasp**
arithmetic. Our findings present a nuanced answer. We find that length generalization beyond trained
digit lengths is still challenging. For instance, if a model is trained on all n-digit lengths, excluding a
specific length, it still struggles to accurately calculate this missing digit length. Consequently, the
models achieve high accuracy within trained digit lengths but struggle significantly beyond this range.
This suggests that the models learn arithmetic not as a flexible algorithm, but as a mapping function
constrained to trained digit lengths. While this significantly surpasses memorization, it falls short of
comprehensive arithmetic “understanding”.
**Novelty over prior work. Our approach heavily builds upon prior work that uses reasoning-**
augmented data to enhance model performance, and we do not purport originality in the types of
training data used, nor in achieving the highest performance with the smallest model parameters
possible. What sets our work apart is the primary focus on meticulously ablating our settings and
extensive studies on various sampling techniques, training data formats, data source mixing ratios,
and model scales. Our goal is to pinpoint the factors that contribute to the fast emergence of arithmetic
capabilities. In the process, we also provide several straightforward yet novel and insightful theoretical
explanations for some of the phase transition phenomena we observe. Our emphasis on arithmetic is
not due to its intrinsic significance — one can easily delegate calculations to external tools (Schick
et al., 2023; Gao et al., 2023). Instead, arithmetic serves as an emergent skill, easy to isolate and test,
facilitating a more precise exploration of emergent phenomena.
2 RELATED WORKS
**Instructional data/chain-of-thought.** Detailed reasoning in training data has roots predating
Transformers (Vaswani et al., 2017). Ling et al. (2017); Cobbe et al. (2021) use natural language to
generate reasoning steps while Roy & Roth (2016); Reed & De Freitas (2015); Chen et al. (2017);
Cai et al. (2017); Nye et al. (2021) show that symbolic reasoning may suffice. Nogueira et al. (2021)
stress the importance of large number of small-digit samples (Yuan et al., 2023). Razeghi et al.
(2022) observe a correlation between the frequency of numbers in the dataset and the performance
involving them. In contrast, we find that transformers can learn to add numbers that were not seen
during training. Chain-of-thought (Wei et al., 2022b) refers to the model’s improved accuracy when
prompted to produce intermediate reasoning steps. Zhou et al. (2022b) show that this can be achieved
by providing sufficiently informative exemplars as a few-shot prompt (Brown et al., 2020). Zhou et al.
(2022a) showed that least-to-most prompting can help GPT-3 solve problems decomposable into
simpler sub-problems, by sequentially solving these subproblems. We extend this notion to simple
addition and show that asking the model to output the least significant bit first has a similar effect.
**Arithmetic using Transformer models.** Our work focuses on decoder-only models as they are
widely used in LLMs (Brown et al., 2020; Touvron et al., 2023; MosaicML, 2023). However,
encoder-decoder models have also been extensively studied in the context of learning arithmetic
operations (Kim et al., 2021; Wang et al., 2021). Wallace et al. (2019) on the other hand, focus
on the impact of the learned embeddings. Ontanón et al. (2021) extensively study the problem of
compositional generalization on benchmark datasets, such as SCAN (Lake & Baroni, 2018; Drozdov
et al., 2022), and conclude that design choices, like relative position encoding (Shaw et al., 2018),
can improve performance. Charton (2022; 2021) show that Transformers can learn linear algebra
operations with carefully chosen encodings. Hanna et al. (2023) use mechanistic interpretability
techniques to explain the limited numerical reasoning capabilities of GPT-2. Dziri et al. (2023);
Jelassi et al. (2023); Yang et al. (2023) focus on the challenges of length generalization. A recent line
of work explores finetuning techniques to improve arithmetic capabilities in pretrained models (Qian
et al., 2022; Lightman et al., 2023; Uesato et al., 2022).
**Beyond Transformers. While we focus our attention on GPT-like models, there is a rich literature**
studying other seq-to-seq models such as recurrent neural networks (RNNs) (Bowman, 2013; Bowman
et al., 2014; Zaremba et al., 2014). Zaremba & Sutskever (2014) show that RNNs can learn how to
-----
execute simple programs with for-loops provided they are trained with curriculum learning. Sutskever
et al. (2014) show that LSTMs show improved performance on text-based tasks such as translation
when the source sentences are reversed, which is closely related to what we observe in addition.
Kaiser & Sutskever (2015) propose Neural GPUs which outperform prior RNNs on binary arithmetic
tasks and even show length generalization i.e., they can perform arithmetic on inputs of lengths that
were unseen during training. This is yet to be seen even in modern pre-trained models (Bubeck
et al., 2023) and therefore it is interesting to see if we can leverage some of these techniques and
apply them to existing modern architectures. Dehghani et al. (2018) propose Universal Transformers
(UTs) which introduce a recurrent transition function to apply recurrence over revisions of the vector
representation at each position as opposed to the different positions in the input. They show that on
the tasks from Zaremba & Sutskever (2014), UTs outperform traditional Transformers and RNNs.
3 PRELIMINARIES AND EXPERIMENTAL SETUP
In this section, we provide a detailed description of our experimental setup, including the model
architecture and an overview of the various data formatting and sampling techniques used.
**Model and Data.** To examine the individual factors at play, we use NanoGPT (Karpathy, 2022), a
lightweight implementation of the GPT family of models. NanoGPT is a decoder-only transformer
with six self-attention layers, six heads, and an embedding dimension of 384, resulting in approximately 10.6M parameters. Unless stated otherwise, we use character-level tokenization and absolute
position encoding. We train NanoGPT from random initialization, which we refer to as training
_from scratch, using the conventional next-token prediction objective. To study the effect of scale, we_
extend our experiments to GPT-2 and GPT-3 in Section 8. We investigate both training from scratch
as well as fine-tuning using a pretrained GPT-2, whereas, for GPT-3, we only consider fine-tuning
pretrained models. Refer to Appendix I for more details on the models and data used.
For arithmetic tasks like addition, subtraction, and multiplication, we define the training dataset for
a binary operator f (·) as Dtrain = {(ai, bi), yi}i[N]=1 [where][ y][i][ =][ f] [(][a][i][, b][i][)][. For unary operations like]
sine, the training dataset is formulated as Dtrain = {ai, yi}i[N]=1[, where][ y][i][ =][ f] [(][a][i][)][. The test dataset]
_Dtest is constructed by randomly sampling pairs of operands not included in Dtrain. We then apply_
different data formatting techniques on each data sample from the training dataset, creating the final
sequence that serves as the model’s input. Note that while we view ai as a single integer, the model
will see it as a sequence of digits after character-level tokenization.
**Data Formatting.** In the following sections, we
will delve into the four data formatting approaches
in our arithmetic experiments. See Figure 1 and Appendix J for examples. In Section 4, we explore
the limitations of the conventional plain-format data
and demonstrate how a simple reversal of the output
order can lead to substantial performance improvements and enhanced sample efficiency. We introduce
two Lemmas to support and explain these findings.
Additionally, in Section 6, we present results on the
simplified and detailed scratchpad formats, highlighting significant enhancements in sample efficiency for
learning addition. We also emphasize the importance
of carefully designing the intermediate steps in the
detailed scratchpad method. Note that the scratchpad formats are largely adopted from the literature
of chain-of-thought (CoT) training (Nye et al., 2021;
Zhou et al., 2022b).
Figure 2: Performance of 3-digit addition on various data sampling methods used: (i) Random: uniform sampling of operands; (ii) Balanced digits:
assigning higher sampling weights to operations
involving 1 and 2-digit numbers; (iii) Balanced
**carry: balancing the dataset to contain an equal**
number of carry-on operations; (iv) Balanced
**both: balancing digits and carry-ons. We observe**
that balanced data improves accuracy compared to
random sampling. Experiments on addition with
the ‘$’ symbol wrapped for each sample.
**Structured Data Sampling.** While data formatting plays a crucial role, we also discover that
choosing the samples carefully is also essential. When sampling operands for n-digit addition
uniformly at random between 1 to 10[n] _−_ 1, the dataset inevitably becomes highly skewed in terms
of the number of samples with (i) operands containing a certain number of digits and (ii) operands
-----
resulting in a certain number of carry-on[2] operations. For instance, in the case of 3-digit addition,
random sampling results in a meager 0.01% probability of selecting a 1-digit number. Additionally, 1
or 2 carry-on operations are more likely to occur than 0 or 3. To address this imbalance, we employ a
structured sampling approach. Specifically, we aim to (i) balance digits by assigning higher weights
to lower-digit numbers during the sampling process and (ii) balance carry-ons by ensuring an equal
distribution of examples with 0, 1, . . ., n carry-on operations. When sampling 10, 000 examples of
3-digit addition, we include all 100 1-digit additions, 900 2-digit samples and 9000 3-digit samples.
Note that while the number of samples increases, the fraction of all possible k−digit additions that
we sample for k = 2, 3 decreases due to the inherent skew. The split was chosen to ensure we saw a
“reasonable” fraction of all possible k−digit samples for all k. Similarly, we ensure that the number
of samples with 0, 1, 2, or 3 carry-ons are all approximately 2500.
Figure 2 reveals the importance of balancing. We observe improvements in accuracy across the board
while using balanced data when compared to random sampling. Further, random sampling performs
relatively poorly even for the simple task of 2−digit addition, possibly due to the fact that the model
has not seen enough of these examples. For the remaining experiments, we set the default dataset for
addition to be one that has both balanced digits and carry-ons.
4 DATA FORMAT CHALLENGES AND ARITHMETIC EMERGENCE
We start by examining integer addition. We first focus on 3-digit addition, i.e., where the two
summads have at most 3 digits (≤ 999). Later, in Section 7, we extend our findings to numbers with
up to 10 digits. Surprisingly, teaching addition can be more complex than expected.
**Training on Conventional Data.** We start by training NanoGPT on standard addition data
represented as ‘A3A2A1 + B3B2B1 = C3C2C1’, termed the plain format. However, as shown in
Figure 1, this leads to fairly poor performance. We suspect that this is because the next-token
prediction objective outputs the most significant digit (MSB) first. The following lemma clarifies the
necessity to access all operand digits for outputting the MSB first.
**Lemma 1. Let A and B be two n-digit numbers, and let C = A + B. Suppose an algorithm A**
_outputs the digits of C in decreasing order of significance, then A must have access to all digits of A_
_and B starting from the first digit that it outputs._
The lemma suggests that to train a model for addition and to output the MSB first, it is necessary to
emulate a “global” algorithm. Unlike the standard “local” algorithm for addition, which consists of
computing digit-wise sums and carry-ons, approximating the global algorithm would require learning
a more complicated function than necessary. The increased complexity results in decreased accuracy,
as observed in our experiments. Liu et al. (2023) refer to this phenomenon as attention glitches.
**Reversing the Output.** We propose that the reverse format ‘$A3A2A1 + B3B2B1 = C1C2C3$’[3] is
more suitable for next-word prediction models. The rationale behind this is that when generating the
sum by starting with the least significant digit (LSB), the model only needs to learn a local function
of three inputs per digit – the two relevant digits of the operands and the carry-on from the previous
digit. This local operation simplifies the function to be learned. The following lemma formalizes this:
**Lemma 2. There exists an algorithm that computes C = A + B for two n-digit numbers A and B**
_and outputs its digits in increasing order of significance such that, at each position i, the algorithm_
_only requires access to the i[th]_ _digits of A and B, as well as the carry-on from the previous position._
Lemma 2 directly follows from the standard algorithm for addition, which performs the sum and
carry-on operations digit by digit. The implications of these lemmata are evident in our experiments
when comparing the accuracy of the plain and reverse formats. As shown in Figure 1, training
on reversed outputs significantly enhances accuracy, with considerably fewer samples. What is
particularly remarkable is the rapid emergence of addition occurring between 1k to 4k samples for
reverse. During that, the model rapidly transitions from being unable to add two numbers to being
capable of perfectly adding. This leads us to ask:
2In this paper, we adopt the definition that a carry-on operation involves transferring information from
one digit position to another position of higher significance. Therefore, we refer to the “borrow” operation in
subtraction as a carry operation.
3We use ‘$’ symbol for data delimiter for the reverse format. Refer to Appendix B.1 for details.
-----
_Why does addition rapidly emerge as the number of training examples increases?_
MATRIX COMPLETION: AN INCOMPLETE TALE OF EMERGENCE
Although the rapid transition observed in the previous section may initially seem surprising, closer
examination reveals a fascinating equivalence – learning an addition map on n digits from random
samples can be considered as completing a rank-2 matrix. Establishing this connection with low-rank
matrix completion (LRMC) provides meaningful insights into the observed phenomenon. However
as we explain in this Section, this connection does not tell the complete story, and Transformers
possess generalization capabilities far beyond what LRMC would predict.
**Learning addition tables is Matrix Completion.** Learning addition from samples i + j can be
formulated as a rank-2 Matrix Completion (MC) problem, where we partially observe an n × n
matrix M, whose (i, j)-th entry represents i + j. M can be decomposed into the sum of two rank-1
matrices, N **1[T]** + 1N _[T]_, where N is a vector with entries {1, . . . n}, and 1 is the vector of ones.
Recovering a rank-2 matrix, in the absence of noise, can be sample-optimally performed by a simple
iterative algorithm from Király et al. (2015) (Algorithm 1 in Appendix B.2). As depicted in Figure 3a,
a rapid transition occurs at O(n), a well-known matrix recovery phenomenon (Recht, 2011).
We notice a similar rapid transition in (a) MC of Addition Matrix (b) Comparing LRMC & NanoGPT
NanoGPT. To investigate it, we focus on 2- 1.0 100
digit addition (i.e., n = 100) and evaluate 0.8 n = 20 80
the performance of learning addition through 0.6 n = 50 60
NanoGPT and LRMC (Figure 3a) by con- 0.4 n = 100n = 500 40 plain
structing train data as the revealed entries Success Probability0.2 est Accuracy (%)T 20 reverse
of the M matrix. Note that the dataset 0.0 0
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|n|= 20|||||
|n n|= 50 = 100|||||
|n|= 500|||||
|||||||
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
|||plain|||
|||revers matrix|e comp|letion|
10[1] 10[2] 10[3] 10[4] 0 1000 2000 3000 4000 5000
n = 20
n = 50
n = 100
n = 500
plain
reverse
matrix completion
is no longer balanced, as the revealed en- Number of Revealed Entries Number of train examples
tries are randomly sampled for LRMC exper- Figure 3: (a) We run Algorithm 1 (Király et al., 2015) on
iments, to match the standard MC probabilis- the addition matrix for n = 20, 50, 100, 500 and report
tic settings Recht (2011). In Figure 3b, both the success probability while varying the number of reNanoGPT and LRMC exhibit rapid transitions vealed entries. As expected, a sharp transition occurs when
approximately (n) entries are revealed. (b) We compare
at approximately 1500 samples. _O_
the performance of NanoGPT trained on a dataset con
While the observed rapid transition can be taining n = 100 samples (i.e., 2-digit addition) to that of
attributed to the principles of LRMC, shed- the corresponding LRMC problem using the same sample
ding light on the emergent arithmetic skill in set. Remarkably, at ≈ 1500 samples, both NanoGPT and
NanoGPT, this connection falls short of cap- Algorithm 1 begin learning addition almost flawlessly.
turing the full generalization capabilities displayed by NanoGPT.
**NanoGPT generalizes better than Matrix Completion solutions.** Upon further investigation,
we find that NanoGPT exhibits capabilities beyond LRMC. Notably, LRMC is constrained by its
inability to generalize in the presence of missing rows or columns. In our context, this equates to
certain numbers being omitted from the training data. To assess NanoGPT’s ability to overcome
this, we deliberately exclude specific numbers or digits from our training data and assess the model’s
ability in learning addition. Can the model still generalize to unseen numbers?
As shown in Table 1, the answer to this question is a resounding Yes! The model achieves almost perfect accuracy even when excluding half of all possible 3−digit numbers. NanoGPT can successfully
learn 3-digit addition even when numbers or digits are intentionally excluded from the training data,
thereby exhibiting generalization capabilities that far exceed what standard LRMC would predict.
Table 1: Impact of excluding numbers on addition task: NanoGPT models trained with 100/200/500 excluded
operands show no significant drop in accuracy and in some cases, the performance even improves.
Excluding Excluding Excluding
No Exclusion 100 numbers 200 numbers 500 numbers
Plain Rev Plain Rev Plain Rev Plain Rev
Overall Acc. 92.65%(±2.53) 99.87%(±0.24) 93.36%(±2.62) 99.82%(±0.29) 93.61%(±2.77) 99.78%(±0.17) 93.47%(±3.22) 100.0%(±0.0)
Exclusion Acc. - - 94.43%(±1.70) 99.87%(±0.24) 95.25%(±1.60) 99 88%(±0.08) 94.39%(±2.05) 100.0%(±0.0)
Specifically, we randomly choose 100/200/500 numbers and exclude them from the training data.
We then evaluate the trained models using two metrics: (i) Overall accuracy: which measures the
-----
accuracy over a random set of 10, 000 examples and (ii) Exclusion accuracy: which measures the
accuracy only over the excluded set (where either of the two operands is one of the excluded numbers).
Remarkably, excluding numbers from the training data sometimes leads to improved performance.
We conjecture that this may be due to a regularization effect, similar to random masking or cropping
images in vision tasks. In Appendix B.2.1, we further find that NanoGPT models can even generalize
to unseen digits, where a digit is absent from a particular ordinal position.
6 TRAINING ON CHAIN-OF-THOUGHT DATA EXPEDITES EMERGENCE
So far, we observed that simply reversing the output can result in remarkable performance, exceeding
that of LRMC in learning addition. Here, we investigate if it is possible to expedite the emergence
of addition by further enhancing the data format. As addition is a multi-step process, we explore
the idea of incorporating additional information about each intermediate step. We adopt a CoT
style approach, where we guide the model to learn addition step-by-step. We explore two levels
of detail in the provided instruction steps, as shown in Figure1: (i) Simplified Scratchpad with
minimal information – the sum and carry information for each digit/step. (ii) Detailed Scratch**pad with comprehensive information on detailed traces of execution for each intermediate step.**
(a) Sample Efficiency
(b) Token Efficiency
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
|||addition|||
||||||
|||reverse|||
||||||
|||simplifie detailed|d scratch scratchpa|pad d|
||||||
0 100k 200k 300k
addition
reverse
simplified scratchpad
detailed scratchpad
Number of tokens in train data
The results in Figure 4a show that the model trained
on simplified scratchpad achieves 100% accuracy
with only 2000 samples, whereas reverse requires
more than twice as many. Detailed scratchpad,
which provides even more fine grained information,
achieves perfect addition with just 1000 samples.
This indicates a clear message: incorporating more information enables the model to learn addition with far
fewer examples. We conjecture that this is because
breaking down the required compositional function
to be learned into individual, simpler components
allows the model to learn a higher-dimensional but
easier-to-learn function map, in agreement with recent theoretical work (Li et al., 2023; Malach, 2023).
100
80
60
40
20
100
80
60
40
est Accuracy (%)T 20
0
0 1k 2k 3k 4k 5k 6k 7K
Number of train examples
formation enables the model to learn addition with far Figure 4: (a) Comparison of sample efficiency:
fewer examples. We conjecture that this is because evaluating performance on training dataset with
breaking down the required compositional function different numbers of addition samples. While all
to be learned into individual, simpler components variants other than plain achieve 100% accuracy,
they differ in terms of sample complexity. (b) Num
allows the model to learn a higher-dimensional but
ber of tokens in train dataset required by NanoGPT
easier-to-learn function map, in agreement with re
to learn addition. Reverse is the most efficient
cent theoretical work (Li et al., 2023; Malach, 2023).
in terms of token usage for model training, as
We note that while CoT-style training enhances sam- the scratchpad methods, although more sample
efficient, require more tokens per sample.
ple efficiency, it may not necessarily be the most
“token-efficient” approach. To account for the cost associated with training and inference, we conduct
a cost analysis based on the number of tokens (number of training samples × number of tokens per
sample – see Appendix G for details) in the train data. encountered during training. The result in
Figure 4b shows that reverse is the most efficient in terms of token usage for model training. The
scratchpad methods, although more sample-efficient, require more tokens per sample.
In summary, incorporating scratchpad data and decomposing the addition task into steps offer a promising
strategy to improve the performance and efficiency of
small models in learning addition from scratch. Nevertheless, for practical usage, it is crucial to evaluate
both the number of samples for achieving the desired
performance and the actual token requirements during
training and inference.
100
80
60
40
20
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||plain||||
|||||||
|||reve simp|rse lified scra|tchpad||
|||||||
|||||||
|5-di||deta git 7|iled scrat -digit|chpad 10-digit||
50k 100k 150k 200k
plain
reverse
simplified scratchpad
detailed scratchpad
5-digit 7-digit 10-digit
Number of train examples
7 LONGER DIGITS AND BLENDING
ARITHMETIC WITH SHAKESPEARE
Figure 5: Comparison of sample efficiency for
ARITHMETIC WITH SHAKESPEARE 5, 7, and 10-digit additions. Training on plain
requires an increasing number of samples for
In this section, we go beyond 3-digit addition to en- higher digits, while the sample complexity for
compass a wider range of arithmetic tasks and longer other data formats remains relatively consistent.
digits to show that our insights on data sampling and
formatting hold true even in this regime. We also explore the effect of mixing arithmetic with text
data, and few-shot prompting.
-----
**Extending to longer digit addition.** We repeat the experiment from Section 3 with up to 10 digit
integers. Figure 5 shows that the behavior of all data formats remains similar across varying number
of digits. In fact, the performance gap between the modified formats and plain grows with longer
digits. While plain requires an increasing number of samples to learn higher-digit additions, the
reverse and scratchpad formats maintain a consistent sample complexity. We also observe similar
results in the fine-tuning setting, where we fine-tune a model initially trained on k-digits on k +1-digit
data. See Appendix C for details on the experimental setup and additional results.
**Mixing Text with Arithmetic Data.** While
the models so far were trained exclusively on
arithmetic tasks, in practice, LLMs utilize a combination of arithmetic and text data for training.
How does that affect the emergence of arithmetic skills? To explore that we incorporate
both addition samples and text into our train data
and evaluate the models with few-shot prompting (showing a few examples of addition in the
prompt) to see if it is able to be effectively conditioned for the appropriate context (arithmetic/text generation). As we see in Figure 6, we
find that few-shot prompting improves the performance of the model, allowing it to perform
addition accurately even in the plain format.
(a) Plain
5k 10k 15k 20k 25k 30k 35k 40k
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||Z 1|ero -sh|-sho ot|t|
|||||2 3|-sh -sh|ot ot||
|||||t|ext_|pro|mpt|
Zero-shot
1-shot
2-shot
3-shot
text_prompt
# Plain Addition Samples
(b) Detailed scratchpad
100
80
60
40
20
100
20
0
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Zero-s|hot|
|---|---|---|---|---|---|
|||||1-sho|t|
|||||2-sho 3-sho|t t|
|||||text_p|rompt|
|||||||
|||||||
Zero-shot
1-shot
2-shot
3-shot
text_prompt
# Detailed Scratchpad Samples
Figure 6: Performance of NanoGPT model trained with
the Shakespeare dataset, addition dataset in plain, and
detailed scratchpad format. The number of plain (left)
and detailed scratchpad (right) formatted samples are
varied. Performance is evaluated on zero-shot, few-shot,
and text prompts, with the shaded area representing the
standard deviation across various prompt exemplar sets.
Intriguingly, accuracy remains high using plain even with the inclusion of a text prompt preceding
“A+B=”. This is likely due to the structure of our mixed dataset where addition examples are
interspersed within Shakespeare text. With the incorporation of more addition examples, instances
where addition follows Shakespeare text increases, leading to a decrease in potential inconsistencies
when text content is present during addition test queries. We further analyze the effect of text on
prompting for both cases with and without text in the training data in Appendix E.
**Teaching arithmetic operations beyond addition.** We also note that the results on addition also
hold for broader mathematical operations such as subtraction, multiplication, sine, and square root
operations where each operation entails its unique challenges and intricacies. Refer to Appendix D
for detailed settings and results.
8 FINE-TUNING, SCALING, AND PRETRAINING IN LARGER MODELS
We extend our study from NanoGPT to larger
models like GPT-2 and GPT-3 to explore the
impact of pretraining and model size. Initially,
we compare the performance of NanoGPT and
GPT-2, both trained from scratch. This highlights the advantages of larger model scales, especially in zero-shot scenarios. Subsequently,
we delve into the impact of tokenization methods and model pretraining in GPT-2 models.
We then fine-tune a pretrained GPT-3 on various arithmetic tasks using different data formats, reaffirming the importance of data formatting for larger pretrained models.
80
60
40
20
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||NanoG GPT-2,|PT, cha char-lev|||r-level, scr el, scratc||||atch h||
||||GPT-2, GPT-2,|tiktoke tiktoke|||n, scratch n, pretrain||||ed||
||||GPT-2, GPT-2,|tiktoke tiktoke|||n+space, n+space,||||scratch pretrain|ed|
||||||||||||||
||||||||||||||
||||||||||||||
Train Iterations
Figure 7: Performance of various configurations of the
ious arithmetic tasks using different data for
GPT-2 model on the addition task. We compare the ef
mats, reaffirming the importance of data for
fects of tokenization methods, specifically character-level
matting for larger pretrained models.
tokenization versus Tiktoken (OpenAI’s BPE tokenizer),
training initialization (training from scratch versus train
**Comparing NanoGPT and GPT-2:** **Tok-**
ing from a pretrained GPT-2 model), and the inclusion
**enizer and Training Pretrained Models** We or exclusion of spaces between numbers. The results
repeat our experiments on a GPT-2 model highlight the significance of utilizing pretrained models
with 85M parameters, with twice as many lay- and incorporating spaces for consistent tokenization of
ers, heads, and embedding size compared to numbers when training a model for arithmetic tasks.
NanoGPT. The transition to a GPT-2 setup necessitates several modifications. Firstly, we shift to OpenAI’s Tiktoken BPE tokenizer. We also
-----
examined two different training approaches: training the model from random initialization (scratch)
and fine-tuning the pretrained model sourced from Huggingface. To circumvent potential inconsistent
tokenization of numbers, alterations were made in data formatting to include spaces between numbers.
Figure 7 shows that GPT-2 demonstrates high performance in addition tasks with both character-level
tokenization and Tiktoken with spaces between digits. This aligns with the results by Wallace et al.
(2019), suggesting that character-level tokenization exhibits stronger numeracy capabilities compared
to word or sub-word methods. Furthermore, comparing the models trained from scratch and the
models trained from the pretrained model, we observe that fine-tuning a pretrained model results in
better performance compared to training a model from scratch.
**GPT-3 experiments.** We consider three GPT-3 variants: Ada, Curie, and Davinci (OpenAI).
We fine-tune these models using the same four data formatting methods as our NanoGPT experiments.
Table 2: Evaluation of addition performance for fine-tuned GPT-3 models: Davinci, Curie, and Ada. In each
case, the model is finetuned on 1000 samples of addition in the corresponding format.
|GPT-3 Model|Zero-shot Plain Reverse Simplified Scratchpad Detailed Scratchpad|
|---|---|
|Davinci Curie Ada|2% 34% 80.9% 88.7% 99.5% 0.0% 1.4% 12.3% 10.7% 99.7% 0.0% 0.3% 6.3% 0.6% 99.8%|
|---|---|
The results in Table 2 show that starting with pretrained GPT-3 significantly improves performance
compared to training NanoGPT or GPT-2 from scratch with only 1000 examples (Figure 4a). Similar
to the result of training NanoGPT from scratch, the modified formats all outperform the plain format.
Detailed scratchpad data achieves near-perfect accuracy, albeit with increased training and inference
costs due to higher context length requirements. For our detailed experimental setup and further
experiments on larger models, including fine-tuning GPT-3 refer to Appendix F.
9 LIMITATIONS
**Length generalization.** In our experiments, we did not observe any instances where the model
could predict beyond the number of digits it had been trained on (see Appendix H). Shaw et al. (2018);
Sun et al. (2022) reported similar difficulties and proposed approaches such as relative positional
encodings. Anil et al. (2022) suggests that models can only perform out-of-distribution tasks by
combining fine-tuning, prompting, and scratchpad techniques.
**Model/Data scale.** Due to the smaller scale of our experiments, we were able to thoroughly
examine the impact of individual components on the model’s arithmetic learning capabilities. Our
model was limited to decoder-only architectures, primarily focusing on character-level tokenization.
Although we have some preliminary results on scaling up and incorporating BPE-based tokenization,
it is not clear if all our findings can be generalized to the scale of LLMs being used in practice.
**Beyond elementary arithmetic.** We choose to analyze simple arithmetic operations in order
to carefully isolate factors that contribute to emergence. While the existing literature has already
demonstrated the emergence of complicated abilities in practice, our work seeks to provide a better
understanding of this behavior by extensive ablations in a controlled setting.
10 CONCLUSION
In this study, we investigate teaching arithmetic operations to small randomly initialized transformers
using the next-token prediction objective. We carefully ablate different aspects of the training setting
so as to isolate the factors that contribute to the emergence of arithmetic capabilities. Our results reveal
that traditional training data is sub-optimal for learning arithmetic, and training on data with detailed
intermediate steps or even simply reversing the output improves accuracy and sample complexity.
We also study the effects of few-shot prompting, pretraining, and model scale. Despite improvements
from detailed data, length generalization remains a challenge, highlighting the need for better-curated
training data to ensure successful learning of specific algorithms as opposed to just learning an
approximate function map. The correct approach for learning multiple arithmetic operations, of
different levels of complexity, is still unclear. We anticipate that this research will contribute to a
more nuanced understanding of the mechanisms by which transformers (approximately) acquire
algorithmic skills.
-----
ACKNOWLEDGMENTS
This work was supported by the Institute for Foundations of Data Science (IFDS), ONR Grant No.
N00014-21-1-2806, N00014-23-1-2848, and a grant by FuriosaAI.
REFERENCES
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Ramasesh,
Ambrose Slone, Guy Gur-Ari, Ethan Dyer, and Behnam Neyshabur. Exploring length generalization
in large language models. arXiv preprint arXiv:2207.04901, 2022.
Samuel R Bowman. Can recursive neural tensor networks learn logical reasoning? arXiv preprint
_arXiv:1312.6192, 2013._
Samuel R Bowman, Christopher Potts, and Christopher D Manning. Recursive neural networks for
learning logical semantics. CoRR, abs/1406.1827, 5, 2014.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence:
Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize
via recursion. arXiv preprint arXiv:1704.06611, 2017.
François Charton. Linear algebra with transformers. arXiv preprint arXiv:2112.01898, 2021.
François Charton. What is my math transformer doing?–three results on interpretability and generalization. arXiv preprint arXiv:2211.00170, 2022.
Xinyun Chen, Chang Liu, and Dawn Song. Towards synthesizing complex programs from inputoutput examples. arXiv preprint arXiv:1706.01284, 2017.
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to
self-debug. arXiv preprint arXiv:2304.05128, 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models.
_arXiv preprint arXiv:2210.11416, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal
transformers. arXiv preprint arXiv:1807.03819, 2018.
Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen,
Olivier Bousquet, and Denny Zhou. Compositional semantic parsing with large language models.
_arXiv preprint arXiv:2209.15003, 2022._
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West,
Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang
Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. Faith and fate: Limits of transformers on
compositionality, 2023.
-----
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models, 2023.
Michael Hanna, Ollie Liu, and Alexandre Variengien. How does gpt-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. arXiv preprint arXiv:2305.00586,
2023.
Samy Jelassi, Stéphane d’Ascoli, Carles Domingo-Enrich, Yuhuai Wu, Yuanzhi Li, and François
Charton. Length generalization in arithmetic transformers, 2023.
Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228,
2015.
[Andrej Karpathy. char-rnn. https://github.com/karpathy/char-rnn, 2015.](https://github.com/karpathy/char-rnn)
Andrej Karpathy. Andrej karpathy’s lightweight implementation of medium-sized gpts. GitHub,
[2022. URL https://github.com/karpathy/nanoGPT.](https://github.com/karpathy/nanoGPT)
Jeonghwan Kim, Giwon Hong, Kyung-min Kim, Junmo Kang, and Sung-Hyon Myaeng. Have you
seen that number? investigating extrapolation in question answering models. In Proceedings of the
_2021 Conference on Empirical Methods in Natural Language Processing, pp. 7031–7037, 2021._
Franz J Király, Louis Theran, and Ryota Tomioka. The algebraic combinatorial approach for low-rank
matrix completion. J. Mach. Learn. Res., 16(1):1391–1436, 2015.
Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills
of sequence-to-sequence recurrent networks. In International conference on machine learning, pp.
2873–2882. PMLR, 2018.
Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, and Samet Oymak.
Dissecting chain-of-thought: A study on compositional in-context learning of mlps. arXiv preprint
_arXiv:2305.18869, 2023._
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146,
2017.
Bingbin Liu, Jordan T Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang. Exposing attention
glitches with flip-flop language modeling. arXiv preprint arXiv:2306.00946, 2023.
Eran Malach. Auto-regressive next-token predictors are universal learners, 2023.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke
Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? arXiv
_preprint arXiv:2202.12837, 2022._
MosaicML. Introducing mpt-7b: A new standard for open source, commercially usable llms, 2023.
URL www.mosaicml.com/blog/mpt-7b. Accessed: 2023-05-05.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with
simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David
Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work:
Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114,
2021.
Santiago Ontanón, Joshua Ainslie, Vaclav Cvicek, and Zachary Fisher. Making transformers solve
compositional tasks. arXiv preprint arXiv:2108.04378, 2021.
-----
OpenAI. OpenAI platform. [URL https://platform.openai.com/docs/models/](https://platform.openai.com/docs/models/gpt-3)
[gpt-3. Accessed: 2023-09-28.](https://platform.openai.com/docs/models/gpt-3)
Joshua Peterson, Stephan Meylan, and David Bourgin. Open clone of openai’s unreleased
webtext dataset scraper. _GitHub, 2019._ [URL https://github.com/jcpeterson/](https://github.com/jcpeterson/openwebtext)
[openwebtext.](https://github.com/jcpeterson/openwebtext)
Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan
Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. Efficiently scaling transformer inference.
_Proceedings of Machine Learning and Systems, 5, 2023._
Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. Limitations of language models in
arithmetic and symbolic induction. arXiv preprint arXiv:2208.05051, 2022.
Alec Radford and Karthik Narasimhan. Improving language understanding by generative pre-training.
2018.
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. Impact of pretraining term
frequencies on few-shot reasoning. arXiv preprint arXiv:2202.07206, 2022.
Benjamin Recht. A simpler approach to matrix completion. Journal of Machine Learning Research,
12(12), 2011.
Scott Reed and Nando De Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279,
2015.
Subhro Roy and Dan Roth. Solving general arithmetic word problems. _arXiv preprint_
_arXiv:1608.01413, 2016._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to
use tools, 2023.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations.
_arXiv preprint arXiv:1803.02155, 2018._
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi,
Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057, 2022.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam
Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the
imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint
_arXiv:2206.04615, 2022._
Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary,
Xia Song, and Furu Wei. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554,
2022.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.
_Advances in neural information processing systems, 27, 2014._
Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri, Xavier Garcia,
Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al. Transcending scaling laws with
0.1% extra compute. arXiv preprint arXiv:2210.11399, 2022.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog
applications. arXiv preprint arXiv:2201.08239, 2022.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand
Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language
models, 2023.
-----
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia
Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and
outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing
_systems, 30, 2017._
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. Do nlp models know
numbers? probing numeracy in embeddings. arXiv preprint arXiv:1909.07940, 2019.
Cunxiang Wang, Boyuan Zheng, Yuchen Niu, and Yue Zhang. Exploring generalization ability of
pretrained language models on arithmetic and logical reasoning. In Natural Language Processing
_and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China,_
_October 13–17, 2021, Proceedings, Part I 10, pp. 758–769. Springer, 2021._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency
improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Taylor Webb, Keith J Holyoak, and Hongjing Lu. Emergent analogical reasoning in large language
models. arXiv preprint arXiv:2212.09196, 2022.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models.
_arXiv preprint arXiv:2206.07682, 2022a._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny
Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint
_arXiv:2201.11903, 2022b._
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang.
Gpt can solve mathematical problems without a calculator, 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large
language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023.
Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
Wojciech Zaremba, Karol Kurach, and Rob Fergus. Learning to discover efficient mathematical
identities. Advances in Neural Information Processing Systems, 27, 2014.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in
large language models. arXiv preprint arXiv:2205.10625, 2022a.
Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi.
Teaching algorithmic reasoning via in-context learning. arXiv preprint arXiv:2211.09066, 2022b.
-----
# Appendix
### Table of Contents
**A Proofs** **15**
**B Additional Experiments** **16**
B.1 Zero-Padding, Symbol Wrapping, and Testing . . . . . . . . . . . . . . . . . . 16
B.2 Low-Rank Matrix Completion . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
B.3 Supplementary Experiments on Addition . . . . . . . . . . . . . . . . . . . . . 18
B.4 The Importance of Intermediate Step Design . . . . . . . . . . . . . . . . . . . 19
B.5 The Effect of Noisy Inputs on Accuracy . . . . . . . . . . . . . . . . . . . . . . 21
B.6 Analyzing the results on Sine/Sqrt . . . . . . . . . . . . . . . . . . . . . . . . . 23
**C Extending to Longer Digit Addition** **24**
C.1 Training from Random Initialization . . . . . . . . . . . . . . . . . . . . . . . 24
C.2 Fine-Tuning from Pretrained Models . . . . . . . . . . . . . . . . . . . . . . . 25
C.3 Impact of Formats on Fine-Tuning . . . . . . . . . . . . . . . . . . . . . . . . 27
**D Teaching Arithmetic Operations Beyond Addition** **28**
D.1 Extended Arithmetic Operations . . . . . . . . . . . . . . . . . . . . . . . . . . 28
D.2 Jointly Training on All Five Arithmetic Tasks . . . . . . . . . . . . . . . . . . . 30
**E** **Mixing Text with Arithmetic Data** **32**
E.1 Few-Shot Prompting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
E.2 Disentangling the effect of text on prompting . . . . . . . . . . . . . . . . . . . 32
E.3 Prompting with Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
**F** **Fine-tuning, Scaling, and Pretraining in Larger Models** **36**
**G Token Efficiency Across Data Formats** **38**
**H Length Generalization** **40**
**I** **Experimental Setup** **43**
I.1 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
I.2 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
I.3 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
I.4 Hyperparameter Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . 47
**J** **Prompt Examples** **48**
J.1 Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
J.2 Subtraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
J.3 Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
J.4 Sine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
J.5 Square Root . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
J.6 Noisy Simple Scratchpad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
J.7 Example data for GPT-3 fine-tuning . . . . . . . . . . . . . . . . . . . . . . . . 53
-----
A PROOFS
Here, we present the proofs of Lemma 1 and 2.
**Lemma 1. Let A and B be two n-digit numbers, and let C = A + B. Suppose an algorithm A**
_outputs the digits of C in decreasing order of significance, then A must have access to all digits of A_
_and B starting from the first digit that it outputs._
_Proof. We begin by assuming for contradiction that there does exist an algorithm Algo that does_
not have access to all digits of A and B and still outputs C = A + B correctly for all n− digit
numbers A, B. Without loss of generality, say Algo does not have access to the k−th digit of A
where k ∈ [n] represents the position counting from the least significant digit. Then consider the
example B = (10[n] 1) and (A = 000 . . . Ak00 . . . 0) where B is just the integer with n 9’s and A
_−_
is just 0’s with Ak in the kth position. If Ak = 0, then Cn+1 = 0, but if Ak = 1, then Cn+1 = 1.
Therefore, without access to the k−th digit of A, there exist examples where the algorithm will surely
make a mistake. Therefore, by contradiction such an Algo cannot exist.
**Lemma 2. There exists an algorithm that computes C = A + B for two n-digit numbers A and B**
_and outputs its digits in increasing order of significance such that, at each position i, the algorithm_
_only requires access to the i[th]_ _digits of A and B, as well as the carry-on from the previous position._
_Proof. First note that the trivial algorithm for addition is exactly the proof of this Lemma. However,_
we present a more formal argument below for completeness. Let A, B be n−digit numbers and
_C = A + B be at most an (n + 1) digit number. Define the digits of A, B, and C as Ai, Bi, and Ci,_
respectively, for i ∈ [n] counting from the least significant digit once again. Then, the addition can be
performed using the following steps. First, Ci = (Ai + Bi + carryi) mod 10 where carryi is the
carry-on from the addition of digits at positionthen carryi = 0. The carry for the next position is then calculated as i − 1. If there is no carry from the previous position, carryi+1 = _Ai+Bi10+carryi_ .
j k
Putting this together, the algorithm for addition can be described as follows:
mod 10Step 1: Set and carry carry1i = 0+1 =. RepeatAi+Bi10 for+carry i =i {,1 Step 3:, . . ., n Output}: {Step 2: Ci} Compute Step 4: Output Ci = ( CAni+1 + = B carryi + carryn+1i.)
j k
It is easy to see that this algorithm computes the digits of the sum C correctly and requires only the
individual digits at position i and the carry from the previous position. Therefore, this algorithm
satisfies the conditions of the lemma.
-----
B ADDITIONAL EXPERIMENTS
B.1 ZERO-PADDING, SYMBOL WRAPPING, AND TESTING
**Zero-padding and Symbol wrapping** As discussed briefly in Section 3, we found a significant
benefit to using padding for multi-digit addition. Throughout our experiments, we use the plain
format without any such padding (denoted as “vanilla” below) as the default baseline representing the
conventional data format used in training. Nonetheless, we explore modifications to this plain format
to enhance performance; zero-padding, and wrapping with a single symbol. Zero-padding ensures a
fixed length for operands and the output. In the case of 3-digit addition, this means 3-digit operands
and a 4-digit output. For example, ‘112 + 29 = 141’ becomes ‘112 + 029 = 0141’. As shown in
Table 3. this modification significantly improves model performance. Next, we wrap each sample
using the ‘$’ symbol as in ’$112 + 29 = 141$’. We found this performs on par with zero-padding.
As a result, we adopt the ‘$’ symbol for efficient data delimiter, extending its use to the reverse format.
Figure 8 shows ‘$’-wrapping also enhances the performance of the reverse format. Despite the plain
format being improved with the ‘$’ delimiter, it remains short of the reverse format’s accuracy and
sample efficiency. We continue to maintain the original plain format as a baseline since it not only
exemplifies conventional data but further emphasizes the need for improved data formatting to ensure
efficient training. As such, for the reverse format, we have incorporated the ‘$’ delimiter in our
formatting modifications.
Table 3: Test accuracy of NanoGPT model on 3-digit addition trained on 10, 000 samples of plain format data,
comparing (i) vanilla format without modifications, (ii) Zero-padding format, and (iii) ‘$’-wrapped format.
The results show significant performance enhancement through zero-padding for fixed length and similar
improvements when deploying a single-symbol wrapping.
Vanilla Zero-pad ‘$’-Wrapped
88.17% 97.74% 97.76%
100
100
80
60
40
20
80
60
40
20
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
|||||reverse, reverse,|with $ wihout $||
||||||||
|||||plain, wi plain, wi|th $ hout $||
||||||||
||||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
|||||reverse reverse|, with $, wihout $||
||||||||
|||||plain, w plain, w|ith $ ihout $||
||||||||
||||||||
2000 4000 6000 8000 10000
reverse, with $
reverse, wihout $
plain, with $
plain, wihout $
Number of train examples
2000 4000 6000 8000 10000
reverse, with $
reverse, wihout $
plain, with $
plain, wihout $
Number of train examples
Figure 8: Performance of NanoGPT (left) and GPT-2 (right) models on 3-digit addition using plain and reverse
format, both with and without ‘$’ delimiter. The addition of the ‘$’ symbol noticeably enhances performance in
both formats. Nevertheless, the plain format underperforms compared to the reverse format, particularly in terms
of sample efficiency. While we maintain the original plain format as a baseline – emphasizing the necessity for
improved data formatting for efficient emergence – we incorporate the ‘$’ wrapping in our modified reverse
format.
-----
B.2 LOW-RANK MATRIX COMPLETION
In our Low-Rank Matrix Completion experiment for the addition matrix (which is of rank-2), we
employ an iterative algorithm proposed by Király et al. (2015). This algorithm systematically searches
for a 2 × 2 submatrix in which three entries are known and one entry is unknown. It then fills the
unknown entry to ensure that the determinant of the 2 _×_ 2 submatrix becomes zero, where the solution
is known to be optimal. We present the full pseudo-code in Algorithm 1.
To assess the performance of the algorithm, we generate n × n addition matrices for various values
of n (e.g., 20, 50, 100, 500). We vary the number of revealed entries, randomly sampling a sparse
matrix where only a specified number of entries between n and n × n are known, while the remaining
entries are set to zero. We repeat this process 100 times for each number of revealed entries, tracking
the algorithm’s success or failure in finding the solution. We calculate the average success rate across
the trials and present the success probabilities in Figure 3a, where we observe a sharp phase transition
when O(n) entries are observed, as expected.
**Algorithm 1: Iterative 2 × 2 Matrix Completion Algorithm**
**Data: Data Matrix M ∈** R[n][×][n] with partially revealed entries. Assumed to be of Rank 2.
**Result:** **_M ∈_** R[n][×][n], Success/Fail.
**12 n n12 ←** 10 represents number of resolved submatrices. represents number of unresolved submatrices.
_←_ [c]
**3** **_M ←_** **_M_**
**4 while n1 ≥** 1 do
[c] /* As long as we resolved at least one submatrix in the previous
iteration *[/]
**5** _n1_ 0
**6** _n2 ←_ 0
**7** **for ← i = 1 to n do**
**8** **for j = 1 to n do**
/* do something *[/]
**10** **if** **_Mi,j is not revealed and all its neighbors are revealed then_**
**11** **_Mi,j =_** **_Mi+1M,ji+1×,jM+1i,j+1_**
[c] c [c]
**12** _n1_ _n1 + 1c_
c ←
**13** **if** **_Mi+1,j is not revealed and all its neighbors are revealed then_**
**14** **_Mi+1,j =_** **_Mi,jM×Mi+1i+1,j_** _,j+1_
[c] c [c]
**15** _n1_ _n1 + 1_ c
c ←
**16** **if** **_Mi+1,j+1 is not revealed and all its neighbors are revealed then_**
**17** **_Mi+1,j+1 =_** **_Mi+1,jM×i,jMi,j+1_**
[c] c [c]
**18** _n1_ _n1 + 1_ c
c ←
**19** **if** **_Mi,j,_** **_Mi+1,j,_** **_Mi,j+1,_** **_Mi+1,j+1 are all revealed then_**
**20** **continue**
**21** **else[c]** [c] [c] [c]
**22** _n2 ←_ _n2 + 1_
**23 if n2 > 0 then**
**24** **return** **_M_**, Fail
**25 else**
**26** **return** **_M[c][c], Success_**
B.2.1 GENERALIZING TO UNSEEN DIGITS
Building upon the model’s robustness to excluded numbers, we further investigate its ability to handle
excluded digits, where a digit is absent from a particular ordinal position. Intuitively, this should
-----
be even more challenging since excluding a digit means the model cannot learn directly how to
operate in that position. Instead, it would have to generalize and infer that digits act similarly across
all positions. We construct datasets with the number 5 excluded in 1st (LSB), 2nd, and 3rd (MSB)
positions, and train separate models on each of these datasets. We compare the resulting models
by evaluating overall accuracy on a test set of 10, 000 randomly sampled numbers, as well as their
accuracy specifically on samples with 5 in each position which we call exclusion accuracy.
The results presented in Table 4 indicate that the model is not as robust to excluding digits compared
to excluding numbers. However, it still achieves more than 66% accuracy on every test and maintains
an overall accuracy above 85%. Moreover, it appears that excluding a number in the least significant
position yields the worst performance. This can be attributed to the fact that learning addition in this
position is transferable to other positions since it is unaffected by carry-on operations. Failing to
learn addition in this position, however, will have a detrimental impact on other positions as well.
Table 4: Impact of excluding digits on addition task: We investigate whether GPT-based models can infer
addition on an excluded digit in a specific position from training data on other positions. We compare NanoGPT
models trained with and without an excluded digit and find that excluding digits is harder to learn but not entirely
impossible, with the worst performance observed when excluding the least significant digit.
|Excluded position|Input format|“5” in the “5” in the “5” in the Overall Acc 1st (LSB) digit 2nd digit 3rd (MSB) digit|
|---|---|---|
|No exclusion 1st (LSB) digit 2nd digit 3rd (MSB) digit|Plain Reverse Plain Reverse Plain Reverse Plain Reverse|92.65%(±2.53) 92.58%(±2.93) 93.59%(±2.57) 95.16%(±1.23) 99.87%(±0.24) 99.89%(±0.20) 99.95%(±0.1) 99.97%(±0.04) 93.87%(±1.64) 81.8%(±7.01) 94.04%(±1.45) 94.22%(±1.98) 98.00%(±1.30) 90.32%(±6.64) 98.22%(±1.27) 97.86%(±1.54) 92.53%(±1.87) 90.46%(±2.64) 83.87%(±3.47) 94.80%(±0.90) 97.91%(±1.47) 98.13%(±1.28) 91.4%(±6.13) 98.93%(±0.60) 90.85%(±2.77) 89.23%(±3.04) 90.94%(±2.78) 85.88%(±2.45) 98.84%(±0.47) 98.7%(±0.54) 98.78%(±0.59) 94.47%(±1.99)|
|---|---|---|
B.3 SUPPLEMENTARY EXPERIMENTS ON ADDITION
B.3.1 COMMUTATIVITY IN ADDITION
We explore whether NanoGPT, trained with a plain data format on 10,000 training samples, captures
the commutativity property of addition. This inquiry involves testing the equation (A+B) = (B +A)
across 9,900 instances.
As shown in Table 5, out of these 9,900 test cases, 8,799 samples exhibit commutative behavior.
While a majority (8,700/8,799) of these instances are straightforward — the model correctly computes
both (A + B) and (B + A) — it is noteworthy that among 164 cases where both outcomes are
incorrect, 99 (60.3%) still preserve commutativity. This suggests a considerable, though imperfect,
understanding of commutativity post-full training.
Table 5: Commutativity analysis of NanoGPT on 3-digit addition, examining the validity of (A + _B) = (B +_ _A)_
for 9,900 test examples.
|P(A + B = B + A, both correct)|P(A + B and B + A both incorrect)|P(A + B = B + A | both incorrect)|
|---|---|---|
|0.878 (8,700 samples)|0.016 (164 samples)|0.603 (99/164 samples)|
|---|---|---|
B.3.2 ADDITION INVOLVING SIGNED NUMBERS
While our primary focus has been on adding positive numbers, we posit that these results can extend to
signed numbers, which effectively encompass both addition and subtraction operations. We conduct
a preliminary experiment, training the model on various signed combinations of two 3-digit numbers.
Figure 9 shows that training on signed numbers presents a more challenging task compared to training
on addition or subtraction on only positive numbers. Notably, test accuracy for addition involving
same-sign operands, (+, +) and (−, −), surpass those involving opposite-sign operands, (+, −) and
-----
(−, +). This observation aligns with the expectation that learning addition is relatively simpler than
learning subtraction.
Signed Addition with 20k examples
+,+ +,- -,+ -,
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
Sign pairs
100
80
60
40
20
100
95
90
85
80
75
70
65
60
2.5k 5k 7.5k 10k 12.5k 15k 17.5k 20k
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||
||||||||||||
||||||||||||
|||||||a|dditi|on|||
||||||||||||
|||||||s si|ubtr gne|action d add|ition||
||||||||||||
||||||||||||
||||||||||||
addition
subtraction
signed addition
Number of train examples
Figure 9: Performance on training NanoGPT on addition of signed numbers. (Left) Training on addition and
subtraction tasks using only positive numbers up to 3 digits, contrasted with the signed addition task involving
various sign combinations. (Right) Performance of the model trained on signed addition with 20,000 training
examples, across different sign combinations in the test data.
B.3.3 PERFORMANCE BASED ON DIFFERENT DIGIT LENGTHS
We analyze the performance of NanoGPT models by segregating the test accuracy based on the digit
count of operands. In Table 6 and Table 7, rows and columns correspond to the digit count in the first
and second operands, respectively.
+,+
L
+,
, rows and columns correspond to the digit count in the first
-,+
-,
Notably, with the plain format, performance decreases with fewer digits in the operands, and an
asymmetry based on operand lengths is observed. We speculate that this asymmetry arises because,
despite efforts to ensure balanced sampling, combinations of small digits, such as (1, 3)-digit or
(2, 1)-digit pairs, are underrepresented in our training dataset.
Table 6: Performance by digit length in plain format addition.
|Col1|1-digit 2-digit 3-digit|
|---|---|
|1-digit 2-digit 3-digit|72.8% 42.2% 77.0% 27.8% 60.0% 91.4% 38.6% 61.2% 97.0%|
|---|---|
In contrast, the reverse format result exhibits notably consistent performance across various digit
combinations, indicating near-perfect learning of addition by the model.
Table 7: Performance by digit length in reverse format addition.
|Col1|1-digit 2-digit 3-digit|
|---|---|
|1-digit 2-digit 3-digit|100.0% 99.8% 99.0% 96.4% 100.0% 100.0% 94.0% 99.4% 100.0%|
|---|---|
B.4 THE IMPORTANCE OF INTERMEDIATE STEP DESIGN
In this section, we underscore the significance of meticulously designing the intermediate steps in
_a Chain-of-Thought manner. Specifically, we investigate whether the enhanced sample efficiency_
of NanoGPT in detailed scratchpad format arises from its longer length or from the breakdown of
intermediate steps into simpler components.
-----
**Randomizing the intermediate steps** To discern the impact of length, we modify the intermediate
steps, replacing them with either a uniform token “#” or random tokens within the vocabulary (see
examples in Figure 10).
100
80
60
40
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||i i|vanilla nterm nterm|scra ediat ediat|tchpa e ste e ste|d ps - "# ps - ra|" ndom|
|||||||||||
0 2.5k 5k 7.5k 10k 12.5k 15k 17.5k 20k
vanilla scratchpad
intermediate steps - "#"
intermediate steps - random
Number of train examples
Figure 10: Replacing the intermediate steps with random characters. We ablate whether the sample efficiency
gain from the detailed scratchpad stems from the increased length in the format.
The results (Figure 10) indicate that the sample efficiency of the vanilla detailed scratchpad surpasses
the modified versions. This suggests that the advantage of detailed scratchpad format stems from the
breakdown into simpler functions rather than its increased length.
**Subtraction - two different intermediate step design** We further focus on the subtraction task and
conduct experiments to compare two different versions of the detailed scratchpad for this operation
(see examples in Figure 11). These trials shed light on the importance of decomposing the subtraction
task into simpler intermediate steps. Unlike addition, subtraction behaves differently depending on
whether the first operand (a) is greater than the second operand (b) or vice versa.
The first strategy (Version 1 in Figure 11) involves performing digit-wise subtraction starting from the
least significant bit (LSB) and considering borrows when necessary. However, this strategy produces
incorrect results when the first operand is smaller than the second operand. In such cases, we subtract
the number in the most significant bit (MSB) position multiplied by 10 to the power of (number
of digits in the output - 1) from the remaining digits in the output. An example illustrating this
approach is shown in Version 1, Case 2. Alternatively, we can adopt a more familiar strategy. If
the first operand is smaller than the second, we swap the operands and compute the negation of the
subtraction of the swapped operands: a − _b = −(b −_ _a) (referred to as Version 2)._
The results in Figure 12 indicate that Version 2, which involves comparing two operands, performs
considerably worse than Version 1. In Version 1, each intermediate step only requires the simpler
1-digit subtraction, along with addition in the final result processing step. Upon analyzing the failure
cases of Version 2, we observe that the majority of errors stem from incorrectly identifying which of
the two operands is larger, while the intermediate steps are handled correctly. This finding underscores
the significance of breaking down arithmetic operations into simpler intermediate steps. Unless
otherwise specified, we use Version 1 in all detailed scratchpad experiments.
-----
**Detailed scratchpad formatting for different arithmetic tasks**
Examples of two variations of detailed scratchpad formatting for subtraction, considering the scenario
where the first operand a is greater than the second operand b, and vice versa. In Version 1, a result
processing step is included in the final stage to handle negative outputs. In Version 2, the operands are
compared at the beginning, and if b is larger, their order is reversed.
**Prompt (Case 1. a −** _b ≥_ 0) :
Input:
367-128
Target:
**Version 1.** **Version 2.**
... ...
<scratch> <scratch>
[3,6,7] has 3 digits. [3,6,7] has 3 digits.
[1,2,8] has 3 digits. [1,2,8] has 3 digits.
[3,6,7] - [1,2,8], A=[], C=0, 7-8-0+10=9, 367>=128 # comparison of two operands
A->9, C->-1 [3,6,7] - [1,2,8], A=[], C=0, 7-8-0+10=9,
[3,6] - [1,2], A=[9], C=-1, 6-2-1=3, A->3 A->9, C->-1
, C->0 [3,6] - [1,2], A=[9], C=-1, 6-2-1=3, A->3
[3] - [1], A=[3,9], C=0, 3-1-0=2, A->2,, C->0
C->0 [3] - [1], A=[3,9], C=0, 3-1-0=2, A->2,
[] - [], A=[2,3,9] C->0
200+39=239, END # result processing [] - [], A=[2,3,9], END
</scratch> </scratch>
2 3 9 2 3 9
**Prompt (Case 2. a −** _b < 0) :_
Input:
128-367
Target:
**Version 1.** **Version 2.**
... ...
<scratch> <scratch>
[1,2,8] has 3 digits. [1,2,8] has 3 digits.
[3,6,7] has 3 digits. [3,6,7] has 3 digits.
[1,2,8] - [3,6,7], A=[], C=0, 8-7-0=1, A 128<367 : 128-367=-(367-128) # comparison
->1, C->0 [3,6,7] - [1,2,8], A=[], C=0, 7-8-0+10=9,
[1,2] - [3,6], A=[1], C=0, 2-6-0+10=6, A A->9, C->-1
->6, C->-1 [3,6] - [1,2], A=[9], C=-1, 6-2-1=3, A->3
[1] - [3], A=[6,1], C=-1, 1-3-1=-3, A->-3, C->0
, C->-1 [3] - [1], A=[3,9], C=0, 3-1-0=2, A->2,
[] - [], A=[-3,6,1] C->0
-300+61=-239, END # result processing [] - [], A=[2,3,9], END
</scratch> </scratch>
-2 3 9 -2 3 9
Figure 11: Two versions of detailed scratchpad formatting for subtraction.
B.5 THE EFFECT OF NOISY INPUTS ON ACCURACY
**Noisy intermediate steps in the scratchpad data.** We further investigate the significance of
providing accurate intermediate steps in the scratchpad during the training process. While this was
inspired by the findings of Min et al. (2022), it is inherently different. Min et al. (2022) show that
using random labels in ICL demonstrations caused minimal degradation when compared to the gold
labels. However, those models were trained on gold labels and then evaluated on multiple downstream
tasks. In our setting, the model is trained and evaluated on a single arithmetic task. Further, the
final result(or label) is left untouched as the correct answer to the arithmetic operation. We only
replace the intermediate steps. The goal of this study is to verify whether the model actually learns to
reason using the given intermediate steps or merely uses the scratchpad to improve its expressivity.
We compare the performance of training with our simplified scratchpad formatting, which includes
accurate A (digit sum) and C (carry) information, with formatting that includes random A, random
_C, or random A and C for each intermediate step, as depicted in Figure 1._
The results in Figure 13, demonstrate that the inclusion of noisy labels can impede sample efficiency.
However, with enough samples, the model ultimately achieves full accuracy. This suggests that while
-----
construction of intermediate steps on the model’s performance.
(a) Test accuracy on Addition
2000 4000 6000 8000 10000 12000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
|||||||A & C A|||
||||||correct random|A & C A|||
||||||random random|C A & C|||
||||||||||
Number of train examples
(b) Test accuracy on Subtraction
2000 4000 6000 8000 10000 12000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
||||||A & C A|||
||||c r|orrect andom|A & C A|||
||||r r|andom andom|C A & C|||
|||||||||
Number of train examples
100
80
60
40
20
prone to error propagation.
data and evaluate the accuracy of next-token predictions when the output sequence contains noise.
100.0
99.6% 100.0%
97.5
95.0
92.5
89.9%
90.0
87.5
est Accuracy (%)T 85.0
82.5
80.5%
80.0
Plain Reverse (DS)Ver.1 (DS)Ver.2
Data Formatting Methods
Figure 12: Comparison of performance among various data formatting approaches (plain, reverse, and two
versions of detailed scratchpad (DS)) for the subtraction task. The experiments were conducted on a NanoGPT
model trained on a dataset of 10,000 examples. Version 2, which incorporates operand comparison, exhibits
significantly lower performance compared to Version 1. This observation highlights the substantial impact of the
100
80
60
40
correct A & C
est Accuracy (%)
random A T 20
random C
random A & C
0
8000 10000 12000 0 2000
Figure 13: Comparison of training with simplified scratchpad formatting using correct A and C information with
formatting using random A/C and their effect on sample efficiency and accuracy. Results show that noisy labels
degrade sample efficiency, but with sufficient training data, the model eventually reaches full accuracy.
the model is capable of leveraging the information contained in the intermediate steps, it can also
gradually learn how to perform addition while disregarding the presence of noisy intermediate steps.
**Model robustness to noise in the auto-regressive output.**
robustness of models trained on plain or reverse formatted data (without noise) when exposed to
noise during an auto-regressive generation process. In particular, we aim to unravel how much the
-th output relies on the operands and preceding tokens in the addition result,
given that transformer models generate tokens sequentially in an autoregressive manner, making them
3-digit addition. We train models on either plain or reverse format
Specifically, in the plain format setting, we expect a well-performing model to generate the correct
output tokens O3, O2, O1 sequentially, where O3 = C3, O2 = C2, O1 = C1, and C3C2C1 represents
the correct answer. We consider two types of perturbation: (i) random perturbation, where we
modify the first two output tokens O3O2 to random numbers different from C3C2, and (ii) precise
perturbation, where we perturb only the second output token O2 by 1. The second case is particularly
relevant since a common error case is where the model misses a digit by 1. We provide the model with
an expression of the form “A3A2A1 + B3B2B1 = O3O2”, where O3O2 can be either (i) a random
incorrect number,generated by the model. A corresponding process is deployed for the reverse format, introducing a i.e., O3O2 ̸= C3C2, or (ii) O2 = C2 ± 1 mod 10, and observe the next token
noisy sequence to models trained on reverse format data.
-----
To evaluate the performance, we define two accuracy criteria for O1: exact accuracy, reckoning
O1 as accurate only when O1 = C1, and relaxed accuracy, considering O1 correct if it deviates
from the original output C1 by at most 1. In other words, C1 = O1, C1 = O1 + 1 mod 10 or
C1 = O1 − 1 mod 10.
Table 8: Prediction accuracy for the third digit output under different types of noise in the preceding output
tokens. Random perturbation, applies random flips whereas precise perturbation shifts the preceding output
tokens by 1. Relaxed accuracy, allows for a ±1 deviation from the true output whereas Exact accuracy is strict.
Reverse consistently outputs a number that is at most 1 different from the true output, even in the presence of
noise. The plain format has high exact accuracy in the presence of precise perturbation, as the noise in the output
token has a lower impact on predicting the next token, which is of lower significance. However, with completely
random noise, the plain format shows poor performance, suggesting a strong dependence on all digits. (See
Lemma 1 and 2).
|Perturbation Type|Random|Precise|
|---|---|---|
||Plain Reverse|Plain Reverse|
|Exact Acc Relaxed Acc|49.88% 81.26% 61.55% 100%|99.85% 90.47% 100% 100%|
|---|---|---|
The results presented in Table 8 reveal intriguing findings. We observe that the reverse format
consistently outputs a result that deviates by no more than 1 from the true answer, regardless
of whether the preceding outputs O3O2 are subjected to random or precise perturbation. This
consistency can be explained by Lemma 2, indicating that the reverse format only requires learning
a straightforward function of digit-wise addition for each corresponding position, along with the
carry-on (0 or 1). Therefore, even with noise in the preceding tokens, the model accurately performs
digit-wise addition, albeit with occasional carry-on prediction errors. With an exact accuracy of
81.26% even in the presence of random perturbation, the reverse format demonstrates the model’s
ability to rely less on the preceding output tokens, indicating a robust learned output mapping.
On the contrary, models using the plain format have to decipher a more intricate function drawing
from all digits within the sequence, as described by Lemma 1. Given that in addition, carry operations
transition from right to left (i.e., least to most significant digit), the introduction of precise perturbation
on preceding output tokens, which possess higher significance, has a minor impact on the output
(which has less significance). As a result, models trained using the plain format attain an exact
accuracy rate of 99.85% and a relaxed accuracy of 100% for cases involving precise perturbation.
Interestingly, under purely random perturbation, the plain format struggles, leading to a reduced
relaxed accuracy of 61.55% and exact accuracy of 49.88%. This suggests that the output mapping
learned by the plain format is not merely a function of the two operands but rather enmeshed in
complex dependencies on preceding output tokens.
B.6 ANALYZING THE RESULTS ON SINE/SQRT
Since sine and sqrt are arguably more complicated functions than the remaining arithmetic tasks, we
decided to more carefully analyze their performance. As shown in Figure 14, sin shows excellent
performance across all data formats around sin(x) = 0. We conjecture that this is because sin(x) ≈ _x_
for x ≈ 0, which is easy to learn. We also note that accuracy once again improves close to ±1
potentially for similar reasons.
-----
(a) Test accuracy on Sine
(b) Test accuracy on Square root
Sqrt
Sin
1.00 0.75 0.50 0.250.00 0.25 0.50 0.75 1.00
|Col1|Col2|
|---|---|
|||
||plain, =0|
||ar, =0 plain, =5e-4|
||ar, =5e-4|
|||
true y value
100
80
60
40
20
100
80
60
40
20
1.0 1.5 2.0 2.5 3.0
|Col1|Col2|
|---|---|
|||
||plain, =0|
||CoT, =0 plain, =5e-4|
||CoT, =5e-4|
|||
plain, =0
CoT, =0
plain, =5e-4
CoT, =5e-4
true y value
Figure 14: Error analysis of sine and square root functions, considering varying error tolerance (eps) thresholds
to determine correct output. The sine function demonstrates excellent performance across all data formats,
particularly around sin(x) = 0, where sin(x) ≈ _x for x ≈_ 0. Additionally, we observe improved accuracy near
_±1._
C EXTENDING TO LONGER DIGIT ADDITION
In this section, we extend our experiments beyond 3-digit addition and explore longer-digit settings,
ranging up to 10 digits. Our aim is to investigate whether our previous findings regarding the sample
efficiency of reverse and scratchpad formats hold true for larger numbers of digits.
We begin by observing that the phase transition behavior observed in previous sections also applies to
longer-digit addition. Furthermore, we discover that the advantages of using reverse and scratchpad
formats become even more pronounced as the number of digits increases. Next, we examine the
number of training samples required to learn k + 1 digit addition when fine-tuning a pretrained
model trained on k digit addition. We find that while the number of samples needed to further learn
_k + 1 digit addition remains relatively consistent for reverse and scratchpad formats, the plain format_
requires an increasing number of samples.
**Experimental setup and data generation.** To explore the performance of the model in higher-digit
addition scenarios, we extend the experimental setup described in Section 3. We adopt a balanced
sampling approach for training data with D digits, ensuring an equal number d of all combinations of
digits for both operands as follows:
We begin by sampling all 100-digit additions. For the remaining number of digits, ranging from
2 to D, we generate addition examples of the form “A + B = C”. The two operands, A and B,
are randomly sampled d = ⌊(N − 100)/(D(D + 1)/2 − 1)⌋ times for every D, where N is the
total number of training examples. Operand A is sampled between [10[k][1][−][1], 10[k][1] _−_ 1] and operand
B is sampled between [10[k][2][−][1], 10[k][2] 1], for all 1 _k1_ _k2_ _D, excluding the case where_
_k1 = k2 = 1. After sampling the two operands, we randomly interchange them to cover cases where−_ _≤_ _≤_ _≤_
A has fewer digits than B and vice versa.
C.1 TRAINING FROM RANDOM INITIALIZATION
We repeat the experiment from Section 3 on nanoGPT with longer digits. The results shown in
Figure 15 demonstrate a similar behavior to the findings observed in Figure 4a for 3-digit addition.
This indicates that our previous observations generalize to longer sequence lengths. Notably, the
performance gap between the modified formats (reverse, simplified scratchpad, and detailed scratchpad) and the plain format becomes even more significant in the context of higher digits. While the
plain format requires an increasing number of training examples to learn higher-digit additions, the
reverse or scratchpad formats exhibit a more consistent requirement in terms of the number of training
examples.
This prompts us to explore the differences between each format in a fine-tuning setting. Specifically,
we ask whether a model trained on reverse or scratchpad-formatted k digit addition data would find it
easier to learn k + 1 digit addition compared to a model trained with plain format addition.
-----
(a) 5-digit Addition
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||pl re|ain verse|
||||si d|mplified s etailed scr|
25k 50k 75k 100k
plain
reverse
simplified scratchpad
detailed scratchpad
Number of train examples
(b) 7-digit Addition
|Col1|Col2|Col3|Col4|
|---|---|---|---|
|||||
|||||
|||||
||pl re|ain verse||
||si d|mplified s etailed sc|cratchpad ratchpad|
50k 100k 150k 200k
plain
reverse
simplified scratchpad
detailed scratchpad
Number of train examples
(c) 10-digit Addition
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
||||plain rever|se||
|||||||
|||||||
||||simpl detail|ified scr ed scrat|atchpad chpad|
|||||||
|||||||
100k 200k 300k 400k 500k
plain
reverse
simplified scratchpad
detailed scratchpad
Number of train examples
100
80
60
40
20
100
80
60
40
20
100
80
60
40
20
Figure 15: Comparison of sample efficiency for 5, 7 and 10-digit additions: performance of models trained with
varying numbers of addition samples on each data format. The plain format data requires an increasing number
of training examples for higher digits, while the number of samples required for other methods remains relatively
consistent.
C.2 FINE-TUNING FROM PRETRAINED MODELS
In this section, we investigate the generalization ability of transformer models, specifically focusing
on their capacity to learn higher-digit additions based on their knowledge of lower-digit additions.
Additionally, we explore how the choice of data format affects the number of samples required to
learn higher-digit additions.
**Forgetting of k-digit addition when trained on k + 1-digit addition.**
We begin by fine-tuning a model that was initially trained on 3-digit addition. We fine-tune this
model using 4-digit addition training data, with each data format being used separately. To mitigate
the “catastrophic forgetting” phenomenon, we experiment with different learning rates, gradually
reducing the magnitude. We continue this process until the learning rate becomes too small for the
model to effectively learn 4-digit addition.
plain (lr=1e-4) simple (lr=1e-5)
reverse (lr=5e-6) detailed (lr=1e-6)
1001.0 1-digit accuracy 2-digit accuracy
500.8
0.60
3-digit accuracy 4-digit accuracy
1000.4
est Accuracy (%)T 500.2
0.00
0.00 50000.2 100000.4 0 0.6 50000.8 100001.0
Number of Iterations
Figure 16: Accuracy of 1 to 4-digit additions during fine-tuning of a pretrained model on 3-digit additions using
different data formats. The model is fine-tuned using only 4-digit addition data with corresponding formats. We
observe that the plain format ‘forgets’ 1 to 3-digit additions entirely when learning 4-digit addition. In contrast,
the detailed scratchpad method successfully learns 4-digit addition while maintaining high performance on 1 to
3-digit additions.
The results depicted in Figure 16 reveal interesting insights about the fine-tuning process. When
training the model using the plain format with only 4-digit addition data, there is an immediate drop
in accuracy for 1 to 3 digit additions. This indicates that the model experiences significant forgetting
of previously learned additions. In contrast, the reverse and scratchpad methods exhibit a more
favorable behavior. The model trained with these methods does not completely forget 1 or 2 digit
additions while learning 4-digit addition. Remarkably, the detailed scratchpad method stands out by
enabling the model to learn 4-digit addition without compromising its performance on 1 to 3 digit
-----
additions. Although there is a slight decrease in performance for 3-digit additions initially, the model
quickly recovers and picks up the knowledge again as it trains on 4-digit additions.
This result can be explained by the hypothesis that learning a k + 1 digit addition from a k-digit
model is an incremental process for the detailed scratchpad method. The model already has a solid
foundation in understanding the intermediate steps involved in addition, so it only needs to adapt to
longer sequences. In contrast, for the plain format, learning higher-digit additions requires the model
to establish new mappings to generate correct outputs, which is a more challenging task.
**Sample efficiency of fine-tuning k-digit models with k + 1-digit examples.** Building upon our
previous findings that fine-tuning a model solely on k +1-digit addition leads to a loss in performance
for k-digit addition, we modify our approach to prevent the loss of performance in the k-digit addition
task. Instead of training solely on k + 1-digit examples, we construct a dataset that includes all
addition tasks from 1-digit to k + 1-digit, with the method described in the previous section. By
doing so, we aim to maintain the performance of 1 to k-digit addition while enabling the model to
learn k + 1-digit addition during fine-tuning.
In this experiment, we investigate the number of k + 1-digit training examples required for the model
to effectively learn k + 1-digit addition when fine-tuning a pretrained model on k-digit addition. It is
important to note that this setting differs from the previous section (Section C.1), where we focused
on training models from random initialization. Here, we specifically focus on the fine-tuning process.
We fine-tune individual models pretrained on each data format (using k-digit addition) and further
train them using the same data format on a new dataset that includes all addition examples from
1-digit to k + 1-digit.
(a) Plain
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
||||||||3-digit 4-digit|
|||||||||
||||||||6-digit 8-digit|
|||||||||
|||||||||
0 10000 20000 30000 40000 50000 60000
3-digit
4-digit
6-digit
8-digit
Number of k+1 digit train examples
(b) Reverse
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
||||||-digit -digit|
|||||3 4||
|||||6 8|-digit -digit|
1000 2000 3000 4000 5000 6000
Number of k+1 digit train examples
100
80
60
40
20
100
80
60
40
20
(c) Simplified Scratchpad
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||3-digit 4-digit||
||||||||
||||||||
||||||6-digit 8-digit||
||||||||
||||||||
1000 2000 3000 4000 5000
3-digit
4-digit
6-digit
8-digit
Number of k+1 digit train examples
(d) Detailed Scratchpad
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|||||||||
||||||||3-digit 4-digit|
|||||||||
||||||||6-digit 8-digit|
|||||||||
|||||||||
0 500 1000 1500 2000 2500 3000
3-digit
4-digit
6-digit
8-digit
Number of k+1 digit train examples
100
80
60
40
20
100
80
60
40
20
Figure 17: Fine-tuning performance of pretrained k-digit models using varying numbers of k + 1-digit examples,
with corresponding data formats. The plain format requires an increasing number of k + 1-digit examples as the
number of digits (k + 1) increases. In contrast, the modified formats (reverse, scratchpad) exhibit consistent
performance across different numbers of digits, requiring a relatively consistent number of examples to learn the
additional digit.
The results in Figure 17 demonstrate the number of k + 1-digit addition samples required for a
pretrained model capable of performing k-digit addition to learn the addition of k + 1 digits. The
findings reveal that modified formats (reverse, scratchpad) require a relatively small number of
samples (between 1000 and 5000) to learn the addition of an extra digit. In contrast, the plain format
-----
necessitates a significantly larger number of training examples, with the requirement increasing as
the number of digits grows.
This observation aligns with our previously established Lemma 2 and Lemma 1, which suggest that
learning higher-digit addition in the reverse format involves processing the i-th digit of the operands
and carrying from the previous position. This operation remains consistent regardless of the number
_of digits being added. As a result, the model primarily needs to learn how to handle longer digits to_
perform addition effectively.
In contrast, the plain addition format requires the model to learn a more complex function that
incorporates all digits from both operands. As the number of digits increases, the complexity of
this function grows as well. This highlights the greater difficulty faced by the plain format in
accommodating additions with a larger number of digits.
C.3 IMPACT OF FORMATS ON FINE-TUNING
We delve deeper into the impact of different formats on the fine-tuning process. Specifically, we
investigate whether training a model in one format helps in learning addition in another format,
and vice versa. To conduct this analysis, we begin with a model trained on each data format using
3-digit addition examples. We then individually fine-tune these pretrained models using different
data formats, on 4-digit addition examples.
The results depicted in Figure 18 highlight some reverse detailed
interesting findings. Firstly, we observe that a
model trained with the same format as the fine- 1001.0 finetuning with plain finetuning with reverse
tuning format exhibits faster learning in terms
of the number of iterations. For instance, train- 0.8
ing a model with the plain format outperforms
training a model pretrained with scratchpad for- 0.60
mats. This suggests that the model benefits from
the consistency and familiarity provided by the 1000.4
same format throughout the training process.
Additionally, we notice that fine-tuning a de- est Accuracy (%)T 500.2
tailed scratchpad pretrained model on other formats proves to be more challenging. This ob- 0.00
model to “unlearn” the intricacies of the ver- Number of Iterations
plain simple random init
reverse detailed
1001.0 finetuning with plain finetuning with reverse
500.8
0.60
finetuning with simple finetuning with detailed
1000.4
est Accuracy (%)T 500.2
0.00
0.00 50000.2 100000.4 0 0.6 50000.8 100001.0
Number of Iterations
bose detailed scratchpad format and adapt to
Figure 18: Performance of fine-tuning a 3-digit model
the new format. For example, the plain format
trained on different data formats (plain, reverse, simple
does not involve the use of alphabet characters scratchpad, detailed scratchpad, and random initializain the data, so a model pretrained with the plain tion) individually with different data formats of 4-digit
format would have a low probability of gener- addition. The results demonstrate that fine-tuning yields
ating alphabetic outputs. In contrast, a detailed the best performance when the pretrained model and the
scratchpad pretrained model would have encoun- fine-tuning format are consistent. Notably, fine-tuning
tered various alphabets and may have a tendency a detailed scratchpad format model shows suboptimal
to output them. Therefore, adjusting to a new performance. We hypothesize that this is due to the need
for the model to “unlearn” the rigid and verbose format
format requires additional effort for the model
and adapt to the new format.
to “unlearn” the patterns specific to the previous
format and effectively learn the new format it is being trained on.
These findings highlight the importance of considering format consistency during the fine-tuning
process, as it can impact the efficiency and effectiveness of the learning process. We will delve further
into this topic in the upcoming section 8, where we fine-tune pretrained GPT-3 models. Notably, we
observe that fine-tuning with reverse or simplified scratchpad formats actually yields worse results
compared to fine-tuning with plain formats. For a detailed exploration of these observations, please
refer to the forthcoming section.
-----
D TEACHING ARITHMETIC OPERATIONS BEYOND ADDITION
While this study has a primary focus on the addition operation and aims to comprehend the significance of data sampling and formatting, its findings are applicable beyond the realm of addition alone.
In this section, we expand our examination to include other arithmetic operations, thus demonstrating
the broader applicability of our insights. We consider a mix of arithmetic tasks, including binary
operations like subtraction and multiplication, and unary operations such as sine and square root.
Each operation entails its unique challenges and intricacies. For instance, subtraction introduces the
concept of negative numbers, multiplication can generate significantly longer outputs, and sine and
square root functions entail computations involving floating-point numbers, which are considered up
to four digits of precision in our work.
We acknowledge that while our examination is detailed, it does not encompass all the fundamental
arithmetic operations or the entire scope of floating-point arithmetic. Specifically, our focus is primarily on integer arithmetic for binary operations, considering a limited length of digits. Additionally,
for unary operations, we confine ourselves to a restricted number of digits below the decimal point.
In Section D.1, we delve into each arithmetic operation individually, exploring the impact of data
formatting and determining the relevancy of our insights across disparate tasks. Further, in Section D.2,
we perform an analysis of joint training across all five tasks, investigating the potential performance
implications for each individual task.
D.1 EXTENDED ARITHMETIC OPERATIONS
In order to extend our analysis to arithmetic operations beyond addition, we consider the following
tasks:
**Subtraction (−).** We consider subtraction of positive numbers up to 3 digits, written as
Areverse formatting. As with addition, scratchpad-based methods (iii, iv), present the intermediate steps3A2A1 − B3B2B1 = C3C2C1 in (i) plain formatting, and $A3A2A1 − B3B2B1 = C1C2C3$ in (ii)
of digit-wise subtraction and handling of carry-ons. These steps proceed from the least significant
bit (LSB) to the most significant bit (MSB). If the final result after computing all the digit-wise
subtractions is negative, we subtract the number in the most significant bit (MSB) position multiplied
by 10 to the power of (number of digits in the output - 1) from the remaining digits in the output. In
Section B.4, we present an alternative version of the detailed scratchpad formatting for subtraction.
**Multiplication (×).** We consider multiplication of positive numbers up to 2-digits. (i) Plain
formatting examples are formatted asformatted asintermediate step by conducting a series of multiplications between the first operand and each digit $A2A1 ∗ B2B1 = C1C2C3 AC24A$. The (iv) detailed scratchpad method simplifies each1 ∗ B2B1 = C4C3C2C1, while (ii) reverse formatting is
of the second operand, starting from the least significant bit (LSB) and moving toward the most
significant bit (MSB). For each step, we multiply the result by an exponentiation of 10 corresponding
to the relative digit position.
**Sine (sin ).** We consider decimal numbers within the range [−π/2, π/2], truncated to 4-digit
precision. (i) Plain formatting examples are formatted as sin(A0.A1A2A3A4) = B0.B1B2B3B4.
For (iv) detailed scratchpad method, we include the Taylor series expansion steps for sine, which
is represented as sin(x) = x − 3![1] _[x][3][ +][ 1]5!_ _[x][5][ −]_ 7![1] _[x][7][ +][ · · ·][ . These intermediate steps involve]_
exponentiation, which may not be any easier to compute than the sine operation itself.
**Square Root ([√]).** We consider decimal numbers within [1, 10), truncated to 4-digits of precision
with the format, written as sqrt(A0.A1A2A3A4) = B0.B1B2B3B4 for (i) plain formatting. For (iv)
detailed scratchpad method, we enumerate each step of Newton’s method to compute the square root
function. The iterative formula is given by xn = 2[1] [(][x][n][−][1][ +] _xnx−1_ [)][, where][ x][0][ is initialized as the]
floor of the square root value of the operand x. These intermediate steps involve a division operation,
which can be as complex as the square root operation itself.
For evaluation of sine and square root, we classify the result ˆyi as correct if the absolute difference
between ˆyi and the ground truth value yi is less than or equal to a predefined threshold ϵ ≥ 0.
-----
For each arithmetic task, we explore both the plain format and the detailed scratchpad format. The
detailed scratchpad formatting for each task is illustrated in Figure 19 and Appendix J. For subtraction,
the process involves breaking down the operation into intermediate steps of digit-wise subtraction,
including carry-ons when necessary. Unlike addition, subtraction requires an additional step to
handle cases where the first operand is smaller than the second. Further details on the detailed
scratchpad for subtraction can be found in Section B.4. For multiplication, each intermediate step
carries out a 2-digit × 1-digit multiplication between the first operand and each separate digit of the
second operand. For sine and square root, we utilize a sequence of iterative approximations instead
of algorithmic explanations. Specifically, Taylor’s series expansion steps for sine and Newton’s
method steps for square root are used. It is important to note that while addition, subtraction, and
multiplication are broken down into simpler operations at each step, CoT for sine and square root
functions requires intermediate steps involving operations like exponentiation or division, which
might not be inherently simpler.
**Detailed scratchpad formatting for different arithmetic tasks**
Examples of detailed scratchpad formatting for different arithmetic
tasks:
(1) Subtraction - includes borrows for intermediate steps, (2)
Multiplication - decomposes the second operand for 2-digit × 1-digit
multiplication at each step, (3) Sine - utilizes Taylor series
expansion, and (4) Square root - employs Newton’s method.
**Subtraction** **Sine**
Input: Input:
128-367 sin(1.5707)
Target: Target:
<scratch> <scratch>
[1,2,8] has 3 digits. x_0=1.5707
[3,6,7] has 3 digits. x_1: x_0 - 1/3! * (x^3), x_1=0.9247
[1,2,8] - [3,6,7], A=[], C=0, 8-7-0=1, x_2: x_1 + 1/5! * (x^5), x_2=1.0043
A->1, C->0 x_3: x_2 - 1/7! * (x^7), x_3=0.9996
[1,2] - [3,6], A=[1], C=0, 2-6-0+10=6, x_4: x_3 + 1/9! * (x^9), x_4=0.9997, END
A->6, C->-1 </scratch>
[1] - [3], A=[6,1], C=-1, 1-3-1=-3, A 0.9997
->-3, C->-1
[] - [], A=[-3,6,1]
-300+61=-239, END
</scratch>
-2 3 9
**Multiplication** **Sqrt**
Input: Input:
12*36 sqrt(2.7174)
Target: Target:
<scratch> <scratch>
[1,2] has 2 digits. x_0=1
[3,6] has 2 digits. x_1: 1/2*(1+2.7175/1)=1.8587, x_1=1.8587
[1,2] * 6, A=[7,2], k=1, B=[7,2], C x_2: 1/2*(1.8587+2.7175/1.8587)=1.6603, x_2
=0+72=72 =1.6603
[1,2] * 3, A=[3,6], k=10, B=[3,6,0], C x_3: 1/2*(1.6603+2.7175/1.6603)=1.6485, x_3
=72+360=432, END =1.6485
</scratch> x_4: 1/2*(1.6485+2.7175/1.6485)=1.6484, x_4
4 3 2 =1.6484, END
</scratch>
0.6484
Figure 19: Examples of the detailed scratchpad format for different arithmetic tasks such as subtraction, sine,
multiplication, and square root.
The results depicted in Figure 20 indicate that similar to the findings of addition, the detailed
scratchpad format significantly improves performance over plain or reverse formats and yields
efficient results even with few samples for subtraction and multiplication tasks. Interestingly, we
find reverse is not particularly effective in multiplication. On the other hand, the detailed scratchpad
format exhibits reduced efficiency for sin and compared to other operations (+, _,_ ). This
_[√]_ _−_ _×_
-----
(a) Subtraction
(b) Multiplication
100
80
60
40
20
100
80
60
40
20
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
||||||pla|in|||
||||||re de|verse tailed|scrat|chpad|
||||||||||
||||||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
|||||plain rever detai|se led scr|atchpad|
||||||||
||||||||
||||||||
||||||||
2.5k 5k 7.5k 10k 12.5k 15k 17.5k 20k
plain
reverse
detailed scratchpad
Number of train examples
(c) Sine
1000 2000 3000 4000 5000 6000 7000
plain
reverse
detailed scratchpad
Number of train examples
(d) Square Root
100
80
60
40
20
100
80
60
40
20
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
|||||||plain plain|, =0, =5|e-4|
||||||||||
|||||||plain detai|, =5 led,|e-3 =0|
||||||||||
||||||||||
|||||||detai detai|led, led,|=5e-4 =5e-3|
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
||||||plain plain|, =0, =5|e-4|
|||||||||
||||||plain detai|, =5 led,|e-3 =0|
|||||||||
|||||||||
||||||detai detai|led, led,|=5e-4 =5e-3|
5k 10k 15k 20k 25k 30k 35k 40k
plain, =0
plain, =5e-4
plain, =5e-3
detailed, =0
detailed, =5e-4
detailed, =5e-3
Number of train examples
5k 10k 15k 20k 25k 30k 35k 40k
plain, =0
plain, =5e-4
plain, =5e-3
detailed, =0
detailed, =5e-4
detailed, =5e-3
Number of train examples
Figure 20: Performance of 3−digit subtraction, 2−digit multiplication, 4−digit precision sine and square
root with varying data formats. As with addition, reverse always produces improved sample complexity and
performance for all operations. For sine and square root, scratchpad formatting provides limited improvement.
This discrepancy can be attributed to the complexity of the intermediate steps involved in the detailed scratchpad.
discrepancy can be traced back to the complexity of the intermediate steps involved in the detailed
scratchpad. While addition, subtraction, and multiplication are decomposed into simpler functions,
sine and square root operations involve more intricate operations. For a broader analysis of the error
profile, see Appendix B.6.
D.2 JOINTLY TRAINING ON ALL FIVE ARITHMETIC TASKS
So far, we only considered the problem of learning different arithmetic operations individually. In
this section, we study the effect of jointly training on all five arithmetic tasks - addition, subtraction, multiplication, sine, and square root. We construct a single train dataset incorporating all
_√_
task Dtrain = {Dtrain[+] _[,][ D]train[−]_ _[,][ D]train[×]_ _[,][ D]train[sin]_ _[,][ D]train[}][, and randomize the sequence of tasks in our train]_
samples. For example, a randomly chosen segment of the training data may exhibit a task order
such as (+, −, sin .−, ×, ×, _[√], ...). We consider 10, 000 training examples for each task of addition,_
subtraction, sine, and square root and 3, 000 for multiplication.
The model’s performance, after training on our joint dataset Dtrain, is evaluated in both zero-shot and
few-shot settings. These results are also compared with the performance of models that were trained
_√_
separately on each dataset (Dtrain[+] _[,][ D]train[−]_ _[,][ D]train[×]_ _[,][ D]train[sin]_ _[,][ D]train[)][, identical to those used to construct]_
_Dtrain. In the few-shot setting, each task is given examples from any of the five arithmetic tasks (not_
necessarily related to the test task under consideration) or prompt texts, followed by test queries
specific to the task of interest. For further details on the few-shot prompting methods used, please
refer to Section E.
Table 9 shows that joint training significantly enhances the zero-shot performance for multiplication
and square root tasks, yet it slightly reduces the performance for subtraction. Generally, few-shot
prompting exhibits improved performance. Notably, the performance of few-shot prompting remains
consistent regardless of whether the exemplars provided are from unrelated tasks or are task-specific.
We propose that this consistency is due to our randomized task sequence during training, which
-----
presents the model with numerous instances where one task directly follows another, thus simulating
few-shot prompting with different tasks. Furthermore, we observe that text prompting performs
similar to zero-shot. We conjecture that this is because the training data does not include text data
and the model has never encountered text and therefore, text prompting serves as a random prefix
attached to our test query.
Table 9: Performance of models trained individually and jointly on five arithmetic tasks. The threshold ϵ for sin
and functions is set to 0. For the models trained jointly on all five tasks, we evaluate their performance in
_[√]_
both a zero-shot setting and a few-shot setting. In the few-shot setting, each task is presented with exemplars
from one of the five arithmetic tasks or prompted with text, followed by task-specific test queries. The results
show that few-shot prompting with any arithmetic operators (even unrelated to the test task) generally improves
performance. However, text prompting shows performance similar to the zero-shot setting.
|Col1|Trained on individual task|Trained jointly on all 5 tasks|
|---|---|---|
|||Few-shot exemplar format Zero-shot + – sin sqrt text ×|
|+ – × sin sqrt|84.06 79.97 4.58 35.03 19.85|87.96 96.45 96.90 96.92 97.06 97.01 88.71 72.83s 81.28 79.59 81.39 81.84 81.74 68.91 14.28 18.86 18.96 15.43 19.20 19.59 15.48 34.74 34.35 34.31 34.34 32.64 33.42 33.96 27.37 26.65 26.74 26.70 25.60 25.61 26.02|
|---|---|---|
-----
E MIXING TEXT WITH ARITHMETIC DATA
Until now, our focus was primarily on models trained exclusively on arithmetic tasks. However,
in practice, large language models (LLMs) utilize a combination of arithmetic and text data for
training. In this section, we broaden our scope by incorporating both addition samples and text into
our pretraining data. We then evaluate the trained models with various few-shot prompts to analyze if
the model is able to effectively identify the correct context.
**Experimental Setup.** We mix addition and text data in our experiment using the Shakespeare
dataset (Karpathy, 2015) that includes 1, 115, 394 tokens of text, 10, 000 plain addition examples
(120, 027 tokens) without the $ delimiter, and 3, 000 detailed scratchpad formatted addition examples
(813, 510 tokens). We fix the number of detailed scratchpad examples and plain addition examples
(3, 000 and 10, 000 respectively) while varying the number of each example type in the training process. The Shakespeare text is segmented into dialogue chunks, with a random number of consecutive
plain addition data and detailed scratchpad data inserted between them. We use a character-level
tokenizer with a vocabulary size of 80, containing all characters present in the dataset, including
alphabets, digits, and certain symbols like +, = and \n.
E.1 FEW-SHOT PROMPTING
Given the mixed nature (arithmetic and text) of our dataset, introducing relevant examples seems an
effective strategy to prime the model to generate the desired type of output. To assess the performance
of such few-shot (1/2/3−shot) prompting, we provide task-specific exemplars as illustrated in
Figure 21. Plain addition formatted exemplars are used for testing plain addition inputs, while
detailed scratchpad formatted exemplars are utilized for assessing performance on detailed scratchpad
formatted inputs. Additionally, we experiment with demonstrating text (see Appendix E.3. for details)
before querying addition (which we denote, Text-prompt). For each 1/2/3-shot and text prompting,
average performance is reported over a fixed set of exemplars. Standard deviations of these prompts
are denoted by shaded areas in the plots. The term “few-shot” refers to the reported mean of all
1/2/3-shot prompting results.
Figure 21: Few-shot prompting method. Few-shot prompting performance is evaluated by presenting relevant
exemplars of addition and detailed scratchpad formatted inputs. Each 1/2/3-shot prompting is tested on a fixed
five set of exemplars, and the accuracy is averaged over these evaluations.
Figure 22 shows that few-shot prompting directs the enhancement of performance, thereby allowing
plain addition to perform almost perfectly with 40,000 train samples. Intriguingly, performance
remains high on plain addition even with the inclusion of a text prompt, given a substantial number
of addition examples. We hypothesize that this is due to the structure of our mixed dataset where
addition examples are interspersed within Shakespeare data. With the incorporation of more addition
examples, instances where addition examples directly follow Shakespeare text increase, leading to a
decrease in potential inconsistencies when text content is present during addition test queries.
E.2 DISENTANGLING THE EFFECT OF TEXT ON PROMPTING
To disentangle the effects of the textual content in the training data, we train a model strictly on plain
addition, utilizing an enlarged vocabulary that also includes alphabet characters, thereby enabling text
-----
(a) Test accuracy on plain addition
100
80
60
40
(b) Test accuracy on detailed scratchpad
100
80
60
40
20
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
|||||||Zero-s|hot||
|||||||1-shot 2-shot|||
|||||||3-shot text_p|rompt||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
|||||Ze 1-|||
||||||ro-shot shot||
|||||2- 3-|shot shot||
|||||te|xt_prompt||
||||||||
||||||||
||||||||
5k 10k 15k 20k 25k 30k 35k 40k
Zero-shot
1-shot
2-shot
3-shot
text_prompt
Number of Addition Samples
1000 2000 3000 4000 5000
Zero-shot
1-shot
2-shot
3-shot
text_prompt
Number of Detailed Scratchpad Samples
Figure 22: Performance of NanoGPT model trained with the Shakespeare dataset, addition dataset in plain, and
detailed scratchpad format. The number of plain (left) and detailed scratchpad (right) formatted addition samples
are varied. Performance is evaluated on zero-shot, few-shot, and text prompts, with the shaded area representing
the standard deviation across various prompt exemplar sets. The results indicate a consistent enhancement in
model performance using few-shot prompting.
prompting. (Note that previous experimental settings on plain formatted additions used a vocabulary
size of 13, which only includes 10 numerals and 3 symbols - “+”,“=”,“\n”). We introduce a variant
of few-shot prompting, termed as noisy-prompt, which prompts the model with erroneous addition
exemplars, i.e.,, A + B = C, where C ̸= A + B.
100
80
60
40
20
0
0 5k 10k 15k 20k 25k 30k 35k 40k
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||
||||||||||||
||||||||Ze 1-|ro-sh shot|ot||
||||||||||||
||||||||2- 3-|shot shot|||
||||||||||||
||||||||||||
||||||||N Te|oisy-p xt-pro|romp mpt|t|
||||||||||||
||||||||||||
Number of Addition Samples
Figure 23: Performance of NanoGPT model trained exclusively on plain addition, but with an extended
vocabulary including both addition and alphabets (vocabulary size = 80). Few-shot prompting, using both correct
addition examples (1, 2, 3-shot) and incorrect addition examples (noisy-prompt) leads to enhanced performance,
while the use of text prompts results in a degradation of performance when the model is trained solely on
addition.
Figure 23 shows that few-shot prompting contributes to performance enhancement even when the
model is confined to training on a single plain addition task. Even in the presence of noisy prompting,
simply providing the model with the A + B = C format yields performance nearly identical to fewshot prompting, aligning with the result observed by Min et al. (2022). Conversely, we notice that
text prompts negatively influence performance when the model is trained only on addition. This
finding reinforces our earlier observation in Figure 6 that the advantageous impact of text prompts
originates from the combined text and addition data.
E.3 PROMPTING WITH TEXT
To extend on the few-shot prompting experiments from Section D.2, we also evaluate the effect of
prompting the model with pure-text prompts. If few-shot prompting with addition samples improves
accuracy through in-context learning, we expect few-shot prompting with text to hurt accuracy since
the text exemplars are out-of-context. We use five different types of text exemplars: (i) Prompt1: a
short text prompt that is not present in the Shakespeare dataset, (ii) Prompt2: a short text prompt
extracted from within Shakespeare dataset, (iii) Prompt3: a longer form text prompt extracted from
within the Shakespeare dataset, (iv) Prompt4: a prompt that includes numbers, and (v) Prompt5: a
-----
long text prompt that is not present in the Shakespeare dataset. More details on the text prompts can
be found in Figure 24.
**Text prompts for few-shot experiments**
Examples of the different text prompts used in the few-shot
experiment. Each exemplar is separated by ‘---’.
**Prompt 1. Short, /∈** **Shakespeare** **Prompt 3. Long, ∈** **Shakespeare**
et tu brute JULIET:
--- Romeo!
hello, world ROMEO:
--- My dear?
how are you doing? ---
--- MENENIUS:
agi is coming This is good news:
--- I will go meet the ladies. This Volumnia
boom! stability Is worth of consuls, senators, patricians,
---
**Prompt 2. Short,** **Shakespeare** LADY ANNE:
_∈_ Foul devil, for God's sake, hence, and trouble us
JULIET: not;
Romeo! For thou hast made the happy earth thy hell,
--- Fill'd it with cursing cries and deep exclaims.
All: ---
Resolved. resolved. BUCKINGHAM:
--- I fear he will.
VOLUMNIA: How now, Catesby, what says your lord?
Why, I pray you? ---
--- CATESBY:
CORIOLANUS: Bad news, my lord: Ely is fled to Richmond;
Nay! prithee, woman,-- And Buckingham, back'd with the hardy Welshmen,
--- Is in the field, and still his power increaseth.
MENENIUS:
I mean, thy general.
**Prompt 4. Has number, /∈** **Shakespeare**
I go 16-12
That's the code to my heart, ah
I go 1-6-1-2
Star
---
Like a river flows 17-23
Surely to the sea 15-22
Darling, so it goes 46-92
Some things are meant to be
---
I got my first real 6-string
Bought it at the five and dime
Played it 'til my fingers bled
Was the summer of '69
---
I think someday I might just 5-3-2-1 get a real job
I spent half of my life 1-2-3 in a bus or on a flight
I'm getting off 17-36-8-2 the road and in a real job
---
Every time that 27-67-29 I look in the mirror
All these lines on my 1-3-92-5 face getting clearer
The past 45-5-3 is gone
**Prompt 5. Long, /∈** **Shakespeare**
Is this the real life? Is this just fantasy? Caught in a landside, no escape from reality.
Open your eyes, look up to the skies and see.
I'm just a poor boy, I need no sympathy. Because I'm easy come, easy go,
Little high, little low,
Any way the wind blows doesn't really matter to me, to me.
---
It's my life
And it's now or never
I ain't gonna live forever
I just want to live while I'm alive
My heart is like an open highway
Like Frankie said, I did it my way
---
Destruction leads to a very rough road but it also breeds creation
-----
And earthquakes are to a girl's guitar, they're just another good vibration
And tidal waves couldn't save the world from Californication
--I want to stay
But I need to go
I want to be the best for you
But I just don't know what to do
'Cause baby, say I've cried for you
The time we have spent together
Riding through this English whether
--Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum mattis in leo vel
gravida.
Pellentesque libero elit, scelerisque varius vehicula a, hendrerit et tellus.
Proin convallis neque nisl, nec lobortis est scelerisque tincidunt.
Nunc venenatis auctor urna.
Class aptent taciti sociosqu ad litora torquent per conubia nostra.
Figure 24: Text prompt exemplars for few-shot experiments.
(a) NanoGPT, Test accuracy on plain addition
100
100
80
60
40
20
80
60
40
20
0
0 5k 10k 15k 20k 25k 30k 35k 40k
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||||||||
|||||||||Zer Pro|o-shot mpt1|
|||||||||||
|||||||||Pro Pro|mpt2 mpt3|
|||||||||||
|||||||||||
|||||||||Pro Pro|mpt4 mpt5|
Zero-shot
Prompt1
Prompt2
Prompt3
Prompt4
Prompt5
Number of Addition Samples
(c) NanoGPT, Test accuracy on detailed scratchpad
0
0 5k 10k 15k 20k 25k 30k 35k 40k
|b) GPT-2, Test accuracy on plain addition|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
||||||||||
||||||||Zer Pro|o-shot mpt1|
||||||||||
||||||||Pro Pro|mpt2 mpt3|
||||||||||
||||||||||
||||||||Pro Pro|mpt4 mpt5|
Zero-shot
Prompt1
Prompt2
Prompt3
Prompt4
Prompt5
Number of Addition Samples
(d) GPT-2, Test accuracy on detailed scratchpad
100
80
60
40
20
100
80
60
40
20
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
|||||||Zero-shot Prompt1|
|||||||Prompt2 Prompt3|
|||||||Prompt4 Prompt5|
||||||||
||||||||
||||||||
Zero-shot
Prompt1
Prompt2
Prompt3
Prompt4
Prompt5
Number of Detailed Scratchpad Samples
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
|||||||Zero-shot Prompt1 Prompt2|
|||||||Prompt3 Prompt4|
|||||||Prompt5|
Zero-shot
Prompt1
Prompt2
Prompt3
Prompt4
Prompt5
Number of Detailed Scratchpad Samples
Figure 25: Experiments on few-shot prompting with different text prompts: (i) Prompt1: short text not
in Shakespeare dataset (ii) Prompt2: short text within Shakespeare dataset (iii) Prompt3: long text within
Shakespeare dataset (iv) Prompt4: text with numbers (v) Prompt5: long text not in the Shakespeare dataset. Each
prompt (Prompt 1-5) consists of five distinct exemplars. The solid lines represent the mean performance across
the five exemplars, while the shaded area indicates the standard deviation. We observe that the effectiveness of
text prompts varies greatly depending on the exemplars used.
The results presented in Figure 25 show notable variations in evaluation accuracy for addition,
depending on the chosen text prompts. Longer text prompts (Prompt 5) typically result in a more
significant decline in performance. With the exception of NanoGPT trained on plain addition, the
result in Figure 26 indicates that employing text prompts followed by test addition queries tends to
have an adverse impact on the overall model performance, whereas incorporating relevant few-shot
exemplars (1/2/3-shot) is beneficial. This aligns well with our intuition on the benefits on in-context
learning.
-----
(a) NanoGPT, Test accuracy on plain addition
100
(b) GPT-2, Test accuracy on plain addition
100
80
60
40
20
80
60
40
20
0
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|||||||Zero-s|hot|
|||||||1-shot 2-shot||
|||||||3-shot text_p|rompt|
0 5k 10k 15k 20k 25k 30k 35k 40k
Zero-shot
1-shot
2-shot
3-shot
text_prompt
Number of Addition Samples
(c) NanoGPT, Test accuracy on detailed scratchpad
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|||||||Zero-s|hot|
|||||||1-shot 2-shot||
|||||||3-shot text_p|rompt|
0 5k 10k 15k 20k 25k 30k 35k 40k
Zero-shot
1-shot
2-shot
3-shot
text_prompt
Number of Addition Samples
(d) GPT-2, Test accuracy on detailed scratchpad
100
80
60
40
20
100
90
80
70
60
50
40
30
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||Ze|ro-shot|
|||||1- 2- 3-|shot shot shot|
|||||te|xt_prompt|
|||||||
|||||||
|||||||
|||||||
Zero-shot
1-shot
2-shot
3-shot
text_prompt
Number of Detailed Scratchpad Samples
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
|||||||
|||||Ze|ro-shot|
|||||1- 2-|shot shot|
|||||3- te|shot xt_prompt|
|||||||
Zero-shot
1-shot
2-shot
3-shot
text_prompt
Number of Detailed Scratchpad Samples
Figure 26: Performance of NanoGPT and GPT-2 model trained with entire Shakespeare dataset and a varying
number of samples of plain addition, and addition with detailed scratchpad dataset. Performance is evaluated on
test prompts formatted as plain addition and detailed scratchpad. Few-shot experiments are based on an average
of 5 exemplars, while text prompts involve an average of 25 exemplars. The shaded area represents the standard
deviation. Our observations indicate that few-shot prompting consistently improves performance, whereas test
prompts generally have a negative impact.
F FINE-TUNING, SCALING, AND PRETRAINING IN LARGER MODELS
This section focuses on bridging the gap between our experiments on NanoGPT and the more realistic
setting of larger language models like GPT-2 and GPT-3. We begin by comparing the performance of
NanoGPT and GPT-2 models when trained from random initialization. This comparison highlights
the improved performance achieved with the larger model scale, especially in the zero-shot setting.
Subsequently, we delve into the impact of tokenization methods and model pretraining in GPT-2
models. Our exploration reveals the crucial role of pretrained models and the consistent tokenization
of numbers (achieved by introducing spaces) during the training phase for arithmetic tasks. Building
on these findings, we proceed to fine-tune a pretrained GPT-3 model on various arithmetic tasks,
employing different data formats.
**Comparing NanoGPT and GPT-2.** To examine the impact of scale on arithmetic performance,
we explore a larger GPT-2 model with 85 million parameters, featuring twice as many self-attention
layers, heads, and embedding size compared to the previously used NanoGPT model. We train
the GPT-2 model from scratch using character-level tokenization, jointly on text and addition tasks,
adopting both plain and detailed scratchpad formats; an approach mirroring the setting in Section 7.
The results depicted in Figure 27 demonstrate that the larger model outperforms in both plain and
detailed scratchpad evaluations. For a comprehensive analysis of GPT-2, including few-shot learning
and the influence of text prompts, refer to Figure 25 and Figure 26.
**Going from character-level tokenization to BPE.** The transition to a GPT-2 setup necessitates
several modifications. Firstly, we shift to OpenAI’s Tiktoken BPE tokenizer, which is the default
tokenizer for the pretrained GPT-2 model, featuring a vocabulary size of 50,257. We also examined
two different training approaches: training the model from random initialization (scratch) and
fine-tuning the pretrained model sourced from Huggingface. To ensure uniform digit tokenization,
-----
(a) Test acc. on plain addition
5k 10k 15k 20k 25k 30k 35k 40k
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||Nan|oGP|T, Z|ero-s|hot||
|||||Nan|oGP|T, Fe|w-s|hot||
|||||GPT GPT|2, Z 2, Fe|ero-s w-S|hot hot|||
|||||||||||
NanoGPT, Zero-shot
NanoGPT, Few-shot
GPT2, Zero-shot
GPT2, Few-Shot
# Plain Addition Samples
(b) Test acc. on detailed scratchpad
100.0
99.5
99.0
98.5
98.0
97.5
97.0
96.5
100
80
60
40
20
96.0
1000 2000 3000 4000 5000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|||||||||
||||N|anoGP|T, Zer|o-shot||
||||N|anoGP|T, Few|-shot||
||||G|PT2, Z|ero-sh|ot||
||||G|PT2, F|ew-Sh|ot||
|||||||||
NanoGPT, Zero-shot
NanoGPT, Few-shot
GPT2, Zero-shot
GPT2, Few-Shot
# Detailed Scratchpad Samples
Figure 27: Performance of NanoGPT and GPT-2 model trained with entire Shakespeare dataset and a varying
number of samples of plain addition, and addition with detailed scratchpad dataset. Performance is evaluated on
test prompts formatted as plain addition and detailed scratchpad. Few-shot experiments are based on an average
of 5 exemplars, while text prompts involve an average of 25 exemplars. The shaded area represents the standard
deviation. Our observations indicate that few-shot prompting consistently improves performance, whereas test
prompts generally have a negative impact.
alterations were made in data formatting to include spaces between numbers. This change aims to
circumvent potential inconsistent tokenization of numbers while utilizing the Tiktoken tokenizer.
Figure 7 shows that GPT-2 demonstrates high performance in addition tasks with both character-level
tokenization and Tiktoken with spaces between digits. This aligns with the results by Wallace et al.
(2019), suggesting that character-level tokenization exhibits stronger numeracy capabilities compared
to a word or sub-word methods. Furthermore, comparing the models trained from scratch and the
models trained from the pretrained model, we observe that fine-tuning a pretrained model results in
better performance compared to training a model from scratch.
**GPT-3 experiments: Supervised fine-tuning.** We extend our experiments to verify if our observations hold while fine-tuning larger pre-trained models. In the following, we consider three GPT-3
variants: Ada, Curie, and Davinci. Note that since we perform fine-tuning using the OpenAI APIs, by
default only the completions are loss generating tokens. Therefore, these experiments are slightly
different when compared to the previous settings. We fine-tune these models using the same four
data formatting methods as our NanoGPT experiments: (i) plain formatting, (ii) reverse formatting,
(iii) simplified scratchpad, and (iv) detailed scratchpad. These formats are identical to those from our
NanoGPT experiments except for one aspect. We introduce spaces between numbers in plain and
_reverse formatting to ensure consistent tokenization._
Due to budget constraints, all experiments were conducted using a fine-tuning dataset of 1, 000
examples, and models were trained for 4 epochs. Performance evaluation was carried out on 1, 000
examples that were disjoint from the training dataset. Note that this training scale is significantly
smaller than our experiments on NanoGPT, which employed 10, 000 training examples for 5, 000
iterations, with evaluations conducted on 10, 000 test examples. However, given these models’
extensive pretraining on large data corpora, this scale can be deemed rational.
The results for addition and subtraction tasks are presented in Table 2 and Table 10, respectively. We
observed that initiating with a pretrained GPT-3 model significantly improves performance compared
to training NanoGPT or GPT-2 models from random initialization with only 1000 samples. This
indicates the utility of leveraging pretrained models for improved arithmetic performance. Interestingly, while reverse formatting and simplified scratchpad formats improve addition performance,
they adversely affect subtraction performance. This observation is consistent with our earlier finding
depicted in Figure 18, wherein transitioning from one data format to another often results in lower
performance compared to initiating training from random initialization. We postulate that this discrepancy may be due to the pretrained GPT-3 model’s requirement to adapt to the reversed approach and
“unlearn” its knowledge of plain formatting arithmetic, thereby introducing additional complexity. On
-----
the other hand, the detailed scratchpad method achieves excellent performance, albeit with increased
training and inference costs due to higher token requirements.
Table 10: Evaluation of subtraction performance for fine-tuned GPT-3 models: Davinci, Curie, and Ada. In each
case, the model is finetuned on 1000 samples of addition in the corresponding format.
|GPT-3 Model|Zero-shot Plain Reverse Simplified Scratchpad Detailed Scratchpad|
|---|---|
|Davinci Curie Ada|0.1% 84.8% 66.0% 15.4% 99.5% 0.1% 24.1% 6% 3.8% 92.5% 0.0% 3.7% 2.6% 3.4% 81.5%|
|---|---|
Table 11: Evaluation of sine and square root performance for fine-tuned GPT-3 models: Davinci, Curie, and Ada.
In each case, the model is finetuned on 1000 samples of addition in the corresponding format.
|Col1|Sine|Square Root|
|---|---|---|
|GPT-3 Model eps|Zero-shot Plain Detailed Scratchpad|Zero-shot Plain Detailed Scratchpad|
|---|---|---|
|0 Davinci 5e-4 5e-3|0% 11.0% 10.3% 0% 35.9% 29.7% 0.4% 85.5% 72.8%|0% 0.7% 4.6% 0% 7.5% 17.2% 0% 59% 60.5%|
|---|---|---|
|0 Curie 5e-4 5e-3|0.0% 8.6% 1.2% 0.4% 32.7% 5.4% 0.9% 80.8% 15%|0.0% 0.7% 2.1% 0.1% 6.5% 6.0% 0% 52.7% 30.2%|
|---|---|---|
|0 Ada 5e-4 5e-3|0.0% 5.8% 4.3% 0.0% 21.4% 9.1% 0.3% 67.8% 25.2%|0.0% 0.3% 2.7% 0.0% 3.8% 11.9% 0.0% 32.2% 45.8%|
|---|---|---|
For the more complex sine and square root tasks as shown in Table 11, we found that training with
only 1000 samples is insufficient to generate exact answers (eps=0). The GPT-3 model, fine-tuned
with 1,000 samples, performs worse than the NanoGPT model trained with 10,000 samples. Further
experiments with larger training datasets are necessary for deeper insights and improved performance
on these tasks.
It is worth mentioning that while few-shot prompting notably improves the performance of all
three GPT-3 models, their zero-shot performance is quite poor (as shown in the leftmost column
of the tables). However, post-training, few-shot prompting becomes less effective as OpenAI’s
fine-tuning process trains the model on individual prompts and desired completions serially, rather
than in concatenation with multiple examples like in our NanoGPT experiments. Consequently, our
comparisons primarily focus on the zero-shot performances of each task.
G TOKEN EFFICIENCY ACROSS DATA FORMATS
Figure 4a demonstrates that more detailed training data leads to improved sample efficiency. However,
this comparison does not account for the cost associated with training and inference. To address this,
we conduct a cost analysis based on (i) the number of tokens within the train dataset (measuring
the efficiency of the training dataset), and (ii) the number of tokens encountered during training.
For (i) token-efficiency of train dataset, we calculate the number of tokens within the dataset which
can be derived by number of samples × number of tokens per sample. For instance, the mean
token count for a single training example in a 3-digit addition task is 13 for plain format, 15 for
reverse format, 64 for simplified scratchpad format, and 281 for detailed scratchpad format. For (ii)
token-efficiency for training, we calculate the number of tokens encountered by the model which
is number of iterations × context length of the model × batch size. This approach ensures our cost
calculation accounts for a vanilla implementation of attention with no additional optimizations (Pope
et al., 2023). Table 12 presents the number of tokens required for prompting and completion in each
data format, per example. Evidently, the detailed scratchpad method uses considerably more tokens
compared to other techniques.
-----
The result in Figure 4b indicates that reverse formatting is the most token-efficient approach for training dataset construction. While detailed scratchpad training is more sample efficient, it necessitates a
larger number of tokens per sample, both during training and inference. Given that the inference cost
for commercial models is determined by the number of tokens utilized per inference call (sum of
prompting and completion tokens), the use of models trained on detailed scratchpad formats may
escalate overall costs. Furthermore, since the cost of a single forward pass is quadratic in the number
of tokens, this is important to consider. On the other hand, the result in Figure 28 shows that for the
same number of tokens input to the model to be trained on, the model is trained faster with detailed
scratchpad data. Therefore, for practical usage, it is crucial to evaluate both the number of samples
needed for achieving the desired performance and the actual token demands during training and
inference.
Table 12: Token requirements for prompting and completion per single example of 3-digit addition.
Plain Reverse Simplified Scratchpad Detailed Scratchpad
Prompt 8 9 23 23
Completion 5 6 41 258
**Total** **13** **15** **64** **281**
100
95
90
85
80
75
70
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
||||||pl re|ain verse||||
||||||si de|mple sc tailed|ratchp scratc|ad hpad||
|||||||||||
|0.0 0||.5 1.0 1.5 2.0 2.5 3.0 Number of Tokens (1e8) 1e8||||||||
Figure 28: Model performance by the number of tokens input to the model during training.
-----
H LENGTH GENERALIZATION
In this section, we present results from experiments conducted to assess the model’s ability to generalize across different digit lengths. Initially, we exclude training examples featuring 2-digit operands
from the 10,000-sample addition dataset, yielding a reduced dataset of 7,655 samples, consisting
solely of 1 or 3-digit operands. The model is trained with reverse format and its performance is evaluated on test dataset containing 100 random samples of 1-digit, 2-digit, 3-digit, and 4-digit additions.
The results in Figure 29 demonstrate that the NanoGPT model is incapable of performing 2-digit
and 4-digit additions. This suggests an inherent necessity for exposure to all digit combinations to
perform accurate calculations and lacks generalization capabilities for unseen digit lengths.
Additionally, we investigate the model’s ability to extrapolate over larger digit lengths. The model is
trained on 7-digit plain-formatted additions (each digit addition comprises 16650 samples, except
1-digit addition, which is trained on 100 samples). Its ability to add add 8-digit numbers is then put to
test. The results in Figure 29 show that the model is unable to generalize to a greater number of digits
beyond what it has been trained on. Similarly, when training the model on 10-digit binary numbers, it
fails to generalize to 11-digit binary additions, further confirming its limited ability to handle unseen
digit combinations.
(a) Trained on 1 and 3 digit addition
0 5000 10000 15000 20000
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
||||||1 digit|||
||||||2 digit 3 digit|||
||||||4 digit|||
|||||||||
|||||||||
|||||||||
Iterations
(b) Trained on 1 – 7 digit addition
0 5000 10000 15000 20000
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
||||1 digit 2 digit 3 digit|||
||||4 digit 5 digit|||
||||6 digit 7 digit|||
||||8 digit|||
|||||||
|||||||
Iterations
100
80
60
40
20
100
80
60
40
20
Figure 29: Generalization experiments testing NanoGPT’s performance on unseen numbers of digits in addition
tasks. (Left): NanoGPT trained on reverse formatted addition with 1 and 3 digits, and tested on additions ranging
from 1 to 4 digits. (Right): NanoGPT trained on up to 7-digit plain formatted addition and tested on additions
ranging from 1 to 8 digits. In both cases, NanoGPT exhibits an inability to perform addition on digits it has not
been exposed to.
We further explore the impact of detailed scratchpad formatting. The model trained on additions of
up to 3 digits, struggles to generalize to 4-digit additions. Notably, it randomly drops a single digit
from the 4-digit number, erroneously perceiving it as a 3-digit number. We illustrate this difficulty
in Figure 30 through multiple detailed error cases, ranging from instances in which only the test
query is provided (Case 1) to scenarios where all intermediate steps are provided except only the final
answer (Case 5). The prompts are highlighted in light grey and the responses generated by our trained
NanoGPT model are highlighted in light green. These cases emphasize the model’s shortcomings in
accurately managing larger digit lengths.
**Examples for length generalization prompts**
Results obtained by prompting the NanoGPT model with larger digits
than those it was trained on. The model is trained using detailed
scratchpad formats with 3-digit numbers. We evaluate its performance
on 4-digit numbers, with varying levels of provided information. The
prompt input is highlighted in a light blue box, while the model’s
output is highlighted in a light green box.
**Case 1: Just asking the question**
-----
Input:
8465+3541
Target:
<scratch>
[8,4,6] has 3 digits.
[3,5,1] has 3 digits.
[8,4,6] + [3,5,1], A=[], C=0, 6+1+0=7, A->7, C->0
[8,4] + [3,5], A=[7], C=0, 4+5+0=9, A->9, C->0
[8] + [3], A=[9,7], C=0, 8+3+0=11, A->1, C->1
[] + [], A=[1,9,7] C=1, END
</scratch>
1 1 9 7
**Case 2: Giving number of digits**
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6] + [3,5,4], A=[], C=0, 6+4+0=10, A->0, C->1
[8,4] + [3,5], A=[0], C=1, 4+5+1=10, A->0, C->1
[8] + [3], A=[0,0], C=1, 8+3+1=12, A->2, C->1
[] + [], A=[2,0,0] C=1, END
</scratch>
1 2 0 0
**Case 3: Giving 1 intermediate steps**
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1], A=[], C=0, 5+1+0=6, A->6, C->0
[8,4] + [3,5], A=[6], C=0, 4+5+0=9, A->9, C->0
[8] + [3], A=[9,6], C=0, 8+3+0=11, A->1, C->1
[] + [], A=[1,9,6] C=1, END
</scratch>
1 1 9 6
**Case 4: Giving all but one intermediate steps**
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1], A=[], C=0, 5+1+0=6, A->6, C->0
[8,4,6] + [3,5,4], A=[6], C=0, 6+4+0=10, A->0, C->1
[8,4] + [3,5], A=[0,6], C=1, 4+5+1=10, A->0, C->1
[8] + [3], A=[0,0,6], C=1, 8+3+1=12, A->2, C->1
[] + [], A=[2,0,6] C=1 END
</scratch>
1 0 0 6
**Case 5: Giving all intermediate steps, all it has to do is aggregate the answer**
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1], A=[], C=0, 5+1+0=6, A->6, C->0
[8,4,6] + [3,5,4], A=[6], C=0, 6+4+0=10, A->0, C->1
[8,4] + [3,5], A=[0,6], C=1, 4+5+1=10, A->0, C->1
[8] + [3], A=[0,0,6], C=1, 8+3+1=12, A->2, C->1
[] + [], A=[2,0,0,6] C=1 END
</scratch>
1 2 0 6
**Case 6: Giving 1 in-context example**
Input:
8465+3541
Target:
<scratch>
-----
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1], A=[], C=0, 5+1+0=6, A->6, C->0
[8,4,6] + [3,5,4], A=[6], C=0, 6+4+0=10, A->0, C->1
[8,4] + [3,5], A=[0,6], C=1, 4+5+1=10, A->0, C->1
[8] + [3], A=[0,0,6], C=1, 8+3+1=12, A->2, C->1
[] + [], A=[2,0,0,6] C=1, END
</scratch>
1 2 0 0 6
Input:
1946+3598
Target:
<scratch>
[1,9,4] has 3 digits.
[3,5,8] has 3 digits.
[1,9,4] + [3,5,8], A=[], C=0, 4+8+0=12, A->2, C->1
[1,9] + [3,5], A=[2], C=1, 9+5+1=15, A->5, C->1
[1] + [3], A=[5,2], C=1, 1+3+1=5, A->5, C->0
[] + [], A=[5,5,2] C=0, END
</scratch>
5 5 2
**Case 7: Giving 1 In-context example, and all intermediate steps**
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1], A=[], C=0, 5+1+0=6, A->6, C->0
[8,4,6] + [3,5,4], A=[6], C=0, 6+4+0=10, A->0, C->1
[8,4] + [3,5], A=[0,6], C=1, 4+5+1=10, A->0, C->1
[8] + [3], A=[0,0,6], C=1, 8+3+1=12, A->2, C->1
[] + [], A=[2,0,0,6] C=1, END
</scratch>
1 2 0 0 6
Input:
1946+3598
Target:
<scratch>
[1,9,4,6] has 4 digits.
[3,5,9,8] has 4 digits.
[1,9,4,6] + [3,5,9,8], A=[], C=0, 6+8+0=14, A->4, C->1
[1,9,4] + [3,5,9], A=[4], C=1, 4+9+1=14, A->4, C->1
[1,9] + [3,5], A=[4,4], C=1, 9+5+1=15, A->5, C->1
[1] + [3], A=[5,4,4], C=1, 1+3+1=5, A->5, C->0
[] + [], A=[5,5,4,4] C=0, END
</scratch>
5 5 4
Figure 30: Example results on the model’s output when prompted with a larger number of digits than those it
was trained on.
-----
I EXPERIMENTAL SETUP
In this section, we summarize the datasets, models and hyperparameters used for experiments. All
of our experiments on NanoGPT and GPT-2 models are run using PyTorch 2.1 and CUDA 11.7 on
Nvidia 2808 TIs and NVIDIA 3090s. Detailed dependencies are provided on our github repository[4].
I.1 DATASET
In this section, we explain the details of the datasets used for our experiments. For arithmetic tasks,
we construct our own datasets as described below while we use the standard shakespeare (Karpathy,
2015) dataset for text.
**Arithmetic Tasks** As mentioned above, for all arithmetic tasks, we prepare our own datasets.
We refer to the training dataset for a binary operator f (·) as Dtrain = {(x[1]i _[, x]i[2][)][, y][i][}]i[N]=1_ [where]
_yi = f_ (x[1]i _[, x]i[2][)][. Similarly, the test dataset][ D][test][ is constructed by randomly sampling pairs of]_
operands that do not appear in Dtrain. During both training and inference, we then apply different
formatting techniques (see Section 3), to construct the final sequence that is input to the model. We
would like to repeat that both the careful choice of samples in the training dataset as well as their
formatting play a crucial role in the final performance of the model.
**Text** For text data, we use the Shakespeare dataset which was introduced by Karpathy (2015)
originally featured in the blog post “The Unreasonable Effectiveness of Recurrent Neural Networks”.
It consists of 40,000 lines of dialogue carefully curated from William Shakespeare’s plays. The dataset
comprises of a total of 1,115,394 characters and 64 unique tokens(when using the character-level
tokenizer that we employed in all NanoGPT experiments).
I.1.1 DATA BALANCING
As mentioned in Section 3, we carefully sample our data to ensure that they are “balanced” with
respect to the number of carries and number of digits. As mentioned earlier, sampling the operands
uniformly at random would lead to an extremely skewed dataset. To avoid this, we try to (i) Balance
**digits by sampling lower-digit numbers with higher weights and (ii) Balance carry-ons by sampling**
such that we have equal number of examples with 0, 1, 2 and 3 carry-on operations.
Specifically, we create a balanced dataset of 10, 000 samples. This dataset includes all 100 1-digit
additions and a random sampling of 900 2-digit additions (including both (2 + 1) and (1 + 2)
digit additions) and 9, 000 3-digit additions. For the 3-digit addition samples, we employ rejection
_sampling to ensure an equal distribution of carry-ons (0, 1, 2, or 3). For the test dataset, we uniformly_
sample 10, 000 addition examples that do not overlap with the train dataset. Results in Figure 2 and
Table 13 demonstrate a clear advantage of the employed data balancing methods.
For the train dataset, we follow a specific approach based on the number of examples. For sample
sizes smaller than 10, 000 (e.g., 500, 1, 000, 2, 000, 3, 000, 4, 000, 5, 000), we include all 1-digit
additions and a proportionate number of 2-digit samples (e.g., for a total of 5, 000 samples, we
include 900 × 5, 000/10, 000 = 450 two-digit additions). The remaining samples are filled with
3-digit additions from the constructed train dataset of 10,000 samples. For sample sizes larger than
10,000 (e.g., 20,000, 40,000), we include all examples from the 10,000-sample train dataset and then
add additional samples as needed. Similar to before, we perform rejection sampling to maintain an
equal number of carry operations. Table 14. provides detailed information on the number of samples
with 1-digit, 2-digit, and 3-digit additions, as well as the number of carry-ons.
For the other arithmetic operations (subtraction, multiplication, sine, and square root), we construct
the train dataset using the following approach: (i) For subtraction, we use the same pairs of operands
that were used for addition. (ii) For multiplication, we include all 100 cases of a 1-digit number
multiplied by a 1-digit number. Additionally, we randomly sample multiplications involving operands
of up to 2 digits. (iii) For sine, we sample a random number in [π/2, π/2] and truncate it to 4 decimal
places. (iv) For square root, we sample a random number between [1, 10] and truncate it to 4 decimal
[4https://github.com/lee-ny/teaching_arithmetic](https://github.com/lee-ny/teaching_arithmetic)
-----
places. For the test dataset, we sample 10, 000 data points (7, 000 for multiplication) that do not
overlap with the train dataset.
Table 13: Performance of addition on various data sampling methods used: (i) Random - uniform sampling of
operands; (ii) Balanced digits - sampling more 1 and 2-digit operations ; (iii) Balanced carry - balancing the
dataset to contain an equal number of carry-on operations. Experiments on addition with zero-padding each
operand and output to have 3 and 4 digits, respectively. We observe that balancing the dataset can significantly
improve the performance or arithmetic operations.
Data Sampling Overall 1-digit 2-digit Carry-0 Carry-1 Carry-2 Carry-3
Random 97.74 98.00 96.20 95.88 98.61 98.74 94.98
Balanced Digits 98.13 **100.00** **99.70** **98.87** **98.64** 98.13 95.93
Balanced Carry-Ons **98.29** **100.00** **99.70** 98.38 97.56 **99.02** **98.22**
Table 14: Number of examples of digit 1/2/3 and 0/1/2/3 carry-ons for NanoGPT experiments on addition for
different number of samples varying from 500 to 40, 000.
|Total number|1-digit 2-digit 3-digit|0-carry-ons 1-carry-ons 2-carry-ons 3-carry-ons|
|---|---|---|
|500 1000 2000 3000 4000 5000 10000 20000 40000|100 45 355 100 90 810 100 180 1720 100 270 2630 100 360 3540 100 450 4450 100 900 9000 121 1937 17942 132 3939 35929|163 141 97 99 283 268 236 213 535 502 481 482 781 782 748 689 1020 1016 958 1006 1279 1271 1229 1221 2500 2500 2500 2500 5000 5000 5000 5000 10000 10000 10000 10000|
|---|---|---|
I.1.2 DATA FORMATTING
For each of the four formatting techniques, as applied to each arithmetic operation we provide the
details below. (i) Plain refers to the simplest formatting where we simply create a sequence as the
mathematical representation of the corresponding operation (e.g., A3A2A1 + B3B2B1 = C3C2C1).
For (ii) Reverse, we simply reverse the digits of the output so that they appear in increasing order
from LSB to MSB (e.g., $A3A2A1 + B3B2B1 = C1C2C3$). (iii) Simplified Scratchpad and (iv)
**Detailed Scratchpad provide algorithmic reasoning steps like (Nye et al., 2021; Zhou et al., 2022b)**
so as to help the model get more “information” per sample. Our intuition is that this approach nudges
the model towards actually learning the algorithm of addition or subtraction rather than merely trying
to fit the training examples. Refer to Appendix J for detailed examples of data formatting for each
arithmetic operation.
-----
**Different data formatting methods for addition**
Four input formatting methods used for the addition task:
**(i) Plain: standard formatting of addition**
**(ii) Reverse: flips the order of the output and encapsulates each data sample with the‘$’ symbol**
at the start and end.
**(iii) Simplified Scratchpad: provides carry and digit-sum information for each step of addition,**
from the LSB to the MSB[5].
**(iv) Detailed Scratchpad: provides explicit details of intermediate steps of addition.**
**Plain** **Detailed Scratchpad**
128+367=495 Input:
128+367
**Reverse** Target:
<scratch>
$128+367=594$ [1,2,8] has 3 digits.
[3,6,7] has 3 digits.
**Simplified Scratchpad** [1,2,8] + [3,6,7], A=[], C=0, 8+7+0=15, A->5, C->1
[1,2] + [3,6], A=[5], C=1, 2+6+1=9, A->9, C->0
Input: [1] + [3], A=[9,5], C=0, 1+3+0=4, A->4, C->0
128+367 [] + [], A=[4,9,5] C=0, END
Target: </scratch>
A->5, C->1 4 9 5
A->9, C->0
A->4, C->0.
495
Figure 31: The four input formatting methods used for the addition task. We progressively increase the amount
of detail with each format.
Note that we wrap each data sample in the reverse format with the ‘$’ symbol at the beginning and end
as a delimiter. We originally observed improved performance in both the plain and reverse formats
when the operands and outputs were zero-padded to a fixed length (e.g., 3 and 4 digits, respectively,
for 3-digit addition). But later realized that a single symbol can effectively replace zero-padding.
While we maintain the original plain format without padding as a baseline – emphasizing the necessity
for improved data formatting for efficient emergence – we incorporate the ‘$’-encapsulation in our
modified reverse format. For further details, refer to Appendix B.1.
**Addition (+).** We focus on additions of positive numbers up to 3-digits, in which the plain
formatting would look like A3A2A1 + B3B2B1 = C3C2C1. For experiments on comparing data
sampling presented in Figure 2, we pad the two operands and the output with zero, to be of length 3
and 4 respectively. For all other experiments, we do not utilize zero-padding. For Scratchpad-based
methods (iii, iv), we provide the digit-wise addition (denoted as A) and carry-on (denoted as C)
information for intermediate steps from the least significant bit (LSB) to the most significant bit
(MSB).
**Subtraction (−).** We consider subtraction of positive numbers up to 3 digits, written as
Areverse formatting. As with addition, scratchpad-based methods (iii, iv), present the intermediate steps3A2A1 − B3B2B1 = C3C2C1 in (i) plain formatting, and $A3A2A1 − B3B2B1 = C1C2C3$ in (ii)
of digit-wise subtraction and handling of carry-ons. These steps proceed from the least significant
bit (LSB) to the most significant bit (MSB). If the final result after computing all the digit-wise
subtractions is negative, we subtract the number in the most significant bit (MSB) position multiplied
by 10 to the power of (number of digits in the output - 1) from the remaining digits in the output. In
Section B.4, we present an alternative version of the detailed scratchpad formatting for subtraction.
**Multiplication (×).** We consider multiplication of positive numbers up to 2-digits. (i) Plain
formatting examples are formatted as A2A1 ∗ B2B1 = C4C3C2C1, while (ii) reverse formatting is
5We deviate from the strict definition of “most significant bit” (MSB) and “least significant bit” (LSB),
typically associated with binary numbers, and reinterpret them for the purpose of this paper as the most significant
“digit” and least significant “digit”, respectively.
-----
formatted asintermediate step by conducting a series of multiplications between the first operand and each digit $A2A1 ∗ B2B1 = C1C2C3C4$. The (iv) detailed scratchpad method simplifies each
of the second operand, starting from the least significant bit (LSB) and moving toward the most
significant bit (MSB). For each step, we multiply the result by an exponentiation of 10 corresponding
to the relative digit position.
**Sine (sin ).** We consider decimal numbers within the range [−π/2, π/2], truncated to 4-digit
precision. (i) Plain formatting examples are formatted as sin(A0.A1A2A3A4) = B0.B1B2B3B4.
For (iv) detailed scratchpad method, we include the Taylor series expansion steps for sine, which
is represented as sin(x) = x − 3![1] _[x][3][ +][ 1]5!_ _[x][5][ −]_ 7![1] _[x][7][ +][ · · ·][ . These intermediate steps involve]_
exponentiation, which may not be any easier to compute than the sine operation itself.
**Square Root ([√]).** We consider decimal numbers within [1, 10), truncated to 4-digits of precision
with the format, written as sqrt(A0.A1A2A3A4) = B0.B1B2B3B4 for (i) plain formatting. For (iv)
detailed scratchpad method, we enumerate each step of Newton’s method to compute the square root
function. The iterative formula is given by xn = 2[1] [(][x][n][−][1][ +] _xnx−1_ [)][, where][ x][0][ is initialized as the]
floor of the square root value of the operand x. These intermediate steps involve a division operation,
which can be as complex as the square root operation itself.
For evaluation of sine and square root, we classify the result ˆyi as correct if the absolute difference
between ˆyi and the ground truth value yi is less than or equal to a predefined threshold ϵ ≥ 0.
I.2 MODEL
For all experiments, we use a Decoder-only Transformer architecture. Specifically, we primarily use
the NanoGPT model, a scaled-down variant of the GPT-2 model with half the number of self-attention
layers, heads, and embedding dimension. Note that we use character-level tokenization instead of
using the OpenAI’s BPE tokenizer (Tiktoken) of vocabulary size 50257, making the vocabulary
size significantly smaller. We use a learnable absolute positional embedding initialized randomly,
following the GPT-2 model. Are results are generated using a temperature of 0.8.
In the case of arithmetic tasks performed on plain and reverse formatting, we set a context length of
256 for NanoGPT experiments. The length of a single train example falls within the range of 13 to
15, approximately. However, when conducting experiments on scratchpad formatting, we increase
the context length to 1024. This adjustment allows us to accommodate more examples per batch. In
the case of simplified scratchpad, the length of each train example is approximately 64, while the
detailed scratchpad has a length of approximately 281. For GPT-2 experiments we fix the context
length to 1024 for all experiments. See Table 15 for details on model configuration.
For experiments on fine-tuning a pretrained large language model, we use OpenAI’s GPT-3 model Ada, Curie, and Davinci.
Table 15: NanoGPT and GPT-2 model configuration
Model Input Formatting Context Length Self-Attn Layers Num Heads Embedding Dim
Plain, Reverse 256 6 6 384
NanoGPT
Scratchpad 1024 6 6 384
Plain, Reverse 1024 12 12 768
GPT-2
Scratchpad 1024 12 12 768
I.3 TRAINING
Our overall experimental setup closely follows the standard training procedures for language models Karpathy (2022); Brown et al. (2020). We train models using the autoregressive loss for all tokens,
including the prompts, rather than limiting the training to only the answers. In addition, unlike
popular approaches for synthetic tasks, we pack multiple training examples to fill up the entire context
window. We also randomly sample the starting position, allowing the model to process shifted inputs.
-----
Figure 32: The GPT-2 Architecture. Image from (Radford & Narasimhan, 2018). NanoGPT model is a smaller
model with half the number of self-attention layers, multi-heads, and embedding dimensions.
I.4 HYPERPARAMETER CONFIGURATIONS
In this section, we provide a detailed overview of the hyperparameter configuration used in our
experiments in Table 16 and 17. To enhance memory efficiency and training speed, we employ flash
attention. For most experiments, we utilize the bfloat16 data type. However, when working with
Nvidia 2080 GPUs, which do not support bfloat16, we switch to float16. It is worth noting that we
did not observe significant differences in training and evaluation performance between the two data
types.
The learning rate is chosen from {1e-3, 5e-4, 1e-4, 5e-5} based on validation loss. For the scratchpad
format, NanoGPT is trained longer since the number of tokens per sample is higher and it requires
more iterations to converge.
For the GPT-2 experimentation, we reduced the batch size to 8 to accommodate the GPU memory
limitations. However, to mitigate the impact of the smaller batch size, we employed gradient accumulation steps. This approach involves taking multiple steps between gradient updates, effectively
increasing the effective batch size to 64. For specific hyperparameter details, please refer to Table 17.
Table 16: Hyper Parameters used for NanoGPT experiments on arithmetic tasks
Input Format Batch Size Optimizer LR Betas Iterations Warmup Iter Wt decay Dropout
Plain, Reverse 256 AdamW 0.001 (0.9, 0.99) 5000 100 0.1 0.2
Scratchpad 16 AdamW 0.001 (0.9, 0.99) 50000 0 0.1 0.2
Table 17: Hyper Parameters used for GPT-2 experiments on arithmetic tasks
Input Format Batch Size Optimizer LR Betas Iterations Warmup Iter Wt decay Dropout
Plain, Reverse 64 AdamW 0.0005 (0.9, 0.99) 5000 100 0.1 0.2
Scratchpad 64 AdamW 0.0005 (0.9, 0.99) 20000 0 0.1 0.2
-----
Table 18: Hyper Parameters used for tandem training experiments in Section 7.
Model Batch Size Optimizer LR Betas Iterations Warmup Iter Wt decay Dropout
NanoGPT 16 AdamW 0.001 (0.9, 0.99) 5000 0 0.1 0.2
GPT-2 40 AdamW 0.0006 (0.9, 0.95) 50000 2000 0.1 0.2
(a) NanoGPT, plain addition
(b) NanoGPT, detailed scratchpad addition
(c) NanoGPT, Perplexity
80
60
40
20
|Col1|100|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||100 80 (%) y|||1. y|30||
||60 Accurac 40 Test 20|||1. Perplexit 1.|25 20 15||
||||||||
|||||1.|||
||||||||
||0 30000 40000 50000 0 10000 20 ions I (e)||||10 0 10000 2 (f)||
||||||||
(d) GPT-2, plain addition (e) GPT-2, detailed scratchpad addition (f) GPT-2, Perplexity
100
80 2.0
80
60 1.8
60
40 1.6
40 Perplexity
est Accuracy (%)T20 est Accuracy (%)T 20 1.4
0 0 1.2
0 10000 20000 30000 40000 50000 0 10000 20000 30000 40000 50000 0 10000 20000 30000 40000 50000
Iterations Iterations Iterations
Figure 33: Training loss curves for NanoGPT and GPT-2 trained with varying numbers of plain (Add) and
detailed scratchpad (DS) samples as well as the shakespeare dataset as described in Section 7. As we can see,
the model continues to improve in addition accuracy as the number of iterations increases. However, the training
perplexity on Shakespeare also tends to increase, which indicates some overfitting. However, we note that the
model still outputs “reasonable” text when prompted with shakespeare text.
J PROMPT EXAMPLES
In this section, we provide three examples of each formatting (plain, reverse, simplified scratchpad,
detailed scratchpad) of arithmetic operations (+, −, ×, sin, _[√])._
J.1 ADDITION
-----
**Addition Examples**
**Plain** **Detailed Scratchpad**
266+738=1004 Input:
980+743=1723 396+262
41+34=75 Target:
<scratch>
[3,9,6] has 3 digits.
**Reverse** [2,6,2] has 3 digits.
[3,9,6] + [2,6,2] , A=[] , C=0 , 6+2+0=8 , A->8 , C->0
$913+524=1437$ [3,9] + [2,6] , A=[8] , C=0 , 9+6+0=15 , A->5 , C->1
$226+598=824$ [3] + [2] , A=[5,8] , C=1 , 3+2+1=6 , A->6 , C->0
$35+58=93$ [] + [] , A=[6,5,8] C=0 , END
</scratch>
6 5 8
**Simplified Scratchpad** Input:
796+890
Input: Target:
922+244 <scratch>
Target: [7,9,6] has 3 digits.
A->6 , C->0 [8,9,0] has 3 digits.
A->6 , C->0 [7,9,6] + [8,9,0] , A=[] , C=0 , 6+0+0=6 , A->6 , C->0
A->1 , C->1. [7,9] + [8,9] , A=[6] , C=0 , 9+9+0=18 , A->8 , C->1
1166 [7] + [8] , A=[8,6] , C=1 , 7+8+1=16 , A->6 , C->1
Input: [] + [] , A=[6,8,6] C=1 , END
285+43 </scratch>
Target: 1 6 8 6
A->8 , C->0 Input:
A->2 , C->1 788+989
A->3 , C->0. Target:
328 <scratch>
Input: [7,8,8] has 3 digits.
993+849 [9,8,9] has 3 digits.
Target: [7,8,8] + [9,8,9] , A=[] , C=0 , 8+9+0=17 , A->7 , C->1
A->2 , C->1 [7,8] + [9,8] , A=[7] , C=1 , 8+8+1=17 , A->7 , C->1
A->4 , C->1 [7] + [9] , A=[7,7] , C=1 , 7+9+1=17 , A->7 , C->1
A->8 , C->1. [] + [] , A=[7,7,7] C=1 , END
1842 </scratch>
1 7 7 7
-----
J.2 SUBTRACTION
**Subtraction Examples**
**Plain** **Detailed Scratchpad**
266-738=-472 Input:
980-743=237 396-262
41-34=7 Target:
<scratch>
[3,9,6] has 3 digits.
**Reverse** [2,6,2] has 3 digits.
[3,9,6] - [2,6,2] , A=[] , C=0 , 6-2-0=4 , A->4 , C->0
$913-524=983$ [3,9] - [2,6] , A=[4] , C=0 , 9-6-0=3 , A->3 , C->0
$226-598=273-$ [3] - [2] , A=[3,4] , C=0 , 3-2-0=1 , A->1 , C->0
$35-58=32-$ [] - [] , A=[1,3,4]
100+34=134 , END
</scratch>
**Simplified Scratchpad** 1 3 4
Input:
Input: 796-890
396-262 Target:
Target: <scratch>
A->4 , C->0 [7,9,6] has 3 digits.
A->3 , C->0 [8,9,0] has 3 digits.
A->1 , C->0 [7,9,6] - [8,9,0] , A=[] , C=0 , 6-0-0=6 , A->6 , C->0
100+34=134. [7,9] - [8,9] , A=[6] , C=0 , 9-9-0=0 , A->0 , C->0
134 [7] - [8] , A=[0,6] , C=0 , 7-8-0=-1 , A->-1 , C->-1
Input: [] - [] , A=[-1,0,6]
796-890 </scratch>
Target: -9 4
A->6 , C->0 Input:
A->0 , C->0 788-989
A->-1 , C->-1 Target:
-100+6=-94. <scratch>
-94 [7,8,8] has 3 digits.
Input: [9,8,9] has 3 digits.
788-989 [7,8,8] - [9,8,9] , A=[] , C=0 , 8-9-0+10=9 , A->9 , C->-1
Target: [7,8] - [9,8] , A=[9] , C=-1 , 8-8-1+10=9 , A->9 , C->-1
A->9 , C->-1 [7] - [9] , A=[9,9] , C=-1 , 7-9-1=-3 , A->-3 , C->-1
A->9 , C->-1 [] - [] , A=[-3,9,9]
A->-3 , C->-1 -300+99=-201 , END
-300+99=-201. </scratch>
-201 -2 0 1
-----
J.3 MULTIPLICATION
**Multiplication Examples**
**Plain** **Detailed Scratchpad**
5*32=160 Input:
66*76=5016 22*52
67*74=4958 Target:
<scratch>
[2,2] has 2 digits.
**Reverse** [5,2] has 2 digits.
[2,2] * 2, A=[4,4], k=1, B=[4,4], C=0+44=44
$5*32=061$ [2,2] * 5, A=[1,1,0], k=10, B=[1,1,0,0], C=44+1100=1144, END
$66*76=6105$ </scratch>
$67*74=8594$ 1 1 4 4
Input:
8*69
Target:
<scratch>
[8] has 1 digits.
[6,9] has 2 digits.
[8] * 9, A=[7,2], k=1, B=[7,2], C=0+72=72
[8] * 6, A=[4,8], k=10, B=[4,8,0], C=72+480=552, END
</scratch>
5 5 2
Input:
52*34
Target:
<scratch>
[5,2] has 2 digits.
[3,4] has 2 digits.
[5,2] * 4, A=[2,0,8], k=1, B=[2,0,8], C=0+208=208
[5,2] * 3, A=[1,5,6], k=10, B=[1,5,6,0], C=208+1560=1768, END
</scratch>
1 7 6 8
J.4 SINE
**Sine Examples**
**Plain** **Detailed Scratchpad**
sin(1.0313)=0.8579 Input:
sin(-0.6909)=-0.6373 sin(1.0313)
sin(-0.5719)=-0.5413 Target:
<scratch>
x_0=1.0313
x_1: x_0 - 1/3! - [(xˆ3) , x_1=0.8484]
x_2: x_1 + 1/5! - [(xˆ5) , x_2=0.8581]
x_3: x_2 - 1/7! - [(xˆ7) , x_3=0.8578]
x_4: x_3 + 1/9! - [(xˆ9) , x_4=0.8578 , END]
</scratch>
0.8578
Input:
sin(-0.6909)
Target:
<scratch>
x_0=-0.6909
x_1: x_0 - 1/3! - [(xˆ3) , x_1=-0.636]
x_2: x_1 + 1/5! - [(xˆ5) , x_2=-0.6374]
x_3: x_2 - 1/7! - [(xˆ7) , x_3=-0.6374]
x_4: x_3 + 1/9! - [(xˆ9) , x_4=-0.6375 , END]
</scratch>
-0.6375
Input:
sin(-0.5719)
Target:
<scratch>
x_0=-0.5719
x_1: x_0 - 1/3! - [(xˆ3) , x_1=-0.5408]
x_2: x_1 + 1/5! - [(xˆ5) , x_2=-0.5414]
x_3: x_2 - 1/7! - [(xˆ7) , x_3=-0.5414]
x_4: x_3 + 1/9! - [(xˆ9) , x_4=-0.5415 , END]
</scratch>
-0.5415
-----
J.5 SQUARE ROOT
**Square Root Examples**
**Plain** **Detailed Scratchpad**
sqrt(7.2726)=2.6967 Input:
sqrt(3.6224)=1.9032 sqrt(7.1042)
sqrt(1.0895)=1.0437 Target:
<scratch>
x_0=2
x_1: 1/2*(2+7.1042/2)=2.776, x_1=2.776
x_2: 1/2*(2.776+7.1042/2.776)=2.6675, x_2=2.6675
x_3: 1/2*(2.6675+7.1042/2.6675)=2.6653, x_3=2.6653
x_4: 1/2*(2.6653+7.1042/2.6653)=2.6653, x_4=2.6653, END
</scratch>
2.6653
Input:
sqrt(6.2668)
Target:
<scratch>
x_0=2
x_1: 1/2*(2+6.2668/2)=2.5667, x_1=2.5667
x_2: 1/2*(2.5667+6.2668/2.5667)=2.5041, x_2=2.5041
x_3: 1/2*(2.5041+6.2668/2.5041)=2.5033, x_3=2.5033
x_4: 1/2*(2.5033+6.2668/2.5033)=2.5033, x_4=2.5033, END
</scratch>
2.5033
Input:
sqrt(8.3216)
Target:
<scratch>
x_0=2
x_1: 1/2*(2+8.3216/2)=3.0804, x_1=3.0804
x_2: 1/2*(3.0804+8.3216/3.0804)=2.8909, x_2=2.8909
x_3: 1/2*(2.8909+8.3216/2.8909)=2.8847, x_3=2.8847
x_4: 1/2*(2.8847+8.3216/2.8847)=2.8847, x_4=2.8847, END
</scratch>
2.8847
J.6 NOISY SIMPLE SCRATCHPAD
We provide one example for each case of adding noise in the simplified scratchpad experiments
discussed in Section B.5.
**Noisy Simple Scratchpad Examples**
We provide one example for each case of adding noise in the simplified
scratchpad experiments discussed in Section B.5. The input prompt
is highlighted in light blue, while the remaining part is highlighted
in light green. We construct the dataset to have either correct or
random digit-sum A and carry information C. For all cases, the final
answer remains accurate.
**Prompt:**
Input:
686+886
Target:
**Correct A & C** **Random C** **Random A** **Random A & C**
A->2 , C->1 A->2 , C->0 A->0 , C->1 A->8 , C->1
A->7 , C->1 A->7 , C->0 A->9 , C->1 A->1 , C->0
A->5 , C->1. A->5 , C->1. A->9 , C->1. A->2 , C->1.
1572 1572 1572 1572
-----
J.7 EXAMPLE DATA FOR GPT-3 FINE-TUNING
We provide an example from the training dataset consisting of one prompt-completion pair used for
fine-tuning the GPT-3 model using OpenAI’s API. The prompt is highlighted in light grey, while
the completion is highlighted in light green. Note that for plain and reverse formatting, we include
spacing between digits to ensure consistent tokenization of numbers. “###” is used as the stop
sequence for generation.
J.7.1 ADDITION
**Addition Examples**
**Plain** **Detailed Scratchpad**
6 7 7 + 8 9 8 =1 5 7 5### Input:
356+787
Target:
**Reverse** <scratch>
[3,5,6] has 3 digits.
7 4 9 + 7 8 5 = 4 3 5 1### [7,8,7] has 3 digits.
[3,5,6] + [7,8,7], A=[], C=0, 6+7+0=13, A->3, C->1
[3,5] + [7,8], A=[3], C=1, 5+8+1=14, A->4, C->1
**Simplified Scratchpad** [3] + [7], A=[4,3], C=1, 3+7+1=11, A->1, C->1
[] + [], A=[1,4,3] C=1, END
Input: </scratch>
32+981 1 1 4 3###
Target:
A->3, C->0
A->2, C->1
A->0, C->1.
1013###
J.7.2 SUBTRACTION
**Subtraction Examples**
**Plain** **Detailed Scratchpad**
2 0 4 - 5 0 1 = - 2 9 7### Input:
848-367
Target:
**Reverse** <scratch>
[8,4,8] has 3 digits.[3,6,7] has 3 digits.
7 3 4 - 9 6 7 = 3 3 2 -### [8,4,8] - [3,6,7] , A=[] , C=0 , 8-7-0=1 , A->1 , C->0
[8,4] - [3,6] , A=[1] , C=0 , 4-6-0+10=8 , A->8 , C->-1
[8] - [3] , A=[8,1] , C=-1 , 8-3-1=4 , A->4 , C->0
**Simplified Scratchpad** [] - [] , A=[4,8,1]
400+81=481 , END
Input: </scratch>
695-489 4 8 1###
Target:
A->6 , C->-1
A->0 , C->0
A->2 , C->0
200+6=206.
206###
-----
J.7.3 SINE
**Sine Examples**
**Plain** **Detailed Scratchpad**
sin(-0.8649) Input:
-0.7611### sin(-1.3516)
Target:
x_0=-1.3516
x_1: -1.3516 - 1/3! - [(x]*[x]*[x), x_1=-0.9401]
x_2: -0.9401 + 1/5! - [(x]*[x]*[x]*[x]*[x), x_2=-0.9777]
x_3: -0.9777 - 1/7! - [(x]*[x]*[x]*[x]*[x]*[x]*[x), x_3=-0.9761]
x_4: -0.9761 + 1/9! - [(x]*[x]*[x]*[x]*[x]*[x]*[x]*[x]*[x), x_4=-0.9762, END]
</scratch>
-0.9762###
J.7.4 SQUARE ROOT
**Square Root Examples**
**Plain** **Detailed Scratchpad**
sqrt(1.2178) Input:
1.1035### sqrt(5.5808)
Target:
<scratch>
x_0=2
x_1: 1/2*(2+5.5808/2)=2.3952, x_1=2.3952
x_2: 1/2*(2.3952+5.5808/2.3952)=2.3625, x_2=2.3625
x_3: 1/2*(2.3625+5.5808/2.3625)=2.3623, x_3=2.3623
x_4: 1/2*(2.3623+5.5808/2.3623)=2.3623, x_4=2.3623 , END
</scratch>
2.3623###
-----
| [
"Nayoung, Lee",
"Kartik, Sreenivasan",
"Jason, Lee",
"Kangwook, Lee",
"Dimitris, Papailiopoulos"
] | 2023-10-28T00:00:00 | ICLR 2024 Poster | true | 52 | 2 | null | https://openreview.net/forum?id=YfhuG7xHQ8 | https://arxiv.org/abs/2307.03381 | https://www.semanticscholar.org/paper/0db0af0cd3ceb0531a050a03e6ceb849580ff53b |
Annotating Derivations: A New Evaluation Strategy and Dataset for Algebra Word Problems | We propose a new evaluation for automatic solvers for algebra word problems, which can identify mistakes that existing evaluations overlook. Our proposal is to evaluate such solvers using derivations, which reflect how an equation system was constructed from the word problem. To accomplish this, we develop an algorithm for checking the equivalence between two derivations, and show how derivation annotations can be semi-automatically added to existing datasets. To make our experiments more comprehensive, we include the derivation annotation for DRAW-1K, a new dataset containing 1000 general algebra word problems. In our experiments, we found that the annotated derivations enable a more accurate evaluation of automatic solvers than previously used metrics. We release derivation annotations for over 2300 algebra word problems for future evaluations. | A new evaluation for automatic solvers for algebra word problems is proposed, which can identify mistakes that existing evaluations overlook, and derivation annotations can be semi-automatically added to existing datasets. | # Annotating Derivations: A New Evaluation Strategy and Dataset for Algebra Word Problems
**Shyam Upadhyay[1]** **and Ming-Wei Chang[2]**
1University of Illinois at Urbana-Champaign, IL, USA
2Microsoft Research, Redmond, WA, USA
[email protected]
[email protected]
**Abstract**
We propose a new evaluation for automatic solvers for algebra word problems,
which can identify mistakes that existing
evaluations overlook. Our proposal is to
evaluate such solvers using derivations,
Costs of apple and orange are in ratio 5 : 15 at the Acme Market.
Mark wanted some fruits so he buys 5 apples and 5 oranges for 100
dollars. Find cost of each.
(m=15,n=5) 5m=15n,5m+5n=100 Am=Bn Cm + Dn = E
**Solution** **Equation System** **Derivation**
which reflect how an equation system was
constructed from the word problem. To
accomplish this, we develop an algorithm
for checking the equivalence between two
derivations, and show how derivation annotations can be semi-automatically added
to existing datasets. To make our experiments more comprehensive, we include
the derivation annotation for DRAW-1K, a
new dataset containing 1000 general algebra word problems. In our experiments,
we found that the annotated derivations
enable a more accurate evaluation of automatic solvers than previously used metrics. We release derivation annotations for
over 2300 algebra word problems for future evaluations.
**1** **Introduction**
Automatically solving math reasoning problems is
a long-pursued goal of AI (Newell et al., 1959;
Bobrow, 1964). Recent work (Kushman et al.,
2014; Shi et al., 2015; Koncel-Kedziorski et al.,
2015) has focused on developing solvers for alge_bra word problems, such as the one shown in Fig-_
ure 1. Developing a solver for word problems can
open several new avenues, especially for online
education and intelligent tutoring systems (Kang
et al., 2016). In addition, as solving word problems requires the ability to understand and analyze natural language, it serves as a good test-bed
for evaluating progress towards goals of artificial
intelligence (Clark and Etzioni, 2016).
494
|Col1|Col2|Col3|Col4|
|---|---|---|---|
|Costs of apple and orange are in ratio Mark wanted some fruits so he buys 5 ap|5||15|
Figure 1: An algebra word problem with its solution, equa_tion system and derivation. Evaluating solvers on derivation_
is more reliable than evaluating on solution or equation system, as it reveals errors that other metric overlook.
An automatic solver finds the solution of a
given word problem by constructing a deriva_tion,_ consisting of an un-grounded equation
system[1] ({Am = Bn, Cm + Dn = E} in Figure 1)
and _alignments_ of numbers in the text to
its coefficients (blue edges). The derivation identifies a grounded equation system
_{5m = 15n, 5m + 5n = 100}, whose solution can_
then be generated to answer the problem. A
derivation precisely describes how the grounded
equation system was constructed from the word
problem by the automatic solver. On the other
hand, the grounded equation systems and the solutions are less informative, as they do not explain
which span of text aligns to the coefficients in the
equations.
While the derivation is clearly the most informative structure, surprisingly, no prior work
evaluates automatic solvers using derivations directly. To the best of our knowledge, none of the
current datasets contain human-annotated derivations, possibly due to the belief that the current
evaluation metrics are sufficient and the benefit
of evaluating on derivations is minor. Currently,
the most popular evaluation strategy is to use so_lution accuracy (Kushman et al., 2014; Hosseini_
et al., 2014; Shi et al., 2015; Koncel-Kedziorski et
1Also referred to as a template. We use these two terms
interchangeably.
-----
al., 2015; Zhou et al., 2015; Huang et al., 2016),
which computes whether the solution was correct
or not, as this is an easy-to-implement metric. Another evaluation strategy was proposed in (Kushman et al., 2014), which finds an approximate
derivation from the gold equation system and uses
it to compare against a predicted derivation. We
follow (Kushman et al., 2014) and call this evaluation strategy the equation accuracy. [2]
In this work, we argue that evaluating solvers
against human labeled derivation is important. Existing evaluation metrics, like solution accuracy
are often quite generous — for example, an incorrect equation system, such as,
_{m + 5 = n + 15,_ _m + n = 15 + 5},_ (1)
can generate the correct solution of the word problem in Figure 1. While equation accuracy appears
to be a stricter metric than solution accuracy, our
experiments show that the approximation can mislead evaluation, by assigning higher scores to an
inferior solver. Indeed, a correct equation system,
(5m = 15n, 5m+5n = 100), can be generated by
using a wrong template, Am = Bn, Am + An = C,
and aligning numbers in the text to coefficients incorrectly. We show that without knowing the correct derivation at evaluation time, a solver can be
awarded for the wrong reasons.
The lack of annotated derivations for word
problems and no clear definition for comparing
derivations present technical difficulties in using
derivation for evaluation. In this paper, we address
these difficulties and for the first time propose to
evaluate the solvers using derivation accuracy. To
summarize, the contributions of this paper are:
_• We point out that evaluating using derivations_
is more precise compared to existing metrics.
Moreover, contrary to popular belief, there is
a meaningful gap between the derivation accuracy and existing metrics, as it can discover
crucial errors not captured previously.
_• We formally define when two derivations are_
equivalent, and develop an algorithm that can
determine the same. The algorithm is simple
2Note that an approximation of the derivation is necessary, as there is no annotated derivation. From the brief description in their paper and the code released by Kushman et
al. (2014), we found that their implementation assumes that
the first derivation that matches the equations and generates
the correct solution is the correct reference derivation against
which predicted derivations are then evaluated.
495
Word Problem **_x_** We are mixing a solution of 32% sodium and
another solution of 12%
sodium. How many liters
of 32% and 12% solution
will produce 50 liters of a
20% sodium solution?
Textual Numbers _Q(x)_ _{321, 121, 322, 122, 50, 20}_
Equation System **_y_** 32m + 12n = 20 ∗ 50,
_m + n = 50_
Solution _m = 20, n = 30_
**Template** _T_ `Am + Bn = C ∗` `D,`
```
m + n = C
```
**Coefficients** _C(T_ ) `A, B, C, D`
**Alignments** _A_ 50{32 →1 →C, 20A →, 12D1} → `B,`
**EquivTNum** _{[321, 322], [121, 122]}_
**Derivation** **_z_** (T, A)
Table 1: The symbols we used in the paper. Our proposed
annotations are shown in bold. Equivalent textual numbers,
described in EquivTNum, are distinguished with subscripts.
to implement, and can accurately detect the
equivalence even if two derivations have very
different syntactic forms.
_• We annotated over 2300 word algebra prob-_
lems[3] with detailed derivation annotations,
providing high quality labeled semantic
parses for evaluating word problems.
**2** **Evaluating Derivations**
We describe our notation and revisit the notion of
derivation introduced in (Kushman et al., 2014).
We then formalize the notion of derivation equivalence and provide an algorithm to determine it.
**Structure of Derivation** The word problem in
Table 1 shows our notation, where our proposed
annotations are shown in bold. We denote a word
problem by x and an equation system by y.
An un-grounded equation system (or template)
_T is a family of equation systems parameter-_
ized by a set of coefficients C(T ) = {ci}i[k]=1[,]
where each coefficient ci aligns to a textual num_ber (e.g., four) in the word problem._ We also
refer to the coefficients as slots of the template.
We use (A, B, C, . . .) to represents coefficients and
(m, n, . . .) to represent the unknown variables in
the templates.
Let Q(x) be the set of all the textual numbers in
the problem x, and C(T ) be the coefficients to be
determined in the template T . An alignment is a
set of tuples A = {(q, c) | q ∈Q(x), c ∈C(T ) ∪
_{ϵ}} aligning textual numbers to coefficient slots,_
3available at https://aka.ms/datadraw
-----
**Algorithm 1 Evaluating Derivation**
**Input: Predicted (Tp, Ap) and gold (Tg, Ag) derivation**
**Output: 1 if predicted derivation is correct, 0 otherwise**
1: if |C(Tp)| ̸= |C(Tg)| then _▷_ different # of coeff. slots
2: return 0
3: end if
4: Γ ← TEMPLEQUIV(Tp,Tg)
5: if Γ = ∅ **then** _▷_ not equivalent templates
6: return 0
7: end if
8: if ALIGNEQUIV(Γ, Ap, Ag) then _▷_ Check alignments
9: return 1
10: end if
11: return 0
12:
13: procedure TEMPLEQUIV(T1, T2)
14: _▷_ Note that here |C(T1)| = |C(T2)| holds
15: Γ ←∅
16: **for each 1-to-1 mapping γ : C(T1) →C(T2) do**
17: match ← True
18: **for t = 1 · · · R do** _▷R : Rounds_
19: Generate random vector v
20:21: _Aif Solve1 ←{(v(Ti →1, Ac1i)) ̸}=,A Solve2 ←{((Tv2i →, A2γ) then(ci))}_
22: match ← False; break
23: **end if**
24: **end for**
25: **if match then Γ ←** Γ ∪{γ}
26: **end for**
27: return Γ _▷_ Γ ̸= ∅ iff the templates are equivalent
28: end procedure
29:
30: procedure ALIGNEQUIV(Γ, A1, A2)
31: **for mapping γ ∈** Γ do
32: **if following holds true,**
where a tuple (q, ϵ) indicates that the number q is
not relevant to the final equation system.
Note that there may be multiple semantically
equivalent textual numbers. e.g., in Figure 1, either of the 32 can be aligned to coefficient slot A
in the template. These equivalent textual numbers
are marked in the EquivTNum field in the annotation. If two textual numbers q, q[′] _∈_ EquivTNum,
then we can align a coefficient slot to either q or
_q[′], and generate a equivalent alignment._
An alignment A and a template T together identify a derivation z = (T, A) of an equation system. Note that there may be multiple valid derivations, using one of the equivalent alignments. We
assume there exists a routine Solve(y) that find
the solution of an equation system. We use a Gaussian elimination solver for our Solve routine. We
use hand-written rules and the quantity normalizer
in Stanford CoreNLP (Manning et al., 2014) to
identify textual numbers.
**Derivation Equivalence** We define two derivations (T1, A1) and (T2, A2) to be equivalent iff the
corresponding templates T1, T2 and alignments
_A1, A2 are equivalent._
Intuitively, two templates T1, T2 are equivalent
if they can generate the same space of equation
systems – i.e., for every assignment of values to
slots of T1, there exists an assignment of values to
slots of T2 such that they generate the same equation systems. For instance, template (2) and (3)
below are equivalent
_m = A + Bn_ _m = C −_ _n_ (2)
_m + n = A_ _m −_ `Cn = B.` (3)
because after renaming (A, B, C) to (B, C, A) respectively in template (2), and algebraic manipulations, it is identical to template (3). We can
see that any assignment of values to correspond_ing slots will result in the same equation system._
Similarly, two alignments A1 and A2 are equivalent if corresponding slots from each template
align to the same textual number. For the above
example, the alignment {1 → `A, 3 →` `B, 4 →` `C}`
in template (2), and alignment {1 → `B, 3 →`
```
C, 4 → A} in template (3) are equivalent. Note
```
that the alignment {1 → `A, 3 →` `B, 4 →` `C} for (2)`
is not equivalent to {1 → `A, 3 →` `B, 4 →` `C} in`
(3), because it does not respect variable renaming.
Our definition also allows two alignments to be
496
(q, c) ∈ _A1 ⇐⇒{(q, γ(c)) or (q[′], γ(c))} ∈_ _A2_
33: where (q[′], q) ∈ EquivTNum
34: **then return 1**
35: **end if**
36: **end for**
37: return 0
38: end procedure
equivalent, if they use textual numbers in equivalent positions for corresponding slots (as described
by EquivTNum field).
In the following, we carefully explain how template and alignment equivalence are determined
algorithmically. Algorithm 1 shows the complete
algorithm for comparing two derivations.
**Template Equivalence** We propose an approximate procedure TEMPLEQUIV (line 13) that detects equivalence between two templates. The procedure relies on the fact that under appropriate renaming of coefficients, two equivalent templates
will generate equations which have the same solutions, for all possible coefficient assignments.
For two templates T1 and T2, with the same
number of coefficients (T1) = (T2), we rep_|C_ _|_ _|C_ _|_
resent a choice of renaming coefficients by γ, a
-----
1-to-1 mapping from (T1) to (T2). The two
_C_ _C_
templates are equivalent if there exists a γ such
that solutions of the equations identified by T1
and T2 are same, for all possible coefficient assignments. The TEMPLEQUIV procedure exhaustively tries all possible renaming of coefficients
(line 16), checking if the solutions of the equation systems generated from a random assignment
(line 19) match exactly. It declares equivalence if
for a renaming γ, the solutions match for R = 10
such random assignments.[4] The procedure returns
all renamings Γ of coefficients between two templates under which they are equivalent (line 27).
We discuss its effectiveness in §3.
**Alignment** **Equivalence** The TEMPLEQUIV
procedure returns every mapping γ in Γ under
which the templates were equivalent (line 4).
Recall that γ identifies corresponding slots, c
and γ(c), in T1 and T2 respectively. We describe
alignment equivalence using these mappings.
Two alignments A1 and A2 are equivalent if corresponding slots (according to γ) align to the same
textual number. More formally, if we find a mapping γ such that for each tuple (q, c) in A1 there is
(q, γ(c)) in A2, then the alignments are equivalent
(line 33). We allow for equivalent textual numbers
(as identified by EquivTNum field) to match when
comparing tuples in alignments.
The proof of correctness of Algorithm 1 is
sketched in the appendix. Using Algorithm 1,
we can define derivation accuracy, to be 1 if the
predicted derivation (Tp, Ap) and the reference
derivation (Tg, Ag) are equivalent, and 0 otherwise.
**Properties of Derivation Accuracy** By comparing derivations, we can ensure that the following errors are detected by the evaluation.
Firstly, correct solutions found using incorrect
equations will be penalized, as the template used
will not be equivalent to reference template. Secondly, correct equation system obtained by an incorrect template will also be penalized for the
same reason. Lastly, if the solver uses the correct
template to get the correct equation system, but
aligns the wrong number to a slot, the alignment
will not be equivalent to the reference alignment,
and the solver will be penalized too.
4Note that this procedure is a Monte-Carlo algorithm, and
can be made more precise by increasing R. We found making R larger than 10 did not have an impact on the empirical
results.
497
We will see some illustrative examples of above
errors in §5.3. Note that the currently popular evaluation metric of solution accuracy will not detect
_any of these error types._
**3** **Annotating Derivations**
As none of the existing benchmarks contain
derivation annotations, we decided to augment existing datasets with these annotations. We also annotated DRAW-1K, a new dataset of 1000 general
algebra word problems to make our study more
comprehensive. Below, we describe how we reduced annotation effort by semi-automatic generated some annotations.
Annotating gold derivations from scratch for all
problems is time consuming. However, not all
word problems require manual annotation – sometimes all numbers appearing in the equation system can be uniquely aligned to a textual number
without ambiguity. For such problems, the annotations are generated automatically.[5] We identify word problems which have at least one align_ment ambiguity – multiple textual numbers with_
the same value, which appears in the equation system. A example of such a problem is shown in
Figure 1, where there are three textual numbers
with value 5, which appears in the equation system. Statistics for the number of word problems
with such ambiguity is shown in Table 2.
We only ask annotators to resolve such alignment ambiguities, instead of annotating the entire
derivation. If more than one alignments are genuinely correct (as in word problem of Table 1), we
ask the annotators to mark both (using the Equiv_TNum field). This ensures our derivation annota-_
tions are exhaustive – all correct derivations are
marked. With the correct alignment annotations,
templates for all problems can be easily induced.
**Annotation Effort** To estimate the effort required to annotate derivations, we timed our annotators when annotating 50 word problems (all
involved alignment ambiguities). As a control, we
also asked annotators to annotate the entire derivation from scratch (i.e., only provided with the
word problem and equations), instead of only fixing alignment ambiguities. When annotating from
scratch, annotators took an average of 4 minute per
word problem, while when fixing alignment ambiguities this time dropped to average of 1 minute
5Annotations for all problems are manually verified later.
-----
**Dolphin-L** DOLPHIN-L is the linear-T2 subset
of the DOLPHIN dataset (Shi et al., 2015), which
focuses on number word problems – algebra word
problems which describe mathematical relationships directly in the text. All word problems in
the linear-T2 subset of the DOLPHIN dataset can
be solved using linear equations.
**DRAW-1K** _Diverse Algebra Word (DRAW-1K),_
consists of 1000 word problems crawled from
algebra.com. Details on the dataset creation
can be found in the appendix. As ALG-514 was
also crawled from algebra.com, we ensured
that there is little overlap between the datasets.
We randomly split DRAW-1K into train, development and test splits with 600, 200, 200 problems respectively. We use 5-fold cross validation
splits provided by the authors for DOLPHIN-L and
ALG-514.
**4.1** **Evaluation**
We compare derivation accuracy against the following evaluation metrics.
**Solution Accuracy** We compute solution accuracy by checking if each number in the reference
solution appears in the generated solution (disregarding order), following previous work (Kushman et al., 2014; Shi et al., 2015).
**Equation** **Accuracy** An approximation of
derivation accuracy that is similar to the one used
in Kushman et al. (2014). We approximate the
reference derivation ˜z by randomly chosen from
the (several possible) derivations which lead to
the gold y from x. Derivation accuracy is computed against this (possibly incorrect) reference
derivation. Note that in equation accuracy, the
approximation is used instead of annotated derivation. We include the metric of equation accuracy
in our evaluations to show that human annotated
derivation is necessary, as approximation made by
equation accuracy might be problematic.
**4.2** **Our Solver**
We train a solver using a simple modeling approach inspired by Kushman et al. (2014) and
Zhou et al. (2015). The solver operates as follows. Given a word problem, the solver ranks
all templates seen during training, Γtrain, and selects the set of the top-k (we use k = 10) templates Π Γtrain. Next, all possible derivations
_⊂_
_D(Π) that use a template from Π are generated_
Dataset DRAW-1K ALG-514 DOLPHIN-L
# problems 1000 514 832
w/ ambiguity 21% 23% 35%
vocab. 2.21k 1.83k 0.33k
Number of Templates
before 329 30 273
after 224 24 203
% reduction 32% 20% 25%
Table 2: Statistics of the datasets. At least 20% of problems
in each dataset had alignment ambiguities that required human annotations. The number of templates before and after
annotation is also shown (reduction > 20%).
per word problem. We attained a inter-annotator
agreement of 92% (raw percentage agreement),
with most disagreements arising on EquivTNum
field.[6]
**Reconciling Equivalent Templates** The number of templates has been used as a measure of
dataset diversity (Shi et al., 2015; Huang et al.,
2016), however prior work did not reconcile the
equivalent templates in the dataset. Indeed, if
two templates are equivalent, we can replace one
with the other and still generate the correct equations. Therefore, after getting human judgements
on alignments, we reconcile all the templates using TEMPLEQUIV as the final step of annotation.
TEMPLEQUIV is quite effective (despite being
approximate), reducing the number of templates
by at least 20% for all datasets (Table 2). We
did not find any false positives generated by the
TEMPLEQUIV in our manual examination. The reduction in Table 2 clearly indicates that equivalent
templates are quite common in all datasets, and
number of templates (and hence, dataset diversity)
can be significantly overestimated without proper
reconciliation.
**4** **Experimental Setup**
We describe the three datasets used in our experiments. Statistics comparing the datasets is shown
in Table 2. In total, our experiments involve over
2300 word problems.
**Alg-514** The dataset ALG-514 was introduced
in (Kushman et al., 2014). It consists of 514 general algebra word problems ranging over a variety of narrative scenarios (distance-speed, object
counting, simple interest, etc.).
6These were adjudicated on by the first author.
498
-----
|Setting Soln. Acc. Eqn. Acc.|Deriv. Acc.|
|---|---|
ALG-514
|TE 76.2 72.7 TD 78.4 73.9 TD - TE 2.2 1.2|75.5 77.8 2.3|
|---|---|
DRAW-1K
|TE 52.0 48.0 TD 55.0 48.0 TD - TE 3.0 0|48.0 53.0 5.0|
|---|---|
DOLPHIN
|TE 55.1 50.1 TD 57.5 36.8 TD - TE 2.4 -13.3|44.2 54.9 10.7|
|---|---|
Table 3: TE and TD compared using different evaluation metrics. Note that while TD is clearly superior to TE due to extra
supervision using the annotations, only derivation accuracy is
able to correctly reflect the differences.
and scored. The equation system ˆy identified by
highest scoring derivation ˆz is output as the prediction. Following (Zhou et al., 2015), we do not
model the alignment of nouns phrases to variables,
allowing for tractable inference when scoring the
generated derivations. The solver is trained using a
structured perceptron (Collins, 2002). We extract
the following features for a (x, z) pair,
**Template Features.** Unigrams and bigrams of
lemmas and POS tags from the word problem x,
conjoined with |Q(x)| and |C(T )|.
**Alignment Tuple Features.** For two alignment
tuples, (q1, c1), (q2, c2), we add features indicating whether c1 and c2 belong to the same equation
in the template or share the same variable. If they
belong to the same sentence, we also add lemmas
of the nouns and verbs between q1 and q2 in x.
**Solution Features.** Features indicating if the solution of the system identified by the derivation are
integer, negative, non-negative or fractional.
**5** **Experiments**
Are solution and equation accuracy equally capable as derivation accuracy at distinguishing between good and bad models? To answer this question, we train the solver under two settings such
that one of the settings has clear advantage over
the other, and see if the evaluation metrics reflect
this advantage. The two settings are,
499
|Setting Soln. Acc. Eqn. Acc.|Deriv. Acc.|
|---|---|
DRAW-1K + Alg-514
|TE 32.5 31.5 TE∗ 60.5 56.0 TD 62.0 53.0|29.5 54.0 59.5|
|---|---|
|TD - TE∗ 1.5 -3.0|5.5|
|---|---|
DRAW-1K + Dolphin
|TE 41.0 37.5 TE∗ 58.5 55.5 TD 60.0 53.0|37.5 51.5 58.0|
|---|---|
|TD - TE∗ 1.5 -2.5|6.5|
|---|---|
Table 4: When combining two datasets, it is essential to reconcile templates across datasets. Here TE[∗] denotes training
on equations after reconciling the templates, while TE simply
combines datasets naively. As TE[∗] represents a more appropriate setting, we compare TE[∗] and TD in this experiment.
TE (TRAIN ON EQUATION) Only the (x, y)
pairs are provided as supervision. Similar to
(Kushman et al., 2014; Zhou et al., 2015), the
solver finds a derivation which agrees with the
equation system and the solution, and trains on it.
Note that the derivation found by the solver may
be incorrect.
TD (TRAIN ON DERIVATION) (x, z) pairs obtained by the derivation annotation are used as supervision. This setting trains the solver on humanlabeled derivations. Clearly, the TD setting is a
more informative supervision strategy than the TE
setting. TD provides the correct template and correct alignment (i.e. labeled derivation) as supervision and is expected to perform better than TE,
which only provides the question-equation pair.
We first present the main results comparing different evaluation metrics on solvers trained using
the two settings.
**5.1** **Main Results**
We compare the evaluation metrics in Table 3. We
want to determine to what degree each evaluation
metric reflects the superiority of TD over TE.
We note that solution accuracy always exceeds
derivation accuracy, as a solver can sometimes get
the right solutions even with the wrong derivation. Also, solution accuracy is not as sensitive as derivation accuracy to improvements in
the solver. For instance, solution accuracy only
changes by 2.4 on Dolphin-L when comparing
TE and TD, whereas derivation accuracy changes
-----
by 10.7 points. We found that the large gap on
Dolphin-L was due to several alignment errors in
the predicted derivations, which were detected by
derivation accuracy. Recall that over 35% of the
problems in Dolphin-L have alignment ambiguities (Table 2). In the TD setting, many of these
errors made by our solver were corrected as the
gold alignment was part of supervision.
Equation accuracy too has several limitations.
For DRAW-1K, it cannot determine which solver
is better and assigns them the same score. Furthermore, it often (incorrectly) considers TD to
be a worse setting than TE, as evident from decrease in the scores (for instance, on DOLPHINL). Recall that equation accuracy attempts to approximate derivation accuracy by choosing a random derivation agreeing with the equations, which
might be incorrect.
**Study with Combining Datasets** With several
ongoing annotation efforts, it is a natural question to ask is whether we can leverage multiple
datasets in training to generalize better. In Table 4, we combine DRAW-1K’s train split with
other datasets, and test on DRAW-1K’s test split.
DRAW-1K’s test split was chosen as it is the largest
test split with general algebra problems (recall
Dolphin-L contains only number word problems).
We found that in this setting, it was important
to reconcile the templates across datasets. Indeed,
when we simply combine the two datasets in the
TE setting, we notice a sharp drop in performance
(compared to Table 3). However, if we reconciled
all templates and then used the new equations for
training (called TE[∗] setting in Table 4), we were
able to see improvements from training on more
data. We suspect difference in annotation style led
to several equivalent templates in the combined
dataset, which got resolved in TE[∗]. Therefore, in
Table 4, we compare TE[∗] and TD settings.[7]
In Table 4, a trend similar to Table 3 can be
observed – solution accuracy assigns a small improvement to TD over TE[∗]. Derivation accuracy
clearly reflects the fact that TD is superior to TE[∗],
with a larger improvement compared to solution
accuracy (eg., 5.5 vs 1.5). Equation accuracy, as
before, considers TD to be worse than TE[∗].
Note that this experiment also shows that differences in annotation styles across different algebra problem datasets can lead to poor performance
7In TE∗, the model still trains only using equations, without access to derivations. So TD is still better than TE[∗].
500
Dataset Ours KAZB Best Result
ALG-514 76.2 68.7 **79.7 (ZDC)**
DOLPHIN-L 55.1 37.5 46.3[‡] (SWLLR)
DRAW-1K 52.0 43.2 –
Table 5: Comparison of our solver and other state-of-the-art
systems, when trained under TE setting. All numbers are
solution accuracy. See footnote for details on the comparison
to SWLLR.
when combining these datasets naively. Our findings suggest that derivation annotation and template reconciliation are crucial for such multi-data
supervision scenarios.
**5.2** **Comparing Solvers**
To ensure that the results in the previous section were not an artifact of any limitations of our
solver, we show here that our solver is competitive to other state-of-the-art solvers, and therefore
it is reasonable to assume that similar results can
be obtained with other automatic solvers.
In Table 5, we compare our solver to KAZB, the
system of Kushman et al. (2014), when trained
under the existing supervision paradigm, TE (i.e.,
training on equations) and evaluated using solution accuracy. We also report the best scores on
each dataset, using ZDC and SWLLR to denote the
systems of Zhou et al. (2015) and Shi et al. (2015)
respectively. Note that our system and KAZB are
the only systems that can process all three datasets
without significant modification, with our solver
being clearly superior to KAZB.
**5.3** **Case Study**
We discuss some interesting examples from the
datasets, to show the limitations of existing metrics, which derivation accuracy overcomes.
**Correct Solution, Incorrect Equation** In the
following example from the DOLPHIN-L dataset,
by choosing the correct template and the wrong
alignments, the solver arrived at the correct solutions, and gets rewarded by solution accuracy.
The sum of 2(q1) numbers is 25(q2). 12(q3)
less than 4(q4) times one(q5) of the numbers is
16(q6) more than twice(q7) the other number.
Find the numbers.
_‡SWLLR also had a solver which achieves 68.0, using over_
9000 semi-automatically generated rules tailored to number
word problems. We compare to their similarity based solver
instead, which does not use any such rules, given that the rulebased system cannot be applied to general word problems.
-----
Note that there are seven textual numbers
(q1, . . ., q7) in the word problem. We can arrive at
the correct equations ({m + n = 25, 4m − 2n =
16 + 12}), by the correct derivation,
_m + n = q2_ _q4m −_ _q7n = q6 + q3._
However, the solver found the following derivation, which produces the incorrect equations
({m + n = 25, 2m − _n = 2 + 12}),_
_m + n = q2_ **q1m −** **q5n = q7 + q3.**
Both the equations have the same solutions (m =
13, n = 12), but the second derivation is clearly
using incorrect reasoning.
**Correct Equation,** **Incorrect Alignment** In
such cases, the solver gets the right equation system, but derived it using wrong alignment. Solution accuracy still rewards the solver. Consider the
problem from the DOLPHIN-L dataset,
The larger of two(q1) numbers is 2(q2) more
than 4(q3) times the smaller. Their sum is 67(q4).
Find the numbers.
The correct derivation for this problem is,
_m_ _q3n = q2_ _m + n = q4._
_−_
However, our system generated the following
derivation, which although results in the exact
same equation system (and thus same solutions),
is clearly incorrect due incorrect choice of ”two”,
_m_ _q3n = q1_ _m + n = q4._
_−_
Note that derivation accuracy will penalize the
solver, as the alignment is not equivalent to the reference alignment (q1 and q2 are not semantically
equivalent textual numbers).
**Bad Approx. in Equation Accuracy** The following word problem is from the ALG-514
dataset:
Mrs. Martin bought 3(q1) cups of coffee
and 2(q2) bagels and spent 12.75(q3) dollars.
Mr. Martin bought 2(q4) cups of coffee and
5(q5) bagels and spent 14.00(q6) dollars. Find
the cost of one(q7) cup of coffee and that of
one(q8) bagel.
501
The correct derivation is,
_q1m + q2n = q3_ _q4m + q5n = q6._
However, we found that equation accuracy used
the following incorrect derivation for evaluation,
_q1m + q2n = q3_ **q2m + q5n = q6.**
Note while this derivation does generate the correct equation system and solutions, the derivation
utilizes the wrong numbers and misunderstood the
word problem. This example demonstrates the
needs to evaluate the quality of the word problem
solvers using the annotated derivations.
**6** **Related Work**
We discuss several aspects of previous work in the
literature, and how it relates to our study.
**Existing Solvers** Current solvers for this task
can be divided into two broad categories based
on their inference approach – template-first and
_bottom-up. Template-first approaches like (Kush-_
man et al., 2014; Zhou et al., 2015) infer the
derivation z = (T, A) sequentially. They first predict the template T and then predict alignments
_A from textual numbers to coefficients. In con-_
trast, bottom-up approaches (Hosseini et al., 2014;
Shi et al., 2015; Koncel-Kedziorski et al., 2015)
_jointly infer the derivation z = (T, A). Inference_
proceeds by identifying parts of the template (eg.
```
Am + Bn) and aligning numbers to it ({2 → A,
```
3 → `B}). At any intermediate state during in-`
ference, we have a partial derivation, describing
a fragment of the final equation system (2m + 3n).
While our experiments used a solver employing
the template-first approach, it is evident that performing inference in all such solvers requires constructing a derivation z = (T, A). Therefore, annotated derivations will be useful for evaluating all
such solvers, and may also aid in debugging errors.
Other reconciliation procedures are also discussed (though briefly) in earlier work. Kushman et al. (2014) reconciled templates by using
a symbolic solver and removing pairs with the
same canonicalized form. Zhou et al. (2015) also
reconciled templates, but do not describe how it
was performed. We showed that reconciliation
is important for correct evaluation, for reporting
dataset complexity, and also when combining multiple datasets.
-----
**Labeling Semantic Parses** Similar to our work,
efforts have been made to annotate semantic
parses for other tasks, although primarily for providing supervision. Prior to the works of Liang
et al. (2009) and Clarke et al. (2010), semantic parsers were trained using annotated logical
forms (Zelle and Mooney, 1996; Zettlemoyer and
Collins, 2005; Wong and Mooney, 2007, inter
_alia), which were expensive to annotate._ Recently, Yih et al. (2016) showed that labeled semantic parses for the knowledge based question
answering task can be obtained at a cost comparable to obtaining answers. They showed significant improvements in performance of a questionanswering system using the labeled parses instead
of answers for training. More recently, by treating
word problems as a semantic parsing task, Upadhyay et al. (2016) found that joint learning using both explicit (derivation as labeled semantic
parses) and implicit supervision signals (solution
as responses) can significantly outperform models
trained using only one type of supervision signal.
**Other Semantic Parsing Tasks** We demonstrated that response-based evaluation, which is
quite popular for most semantic parsing problems (Zelle and Mooney, 1996; Berant et al., 2013;
Liang et al., 2011, inter alia) can overlook reasoning errors for algebra problems. A reason for
this is that in algebra word problems there can be
several semantic parses (i.e., derivations, both correct and incorrect) that can lead to the correct solution using the input (i.e., textual number in word
problem). This is not the case for semantic parsing problems like knowledge based question answering, as correct semantic parse can often be
identified given the question and the answer. For
instance, paths in the knowledge base (KB), that
connect the answer and the entities in the question
can be interpreted as legitimate semantic parses.
The KB therefore acts as a constraint which helps
prune out possible semantic parses, given only the
problem and the answer. However, such KB-based
constraints are unavailable for algebra word problems.
**7** **Conclusion and Discussion**
We proposed an algorithm for evaluating derivations for word problems. We also showed how
derivation annotations can be easily obtained by
only involving annotators for ambiguous cases.
We augmented several existing benchmarks with
502
derivation annotations to facilitate future comparisons. Our experiments with multiple datasets
also provided insights into the right approach to
combine datasets – a natural step in future work.
Our main finding indicates that derivation accuracy leads to a more accurate assessment of algebra word problem solvers, finding errors which
other metrics overlook. While we should strive
to build such solvers using as little supervision
as possible for training, having high quality annotated data is essential for correct evaluation.
The value of such annotations for evaluation becomes more immediate for online education scenarios, where such word solvers are likely to be
used. Indeed, in these cases, merely arriving at the
correct solution, by using incorrect reasoning may
prove detrimental for teaching purposes. We believe derivation based evaluation closely mirrors
how humans are evaluated in schools (by forcing
solvers to show “their work”).
Our datasets with the derivation annotations
have applications beyond accurate evaluation. For
instance, certain solvers, like the one in (Roy and
Roth, 2015), train a relevance classifier to identify
which textual numbers are relevant to solving the
word problem. As we only annotate relevant numbers in our annotations, our datasets can provide
high quality supervision for such classifiers. The
datasets can also be used in evaluation test-beds,
like the one proposed in (Koncel-Kedziorski et al.,
2016).
We hope our datasets will open new possibilities for the community to simulate new ideas and
applications for automatic problem solvers.
**Acknowledgments**
The first author was supported on a grant
sponsored by DARPA under agreement number FA8750-13-2-0008. We would also like to
thank Subhro Roy, Stephen Mayhew and Christos Christodoulopoulos for useful discussions and
comments on earlier versions of the paper.
**References**
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy
Liang. 2013. Semantic parsing on Freebase from
question-answer pairs. In Proceedings of the 2013
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1533–1544, Seattle, Wash-_
ington, USA, October. Association for Computational Linguistics.
-----
Daniel G. Bobrow. 1964. A question-answering system for high school algebra word problems. In Pro_ceedings of the October 27-29, 1964, fall joint com-_
_puter conference, part I, pages 591–614. ACM._
Peter Clark and Oren Etzioni. 2016. My computer is
an honor student but how intelligent is it? standardized tests as a measure of ai. AI Magazine, 37(1):5–
12.
James Clarke, Dan Goldwasser, Ming-Wei Chang, and
Dan Roth. 2010. Driving semantic parsing from
the world’s response. In Proceedings of the Four_teenth Conference on Computational Natural Lan-_
_guage Learning, pages 18–27, Uppsala, Sweden,_
July. Association for Computational Linguistics.
Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings
_of the 2002 Conference on Empirical Methods in_
_Natural Language Processing, pages 1–8. Associ-_
ation for Computational Linguistics, July.
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Process-_
_ing (EMNLP), pages 523–533, Doha, Qatar, Octo-_
ber. Association for Computational Linguistics.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
887–896, Berlin, Germany, August. Association for
Computational Linguistics.
Bo Kang, Arun Kulshreshth, and Joseph J. LaViola Jr.
2016. Analyticalink: An interactive learning environment for math word problem solving. In Pro_ceedings of the 21st International Conference on In-_
_telligent User Interfaces, pages 419–430. ACM._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Ang. 2015.
Parsing algebraic word problems into equations.
_Transactions of the Association for Computational_
_Linguistics, 3:585–597._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In Proceedings of
_the 2016 Conference of the North American Chap-_
_ter of the Association for Computational Linguistics:_
_Human Language Technologies, pages 1152–1157,_
San Diego, California, June. Association for Computational Linguistics.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
503
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281, Baltimore, Maryland, June. Association
for Computational Linguistics.
Percy Liang, Michael Jordan, and Dan Klein. 2009.
Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of
_the 47th Annual Meeting of the ACL and the 4th In-_
_ternational Joint Conference on Natural Language_
_Processing of the AFNLP, pages 91–99, Suntec, Sin-_
gapore, August. Association for Computational Linguistics.
Percy Liang, Michael Jordan, and Dan Klein. 2011.
Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of
_the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 590–599, Port-_
land, Oregon, USA, June. Association for Computational Linguistics.
Christopher D. Manning, Mihai Surdeanu, John Bauer,
Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proc. of 52nd Annual
_Meeting of the Association for Computational Lin-_
_guistics: System Demonstrations._
Allen Newell, John C. Shaw, and Herbert A. Simon.
1959. Report on a general problem-solving program. In IFIP Congress, volume 256, page 64.
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In Proceedings of the
_2015 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 1743–1752, Lisbon,_
Portugal, September. Association for Computational
Linguistics.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Process-_
_ing, pages 1132–1142, Lisbon, Portugal, September._
Association for Computational Linguistics.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang,
and Wen-tau Yih. 2016. Learning from Explicit
and Implicit Supervision Jointly For Algebra Word
Problems. In Proceedings of EMNLP, pages 297–
306.
Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing
with lambda calculus. In Proceedings of the 45th
_Annual Meeting of the Association of Computational_
_Linguistics, pages 960–967, Prague, Czech Repub-_
lic, June. Association for Computational Linguistics.
Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of
semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual
-----
**Lemma 1.** The procedure TEMPLEQUIV returns
Γ ̸= ∅ iff templates T1, T2 are equivalent (w.h.p.).
**Proof** First we prove that with high probability
we are correct in claiming that a γ found by the
algorithm leads to equivalence. Let probability of
getting the same solution even when the template
are not equivalent be ϵ(T1, T2, γ) < 1. The probability that solution is same for R rounds for T1, T2
which are not equivalent is ≤ _ϵ[R], which can be_
made arbitrarily small by choosing large R. Therefore, with a large enough R, obtaining Γ ̸= ∅ from
TEMPLEQUIV implies there is a γ under which
templates generate equations with the same solution, and by definition, are equivalent.
Conversely, if templates are equivalent, it implies ∃γ[∗] such that under that mapping for any assignment, the generated equations have the same
solution. As we iterate over all possible 1-1 mappings γ between the two templates, we will find
_γ[∗]_ eventually.
**Proposition** Algorithm 1 returning 1 implies
derivations (Tp, Ap) and (Tg, Ag) are equivalent.
**Proof** Algorithm returns 1 only if TEMPLEQUIV
found a Γ ̸= ∅, and ∃γ ∈ Γ, following holds
(q, c) _Ag_ (q, γ(c)) _Ap_
_∈_ _⇐⇒_ _∈_
i.e., the corresponding slots aligned to the same
textual number. TEMPLEQUIV found a Γ ̸= ∅ implies templates are equivalent (w.h.p). Therefore,
_∃γ ∈_ Γ such that the corresponding slots aligned
to the same textual number implies the alignments
are equivalent under mapping γ. Together they imply that the derivation was equivalent (w.h.p.).
_Meeting of the Association for Computational Lin-_
_guistics (Volume 2: Short Papers), pages 201–206,_
Berlin, Germany, August. Association for Computational Linguistics.
J. M. Zelle and R. J. Mooney. 1996. Learning to
Parse Database Queries using Inductive Logic Proramming. In AAAI.
Luke S. Zettlemoyer and Michael Collins. 2005.
Learning to map sentences to logical form: Structured classification with probabilistic categorial
grammars. In UAI ’05, Proceedings of the 21st Con_ference in Uncertainty in Artificial Intelligence, Ed-_
_inburgh, Scotland, July 26-29, 2005, pages 658–666._
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using
quadratic programming. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 817–822, Lisbon, Portugal,_
September. Association for Computational Linguistics.
**A** **Creating DRAW-1K**
We crawl over 100k problems from http://
algebra.com. The 100k word problems include some problems which require solving nonlinear equations (e.g. finding roots of quadratic
equations). We filter out these problems using
keyword matching. We also filter problems whose
explanation do not contain a variable named “x”.
This leaves us with 12k word problems.
**Extracting Equations** A word problem on
algebra.com is accompanied by a detailed explanation provided by instructors. In our crawler,
we use simple pattern matching rules to extract all
the equations in the explanation. The problems
often have sentences which are irrelevant to solving the word problem (e.g. “Please help me, I am
stuck.”). During cleaning, the annotator removes
such sentences from the final word problem and
performs some minor editing if necessary.[8]
1000 problems were randomly chosen from
these pool of 12k problems, which were then
shown to annotators as described earlier to get the
derivation annotations.
**B** **Proof of Correctness (Sketch)**
For simplicity, we will assume that EquivTNum is
empty. The proof can easily be extended to handle
the more general situation.
8In some cases, some of the numbers in the text are
rephrased (“10ml” to “10 ml”) in order to allow NLP pipeline
work properly.
504
-----
| [
"Ming-Wei, Chang",
"Mirella, Lapata",
"Phil, Blunsom",
"Alexander, Koller",
"Shyam, Upadhyay"
] | 2017-04-01T00:00:00 | null | false | 51 | 11 | null | https://aclanthology.org/E17-1047 | null | https://www.semanticscholar.org/paper/ba84b8f16efc032945cd8ed960c121a9bcd064d5 |
NaturalProofs: Mathematical Theorem Proving in Natural Language | Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning. As a step in this direction, we develop NaturalProofs, a multi-domain corpus of mathematical statements and their proofs, written in natural mathematical language. NaturalProofs unifies broad coverage, deep coverage, and low-resource mathematical sources, allowing for evaluating both in-distribution and zero-shot generalization. Using NaturalProofs, we benchmark strong neural methods on mathematical reference retrieval and generation tasks which test a system's ability to determine key results that appear in a proof. Large-scale sequence models show promise compared to classical information retrieval methods, yet their performance and out-of-domain generalization leave substantial room for improvement. NaturalProofs opens many avenues for research on challenging mathematical tasks. | NaturalProofs is developed, a multi-domain corpus of mathematical statements and their proofs, written in natural mathematical language that unifies broad coverage, deep coverage, and low-resource mathematical sources, allowing for evaluating both in-distribution and zero-shot generalization. | ## NATURALPROOFS: Mathematical Theorem Proving in Natural Language
**Sean Welleck[1][,][2], Jiacheng Liu[1], Ronan Le Bras[2],**
**Hannaneh Hajishirzi[1][,][2], Yejin Choi[1][,][2], Kyunghyun Cho[3][,][4]**
1Paul G. Allen School of Computer Science & Engineering, University of Washington
2Allen Institute for Artificial Intelligence
3New York University
4CIFAR Fellow in Learning in Machines & Brains
```
[email protected]
```
**Abstract**
Understanding and creating mathematics using natural mathematical language –
the mixture of symbolic and natural language used by humans – is a challenging
and important problem for driving progress in machine learning. As a step in this
direction, we develop NATURALPROOFS, a multi-domain corpus of mathematical
statements and their proofs, written in natural mathematical language. NATURALPROOFS unifies broad coverage, deep coverage, and low-resource mathematical
sources, allowing for evaluating both in-distribution and zero-shot generalization.
Using NATURALPROOFS, we benchmark strong neural methods on mathematical
reference retrieval and generation tasks which test a system’s ability to determine
key results that appear in a proof. Large-scale sequence models show promise
compared to classical information retrieval methods, yet their performance and
out-of-domain generalization leave substantial room for improvement. NATURALPROOFS opens many avenues for research on challenging mathematical tasks.[1]
**1** **Introduction**
Solving the problem of understanding and creating mathematics using natural mathematical language
– the mixture of symbolic and natural language used by humans – is a path towards developing agents
capable of reasoning. The mixture of symbolic and natural text, along with the existence of a formal
counterpart, offers a unique setting for studying reasoning that complements research involving
natural language alone or purely within a formal system. Constructing a mathematical proof involves
symbolic manipulation, logical and analogical reasoning, as well as knowledge retrieval. Common
sense and natural language abilities are needed to articulate the proof in a concise, comprehensible
form. Moreover, systems that operate on mathematical text have applications in education and
scientific discovery, while bridging informal and formal mathematics can be a key driver of progress
in automated reasoning [5, 19, 36].
Recently, techniques from natural language processing have driven advances in formalized mathemat_ics (e.g. Polu and Sutskever [29], Rabe et al. [30], Wu et al. [46]), in which mathematics is written in_
a verifiable formal language that resembles source code, such as Mizar [40], Lean [7], or Metamath
[25]. However, this setting does not directly address the informal aspect of human mathematics,
which is conveyed with a mixture of symbolic and natural language [13]. This aspect is crucial,
since advancing human understanding is a goal of mathematics [39], and a significant fraction of
mathematical knowledge is in natural language text [36].
[1Dataset and code available at https://github.com/wellecks/naturalproofs.](https://github.com/wellecks/naturalproofs)
Preprint. Under review.
-----
**Source** **ProofWiki**
**Theorem Category of Monoids is Category**
Let Mon be the category of monoids.
Then Mon is a metacategory.
**Proof** Let us verify the axioms (C1) up to (C3) for a metacategory. We have
Composite of Homomorphisms on Algebraic Structure is Homomorphism, verifying (C1).
We have monoid (S, ◦). Now, (C2) follows from
Identity Mapping is Left Identity and Identity Mapping is Right Identity.
Finally, (C3) follows from Composition of Mappings is Associative.
Hence Mon is a metacategory.
**Source** **Textbook: Real Analysis**
**Theorem Suppose that f is continuous on the closed interval [a, b] and differentiable on the**
open interval (a, b), and f (a) = f (b).
Then f _[′](c) = 0 for some c in the open interval (a, b)._
**Proof** Since f is continuous on [a, b], f attains a maximum and a minimum value on [a, b] (Theorem
2.2.9). If these two extreme values are the same, then f is constant on (a, b), so f _[′](x) = 0 for_
all x in (a, b). If the extreme values differ, then at least one must be attained at some point c in
the open interval (a, b), and f _[′](c) = 0, by Theorem 2.3.7._
Table 1: Example theorems and their proofs from NATURALPROOFS. Given a theorem, the mathematical retrieval task consists of retrieving the references (underlined) that occur in its proof.
NATURALPROOFS contains data from ProofWiki, Stacks, and two textbooks; we show two sources
here and two other sources in Table 12. See Figure 1 and Figure 2 for data format details.
In this paper, we describe NATURALPROOFS, a multi-domain corpus of mathematical statements and
their proofs, written in natural mathematical language. NATURALPROOFS contains broad-coverage
data from ProofWiki,[2] _deep-coverage data from the Stacks project,[3]_ and low-resource, real-world
data from mathematics textbooks. NATURALPROOFS unifies these sources in a common schema and
is made publicly available as a resource to drive progress on tasks involving informal mathematics,
complementing existing work in this direction (e.g. [11, 12, 42]).
Using NATURALPROOFS, we consider mathematical reference retrieval, an analogue of premise
selection [1, 12]: given a mathematical claim, retrieve the set of references (theorems, lemmas,
definitions) that occur in its proof. This task represents a crucial facet of mathematical reasoning,
in which a mathematician determines the key results that appear in a proof. As a bridge towards
generative tasks using NATURALPROOFS, we consider mathematical reference generation, which
requires additionally recovering the order and number of references in each proof.
In addition to standard in-distribution evaluation, the multi-domain nature of NATURALPROOFS
allows for evaluating out-of-distribution, zero-shot generalization. We design an evaluation protocol
that tests a system’s ability to retrieve references for novel theorems in each setting, and benchmark
methods based on large-scale neural sequence models [8, 20], including a strong joint retrieval
method that better refines the top of the ranked list, as well as an autoregressive variant for reference
generation. The neural methods are effective for in-domain retrieval compared to classical techniques,
yet out-of-distribution generalization, leveraging symbolic mathematical content, and fully recovering
a proof’s references remain as fundamental challenges. NATURALPROOFS opens many possibilities
for developing and evaluating machine learning methods on challenging mathematical tasks.
**2** **Related Work**
**Machine learning for mathematical theorem proving. A large portion of work integrating machine**
learning with mathematical reasoning has focused on formalized mathematics. Early work by Urban
[40] used machine learning for selecting relevant premises in the Mizar mathematical library that
are passed to an automated theorem prover, which was later explored with deep neural networks [1].
Bansal et al. [3] developed the HOList benchmark based on the HOL Light theorem prover, while
other benchmark tasks use the Coq [18, 47], Metamath [43, 41, 29], or Isabelle [23] environments.
[2https://proofwiki.org/](https://proofwiki.org/)
[3https://stacks.math.columbia.edu/](https://stacks.math.columbia.edu/)
-----
**Source** **All PWiki Stacks** **RA** **NT**
|m N|32,579|19,734 12,479 298 68|
|---|---|---|
|Tokens Theore Lines Refs|46.7 5.9 1.8|38.2 60.6 33.6 23.7 3.6 9.7 8.4 4.5 2.8 0.2 0.0 0.0|
|N Tokens Proof Lines Refs|32,012 181.5 24.9 5.6|19,234 12,479 235 64 199.3 155.5 128.9 97.2 25.8 23.4 36.1 16.1 7.4 3.0 1.6 0.9|
|N Definition Tokens Lines Refs|14,230 48.4 5.0 2.9|12,420 1,687 86 37 45.0 73.2 58.6 32.6 4.2 10.7 13.3 5.1 3.3 0.4 0.0 0.0|
|N Tokens Other Lines Refs|1,974 212.1 34.4 5.7|1,006 968 – – 286.1 135.2 – – 46.7 21.7 – – 9.2 2.0 – –|
Table 3: NATURALPROOFS dataset statistics.
Numbers represent mean value, except for "N"
rows which represent count. RA is the Real
Analysis textbook; NT is the Number Theory
textbook. See Table 14 for detailed statistics.
Table 2: The reference graph. Nodes are state_ments and edges are reference links. An edge_
pointing from A to B means that the proof for
_theorem B refers to statement A. Edges can start_
from any type of statement, but they always end
at a theorem. In our tasks, the dataset is split so
that all theorems in the evaluation sets are leaf
nodes in the reference graph.
These formalized settings differ from NATURALPROOFS, which uses mathematical language as
humans write it. Szegedy [36] argues for leveraging both informal and formal mathematics through
autoformalization. Wang et al. [42] explore translating between informal and formal mathematics,
including via a dataset based on ProofWiki, though their dataset is not made available. Ferreira and
Freitas [11, 12] propose a classification-based natural language premise selection task and a dataset
based on ProofWiki, while NATURALPROOFS covers multiple domains and provides evaluation and
benchmarks for full retrieval and generative tasks.
**Mathematics and language benchmarks. Several datasets evaluate a model’s ability to solve**
multiple-choice algebraic word problems [34, 24, 2] or arithmetic problems [35] with varying degrees of natural language. Lample and Charton [21] evaluate neural sequence models on symbolic
integration problems, while Hendrycks et al. [15] propose a benchmark based on math competition problems. NATURALPROOFS focuses on theorem proving rather than calculation, which we
hypothesize evaluates different skills, and may prove useful in bridging formal and informal settings.
**Large-scale neural language models. Large-scale unsupervised pretraining of language models has**
led to significant advances in many natural language processing domains (e.g. [8, 31, 32, 4]). Recent
work suggests that these models store knowledge in their parameters [28], are capable of reasoning
in mathematical [30, 46] and language [6, 37] domains, and are effective for information retrieval
tasks [26, 27]. These advances motivate our work, which explores mathematical reasoning in natural
language with large-scale language models through a retrieval task.
**3** **The NATURALPROOFS Dataset**
The NATURALPROOFS Dataset is a large-scale, multi-domain dataset for studying mathematical
reasoning in natural language. NATURALPROOFS consists of 32k theorem statements and proofs,
14k definitions, and 2k other types of pages (e.g. axioms, corollaries) derived from three domains:
_broad-coverage data from ProofWiki, an online compendium of mathematical proofs written by a_
community of contributors; deep-coverage data from the Stacks project, a collaborative web-based
textbook of algebraic geometry; and low-resource, real-world data from mathematics textbooks.
Table 1 shows example theorems and proofs from NATURALPROOFS, and Table 3 shows statistics.
**Multi-domain. NATURALPROOFS provides a common schema for mathematical statements, proofs,**
and the references that appear in each. Its multiple domains provide a challenging evaluation
-----
setting for models and opens opportunities for investigating domain transfer, out-of-distribution
generalization, and methods for low-resource settings. This differs from existing resources that focus
only on ProofWiki [11, 12], and reflects shifts in natural language processing towards multi-domain
settings [44, 17], out-of-distribution generalization [22, 14, 38], and few- or zero-shot generalization
in resource-constrained settings [4, 9].
**Structure. Each statement in NATURALPROOFS is either a theorem or a definition. NATURAL-**
PROOFS provides the statement’s title, contents, and references. The contents is a list of sequences,
where each sequence contains one line of mixed text and LATE[X, with reference links displayed in their]
natural language forms. A theorem is associated with one or more proofs when available. A proof
contains a title, contents, and references in the same format as a statement. Finally, we collect other
pages (e.g. axioms, corollaries). A reference is a theorem, definition, or other page that is linked to
within the contents of a statement or proof. Figure 2 shows the data format for theorems, definitions,
and proofs in NATURALPROOFS. All statements and the reference links connecting them form a
_reference graph, shown in Table 2. The reference graph can contain cycles, e.g. Pythagoras’s_
```
Theorem and Sum of Squares of Sine and Cosine refer to each other in their proofs.
```
**Data sources and preprocessing. We describe how we retrieve data from each source and give an**
overview of preprocessing; for full details see Appendix A.1 and the Jupyter notebooks we release.
- ProofWiki. We download the public ProofWiki XML dump,[4] which contains a snapshot of all
pages on ProofWiki. We filter pages according to manually designed rules (e.g. redirects, files,
categories), and determine page type, title, contents, and references using each page’s WikiMedia
data structure.
- Stacks. We pull the Stacks GitHub repo,[5] which contains multiple LATE[X files for various sub-topics]
in algebraic geometry. We extract statements and proofs by LATE[X environment names. For example,]
the content enclosed by \begin{theorem} and \end{theorem} would be considered a theorem.
- Textbooks. We searched for open-source math textbooks with rich theorem-proof structures and
reference links. Of those, we picked Introduction to Real Analysis[6] (RA in short) by William
F. Trench and Elementary Number Theory: Primes, Congruences, and Secrets[7] (NT in short)
by William Stein. We downloaded the LATE[X source of each textbook, and similarly extracted]
statements and proofs by environment names. In both textbooks, every statement is either a theorem
or a definition – there are no statements that fall under "others".
**4** **NATURALPROOFS Reference Retrieval and Generation Tasks**
NATURALPROOFS opens many possible machine learning tasks that involve natural mathematical
language. We consider mathematical reference retrieval: given a theorem x, retrieve the set of
references y that occur in its proof. An example is shown in Table 1, where the task is to retrieve
the underlined references given the title and contents of the theorem Category of Monoids is
```
Category. As a proof is ultimately written as an ordered collection of statements with references
```
often occurring more than once, we also consider mathematical reference generation: generate the
_sequence of references that occur in a given theorem’s proof. These tasks represent a crucial aspect_
of theorem proving, in which a mathematician determines the key results that appear in a proof.
**Reference retrieval and generation. Each theorem x has a proof containing a sequence of references**
(see §3). We consider two tasks:y = (r1, . . ., r|y|), where each reference retrieval r andm ∈R generation is either a theorem, definition, or other statement.
In the retrieval task, given an input theorem x, a model assigns a score to each reference in R,
inducing a ranked list ˆr[(1)], . . ., ˆr[(][|R|][)]. These ranked references are evaluated against the ground-truth
reference set using standard retrieval metrics such as mean average precision (MAP), recall (REC@k),
[4https://proofwiki.org/xmldump/latest.xml. We use the November 12, 2020 version. ProofWiki](https://proofwiki.org/xmldump/latest.xml)
is licensed under CC BY-SA 3.0.
[5https://github.com/stacks/stacks-project. We use the April 15, 2021 version (commit 4df67b8).](https://github.com/stacks/stacks-project)
Stacks is licensed under GNU Free Documentation License.
[6https://digitalcommons.trinity.edu/mono/7/. Retrieved on April 15, 2021. We did not use the](https://digitalcommons.trinity.edu/mono/7/)
supplementary materials. This textbook is licensed under CC BY-NC-SA 3.0.
[7https://github.com/williamstein/ent. Retrieved on April 15, 2021. We provide a script to down-](https://github.com/williamstein/ent)
load and format the publicly available latex source.
-----
**Split** **P+S** **ProofWiki** **Stacks** **RA** **NT**
Table 4: NATURALPROOFS retrieval dataset statistics. P+S refers to the combined dataset from
_|E|_
train 21,446 12,424 9,022 – –
valid 1,914 1,139 775 – –
test 1,911 1,135 776 167 40
**Refs |R|** train 42,056 28,473 13,583 – –
valid 45,805 30,671 15,134 – –
test 45,805 30,671 15,134 384 105
**Refs/Ex |y|** train 5.9 7.5 3.6 – –
valid 5.6 7.5 2.9 – –
test 5.6 7.4 2.9 2.2 1.5
the ProofWiki and Stacks sources. RA (Real Analysis) and NT (Number Theory) are data from
mathematical textbook sources that we use for zero-shot evaluation.
and full recovery (FULL@k), which checks whether all references in the proof are in the top-k
predicted rankings. This reflects the goal of fully proving a theorem using a fixed number of results.
In the generation task, a model produces a variable-length sequence of references (ˆr1, . . ., ˆr|yˆ|[)][ given]
an input x, with the goal of exactly matching the ground-truth reference sequence (r1, . . ., r|y|).
Unlike retrieval, generation requires the model to correctly predict the total number of references, the
number of occurrences of each unique reference, and their orders in the proof.
**Input-output examples. Using NATURALPROOFS, we derive examples of the form (x, y), where**
**x = (x1, . . ., xT ) is a theorem, and y = (r1, . . ., r** **y** ) is the sequence of references that occur in the
_|_ _|_
proof of x. For retrieval, we transform each sequence into a set y = {r1, . . ., r|y|}. The set of all
references, R, consists of theorems, definitions, and other statements (see §3). We use theorems with
at least one proof that has at least one reference, resulting in a dataset with roughly 25k examples
and a reference set R with 46k unique references. We partition the dataset into ProofWiki-only,
Stacks-only, and textbook-only datasets. Table 4 summarizes the size, total references, and average
references per example in each dataset.
**Training and evaluation splits. We design training and evaluation splits that reflect the real-world**
scenario of proving newly seen theorems at evaluation time. This requires careful attention, since
naively sampling evaluation examples would yield evaluation theorems that appear as references in
the training set. To ensure that the theorems in the evaluation set have no overlap with the references
in the training set, we form an evaluation set using a randomly sampled subset of reference graph
_leaf nodes, and use the remaining nodes as the training set (Table 2). We use roughly half of the_
evaluation set for validation and the other half for testing. Since evaluation theorems are not referred
to in training examples, the reference set for training is smaller than that for evaluation (Table 4).
**5** **Methods**
As benchmark methods for our tasks, we introduce two parallel retrieval methods, and a sequential
_retrieval method trained for sequence generation. See Appendix B for further implementation details._
**Parallel retrieval. Given a theorem x, a retrieval model should assign high scores to references in**
the proof of x and low scores to all other references, which corresponds to minimizing,
_L(x, y) = KL (p∗(R|x)∥pθ(R|x))_ (1)
exp (sθ(x, r))
_∝−_ Xr∈y log **r[′]∈R** [exp (][s][θ][(][x][,][ r][′][)) +][ const][,] (2)
P
where each distribution is over reference indices (i.e. in ∆[(][|R|][)]), and p (r **x)** I[r **y]. The**
_∗_ _|_ _∝_ _∈_
denominator requires scores sθ(x, r) for all references, making backpropagation too expensive
_|R|_
when a large-scale neural model is used to compute reference representations. As a result we consider
two variants: a pairwise model that approximates Equation 1, and a joint model that computes
Equation 1 but with implicit vector representations of each reference.
-----
**Pairwise parameterization. This model contrasts each positive reference with a set of negatives,**
exp(sθ(x, r))
(x, r, y ) = log (3)
_L_ _−_ _−_ exp(sθ(x, r)) + **r−∈y−** [exp(][s][θ][(][x][,][ r][−][))] _[,]_
where r is a reference that occurs in the proof of x, and y is a (small) set of negative references.
[P]−
We call this a pairwise parameterization since the score of each reference against the theorem x is
computed independently of the other references, sθ(x, r) = fθ[thm]1 [(][x][)][⊤][g]θ[ref]2 [(][r][)][. This model represents]
retrieval methods such as the dense passage retriever [20] and similar methods [26], and allows for
evaluating large-scale sequence models, in our case BERT [8], on mathematical reference retrieval.
**Joint parameterization. The second model scores all references in a single pass,**
_pθ(_ **x) = softmax (Rfθ(x)),** (4)
_R |_
where R ∈ R[|R|×][d] is a reference embedding matrix and fθ(x) ∈ R[d] is a neural theorem encoder.
This model allows for computing Equation 1 exactly in our setting, but it must learn implicit
representations of each reference, i.e. without observing reference contents. To give the model access
to representations that were learned using reference contents, we populate its embedding matrix as,
_g[ref](r1)_
**R =** _. . ._ _,_ (5)
_g[ref](r_ )
_|R|_
where g[ref](x) is obtained by pretraining an independent model.
**Sequential generation and retrieval. Finally, we consider an autoregressive model,**
_|y|+1_
_pθ(rt_ **r<t, x),** (6)
_|_
_t=1_
Y
_pθ(r1, . . ., r|y| | x) =_
where r **y** +1 is a special eos token denoting the end of the reference sequence. The autoregressive
_|_ _|_ _⟨_ _⟩_
model is trained to maximize the log-likelihood of ground-truth reference sequences. Unlike the
parallel retrieval models, this model predicts the order and total number of references and can predict
multiple occurrences of each reference. It also adjusts its predictions based on preceding predictions.
For generation, a standard decoding algorithm (e.g. beam search) is used to generate a reference sequence{ˆr1, . . ., ˆr|yˆ|[}][ followed by references ordered according to the first step’s probabilities,] ˆy = (ˆr1, . . ., ˆr|yˆ| _[⟨][eos][⟩][). For retrieval, we populate a ranked list using generations][ p]θ[(][r]1[|][x][)][.]_
**6** **Experiments**
First, we benchmark the neural retrieval methods (§5) on mathematical reference retrieval in terms
of their in-domain performance (Table 5) and their out-of-domain performance on an evaluation set
formed from the textbooks in NATURALPROOFS (Table 7). We perform several analyses to better
understand each method’s strengths, weaknesses, and the factors that contribute to their performance.
**In-domain performance. The BERT-based retrieval models show strong in-domain performance**
compared to the classical TF-IDF and naive baselines in terms of average precision, recall, and
the ability to fully recover all true references within the top-k results, as seen in Table 5. On both
ProofWiki and Stacks, the pairwise models outperform TF-IDF, with improvements that are consistent
across reference types (Appendix Table 17).
Joint parameterization substantially improves over the pairwise models that are the starting point of
joint training. On ProofWiki, the joint model ranks roughly 4 out of every 10 true references within
its top 10 rankings (R@10 42.45) compared to 1 out of 10 for TF-IDF, and an impressive 75% within
its top 100. For roughly half of the theorems, the joint model’s top 100 references contain all of the
references needed to prove the theorem (Full@100 50.22). On Stacks the recall@10 is similar at
roughly 40%, with a higher full recovery rate of 66% for the top 100 results.
The gains from the joint parameterization are most prominent on ProofWiki, e.g. increasing mAP
from 16.82 to 36.75. Joint parameterization particularly excels at refining the top of the ranked
-----
|Col1|ProofWiki Stacks mAP R@10 R@100 Full@10 Full@100 mAP R@10 R@100 Full@10 Full@100|Col3|
|---|---|---|
||mAP R@10 R@100 Full@10 Full@100 mAP R@10 R@100 Full@10 Full@100||
|Random Frequency TF-IDF|0.04 0.00 0.19 0.00 0.00 3.38 5.90 24.30 0.44 2.29 6.19 10.27 23.09 4.14 9.43|0.07 0.05 0.60 0.00 0.13 0.91 1.76 11.27 0.13 2.45 13.64 25.46 47.36 18.94 37.76|
|+pair BERT (P+S) +joint +pair BERT (P/S) +joint|13.54 20.10 58.75 6.17 31.28 32.71 37.59 73.72 17.71 48.90 16.82 23.73 63.75 7.31 38.50 36.75 42.45 75.90 20.35 50.22|18.58 34.42 71.80 28.48 65.21 26.88 35.71 72.68 28.99 66.11 20.93 37.43 74.21 30.03 66.37 28.32 39.10 73.61 31.96 65.59|
Table 5: In-domain performance on the mathematical reference retrieval task (test set). BERT (P/S)
is finetuned on the part of dataset with the same source as the evaluation set, whereas BERT (P+S) is
finetuned on the combined dataset from ProofWiki and Stacks sources. Recall is micro-averaged.
**Source** **ProofWiki**
**Theorem** **Category of Monoids is Category**
Let Mon be the category of monoids.
Then Mon is a metacategory.
**Ground-Truth Reference** **Rank (Pairwise)** **Rank (Joint)**
Metacategory 1 1
Identity Mapping is Left Identity 4 5
Identity Mapping is Right Identity 5 4
Monoid 11 2
Composition of Mappings is Associative 21 8
Identity Mapping is Automorphism 117 64
Composite of Homomorphisms is Homomorphism 261 54
**Rank** **Reference (Pairwise)** **Reference (Joint)**
1 _Metacategory_ _Metacategory_
2 Monoid Category is Category _Monoid_
3 Monoid Category Identity Morphism
4 _Identity Mapping is Left Identity_ _Identity Mapping is Right Identity_
5 _Identity Mapping is Right Identity_ _Identity Mapping is Left Identity_
6 Category Associative
7 Composition of Morphisms Identity (Abstract Algebra)/Two-Sided Identity
8 Dual Category is Category _Composition of Mappings is Associative_
9 Identity Morphism Composition of Morphisms
10 Morphism Category Semigroup
Table 6: Retrieval for a representative theorem. Top: predicted ranks for ground-truth references using
the pairwise (left) and its joint (right) BERT models. Bottom: top 10 retrievals from the pairwise
(left) and joint (right) models. A retrieved reference is italicized when it is a ground-truth reference.
list compared to pairwise parameterization; the percentage improvement in the @10 metrics are
larger than those for @100 metrics. On Stacks, the improvements are more modest: though mAP
improves by 40%, the other metrics are relatively close, suggesting that advances beyond the joint
model are needed. This demonstrates the importance of evaluating on multiple domains: each domain
presents novel challenges for driving advances in modeling. Finally, the BERT models trained on both
ProofWiki and Stacks (BERT (P+S)) show the possibility of training a single multi-domain model,
albeit with lower per-domain performance than the models trained individually on each domain.
**Qualitative evaluation. Table 6 shows model predictions for a representative theorem, Category**
```
of Monoids is Category. The pairwise model retrieves three out of seven true references within
```
its top 50 results, while the joint model retrieves five out of seven. The top 10 results for both models
are comprised of references that are related to category theory, which is the subject of the theorem.
This illustrates the model’s ability to retrieve relevant references, while highlighting its inability to
always perform the fine-grained distinction between a relevant reference and one that occurs in the
ground-truth proof(s). Arguably, such a system is still useful for providing hints to a user, so long as
the user is confident that all of the true references are in a reasonably small set of results.
**Out-of-domain performance. While strong in-domain performance drives applications in scenarios**
where training data is available, an ambitious goal is building a system with mathematical retrieval
skills that automatically generalize to new resources. To evaluate the retrieval methods in this
-----
|Col1|Real Analysis Number Theory|Col3|
|---|---|---|
||mAP R@10 Full@10 mAP R@10 Full@10||
|TF-IDF BERT-pair (P) +joint BERT-pair (S) +joint|15.79 34.65 27.54 13.24 24.01 19.16 11.24 20.97 16.77 11.56 21.28 14.97 7.04 11.55 9.58|16.42 39.62 30.00 15.12 41.51 35.00 15.85 41.51 35.00 12.58 26.42 20.00 14.88 26.42 20.00|
Table 7: Zero-shot retrieval performance on out-of-domain textbooks.
**Sequence** **Multiset** **Set**
**Model** **EM** **Edit(↓)** **BLEU4** **BLEU2** **Len** **EM** **F1** **EM** **F1** **BLEU1**
_*-set_ 51.74 35.70 9.75 47.73 0.97 89.03 97.04 100.0 100.0 94.09
_*-multiset_ 49.42 38.13 9.71 47.71 1.00 100.0 100.0 100.0 100.0 100.0
_*-halfseq_ 0.00 70.49 6.13 12.08 0.30 0.00 56.86 0.65 58.01 16.87
Joint 0.00 98.81 0.00 **3.42** 2.82 0.00 **19.24** 0.00 **19.65** **15.15**
Autoregressive **3.87** **90.65** 0.00 2.59 **0.97** **4.00** 13.14 **4.90** 15.04 10.06
_*-set_ 18.09 58.51 7.18 29.50 0.83 49.96 82.57 100.0 100.0 65.57
_*-multiset_ 19.23 58.09 16.68 52.89 1.00 100.0 100.0 100.0 100.0 100.0
_*-halfseq_ 0.00 58.84 25.88 29.17 0.41 0.00 63.33 4.21 70.26 30.55
**ProofWiki** Joint 0.00 93.03 0.00 6.88 1.42 0.09 25.30 0.18 **30.76** 19.27
Autoregressive **3.69** **84.30** **5.48** **11.90** **1.18** **3.78** **25.61** **4.65** 28.97 **20.81**
Table 8: In-domain generation results. We show the autoregressive model, a retrieval-only baseline
using the top-5 predictions from the joint retrieval model, and oracle benchmarks for correctly
predicting the first half of the sequence (*-halfseq), the full multiset with randomized order (*_multiset), and the full set with randomized order (*-set). The best model-based method is in bold._
zero-shot, out-of-domain setting, we use each textbook from NATURALPROOFS as an evaluation set.
This tests situations where the same theorem is expressed using different language (e.g. Table 13),
generalization across data formats, and whether retrieval ability from in-domain training transfers.
Table 7 shows the results. The pairwise BERT model trained on ProofWiki underperforms TF-IDF
on the Real Analysis textbook, and has comparable performance on the Number Theory textbook.
Joint training did not improve out of domain performance, despite its favorable in-domain impact.
Training BERT on ProofWiki outperforms training on Stacks, showing that the training domain
impacts out-of-domain generalization. ProofWiki’s broad coverage of mathematics may help the
model generalize better than the deep, single-topic coverage in Stacks.
The BERT models show some evidence of generalizing to out-of-domain mathematical sources,
yet they do not show an advantage over traditional retrieval methods despite strong in-domain
performance. This aligns with recent findings about neural retrieval models in various zero-shot
settings [38]. An exciting research direction is using NATURALPROOFS to develop and evaluate
methods which improve not only in-domain performance, but out-of-domain generalization.
**6.1** **Reference Generation**
Next, we establish a benchmark for recovering the sequence of references occurring in the proof of
each theorem via the reference generation task (§4).
**Metrics. We evaluate predicted reference sequences against ground-truth sequences using order-**
aware sequence metrics, as well as unordered multiset and set-based metrics. Sequence metrics
include exact match (EM), edit-distance (Edit), standard BLEU4 score which uniformly weights
1-4 gram precision, BLEU2 with only 1-2 gram precision, and average length ratio [predicted]true (Len).
Unordered metrics include exact match, F1-score (corpus level), and 1-gram precision BLEU1.
**Methods. We use the autoregressive model to generate a reference sequence for each theorem using**
beam search. As a retrieval-only baseline, we form a sequence using the joint retrieval model’s top-5
-----
**Init** **Model** **mAP**
– Pairwise 16.99
– Autoregressive 17.77
_f_ [thm] Autoregressive 25.07
_f_ [thm], R Autoregressive 35.37
– Joint 18.71
_f_ [thm] Joint 28.95
_f_ [thm], R Joint **37.51**
Table 9: Initializing with pairwise
components, and autoregressive
retrieval (ProofWiki).
|Train Lang. NatProof|Eval PW Stacks|
|---|---|
|||
| |0.14 0.30 0.04 0.86 16.99 21.21|
Table 10: Language pretraining
and NATURALPROOFS finetuning (pairwise retrieval, mAP).
**Title Content** **PW Stacks**
4.97 12.34
**8.10** 12.69
6.33 **13.45**
16.19 19.12
**24.48** 19.15
**BERT** 16.99 **21.21**
Table 11: Excluding () the title
or content of theorems and references (pairwise retrieval, mAP).
predictions, ordered by retrieval score. To judge performance and provide a benchmark for future
work, we provide three oracle baselines: correctly predicting the first half of the sequence (*-halfseq),
the full multiset of references with random order (*-multiset), and the set with random order (*-set).
**Results. Table 8 shows the in-domain generation results. The task is challenging, with the autoregres-**
sive model exactly matching the ground-truth sequence roughly 3% of the time. The autoregressive
model improves over the retrieval-only baseline on order-aware metrics, aside from BLEU2 on Stacks.
It does length-prediction reasonably well, with length-ratios of 0.97 and 1.18, yet the multiset and set
metrics indicate that the autoregressive model struggles to correctly predict the correct references,
even after discarding order. The oracle baselines indicate substantial room for future improvement–
for instance, predicting only half of each sequence correctly would move ProofWiki BLEU4 from
5.48 to 25.88. Developing models along the full spectrum from set-based retrieval, to reference
generation, to full proof generation is an exciting use-case for NATURALPROOFS.
**6.2** **Ablation Studies**
**Initialization and autoregressive retrieval. As shown in Table 9, the autoregressive model trained**
for sequence generation substantially improves over the pairwise retrieval model, yet underperforms
the joint model, which is trained specifically for retrieval. Initializing the joint and autoregressive
models using the pairwise model was necessary for achieving high performance; in particular, the
reference information conveyed through the embedding matrix (Equation 5) was crucial.
**Language pretraining and NATURALPROOFS training. The BERT model has two learning phases:**
pretraining on language data, and finetuning on NATURALPROOFS. As seen in Table 10, relying
on language-pretraining alone without fine-tuning on NATURALPROOFS (top row) led to poor
performance. Conversely, training from scratch on NATURALPROOFS (middle row) was unsuccessful,
suggesting that language pretraining served as an effective initialization for mathematical retrieval.
**Title and content ablation. Each theorem statement and reference consists of a title, as well as**
contents that is a mixture of symbolic mathematics and natural language. As seen in Table 11,
ProofWiki’s titles contain a large amount of useful information for retrieval– TF-IDF and the pairwise
BERT model performed better with only access to titles. In principal, the title+content model could
learn to ignore the contents if needed, so its lower performance shows a deficiency in the pairwise
model. On Stacks, the model performs best with both sources of information, though the degree of
improvement suggests that leveraging the mathematical content remains as a fundamental challenge.
**7** **Conclusion**
Building agents that understand and create mathematics using natural mathematical language
is a challenging research direction, providing a means for evaluating and developing machine
learning methods capable of symbolic reasoning and natural language understanding. As a step in
this direction, we develop NATURALPROOFS, a multi-domain dataset for studying mathematical
reasoning in natural language. NATURALPROOFS allows for evaluating in-domain performance,
and out-of-domain generalization in broad and deep coverage mathematics, as well as real-world,
low-resource settings. We establish benchmarks for retrieval and generation tasks that represent key
steps in real-world theorem proving, and are tractable, yet challenging, for current large-scale neural
sequence models. NATURALPROOFS opens many promising avenues for future research.
-----
**References**
[1] A. A. Alemi, F. Chollet, N. Een, G. Irving, C. Szegedy, and J. Urban. DeepMath - Deep sequence
models for premise selection. In Advances in Neural Information Processing Systems, pages
2243–2251, 2016.
[2] A. Amini, S. Gabriel, S. Lin, R. Koncel-Kedziorski, Y. Choi, and H. Hajishirzi. MathQA:
Towards interpretable math word problem solving with operation-based formalisms. In
NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies - Proceedings of the Conference,
volume 1, 2019.
[3] K. Bansal, S. Loos, M. Rabe, C. Szegedy, and S. Wilcox. Holist: An environment for machine learning of higher-order theorem proving. In 36th International Conference on Machine
Learning, ICML 2019, volume 2019-June, 2019.
[4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan,
P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child,
A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray,
B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei.
Language Models are Few-Shot Learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F.
Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33,
[pages 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
```
cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
```
[5] N. C. Carter and K. G. Monks. Lurch: a word processor that can grade students’ proofs. In
C. Lange, D. Aspinall, J. Carette, J. H. Davenport, A. Kohlhase, M. Kohlhase, P. Libbrecht,
P. Quaresma, F. Rabe, P. Sojka, I. Whiteside, and W. Windsteiger, editors, Joint Proceedings
of the MathUI, OpenMath, PLMMS and ThEdu Workshops and Work in Progress at CICM,
[Bath, UK, volume 1010 of CEUR Workshop Proceedings. CEUR-WS.org, 2013. URL http:](http://ceur-ws.org/Vol-1010/paper-04.pdf)
```
//ceur-ws.org/Vol-1010/paper-04.pdf.
```
[6] P. Clark, O. Tafjord, and K. Richardson. Transformers as Soft Reasoners over Language.
In C. Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on
Artificial Intelligence, IJCAI-20, pages 3882–3890. International Joint Conferences on Artificial
Intelligence Organization, 2020.
[7] L. M. de Moura, S. Kong, J. Avigad, F. van Doorn, and J. von Raumer. The lean theorem prover
(system description). In A. P. Felty and A. Middeldorp, editors, CADE, volume 9195 of Lecture
Notes in Computer Science, pages 378–388. Springer, 2015. ISBN 978-3-319-21400-9. URL
```
http://dblp.uni-trier.de/db/conf/cade/cade2015.html#MouraKADR15.
```
[8] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the
North American Chapter of the Association for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota,
June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL
```
https://www.aclweb.org/anthology/N19-1423.
```
[9] A. Ebrahimi, M. Mager, A. Oncevay, V. Chaudhary, L. Chiruzzo, A. Fan, J. Ortega, R. Ramos,
A. Rios, I. Vladimir, G. A. Giménez-Lugo, E. Mager, G. Neubig, A. Palmer, R. A. C. Solano,
N. T. Vu, and K. Kann. Americasnli: Evaluating zero-shot natural language understanding of
pretrained multilingual models in truly low-resource languages, 2021.
[[10] European Organization For Nuclear Research and OpenAIRE. Zenodo, 2013. URL https:](https://www.zenodo.org/)
```
//www.zenodo.org/.
```
[11] D. Ferreira and A. Freitas. Natural language premise selection: Finding supporting statements for mathematical text. In Proceedings of the 12th Language Resources and Evaluation
Conference, pages 2175–2182, Marseille, France, May 2020. European Language Resources
[Association. ISBN 979-10-95546-34-4. URL https://www.aclweb.org/anthology/2020.](https://www.aclweb.org/anthology/2020.lrec-1.266)
```
lrec-1.266.
```
-----
[12] D. Ferreira and A. Freitas. Premise selection in natural language mathematical texts. In
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,
pages 7365–7374, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/
v1/2020.acl-main.657. [URL https://www.aclweb.org/anthology/2020.acl-main.](https://www.aclweb.org/anthology/2020.acl-main.657)
```
657.
```
[13] T. Gowers, J. Barrow-Green, and I. Leader. The Princeton Companion to Mathematics. Princeton University Press, USA, illustrated edition edition, 2008. ISBN 0691118809.
[14] D. Hendrycks, X. Liu, E. Wallace, A. Dziedzic, R. Krishnan, and D. Song. Pretrained transformers improve out-of-distribution robustness, 2020.
[15] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the math dataset, 2021.
[16] S. Holland, A. Hosny, S. Newman, J. Joseph, and K. Chmielinski. The dataset nutrition label:
A framework to drive higher data quality standards. arXiv preprint arXiv:1805.03677, 2018.
[17] J. Hu, S. Ruder, A. Siddhant, G. Neubig, O. Firat, and M. Johnson. XTREME: A massively
multilingual multi-task benchmark for evaluating cross-lingual generalisation. In H. D. III
and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning,
volume 119 of Proceedings of Machine Learning Research, pages 4411–4421. PMLR, 13–18
[Jul 2020. URL http://proceedings.mlr.press/v119/hu20b.html.](http://proceedings.mlr.press/v119/hu20b.html)
[18] D. Huang, P. Dhariwal, D. Song, and I. Sutskever. Gamepad: A learning environment for
[theorem proving. In International Conference on Learning Representations, 2019. URL https:](https://openreview.net/forum?id=r1xwKoR9Y7)
```
//openreview.net/forum?id=r1xwKoR9Y7.
```
[19] D. Kang, A. Head, R. Sidhu, K. Lo, D. Weld, and M. A. Hearst. Document-Level Definition
Detection in Scholarly Documents: Existing Models, Error Analyses, and Future Directions. In
Proceedings of the First Workshop on Scholarly Document Processing, pages 196–206, Online,
Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.sdp-1.22. URL
```
https://www.aclweb.org/anthology/2020.sdp-1.22.
```
[20] V. Karpukhin, B. O˘guz, S. Min, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage
retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906, 2020.
[21] G. Lample and F. Charton. Deep learning for symbolic mathematics. In International
[Conference on Learning Representations, 2020. URL https://openreview.net/forum?](https://openreview.net/forum?id=S1eZYeHFDS)
```
id=S1eZYeHFDS.
```
[22] R. Le Bras, S. Swayamdipta, C. Bhagavatula, R. Zellers, M. E. Peters, A. Sabharwal, and
Y. Choi. Adversarial filters of dataset biases, 2020. ISSN 23318422.
[23] W. Li, L. Yu, Y. Wu, and L. C. Paulson. Isarstep: a benchmark for high-level mathematical
[reasoning. In International Conference on Learning Representations, 2021. URL https:](https://openreview.net/forum?id=Pzj6fzU6wkj)
```
//openreview.net/forum?id=Pzj6fzU6wkj.
```
[24] W. Ling, D. Yogatama, C. Dyer, and P. Blunsom. Program induction by rationale generation:
Learning to solve and explain algebraic word problems. In ACL 2017 - 55th Annual Meeting of
the Association for Computational Linguistics, Proceedings of the Conference (Long Papers),
volume 1, 2017. doi: 10.18653/v1/P17-1015.
[25] N. D. Megill and D. A. Wheeler. Metamath: A Computer Language
for Mathematical Proofs. Lulu Press, Morrisville, North Carolina, 2019.
```
http://us.metamath.org/downloads/metamath.pdf.
```
[26] R. Nogueira and K. Cho. Passage re-ranking with bert, 2020.
[27] C. Nogueira dos Santos, X. Ma, R. Nallapati, Z. Huang, and B. Xiang. Beyond [CLS]
through ranking by generation. In Proceedings of the 2020 Conference on Empirical Methods
in Natural Language Processing (EMNLP), pages 1722–1727, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.134. URL
```
https://www.aclweb.org/anthology/2020.emnlp-main.134.
```
-----
[28] F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. H. Miller, and S. Riedel. Language
models as knowledge bases? In EMNLP-IJCNLP 2019 - 2019 Conference on Empirical
Methods in Natural Language Processing and 9th International Joint Conference on Natural
Language Processing, Proceedings of the Conference, 2020. doi: 10.18653/v1/d19-1250.
[29] S. Polu and I. Sutskever. Generative language modeling for automated theorem proving, 2020.
[30] M. N. Rabe, D. Lee, K. Bansal, and C. Szegedy. Mathematical reasoning via self-supervised
skip-tree training. In International Conference on Learning Representations, 2021. URL
```
https://openreview.net/forum?id=YmqAnY0CMEy.
```
[31] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are
unsupervised multitask learners. 2019.
[32] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J.
Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal
[of Machine Learning Research, 21(140):1–67, 2020. URL http://jmlr.org/papers/v21/](http://jmlr.org/papers/v21/20-074.html)
```
20-074.html.
```
[33] S. Rothe, S. Narayan, and A. Severyn. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics, 8:264–280, 2020.
[doi: 10.1162/tacl_a_00313. URL https://www.aclweb.org/anthology/2020.tacl-1.](https://www.aclweb.org/anthology/2020.tacl-1.18)
```
18.
```
[34] S. Roy and D. Roth. Solving general arithmetic word problems. In Conference Proceedings
EMNLP 2015: Conference on Empirical Methods in Natural Language Processing, 2015. doi:
10.18653/v1/d15-1202.
[35] D. Saxton, E. Grefenstette, F. Hill, and P. Kohli. Analysing mathematical reasoning abilities
of neural models. In International Conference on Learning Representations, 2019. URL
```
https://openreview.net/forum?id=H1gR5iR5FX.
```
[36] C. Szegedy, editor. A Promising Path Towards Autoformalization and General Artificial
Intelligence, 2020.
[37] O. Tafjord, B. D. Mishra, and P. Clark. Proofwriter: Generating implications, proofs, and
abductive statements over natural language. ArXiv, abs/2012.13048, 2020.
[38] N. Thakur, N. Reimers, A. Rücklé, A. Srivastava, and I. Gurevych. Beir: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663,
[4 2021. URL https://arxiv.org/abs/2104.08663.](https://arxiv.org/abs/2104.08663)
[39] W. P. Thurston. On proof and progress in mathematics. arXiv:math/9404236, Mar. 1994. URL
```
http://arxiv.org/abs/math/9404236. arXiv: math/9404236.
```
[40] J. Urban. Mptp 0.2: Design, implementation, and initial experiments. J. Autom. Reason., 37
[(1–2):21–43, Aug. 2006. ISSN 0168-7433. doi: 10.1007/s10817-006-9032-3. URL https:](https://doi.org/10.1007/s10817-006-9032-3)
```
//doi.org/10.1007/s10817-006-9032-3.
```
[41] M. Wang and J. Deng. Learning to Prove Theorems by Learning to Generate Theorems. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors,
Advances in Neural Information Processing Systems, volume 33, pages 18146–18157. Cur[ran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/](https://proceedings.neurips.cc/paper/2020/file/d2a27e83d429f0dcae6b937cf440aeb1-Paper.pdf)
```
d2a27e83d429f0dcae6b937cf440aeb1-Paper.pdf.
```
[42] Q. Wang, C. Brown, C. Kaliszyk, and J. Urban. Exploration of neural machine translation
in autoformalization of mathematics in Mizar. In CPP 2020 - Proceedings of the 9th ACM
SIGPLAN International Conference on Certified Programs and Proofs, co-located with POPL
2020, 2020. doi: 10.1145/3372885.3373827.
[43] D. Whalen. Holophrasm: a neural automated theorem prover for higher-order logic, 2016.
-----
[44] A. Williams, N. Nangia, and S. R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL HLT 2018 - 2018 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies Proceedings of the Conference, 2018. ISBN 9781948087278. doi: 10.18653/v1/n18-1101.
[45] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf,
M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L.
Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Transformers: State-of-the-art
natural language processing. In Proceedings of the 2020 Conference on Empirical Methods
in Natural Language Processing: System Demonstrations, pages 38–45, Online, Oct. 2020.
[Association for Computational Linguistics. URL https://www.aclweb.org/anthology/](https://www.aclweb.org/anthology/2020.emnlp-demos.6)
```
2020.emnlp-demos.6.
```
[46] Y. Wu, M. Rabe, W. Li, J. Ba, R. Grosse, and C. Szegedy. Lime: Learning inductive bias for
primitives of mathematical reasoning, 2021.
[47] K. Yang and J. Deng. Learning to prove theorems via interacting with proof assistants. In 36th
International Conference on Machine Learning, ICML 2019, volume 2019-June, 2019.
**Checklist**
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s
contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] We discussed limitations throughout our experimental analysis.
(c) Did you discuss any potential negative societal impacts of your work? [N/A] Our
work pertains to use of natural language in mathematical theorem proving, and more
generally reasoning in artificial intelligence. Although a general reasoning agent may
present negative societal impacts, we do not foresee any immediate negative societal
impact from the domain, dataset, tasks, and study that we present here. Instead,
we foresee positive societal impacts through education and scientific discovery from
building systems that understand and create natural mathematical content.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to
them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A] We did not
include theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments (e.g. for benchmarks)...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We released
our code as a GitHub repo and our dataset on Zenodo.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they
were chosen)? [Yes] We specified data splits in section 4, and hyperparameters in
Appendix B.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We report results from a single run of each experiment
due to computational constraints.
(d) Did you include the total amount of compute and the type of resources used (e.g.,
type of GPUs, internal cluster, or cloud provider)? [Yes] We specified the computing
resources in Appendix B.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] In section 3, we cited
the authors of mathematical textbooks we used as data sources. ProofWiki and Stacks
are collaboratively created on the web.
-----
(b) Did you mention the license of the assets? [Yes] We noted the license of each data
source in section 3, and verified that all permit redistribution with modification for
non-commercial purposes.
(c) Did you include any new assets either in the supplemental material or as a URL?
[Yes] We released the NATURALPROOFS dataset on Zenodo, and provide additional
resources in a public Github repository.
(d) Did you discuss whether and how consent was obtained from people whose data you’re
using/curating? [N/A] The licenses of the data indicate that our usage is permitted.
(e) Did you discuss whether the data you are using/curating contains personally identifiable
information or offensive content? [N/A] The data we are using/curating contains no
PII or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if
applicable? [N/A] We did not use crowdsourcing or conduct research with human
subjects.
(b) Did you describe any potential participant risks, with links to Institutional Review
Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount
spent on participant compensation? [N/A]
-----
**Appendix**
**A** **Dataset Details**
Table 12 shows example theorems and proofs from more data sources. Table 13 shows an example
of the same theorem extracted from different sources. Table 14 gives more detailed statistics of the
dataset. Figure 1 shows the JSON format of an example theorem, whereas Figure 2 shows the data
schema we use to standardize data collected from different sources.
**Source** **Stacks**
**Theorem Lemma 9.7**
Let S be a scheme. Let f : X → _S be locally of finite type with X quasi-compact. Then_
size(X) ≤ size(S).
**Proof** We can find a finite affine open covering X = _i=1,...n_ _[U][i][ such that each][ U][i][ maps into an affine]_
open Si of S. Thus by Lemma 9.5 we reduce to the case where both S and X are affine. In this
case by Lemma 9.4 we see that it suffices to show
[S]
_A[x1, . . ., xn]_ max 0, _A_ _._
_|_ _| ≤_ _{ℵ_ _|_ _|}_
We omit the proof of this inequality.
**Source** **Textbook: Number Theory**
**Theorem Proposition 2.1.13**
If gcd(a, n) = 1, then the equation ax ≡ _b (mod n) has a solution, and that solution is unique_
modulo n.
**Proof** Let R be a complete set of residues modulo n, so there is a unique element of R that is congruent
to b modulo n.
By Lemma 2.1.12, aR is also a complete set of residues modulo n, so there is a unique element
_ax ∈_ _aR that is congruent to b modulo n, and we have ax ≡_ _b (mod n)._
Table 12: Example theorems and their proofs from the Stacks and Number Theory textbook sources.
**Source** **ProofWiki**
**Theorem Solution of Linear Congruence/Unique iff Coprime to Modulus**
If gcd{a, n} = 1, then ax ≡ _b (mod n) has a unique solution._
**Proof** From Solution of Linear Congruence: Existence:
the problem of finding all integers satisfying the linear congruence ax ≡ _b (mod n)_
is the same problem as:
the problem of finding all the x values in the linear Diophantine equation ax − _ny = b._
Let: gcd{a, n} = 1
Let x = x0, y = y0 be one solution to the linear Diophantine equation: ax − _ny = b_
From Solution of Linear Diophantine Equation, the general solution is:
_∀k ∈_ Z : x = x0 + nk, y = y0 + ak
But: ∀k ∈ Z : x0 + nk ≡ _x0 (mod n)_
Hence x ≡ _x0 (mod n) is the only solution of ax ≡_ _b (mod n)._
**Source** **Textbook: Number Theory**
**Theorem Units**
If gcd(a, n) = 1, then the equation ax ≡ _b (mod n) has a solution, and that solution is unique_
modulo n.
**Proof** Let R be a complete set of residues modulo n, so there is a unique element of R that is congruent
to b modulo n.
By Lemma 2.1.12, aR is also a complete set of residues modulo n, so there is a unique element
_ax ∈_ _aR that is congruent to b modulo n, and we have ax ≡_ _b (mod n)._
Table 13: Example of the same theorem extracted from two different sources.
**A.1** **Preprocessing Details**
**ProofWiki. The theorem, definition, and proof contents are contained in a WikiMedia section that**
is determined for each page type according to a hand-defined rule. Since the roughly 1,000 other
pages have varying page structures, we use their entire contents instead of a single section’s contents.
-----
|Source Type Attr|All mean 25%50% 75%|ProofWiki mean 25% 50% 75%|Stacks mean 25%50%75%|Textbook: RA mean 25%50%75%|Textbook: NT mean 25%50%75%|
|---|---|---|---|---|---|
|N Chars Theorem Tokens Lines Refs|32,579 - - - 320.0 146 275 433 46.7 21 39 63 5.9 2 4 8 1.8 0 0 3|19,734 - - - 277.9 93 238 393 38.2 14 32 53 3.6 1 3 5 2.8 0 3 4|12,479 - - - 388.6 215 331 491 60.6 35 52 76 9.7 4 8 12 0.2 0 0 0|298 - - - 278.2 152 225 355 33.6 19 29 41 8.4 4 7 11 0.0 0 0 0|68 - - - 158.4 98 140 179 23.7 14 21 30 4.5 2 4 5 0.0 0 0 0|
|N Chars Proof Tokens Lines Refs|32,012 - - - 1,123.8 388 770 1,449 181.5 57 121 236 24.9 8 16 32 5.6 2 3 7|19,234 - - - 1,170.0 444 810 1,470 199.3 68 134 254 25.8 9 18 33 7.4 2 5 9|12,479 - - - 1,053.1 280 705 1,422 155.5 36 101 211 23.4 6 15 31 3.0 1 2 4|235 - - - 1231.0 442 876 1,634 128.9 50 92 165 36.1 14 27 47 1.6 0 1 2|64 - - - 655.7 327 551 732 97.2 47 87 115 16.1 8 13 18 0.9 0 1 1|
|N Definition Chars Tokens Lines Refs|14,230 - - - 362.3 152 300 491 48.4 18 39 65 5.0 1 4 6 2.9 0 2 4|12,420 - - - 349.3 131 289 478 45.0 15 35 61 4.2 1 3 6 3.3 1 3 5|1,687 - - - 459.0 251 380 577 73.2 41 61 91 10.7 5 9 13 0.4 0 0 1|86 - - - 411.8 246 356 509 58.6 33 49 74 13.3 8 11 17 0.0 0 0 0|37 - - - 199.5 118 159 262 32.6 21 28 43 5.1 3 4 7 0.0 0 0 0|
|N Chars Other Tokens Lines Refs|1,974 - - - 1,399.8 712 1,1091,680 212.1 101 158 250 34.4 18 28 42 5.7 1 3 7|1,006 - - - 1,836.5 1,0181,4312,131 286.1 145 206 337 46.7 28 39 49 9.2 4 7 11|968 - - - 945.9 480 802 1,198 135.2 70 113 168 21.7 10 18 27 2.0 0 1 3|||
Table 14: NATURALPROOFS dataset statistics (detailed).
In addition to well-formed axiom and corollary statements, the other pages include misformatted
theorem or definition statements that occur as references elsewhere in the corpus.
**Stacks and textbooks. The raw data we obtain from Stacks and textbook sources are LATE[X source]**
code. For each data source, we look up with a pre-defined list of environment names, and parse the
contents enclosed in these environments into statements or proofs. Each proof is associated with the
environment that immediately precedes it. As a result, each theorem has at most one proof. Table 15
lists the mapping from LATE[X environment name to the data type in the N][ATURAL][P][ROOFS][ taxonomy.]
A few misc notes:
- In Stacks, statements do not have titles, but each has a label with semantic meaning (e.g.
```
sets-lemma-bound-finite-type for the example in Table 12), so we use it as a pseudo
```
title.
- In the Number Theory textbook, proofs are bounded by (\proof, \bbox) instead of
(\begin{proof}, \end{proof}).
**Source** Textbook: NT
**LATE[X env]** **Type**
theorem theorem
lemma theorem
corollary theorem
proposition theorem
definition definition
proof proof
|Source|Textbook: RA|
|---|---|
|LATEX env|Type|
|theorem lemma corollary definition proof|theorem theorem theorem definition proof|
Table 15: Mappings from LATE[X environment names to N][ATURAL][P][ROOFS][ data types for each]
|Source|Stacks|
|---|---|
|LATEX env|Type|
|theorem lemma proposition definition remark remarks proof|theorem theorem theorem definition other other proof|
data source. As an example, for Stacks, the mapping from lemma to theorem in row 2 means
that an environment enclosed by \begin{lemma} and \end{lemma} is considered a theorem in
NATURALPROOFS.
**A.2** **ProofWiki categories.**
For ProofWiki, we also provide category tags for each statement. ProofWiki contains statements
encompassing a broad coverage of mathematical topics (i.e. categories). In ProofWiki, each category
has zero or more sub-categories, and sub-categories have sub-sub-categories, and so on, forming
-----
```
"id": 5480,
"type": "theorem",
"label": "Category of Monoids is Category",
"categories": [ "Category of Monoids" ],
"toplevel_categories": [ "Algebra", "Set Theory", "Abstract Algebra", "Category Theory" ],
"recursive_categories": [
"Category Theory",
"Algebra",
"Abstract Algebra",
"Category of Monoids",
"Set Theory",
"Examples of Categories"
],
"title": "Category of Monoids is Category",
"contents": [
"Let $\\mathbf{Mon}$ be the [[Definition:Category of Monoids|category of monoids]].",
"Then $\\mathbf{Mon}$ is a [[Definition:Metacategory|metacategory]]."
],
"refs": [
"Definition:Category of Monoids",
"Definition:Metacategory"
],
"ref_ids": [ 22919, 21454 ],
"proofs": [
{
"contents": [
"Let us verify the axioms $(C1)$ up to $(C3)$ for a [[Definition:Metacategory|metacategory]].",
"We have [[Composite of Homomorphisms on Algebraic Structure is Homomorphism]], verifying $(C1)$.",
"We have [[Identity Mapping is Automorphism]] providing $\\operatorname{id}_S$ for every
[[Definition:Monoid|monoid]] $\\left({S, \\circ}\\right)$.",
"Now, $(C2)$ follows from [[Identity Mapping is Left Identity]] and
[[Identity Mapping is Right Identity]].",
"Finally, $(C3)$ follows from [[Composition of Mappings is Associative]].",
"Hence $\\mathbf{Mon}$ is a [[Definition:Metacategory|metacategory]].",
"{{qed}}",
"[[Category:Category of Monoids]]",
"sppgcr1pruam0jkf2euhyvt6y3jpnt0"
],
"refs": [
"Definition:Metacategory",
"Composite of Homomorphisms is Homomorphism/Algebraic Structure",
"Identity Mapping is Automorphism",
"Definition:Monoid",
"Identity Mapping is Left Identity",
"Identity Mapping is Right Identity",
"Composition of Mappings is Associative",
"Definition:Metacategory"
],
"ref_ids": [ 21454, 3852, 418, 19948, 217, 4387, 1494, 21454 ]
}
]
}
```
Figure 1: NATURALPROOFS JSON for the theorem and proof shown in Table 1. Using the notation in
section 4, an (x, y) example is formed where x is the concatenation of 'title' and 'contents',
and y is a set formed with 'ref_ids' of one of the proofs.
a category graph.[8] We recursively scrape the category pages starting from Category:Content
`Categories,[9]` and consider categories directly under Category:Proofs By Topic as top-level
categories. Figure 3 shows the high-level structure of the ProofWiki category graph.
In the ProofWiki raw data, each statement page is tagged with several categories (the 'categories'
field). In addition, we find the top-level categories (the 'toplevel_categories' field) as well as
exhaustive categories (the 'recursive_categories' field) for each theorem by running flood-fill
on the category graph. Figure 4 and Figure 5 show some statistics of the top-level categories.
8It is not strictly a tree or DAG, because there are several skip connections (e.g. Complex Analysis is both
a top-level category and a sub-category under Analysis) and circular dependencies (e.g. Metric Spaces and
```
Pseudometric Spaces are sub-category of each other)
```
[9https://proofwiki.org/wiki/Category:Content_Categories](https://proofwiki.org/wiki/Category:Content_Categories)
-----
```
Dataset: {
'dataset': {
'theorems': [Statement],
'definitions': [Statement],
'others': [Statement],
'retrieval_examples': [int], // deprecated
},
'splits': {
'train': {
'ref_ids': [int],
'examples': [(int, int)],
// pairs of theorem id and index of proof
},
'valid': {
'ref_ids': [int],
'examples': [(int, int)],
},
'test': {
'ref_ids': [int],
'examples': [(int, int)],
},
},
```
```
Statement: {
'id': int,
'type': string,
'label': string,
'categories': [string],
'toplevel_categories': [string], // ProofWiki only
'recursive_categories': [string], // ProofWiki only
'title': string,
'contents': [string],
'refs': [string],
'ref_ids': [int],
'proofs': [Proof], // for theorems only
}
Proof: {
'contents': [string],
'refs': [string],
'ref_ids': [int],
```
Figure 2: NATURALPROOFS dataset schema.
```
Content Categories
...
Definitions
...
Definitions by Topic
...
Definitions/Branch of Mathematics
Definitions/Abstract Algebra
Definitions/Algebra
Definitions/Analysis
...
Definitions/Topology
Proofs
...
Proofs by Topic
Abstract Algebra
Additive Functions
Examples of Additive Functions
Monotone Additive Function is Linear
Additive Groups
...
Zero Elements
Algebra
Analysis
...
Trigonometry
```
Figure 3: ProofWiki category graph. Nested structure represents sub-categories. Some nesting
omitted here for simplicity.
Figure 5: Number of top-level categories
per theorem, ProofWiki.
Figure 4: Frequency of top-level categories, ProofWiki.
-----
**B** **Implementation Details and Experimental Setup**
**Model input format.** We format each statement (x or r) as, [CLS] title [SEP] content [SEP], and
we truncate the statement when the sequence exceeds the model’s maximum length. Each sequence
is tokenized using the bert-base-cased tokenizer.
Figure 6: Model diagrams for the mathematical reference retrieval task. (a) The basic pairwise scoring
of a theorem (x) and a reference (r). We use two independently parameterized BERT models (fθ and
**gφ) to encode theorems and references, respectively. The theorem embedding fθ and gφ(x) and the**
reference embedding gφ(r) are taken to produce a pairwise score s(x, r). (b) The training schema
for the pairwise parameterization model. A small negative reference set y is chosen, and the model
_−_
is trained to maximize the probability that the ground-truth reference is selected. (c) The inference
schema for the pairwise parameterization model. The complete reference set R is ranked for each
theorem. (d) The schema for the joint parameterization model. The decoder takes the pre-computed
embeddings matrix from the pairwise inference step, and does a one-step generation to predict a
distribution over the reference set. References are ranked based on their probability masses in this
distribution. (e) The schema for the sequential generation and retrieval model. It resembles the joint
model, except that its decoder does a multi-step generation to rollout an ordered list of references.
**B.1** **Pairwise model**
Models are implemented with transformers [45] and pytorch-lightning[10]. The theorem encoder fθ[thm]1 [is parameterized using the][ bert-base-cased][ architecture and initialized with its param-]
eters. The reference encoder gθ[ref]2 [is also parameterized and initialized with (a separate instance of)]
```
bert-base-cased.
```
**Training.** Models are trained for 500,000 steps on one Quadro RTX 8000 GPU. Each batch contains
a maximum of 16,384 (2[14]) tokens. Validation is done every 5,000 steps. The model with the highest
mAP computed on the validation set is selected for final evaluation.
**Negatives.** We use in-batch negatives as in [20], which computes a score matrix S = TR[⊤] _∈_
R[B][×][B] on a batch of theorem embeddings T ∈ R[B][×][d] and reference embeddings R ∈ R[B][×][d], then
defines the loss as _i=1_ [softmax][(][S][[][i,][ :])][, which treats elements on the diagonal of][ S][ as positives and]
off-diagonal elements as negatives.
[P][B]
**Evaluation.** The full set of inputs x and the full set of references R are pre-encoded using their
respective trained models (i.e. two instances of BERT). Then the encodings for each possible x, r
pair are used to obtain scalar scores, inducing a ranked list of all |R| references for each input x.
**B.2** **Autoregressive**
We implement the autoregressive model as a sequence-to-sequence encoder-decoder model. Following
Rothe et al. [33], we parameterize the encoder and decoder using BERT models. This allows for
initializing with pairwise model components. Concretely, we implement the architecture using the
```
transformers EncoderDecoderModel class with bert-base-cased encoder and decoder.
```
[10https://github.com/PyTorchLightning/pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning)
-----
Let fθ1 (x) denote the encoder and hθ2 (r<t, fθ1 (x)) denote the decoder. The decoder has an embedding matrix R ∈ R[(][|R|][+2)][×][d], where each row represents a reference or special token ⟨bos⟩, ⟨eos⟩.
At each step t, given a theorem and sequence of tokens (⟨bos⟩, r1, . . ., rt−1), the decoder produces a
next-token distribution pθ(·|x, r<t) = softmax(Rht + b), where ht ∈ R[d] is the final hidden state
obtained from the decoder hθ2 (r<t, fθ1 (x)), and b ∈ R[(][|R|][+2)] is a bias vector.
The model is trained using cross-entropy loss with the ground-truth (x, y) pairs, where y =
(⟨bos⟩, r1, . . ., r|y|, ⟨eos⟩) is a reference sequence.
**Initialization.** Let fθ˜[thm]1 and gθ[ref]˜2 [be the theorem and reference encoder from a trained pairwise]
model (§B.1). The initialization settings listed in Table 9 are as follows. f [thm] means initializing the
encoder fθ1’s parameters as θ1 = θ[˜]1, and then updating them during training. R means initializing
and freezing the decoder’s embedding matrix as (omitting the ⟨bos⟩ and ⟨eos⟩ rows),
_gθ[ref]˜2_ [(][r][1][)]
**R =** _. . ._ _._
_gθ[ref]˜2_ [(][r][|R|][)]
**Training.** Models are trained for 50 epochs on one Quadro RTX 8000 GPU. Each batch contains a
maximum of 16,384 (2[14]) tokens. Validation is done every 5 epochs. The model with the highest
mAP computed on the validation set is selected for final evaluation.
**Generation evaluation.** Let ˆy ∼F(pθ, x) denote decoding a sequence ˆy = (r1, . . ., r|yˆ|[,][ ⟨][eos][⟩][)]
given model pθ and input x, using decoding algorithm F. For the reference generation task (§6.1),
we use beam search with beam size 20, based on a preliminary search over beam size {1,10,20,50}.
For retrieval evaluation only, we use greedy decoding (beam size 1) with a 1-gram repetition mask
since duplicates are not used during retrieval evaluation. For all decoding algorithms, we use the
```
transformers implementations.
```
**Retrieval evaluation.** A retrieval model produces a ranked list r[(1)], . . ., r[(][|R|][)] given an input x. We evaluate our autoregressive model as a retrieval model by producing a ranked list
**ryˆ[(1)] = (, . . .,r[(1)] r, . . .,[(][|]y[ˆ]|), . . ., r[|]y[ˆ]|) r after removing duplicates, and the remaining references are ordered according(|R|), where the first |yˆ| references come from the model’s generated sequence**
to the model’s first-step probabilities, pθ(r1 **x,** _bos_ ). In preliminary experiments we found the first
_|_ _⟨_ _⟩_
step’s probabilities to perform slightly better than using the last step’s probabilities.
**B.3** **Joint retrieval**
We implement the joint retrieval model as a one-step variant of the autoregressive retrieval model,
_pθ(·|x) = softmax(Rht + b),_ (7)
wheremented using the same encoder-decoder architecture as the autoregressive model (§B.2). This was a ht ∈ R[d] is the final hidden state obtained from hθ2 (⟨bos⟩, fθ1 (x)), and fθ1, hθ2 are impledesign decision made to closely compare the effect of autoregressive vs. joint parameterizations; an
alternative implementation could use an encoder-only model.
The model is trained using KL-divergence loss, using per-example reference-distributions
**y1** **r** **y**
_|_ _|_ _∈_
0 otherwise _[,]_
_p_ (r **x, y) =**
_∗_ _|_
where y = {r1, . . ., r|y|} is the ground-truth reference set.
We use the same training settings that were used with the autoregressive model (§B.2).
**B.4** **Retrieval Metrics**
For the mathematical reference retrieval task, we evaluate with standard retrieval metrics – mean
average prevision (mAP) and recall@k (R@k) – and a Full@k metric that measures ability to fully
recover all true references within the top-k results. We use k = 10 and k = 100 for our evaluation.
-----
**mAP.** Suppose for retrieval example (x, y) the model ranks all references as r[(1)], . . ., r[(][|R|][)]. The
average precision is computed as
_|R|_ _j_
I[r[(][j][)] **y]** _k=1_ [I][[][r][(][k][)][ ∈] **[y][]]** _._
_∈_ _j_
_j=1_ P
X
AP =
mAP is the mean of AP across all retrieval examples.
**R@k.** For each retrieval example, the recall@k is
_k_
_j=1_ [I][[][r][(][j][)][ ∈] **[y][]]**
R@k = _._
**y**
P _|_ _|_
We aggregate recall@k by micro-averaging across retrieval examples.
**Full@k.** For each retrieval example, the fully-recovering indicator is formally defined as
Full@k = I **r ∈{r[(][j][)]** _| 1 ≤_ _j ≤_ _k}_ _._
**rY∈y**
The overall Full@k metric is thus the mean of this fully-recovering indicator across all retrieval
examples.
**C** **Additional Results**
|Col1|ProofWiki Stacks mAP R@10 R@100 Full@10 Full@100 mAP R@10 R@100 Full@10 Full@100|Col3|
|---|---|---|
||mAP R@10 R@100 Full@10 Full@100 mAP R@10 R@100 Full@10 Full@100||
|Random Frequency TF-IDF|0.04 0.00 0.33 0.00 0.00 3.54 5.99 24.44 0.88 2.28 6.33 10.31 21.82 4.74 8.69|0.08 0.10 0.43 0.00 0.13 1.03 1.86 10.86 0.13 2.19 13.45 24.95 48.24 19.61 36.77|
|BERT-pair (P+S) +joint BERT-pair +joint|13.84 19.31 56.99 8.60 31.96 33.85 37.15 72.25 17.12 48.46 16.99 22.91 62.03 9.22 36.96 37.51 41.39 75.92 20.54 50.75|17.29 33.29 74.14 23.61 63.23 25.12 36.00 74.24 27.35 64.13 21.21 38.00 75.67 28.77 66.19 26.55 39.81 75.71 30.58 66.06|
Table 16: In-domain performance on the mathematical reference retrieval task (validation set). BERT
is finetuned on the part of dataset with the same source as the evaluation set, whereas BERT (P+S) is
finetuned on the combined dataset from ProofWiki and Stacks sources. Recall is micro-averaged.
|Col1|ProofWiki Stacks|Col3|
|---|---|---|
||All Theorems Definitions Others All Theorems Definitions Others||
|Frequency TF-IDF BERT|3.54 7.25 5.02 1.49 6.33 10.07 2.33 2.19 16.99 14.71 13.39 11.06|1.03 1.14 0.33 0.48 13.45 12.11 15.51 13.94 21.21 19.31 24.39 17.10|
Table 17: Retrieval performance (mAP) by reference type (validation set).
**Performance by reference type.** In Table 17 we break down the in-domain retrieval performance
by reference type. BERT shows a consistent improvement over TF-IDF on all types of references.
On ProofWiki, TF-IDF does much worse on definitions and other types than on theorems, whereas
BERT gives a more balanced performance on different types of references.
**D** **Supplementary Materials**
**Dataset documentation and intended uses.** We use the Dataset Nutrition Labels framework [16]
for dataset documentation. For the Statistics module, please refer to Table 3, Figure 4 and Figure 5.
-----
**Metadata**
**Filename** `proofwiki.json`
```
stacks.json
ra-trench.json
nt-stein.json
```
**Format** json
**Url** `https://doi.org/10.5281/zenodo.4632538`
**Domain** natural language processing
**Keywords** mathematics, theorems, proofs, language
**Type**
**Rows** 80,795
**Columns** 9
**Missing** none
**License** CC BY-SA 4.0 (proofwiki.json)
CC BY-NC-SA 4.0 (ra-trench.json)
GFDL 1.2 (stacks.json)
MIT License (ra-stein script)
**Released** June 2021
**Range** N/A
**Description** This dataset is a collection of mathematical
statements and proofs in natural language.
It collects data from multiple sources,
encompassing broad-coverage of all math
topics, deep-dive with a selected topic, and
low-resource scenarios. The dataset provides
theorems, proof(s) to each theorem when
applicable, and in-proof references to other
mathematical statements.
Table 18: Dataset Nutrition Labels for NATURALPROOFS.
**Provenance**
**Source**
ProofWiki
[(https://proofwiki.org/)](https://proofwiki.org/)
Stacks
[(https://stacks.math.columbia.edu/)](https://stacks.math.columbia.edu/)
Textbook: Real Analysis
[(https://digitalcommons.trinity.edu/mono/7/)](https://digitalcommons.trinity.edu/mono/7/)
Textbook: Number Theory
[(https://wstein.org/ent/)](https://wstein.org/ent/)
**Author**
Name Sean Welleck et al.
Email `[email protected]`
**Variables**
**id** A unique ID for this statement.
**type** The type of this statement;
either theorem, definition, or other.
**label** A string description of this statement.
**categories** A list of topics that this statement
pertains. For ProofWiki data only.
**title** A descriptive title of this statement.
**contents** The content of this statament or
proof, written in LATE[X.]
**refs** A list of labels of statements that this
statement or proof refers to in its content.
**ref_ids** IDs for items in refs.
**proofs** A list of proofs for this theorem.
May be empty.
The NATURALPROOFS dataset is intended to be used by researchers to build or evaluate machines on
predicting references in proofs, generating proofs to mathematical theorems, or other related tasks.
It should not be regarded as source of truth for defining particular mathematical concepts, proving
particular mathematical theorems, or the existence of such proof(s). In that case the user is advised to
consult authoritative mathematical resources.
**Dataset URL.** [The NATURALPROOFS dataset is hosted at https://doi.org/10.5281/zenodo.](https://doi.org/10.5281/zenodo.4632538)
```
4632538. Additional instructions and resources are provided in the Github repo https://github.
com/wellecks/naturalproofs.
```
**Author statement and license.** We bear all responsibility in case of violation of rights. We confirm
that the data sources we use are licensed to permit redistribution with modification for non-commercial
purposes.
**Hosting, licensing, and maintenance plan.** The dataset is hosted and maintained through Zenodo [10],[11] and the code is hosted by GitHub. The code is released under the MIT license. The
dataset is released under per-file licenses: CC BY-SA 4.0 (proofwiki.json), CC BY-NC-SA 4.0
(ra-trench.json), GFDL 1.2 (stacks.json), MIT License (ra-stein script). Zenodo meta
[11https://zenodo.org/](https://zenodo.org/)
-----
data is openly available under the CC0 license, and all open content is openly accessible through
open APIs.[12]
**Links to access the dataset and its metadata.** [The NATURALPROOFS dataset is hosted at https:](https://doi.org/10.5281/zenodo.4632538)
```
//doi.org/10.5281/zenodo.4632538. Additional instructions and resources are provided in the
```
[Github repo https://github.com/wellecks/naturalproofs.](https://github.com/wellecks/naturalproofs)
**Data format.** We store the dataset as JSON files. The dataset can be read using common JSON
libraries (e.g. the built-in json module in Python) and following the dataset schema in Figure 2.
**Long-term preservation.** We ensure this by uploading the dataset to the Zenodo dataset repository.
**Explicit license.** The code is released under the MIT license. The dataset is released under per-file
licenses: CC BY-SA 4.0 (proofwiki.json), CC BY-NC-SA 4.0 (ra-trench.json), GFDL 1.2
(stacks.json), MIT License (ra-stein script). Zenodo meta-data is openly available under
the CC0 license, and all open content is openly accessible through open APIs.
**Structured metadata.** We release the metadata along with the dataset on Zenodo.
**Persistent dereferenceable identifier.** `https://doi.org/10.5281/zenodo.4632538.`
**Reproducibility.** We ensure this by releasing our code on GitHub, which includes instructions to
reproduce the evaluation numbers in the paper.
[12https://about.zenodo.org/](https://about.zenodo.org/)
-----
| [
"Sean, Welleck",
"Jiacheng, Liu",
"Ronan Le, Bras",
"Hannaneh, Hajishirzi",
"Yejin, Choi",
"Kyunghyun, Cho"
] | 2021-01-01T00:00:00 | NeurIPS 2021 | true | 51 | 16 | null | https://arxiv.org/abs/2104.01112 | https://arxiv.org/abs/2104.01112 | https://www.semanticscholar.org/paper/4cc1fb128fa3abf6f90d567744767e8fd6315e1d |
UniGeo: Unifying Geometry Logical Reasoning via Reformulating Mathematical Expression | Geometry problem solving is a well-recognized testbed for evaluating the high-level multi-modal reasoning capability of deep models. In most existing works, two main geometry problems: calculation and proving, are usually treated as two specific tasks, hindering a deep model to unify its reasoning capability on multiple math tasks. However, in essence, these two tasks have similar problem representations and overlapped math knowledge which can improve the understanding and reasoning ability of a deep model on both two tasks. Therefore, we construct a large-scale Unified Geometry problem benchmark, UniGeo, which contains 4,998 calculation problems and 9,543 proving problems. Each proving problem is annotated with a multi-step proof with reasons and mathematical expressions. The proof can be easily reformulated as a proving sequence that shares the same formats with the annotated program sequence for calculation problems. Naturally, we also present a unified multi-task Geometric Transformer framework, Geoformer, to tackle calculation and proving problems simultaneously in the form of sequence generation, which finally shows the reasoning ability can be improved on both two tasks by unifying formulation. Furthermore, we propose a Mathematical Expression Pretraining (MEP) method that aims to predict the mathematical expressions in the problem solution, thus improving the Geoformer model. Experiments on the UniGeo demonstrate that our proposed Geoformer obtains state-of-the-art performance by outperforming task-specific model NGS with over 5.6% and 3.2% accuracies on calculation and proving problems, respectively. | A large-scale Unified Geometry problem benchmark, UniGeo, is constructed and a unified multi-task Geometric Transformer framework, Geoformer, is presented to tackle calculation and proving problems simultaneously in the form of sequence generation, which finally shows the reasoning ability can be improved on both two tasks by unifying formulation. | ## UniGeo: Unifying Geometry Logical Reasoning via Reformulating Mathematical Expression
**Jiaqi Chen[1][,][3], Tong Li[2], Jinghui Qin[4],** **Pan Lu[5],**
**Liang Lin[1], Chongyu Chen[6],** **Xiaodan Liang[1][,][2][∗]**
1Sun Yat-sen University, 2Shenzhen Campus of Sun Yat-sen University,
3The University of Hong Kong, 4Guangdong University of Technology,
5University of California, Los Angeles, 6DarkMatter AI Research
**Abstract**
Geometry problem solving is a wellrecognized testbed for evaluating the
high-level multi-modal reasoning capability
of deep models. In most existing works,
two main geometry problems: calculation
and proving, are usually treated as two
specific tasks, hindering a deep model to
unify its reasoning capability on multiple
math tasks. However, in essence, these two
tasks have similar problem representations
and overlapped math knowledge which can
improve the understanding and reasoning
ability of a deep model on both two tasks.
Therefore, we construct a large-scale Unified
**Geometry** problem benchmark, **UniGeo,**
which contains 4,998 calculation problems
and 9,543 proving problems. Each proving
problem is annotated with a multi-step proof
with reasons and mathematical expressions.
The proof can be easily reformulated as a
proving sequence that shares the same formats
with the annotated program sequence for calculation problems. Naturally, we also present
a unified multi-task Geometric Transformer
framework, Geoformer, to tackle calculation
and proving problems simultaneously in the
form of sequence generation, which finally
shows the reasoning ability can be improved
on both two tasks by unifying formulation.
Furthermore, we propose a Mathematical
Expression Pretraining (MEP) method that
aims to predict the mathematical expressions
in the problem solution, thus improving
the Geoformer model. Experiments on the
UniGeo demonstrate that our proposed Geoformer obtains state-of-the-art performance by
outperforming task-specific model NGS with
over 5.6% and 3.2% accuracies on calculation
and proving problems, respectively.[1]
_∗Corresponding author._
[1Data and code: https://github.com/chen-judge/UniGeo](https://github.com/chen-judge/UniGeo)
[https://github.com/chen-judge/UniGeo-MindSpore](https://github.com/chen-judge/UniGeo-MindSpore)
|Col1|Col2|Col3|
|---|---|---|
|Program Sequence|||
|Equal | N0 | Double | V0|||
|Proving Sequence Midpoint | TU | = | UV | SSS |ȏTUW | ؆ | ȏUVX | CPCTC | ğT | = |ğVUX|||
**Calculation** **Proving**
AB is the diameter of Given VX=UW and
circle O and point C TW=UX. U is the
is on the circle. midpoint of TV.
IfğOCA=25瀽 (N0), Complete the proof
thenğBOC=(). that ğT=ğVUX.
Unified Geometric Unified Geometric
Transformer Transformer
**Program Sequence**
Equal | N0 | Double | V0
**Solution**
**Proving Sequence**
Ĩ OA = OC
ħ ğOCA = ğOAC = 25瀽 Midpoint | TU | = | UV | SSS |ȏTUW |
ħ ğ BOC = 2 ğOAC = 50瀽 ؆ | ȏUVX | CPCTC | ğT | = |ğVUX
**Pretraining** **Unified Training**
Figure 1: The pipeline of pretraining and unified training of our proposed Geoformer. We pretrain the model
by predicting the mathematical expression extracted
from the solution of calculation problems. After that,
we consider the calculation and proving as downstream
tasks, and feed both types of data to Geoformer for unified training.
**1** **Introduction**
Achieving logical reasoning abilities is still challenging for neural networks, especially in some
mathematical reasoning tasks, such as math word
problems (MWP) (Zhang et al., 2020a,b; Qin et al.,
2020, 2021; Yang et al., 2022a,b; Mishra et al.,
2022a,b; Lu et al., 2022), mathematical theorem
proving (Li et al., 2020; Welleck et al., 2021),
etc. Recently, geometry problem solving (Sachan
et al., 2020; Chen et al., 2021; Lu et al., 2021a;
Zhang et al., 2022) has also attracted much attention in the NLP community, which requires
comprehensive reasoning capabilities in parsing
multimodal information and utilizing mathematical knowledge. Specifically, geometry problem
solving mainly contains two categories: calculation and proving. For calculation problems, both
recent GeoQA (Chen et al., 2021) and Inter-GPS
(Lu et al., 2021a) propose multiple-choice geom
-----
|Proof|Reasons|Expressions|
|---|---|---|
|Step1|Midpoint|TU = UV|
|Step2|SSS|ȏTUW ؆ ȏUVX|
|Step3|CPCTC|ğT = ğVUX|
**Calculation Problem** **Proving Problem**
AB is the diameter of circle O and Given VX=UW and TW=UX. U is the midpoint
point C is on the circle. If ğOCA of TV. Complete the proof that ğT=ğVUX.
= 25瀽 (N0), then ğBOC=().
**Proof** **Reasons** **Expressions**
A. 30r B. 40r C. 50r D. 60r
Step1 Midpoint TU = UV
**Answer: C.50r**
Step2 SSS ȏTUW ؆ ȏUVX
**Problem Solution:**
ĨOA = OC ħğOCA=ğOAC=25瀽 ħğBOC=2ğOAC=50瀽 Step3 CPCTC ğT = ğVUX
**Annotated Program Sequence:** **Proving Sequence:**
Equal | N0 | Double | V0 Midpoint | TU | = | UV | SSS |ȏTUW | ؆ | ȏUVX | CPCTC | ğT | = |ğVUX
Figure 2: We unify geometry logical reasoning in the proposed UniGeo dataset. Except for the calculation problem
provided in the GeoQA benchmark (Chen et al., 2021), we collect some proving problems (right) which contain
clear mathematical expressions and corresponding reasons that can be reformulated as proving sequence to unify
with the program sequence in the calculation problems.
etry problem benchmarks annotated with specific
symbolic programs or logic forms, which inspire
the neural networks’ potential ability to give an interpretable prediction. On the subject of geometry
proving, the existing work (Chou et al., 1996, 2000;
Gan et al., 2019) mainly relies on well-designed
proving systems and forward chaining search methods rather than neural-based models. Therefore,
there is still a huge gap between the works on these
two types of geometry problems, which are usually
considered as two areas.
Recently, much work (Raffel et al., 2020; Cho
et al., 2021; Lu et al., 2021b; Li et al., 2022;
Alayrac et al., 2022) has presented unified models
for various vision-language reasoning and generation tasks since the underlying visual/linguistic
understanding and reasoning abilities are largely
common. Inspired by the mainstream progress, we
suppose that a unified model for geometry problem
solving is also necessary. To begin, calculation and
proving tasks share some fundamental skills and
knowledge in geometric reasoning. Therefore, it
is desirable to explore the general understanding
and reasoning ability of the unified neural network
in the math domain. Besides, the unified model
doesn’t need auxiliary models to determine whether
the problem is a calculation or proving problem and
further select task-specific models, where cumulative errors can be introduced.
To this end, a framework addressing geometry
problems uniformly at both the data level and the
model level is valuable and expected. However, the
existing proving data is small-scale and annotated
in an incompatible format. To achieve our goal,
we collect lots of geometry proving data from an
online education website and build a single multitask benchmark, UniGeo, in which the provided
proof can be reformulated as a causal proving sequence so that the calculation and proving problems are unified in data format, as shown in Figure
2. Our UniGeo contains 4,998 calculation problems and 9,543 proving problems, which can verify
the high-level geometry logical reasoning capabilities in neural models.
Taking advantage of the unified formulation of
two geometry tasks, we further propose a novel unified geometric transformer (Geoformer) which is
able to handle geometry calculation and proof reasoning simultaneously and outperforms the taskspecialized models on both tasks. To learn an efficient Geoformer for unifying geometry logical reasoning, we also propose a mathematical reasoning
pre-training method named Mathematical Expression Pretraining (MEP), which is based on the problem solution, since the solution prediction can serve
as a universal task for all math problems. Specifically, we extract the mathematical expressions and
remove the redundant text description in the solution for MEP. These expressions are rich in implicit
math knowledge and can also be formulated as
the solution sequence target. We further fine-tune
the unified Geoformer to predict program/proving
sequences for calculation and proving problems
simultaneously. The pipeline of pretraining and
unified training is demonstrated in Figure 1. Experiments on the UniGeo benchmark show that
our proposed Geoformer achieves state-of-the-art
performance, getting 5.6% and 3.2% accuracy improvements on calculation and proving problems,
respectively, compared to the task-specific model
NGS (Chen et al., 2021).
Our contributions can be summarized as follows:
- We construct a unified geometry reasoning
-----
benchmark, named UniGeo, which contains
both calculation and proving problems.
- The proving problems in UniGeo are annotated with proof steps, in which the mathematical expressions can be reformulated as
proving sequences to match the program sequences in calculation problems.
- We propose a unified geometric transformer
framework, which is pretrained by predicting
mathematical expressions in the solution and
then fine-tuned on calculation and proving
problems simultaneously.
**2** **Related Work**
**Geometry Problem Solving** Several geometry
datasets (Seo et al., 2014, 2015; Sachan et al., 2017;
Alvin et al., 2017; Sachan and Xing, 2017) have
been constructed to facilitate the development of geometry problem solving. However, these previous
geometry datasets are either not publicly available
or built up with small sizes, which limit the development of relevant research. Besides, the latest
datasets (Lu et al., 2021a; Chen et al., 2021; Cao
and Xiao, 2022) only focus on the arithmetic calculation skill for geometry problem solving and fail
to take into account comprehensive geometry reasoning abilities like logical proving. For instance,
GeoQA (Chen et al., 2021) provides 4,998 calculation problems annotated with a symbolic program
sequence that corresponds to the problem solution.
Instead, we propose a new large-scale geometry
dataset, which covers a wide range of sub-tasks and
reasoning skills including calculation and proving.
To the best of our knowledge, we are the first work
to collect so many geometry proving problems for
training the neural network and provide detailed
sequence annotations corresponding to the proofs
which can be unified with calculation problems and
facilitate model learning.
**Geometry Theorem Proving** Theorem proving
in the geometry domain (Gelernter et al., 1960;
Chou et al., 1996, 2000; Ye et al., 2011; Yu et al.,
2019a; Gan et al., 2019) is a long-standing artificial intelligence task. For example, (Chou et al.,
1996) developed an initial automated geometry theorem proving system by designing a set of fullangle-based rules. Similarly, the expert system
JGEX (Ye et al., 2011) is proposed to prove fullangle geometry problems with a well-defined deductive database. More recently, some pioneering
efforts (Li et al., 2020; Tafjord et al., 2020; Welleck
et al., 2021) have been attempted to learn automatic
proofing systems from large-scale natural language
corpus or mathematical propositions. However,
how to achieve an automatic neural-based prover in
the geometry domain is still less studied. Therefore,
we propose a unified Geoformer that can generate
proof given a geometry diagram and statements
from scratch.
**3** **Unifying Geometry Reasoning**
In this work, we aim to unify geometry logical
reasoning for both calculation and proving problems. To this end, we first construct a geometry
proving dataset that requires multiple reasoning
abilities while solving the problems. Furthermore,
we reformulate the proof as sequence form which is
consistent with the program sequence in the calculation problems of the current GeoQA (Chen et al.,
2021).
**3.1** **UniGeo Benchmark**
**3.1.1** **Data Collection**
We discover an online education website, IXL[2],
which contains various types of geometry problems from high school textbooks. We utilize some
crawler scripts in Python to crawl a large amount
of proving data from this educational website automatically. After selecting the proving problem
carefully, we ask some well-trained workers to
check the quality of collected data, such as ensuring that each problem has complete diagram and
clear proof. All the calculation problems are inherited from the GeoQA dataset, containing 4,998
calculation problems with program sequence annotation which corresponds to problem solution and
can be predicted by generative models. We also
organize five well-trained college students to translate the problems in GeoQA dataset from Chinese
to English so that the language of the two types
of geometry data is consistent. Finally, we unify
these newly collected proving data with the GeoQA
dataset and construct our UniGeo benchmark to be
a testbed for unified geometry logical reasoning.
**3.1.2** **Data Analysis**
In this section, we mainly analyze the newly collected proving problems in the UniGeo benchmark.
We collect a total of 9,543 proving problems where
each data contains a colored geometry diagram, a
description text, and the proof with reasons and
[2https://www.ixl.com/math/geometry](https://www.ixl.com/math/geometry)
-----
|Midpoint|TU|=|UV|SSS|ȏTUW|؆|
|---|---|---|---|---|---|---|
|ȏUVX|CPCTC|ğT|=|ğVUX|
|---|---|---|---|---|
|R19|E5|=|E2|R30|E4|؆|
|---|---|---|---|---|---|---|
|E1|R6|E3|=|E0|
|---|---|---|---|---|
Proving Sequence
Midpoint TU = UV SSS ȏTUW ؆ ȏUVX CPCTC ğT = ğVUX
Elements: TU, UV, ͫTUW, ͫUVX, ğT, ğVUX
Randomly Shuffle
Elements: ğVUX(E0), ͫUVX(E1), UV(E2), ğT(E3), ͫTUW(E4), TU(E5)
Target Sequence
R19 E5 = E2 R30 E4 ؆ E1 R6 E3 = E0
Figure 3: Illustration of converting proving sequence to the target sequence which is considered as training target
for the proving problem.
**3.2** **Reformulate Expressions in the Proof**
Based on the collected proving data, we aim to reformulate the mathematical expressions as target
sequences to unify with the program sequence in
calculation problems, thus achieving a reasonable
unified geometry reasoning task. The reasons and
expressions in the collected proof are still textual,
thus we first translate them into a sequence format.
As shown in Figure 3, we organize the proof as
the proving sequence which contains three types of
tokens: reasons R (e.g., Midpoint, SSS, CPCTC),
operators OP (e.g., =, =), and geometry elements
_[∼]_
_E (e.g., TU, △TUW_, etc.). The reasons are inserted in front of the proof expressions (including
operators and elements) to form the proving sequence.
Moreover, we reformulate the proving sequence
as the final target sequence which can be predicted
by generative models. As mentioned in Section
3.1.2, we have summarized all the reasons into a
set, thus, each reason can be considered as a token
_Ri, where i is the index in the predefined reasons_
set. For the operators, we just reserve their original representation as the tokens. As for geometry
elements, however, we first fetch all the geometry
elements in the proving sequence, and construct the
list of geometry elements. To increase the diversity
of proving problems, we randomly shuffle these
elements to form a new elements list and convert
each element in the proving sequence to a token Ei,
where i corresponds to its position in the shuffled
geometry elements list. Benefiting from this, we
produce diverse target sequences. Even if similar
topics may exist in the training and testing sets, the
target sequence tends to be completely different,
avoiding that the model simply learns some typical
proof patterns. Note that the shuffled elements list
will also be added as text to the end of the problem
|All|Train Val Test|
|---|---|
|All 9,543|6,675 1,421 1,447|
|---|---|
|Parallel 443 Triangle 3,035 Quadrangle 1,704 Congruent 2,808 Similarity 1,553|311 61 71 2,134 452 449 1,170 260 274 1,974 414 420 1,086 234 233|
|---|---|
Table 1: Statistics for the proving problems in UniGeo.
There are five reasoning sub-tasks for geometry proving.
expressions. There are totally 37 categories of reasons, which are explanations for each step of the
proof, including the reasoning skills or the geometry theorems applied. And the expression is a concrete mathematical proof of each step, consisting
of operator and geometry element. For example, in
Figure 2, the reason Midpoint represents using the
definition of midpoint to get an expression TU =
_UV. SSS stands for "side, side, side" and means that_
we have two congruent triangles with all three sides
equal. CPCTC stands for "corresponding parts of
congruent triangles are congruent", therefore, we
get the final expression ∠T = ∠V UX.
As shown in Table 1, the proving data is divided
into train, validation, and test splits with a ratio
of 7:1.5:1.5. The dataset consists of five sub-tasks
which also represent five different topics of proving
problems: parallel, triangle, quadrangle, congruent,
and similarity. The distribution of these types of
proving problems can be viewed in Table 1. In the
experiments, we also provide the detailed performance of models on these sub-tasks.
-----
|Geoformer Decoder|Col2|
|---|---|
|B|is|the|diameter|of|
|---|---|---|---|---|
|B|is|the|<MASK>|of|
|---|---|---|---|---|
|O|A|=|O|
|---|---|---|---|
|N0|Double|V0|
|---|---|---|
Geoformer Encoder Geoformer Decoder
Text Embedding
Source text: A B is the diameter of …
A B is the <MASK> of … 濖
Solution Sequence: Ĩ O A = O C […]
Diagram Embedding
ResNet Program Sequence: Equal N0 Double V0
…
**Pretraining & Task Target**
Figure 4: An illustration of our proposed geometric transformer. We concatenate the embeddings of text and
diagram, which are fed into the transformer encoder-decoder to generate target sequence. For pretraining, the
targets are source text and the mathematical expressions extracted from the solution. And during the fine-tuning
stage, the training objective is program sequence or proving sequence. Note that we achieve unified fine-tuning
for calculation and proving problems simultaneously. For brevity, this illustration shows only one example of a
calculation problem.
text and fed into the model while training.
In summary, by reformulating the expressionbased proof as the target sequence, we define a
multimodal high-level reasoning task. This scheme
is adopted for the following reasons. First, it simplifies the task representation with a clear sequence
prediction. Second, although the target token is
reduced to a smaller space, it is still challenging for
models to understand the correspondence between
inputs (including problem diagram, text, and the
elements to be selected) and target token. Third,
by applying the expression reformulation, we unify
the proving problem with the calculation problem
to construct the UniGeo benchmark, which requires
multiple reasoning capabilities.
**4** **Unified Geometric Transformer**
**4.1** **Overview**
Although the NGS model (Chen et al., 2021) is designed for geometry calculation problems, its performance degrades about 5% after unified training
on the UniGeo (Table 2). Therefore, based on the
VL-T5 (Cho et al., 2021) model which is capable
of handling multiple multimodal tasks uniformly,
we propose a geometric transformer (Geoformer)
that can conduct comprehensive reasoning on both
calculation and proving problems. Figure 4 demonstrates the structure of the model, which consists
of a bidirectional multimodal encoder and an autoregressive text decoder. In order to promote the
performance of Geoformer, we first pretrain it us
ing the solution provided in calculation problems,
as well as applying masked LM task to enhance
the representation of the text encoder. At the finetuning stage, we train the model end-to-end with
calculation and proving problems simultaneously
to acquire stronger comprehensive reasoning ability
on geometry problem solving, rather than optimizing the model on two tasks separately.
**4.2** **Unified Pretraining**
**4.2.1** **Mathematical Expression Pretraining**
Different from the popular pretraining paradigm
that fine-tuning models with large-scale natural corpus, geometry problems are mainly described by
some mathematical languages and also solved by
mathematical knowledge, which is far from natural
language. Therefore, we propose Mathematical Expression Pretraining (MEP) to pretrain our unified
Geoformer with the mathematical corpus.
**Formulating Solution Sequence** The GeoQA
dataset provides a problem solution that explains
the idea and process of solving the problem in the
form of text description. Similar to formulating the
proof expression into the sequence, mathematical
expression in the solution can also be reformulated
into solution sequence for prediction. We remove
the redundant text description in the solution and
only utilize the mathematical expressions for pretraining. Specifically, we only reserve the geometry
elements entities, operation symbols, and numbers,
all of which involve abundant geometry mathemat
-----
**5** **Experiments**
**5.1** **Experimental Settings**
**Datasets** We conduct experiments on the UniGeo, containing GeoQA (Chen et al., 2021) dataset
and our newly collected proving problems. The
GeoQA dataset involves 4,998 calculation problems with corresponding annotated program sequence, which illustrates the calculating process
of the given problems and is considered as training and testing target. Besides, the GeoQA also
provides the problem solution which is not utilized
by previous works but is used for pretraining in
this work. We also construct a proving dataset with
9,543 problems, which are split to train, validate,
and test subsets in a ratio of 7.0: 1.5: 1.5. We further define five sub-tasks: Parallel, Triangle, Quadrangle, Congruent, and Similarity, to provide the
detailed performance of models. To unify geometry
reasoning, we also translate the Chinese calculation problems into English, so that the language
of calculation and proving problems are consistent. We also have considered the Inter-GPS dataset
(Lu et al., 2021a). However, it mainly adopts the
rule-based parser to translate the problem text into
formal language and doesn’t have the sequence
annotation which can be unified with the proving
sequence in our work. Therefore, the Inter-GPS
dataset is not compatible with unified training on
both calculation and proving problems, and we
mainly conduct experiments on GeoQA and newly
collected proving data.
**Evaluation Metrics** For the calculation problems, we follow the evaluation metrics in GeoQA,
i,e, the accuracy of solving all the problems and two
main subsets: angle and length problems. Following the IsarStep (Li et al., 2020), we adopt top-1
accuracy and top-10 accuracy for evaluating the
proofs. Top-K accuracy computes the percentage
where the ground-truth proof is among the top K
generated proving sequence. Since the models possibly generate alternative valid proving sequences
that are not completely consistent with the provided proof, we mainly use more reasonable top-10
accuracy for evaluating the proving problems.
**Implementation Details** We fill the diagram
with a white background to make it equal in length
and width, and resize it to 224×224, which is further split into 49 patches with a size of 32×32
each. Then we apply ResNet (He et al., 2016) to
extract patch features which are further mapped
ical knowledge and can be organized as solution
sequence. In addition, the number in solution is
replaced with a token NSi where i represents the
order that the number appears in the solution. Different from taking word-level tokenization for natural text description, we adopt char-level tokenization for geometry elements since some geometry
elements share common characters with specific geometric meanings. For example, both line OC and
∠OCA contain points O and C, but this relation
will disappear if OC and ∠OCA are considered as
basic tokens. In summary, the formulated solution
sequence has rich mathematical knowledge and can
be learned by models to enhance the understanding
of mathematical reasoning process.
**4.2.2** **Masked Language Modeling**
We also explore applying the Masked Language
Modeling (MLM) task for solving geometry problems. Following (Cho et al., 2021), we mask 30%
of input text tokens with <mask> tokens. Then the
model is trained to recover the masked text in a
unified text generation manner.
**4.3** **Fine-tuning Unified Geoformer**
We combine the above two pretraining tasks to
pretrain the unified geometric transformer. After
that, fine-tuning the unified Geoformer is straightforward since we have unified the outputs of all
downstream tasks into a sequence format. We load
the weights from the pretrained model and keep the
weights of the diagram encoder fixed, following
the NGS model (Chen et al., 2021). Then, we optimize the rest parts of the model end-to-end using a
mixture of calculation and proving data.
**4.4** **Unified Training Objective**
All of the pre-training and fine-tuning tasks in this
work are unified in the form of text generation, thus
sharing the same training objective. The generation
loss Lg is the negative log-likelihood (NLL) of the
target sequence:
_L_
_Lg(θ) = L[1]_ log Pt(yt|x, y1, ..., yt−1; θ),
_t=1_
X
where θ are the parameters of the entire Geoformer
architecture except for the diagram encoder, x is
the input of both problem text and the extracted
diagram feature, yt are the target tokens, Pt is the
distribution of the next token, L is the length of
sequence.
-----
**Calculation (%)** **Proving (%)**
**Methods** **Data** All Angle Length All Par. Tri. Qua. Con. Sim.
FiLM (Perez et al., 2017) Calculation 31.7 34.0 29.7 - - - - - -
RN (Santoro et al., 2017) Calculation 38.0 42.8 32.5 - - - - - -
MCAN (Yu et al., 2019b) Calculation 39.7 45.0 34.6 - - - - - -
BERT (Devlin et al., 2018) Calculation 54.7 65.8 42.1 - - - - - -
NGS (Chen et al., 2021) Calculation 56.9 69.8 39.2 - - - - - -
Geoformer (Ours) Calculation 60.3 71.5 **49.1** - - - - - -
BERT Proving - - - 48.0 15.5 48.1 28.5 49.5 77.6
NGS Proving - - - 53.2 13.2 56.6 29.8 57.1 79.4
Geoformer (Ours) Proving - - - 55.7 19.4 68.3 20.4 60.6 72.5
BERT UniGeo 52.0 63.1 39.2 48.1 15.4 48.0 31.7 49.5 75.1
NGS UniGeo 51.9 63.6 38.8 47.4 11.2 46.9 31.3 48.3 77.6
Geoformer (Ours) UniGeo 60.9 72.2 48.8 55.8 18.1 68.8 20.4 60.3 73.3
Geoformer + Pretraining (Ours) UniGeo **62.5** **75.5** 48.8 **56.4 19.4 69.4 20.4 60.3 75.0**
Table 2: The accuracy comparison of various methods and baseline models under different data settings. The
newly collected proving problems provide five sub-tasks (as defined in Table 1) for evaluation.
into flattened 1D sequences to construct final diagram embeddings. Our Geoformer is implemented
by PyTorch (Paszke et al., 2017). We use the
Adam (Loshchilov and Hutter, 2017) optimizer
with β1 = 0.9 and β2 = 0.999. The learning
rate is 2e[−][4], the batch size is set to 10, and models
are trained within 100 epochs. We train our unified
Geoformer on randomly shuffled calculation problems and proving problems simultaneously. For
pretraining, we maintain the settings as mentioned
above, but replace the training label with the solution sequence and set the learning rate to 5e[−][4].
**5.2** **Experimental Results**
Table 2 demonstrates the results of our methods
and baselines on the calculation and proving problems. We divided the experiments into three parts
according to the data used by the model, i.e., the
calculation problems from the GeoQA dataset, our
newly collected proving problems, and the unified
benchmark of both calculation and proving problems. A detailed analysis is shown below.
**Baselines** FiLM (Perez et al., 2017), RN (Santoro et al., 2017), MCAN (Yu et al., 2019b) are
three multimodal models with strong cross-modal
reasoning abilities that well address the compositional language and elementary visual reasoning
benchmark, CLEVR (Johnson et al., 2017). They
can predict the possibly correct option in calculation problems by using visual question answering.
However, this approach does not work well in ge
|Methods Data|Top-1 Top-10|
|---|---|
|NGS UniGeo NGS + Pretraining UniGeo Geoformer UniGeo Geoformer + Pretraining UniGeo|17.4 47.4 19.2 49.6 50.2 55.8 51.3 56.4|
|---|---|
Table 3: Performance comparison on proving problems
with different evaluation metrics.
ometry problem solving since the MCAN achieves
the answer accuracy of only 39.7%. The “BERT"
model here refers to "BERT2Prog + Diagram" in
GeoQA that BERT and ResNet are utilized to encode text and diagram data separately. Finally, the
features of these two modalities are fused to guide
the generation of target sequence. The NGS model
is specially designed for solving the calculation
problems in the GeoQA dataset. We also re-run the
experiment on the English version of the GeoQA
dataset using the NGS model and obtain a performance of 56.9%.
**The performance comparison on proving prob-**
**lems** We conduct some experiments on the collected proving problems. In Table 2, Par, Tri,
Qua, Con, Sim represent five sub-tasks respectively. When using proving data only, the NGS
model achieves a total performance of 53.2%. The
proposed Geoformer obtains a top-10 accuracy of
55.7% on proving problems. There is a huge difference in the performance of sub-tasks due to the difficulty of various geometric reasoning skills varies
-----
In ȏABC, D is a point on QRĠST and QTĠRS . Complete
AC, if ğDBC=ğA, BC=3 the proof that RS؆QT
(N0), AC=6 (N1), then the
**Ground Truth:**
length of CD is ().
Alternate Interior Angles Theorem | ğRQS | ؆ | ğQST |
**Solution:** Alternate Interior Angles Theorem | ğQSR | ؆ | ğSQT |
ĨğDBC=ğA,ğC=ğC,ħȏBCDīȏACB, Reflexive Property of Congruence | QS | ؆ | QS |
ħCD/BC=BC/AC ħCD/3=3/6 ħCD=1.5 **ASA | ȑQRS | ؆ | ȑSTQ | CPCTC | RS | ؆ | QT**
**Unified Geoformer & Ground Truth:**
**Unified Geoformer:**
Proportion | N0 | N1 | N0
Alternate Interior Angles Theorem | ğRQS | ؆ | ğSQT |
**Specialized Geoformer:** Alternate Interior Angles Theorem | ğQSR | ؆ | RS |
Add | N0 | N1 | Proportion | N0 | V0 | N1 Reflexive Property of Congruence | QS | ؆ | QS |
Figure 5: The left calculation case shows a situation where a unified Geoformer works better than a task-specialized
Geoformer since the related similar triangle knowledge also exists in proving problems. Through multi-task learning, the model is enhanced on the understanding of similar triangle problems. In the failure proving case on the
right, the Geoformer model outputs some incorrect proof (red) and misses part of the proof (the bold in ground
truth).
greatly. The accuracy rate of proving parallel related problems is only 19.4%, while proving similarity is relatively simple, which can reach an accuracy of 72.5%. Table 3 also provides the results of
top-1 accuracy metric. When applying pretraining
on the unified NGS and Geoformer models, we can
get a 19.2% and 51.3% top-1 accuracy respectively.
**The performance of unified training** Our motivation is to unify the geometry logical reasoning
and we have already unified the data format. Thus,
apart from training on calculation and proving problems separately, we design the unified Geoformer,
which is trained with the mixture of both types of
problems. It can be observed that the NGS model
suffers a severe performance decline when trained
on both tasks simultaneously, in which the accuracy
of calculation and proving problems decrease 5.0%
and 5.8% respectively. However, our proposed Geoformer avoids this phenomenon and obtains an
impressive performance on two tasks simultaneously. Specifically, the unified Geoformer achieves
60.9% and 55.8% accuracy on calculation and proving problems, outperforming two task-specific Geoformer models on two geometry tasks. The reasoning ability can be enhanced on both two tasks
with the unified formulation.
**The effectiveness of pretraining** To further promote the performance of unified Geoformer, We
extract a large number of mathematical expressions
from the solution of calculation problem as the
pre-training target. These expressions are rich in
implicit math knowledge and can also be formulated as solution sequence. Applying the pretrain
|Methods|Calculation Proving|
|---|---|
|Geoformer Geoformer + MLM Geoformer + MEP Geoformer + MLM + MEP|60.9 55.8 61.3 56.2 61.8 56.1 62.5 56.4|
|---|---|
Table 4: Ablation study for different pretraining methods. MLM and MEP represent masked language modeling and mathematical expression pretraining.
ing method, the Geoformer+Pretraining model is
further improved to 62.5% and 56.4% accuracy
on calculation and proving problems respectively,
obtaining 5.6% and 3.2% accuracy improvement
compared to task-specialized NGS models, which
achieves state-of-the-art performance on UniGeo
benchmark.
**5.3** **Ablation Study**
We explore the effectiveness of different pretraining settings for the ablation study. In Table 4, we
experiment unified Geoformer with two pretraining ways: masked language modeling (MLM) and
mathematical expression pretraining (MEP). Using only MLM, the performance of the Geoformer
model does not change significantly. When MEP is
used alone, the performance of the model on calculation problems is improved obviously. When using
both pre-training methods, the model makes an improvement significantly on both types of problems,
obtaining the highest 62.5% and 56.4% on calculation and proving problems, respectively. Thus, we
apply this setting to the training of Geoformer.
-----
**5.4** **Case Study**
As shown in Figure 5, we conduct a case study
for discussing the ability and limitation of our proposed unified Geoformer. For the left case, the
unified Geoformer works well on the calculation
problem, benefiting from the multi-task learning
framework. As we can see in the problem solution,
it first utilizes the knowledge of similar triangles
to get △BCD ∼△ACB, and then lists a proportional relation to get CD = 3/6 ∗ 3 = 1.5. Compared to task-specialized Geoformer that predicts
a wrong program sequence, the prediction made
by our unified Geoformer is completely consistent
with ground truth. This is probably because the
unified model acquires a stronger understanding
of similar triangle knowledge after simultaneously
training on proving problems (containing many
problems proving similar triangles). Therefore,
multi-task learning is beneficial in geometry reasoning. We also select a typical failure case. The
unified Geoformer chooses two wrong geometry
elements for the proof steps and also fails to give
the last two critical proof steps. The geometry problems are still challenging for current neural-based
approaches.
**6** **Conclusion**
Recently, geometry problem solving has attracted
much attention in AI research while previous works
mainly focus on geometry calculation problems.
It is significant to explore the unified reasoning
abilities of neural models on multiple math tasks.
Therefore, we integrate geometry calculation and
proving problems, and construct a unified geometry benchmark, UniGeo, containing 9,543 proving problems with proof reasons and mathematical
expressions that can be reformulated as proving
sequence to unify with the program sequence of
calculation problems. We also propose a unified
Geoformer that can address calculation and proving
problems simultaneously. Besides, a mathematical
expression pretraining way is proposed to promote
the performance of the unified Geoformer. Experiments show that our Geoformer can well address
two challenging geometry tasks with a single set
of model weights, outperforming task-specialized
models and obtaining state-of-the-art performance.
**Limitations**
To explore the logical reasoning ability of neural
network models in the geometry domain, we pro
pose a unified method for two major and similar
tasks (calculation and proving) in geometry problems. Although we have achieved state-of-the-art
performance on these two tasks simultaneously, the
unified Geoformer still has some limitations. First,
the answer accuracy of the neural-network-based
approaches is still far from the real-world application when addressing such complex tasks which
require high-level reasoning ability. Second, the
data construction of such mathematical logical reasoning tasks requires a heavy manual collection
and annotation process, which also limits the type
and difficulty of geometry problems, thereby leading to the failure of neural network models to learn
and process more sophisticated cases.
**Acknowledgements**
This work was supported in part by National
Key R&D Program of China under Grant
No.2020AAA0109700, National Natural Science Foundation of China (NSFC) under
Grant No.U19A2073, Grant No.61976233
and Grant No.62206314, Guangdong Province
Basic and Applied Basic Research (Regional
Joint Fund-Key) Grant No.2019B1515120039,
Guangdong Outstanding Youth Fund (Grant
No.2021B1515020061), GuangDong Basic and
Applied Basic Research Foundation under Grant
No.2022A1515011835, China Postdoctoral Science Foundation under Grant No.2021M703687,
Shenzhen Fundamental Research Program
(Project No.RCYX20200714114642083) and
CAAI-Huawei MindSpore Open Fund. And the
Open Project of Anhui Provincial Key Laboratory
of Multimodal Cognitive Computation, Anhui University, No.MMC202107. We thank MindSpore
for the partial support of this work, which is a new
deep learning computing framwork[3].
**References**
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc,
Antoine Miech, Iain Barr, Yana Hasson, Karel
Lenc, Arthur Mensch, Katie Millican, Malcolm
Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. arXiv preprint
_arXiv:2204.14198._
Chris Alvin, Sumit Gulwani, Rupak Majumdar, and
Supratik Mukhopadhyay. 2017. Synthesis of solutions for shaded area geometry problems. In The
_Thirtieth International Flairs Conference._
[3https://www.mindspore.cn/](https://www.mindspore.cn/)
-----
Jie Cao and Jing Xiao. 2022. An augmented benchmark dataset for geometric question answering
through dual parallel text encoding. In Proceedings
_of the 29th International Conference on Computa-_
_tional Linguistics (COLING), pages 1511–1520._
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan
Liang, Lingbo Liu, Eric P Xing, and Liang Lin.
2021. Geoqa: A geometric question answering
benchmark towards multimodal numerical reasoning. In Findings of the Association for Computa_tional Linguistics (ACL-IJCNLP)._
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal.
2021. Unifying vision-and-language tasks via text
generation. In International Conference on Machine
_Learning, pages 1931–1942. PMLR._
Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong
Zhang. 1996. Automated generation of readable
proofs with geometric invariants. Journal of Auto_mated Reasoning, 17(3):325–347._
Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong
Zhang. 2000. A deductive database approach to automated geometry theorem proving and discovering.
_Journal of Automated Reasoning, 25(3):219–246._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Wenbin Gan, Xinguo Yu, Ting Zhang, and Mingshu
Wang. 2019. Automatically proving plane geometry
theorems stated by text and diagram. International
_Journal of Pattern Recognition and Artificial Intelli-_
_gence, 33(07):1940003._
Herbert Gelernter, James R Hansen, and Donald W
Loveland. 1960. Empirical explorations of the geometry theorem machine. In Western Joint IRE_AIEE-ACM Computer Conference, pages 143–149._
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on
_computer vision and pattern recognition, pages 770–_
778.
Justin Johnson, Bharath Hariharan, Laurens Van
Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and
Ross Girshick. 2017. Clevr: A diagnostic dataset for
compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on
_Computer Vision and Pattern Recognition (CVPR),_
pages 2901–2910.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven
Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding
and generation. In ICML.
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C Paulson. 2020. Isarstep: a benchmark for high-level
mathematical reasoning. In The International Con_ference on Learning Representations (ICLR)._
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. _arXiv preprint_
_arXiv:1711.05101._
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
Huang, Xiaodan Liang, and Song-Chun Zhu. 2021a.
Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. In
_The Joint Conference of the 59th Annual Meeting of_
_the Association for Computational Linguistics and_
_the 11th International Joint Conference on Natural_
_Language Processing (ACL-IJCNLP)._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian
Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter
Clark, and Ashwin Kalyan. 2022. Dynamic
prompt learning via policy gradient for semistructured mathematical reasoning. arXiv preprint
_arXiv:2209.14610._
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao,
Wei Zhang, Zhou Yu, Xiaodan Liang, and SongChun Zhu. 2021b. Iconqa: A new benchmark for
abstract diagram understanding and visual language
reasoning. In The 35th Conference on Neural Infor_mation Processing Systems Track on Datasets and_
_Benchmarks (NeurIPS)._
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard
Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark,
and Ashwin Kalyan. 2022a. Lila: A unified benchmark for mathematical reasoning. In The 2022 Con_ference on Empirical Methods in Natural Language_
_Processing (EMNLP)._
Swaroop Mishra, Arindam Mitra, Neeraj Varshney,
Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and
Ashwin Kalyan. 2022b. Numglue: A suite of fundamental yet challenging mathematical reasoning
tasks. In Proceedings of the 60th Annual Meet_ing of the Association for Computational Linguistics_
_(ACL), pages 3505–3523._
Adam Paszke, Sam Gross, Soumith Chintala, Gregory
Chanan, Edward Yang, Zachary DeVito, Zeming
Lin, Alban Desmaison, Luca Antiga, and Adam
Lerer. 2017. Automatic differentiation in pytorch.
_._
Ethan Perez, Florian Strub, Harm De Vries, Vincent
Dumoulin, and Aaron Courville. 2017. Film: Visual
reasoning with a general conditioning layer. arXiv
_preprint arXiv:1709.07871._
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng
Tang, and Liang Lin. 2021. Neural-symbolic solver
for math word problems with auxiliary tasks. In Pro_ceedings of the 59th Annual Meeting of the Associa-_
_tion for Computational Linguistics and the 11th In-_
_ternational Joint Conference on Natural Language_
_Processing (ACL-IJCNLP)._
Jinghui Qin, Lihui Lin, Xiaodan Liang, Rumin Zhang,
and Liang Lin. 2020. Semantically-aligned universal tree-structured solver for math word problems.
-----
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 3780–3789.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, Peter J Liu, et al. 2020. Exploring the limits
of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67.
Mrinmaya Sachan, Avinava Dubey, Eduard H Hovy,
Tom M Mitchell, Dan Roth, and Eric P Xing. 2020.
Discourse in multimedia: A case study in extracting geometry knowledge from textbooks. Computa_tional Linguistics, 45(4):627–665._
Mrinmaya Sachan, Kumar Dubey, and Eric Xing. 2017.
From textbooks to knowledge: A case study in
harvesting axiomatic knowledge from textbooks to
solve geometry problems. In Proceedings of Em_pirical Methods in Natural Language Processing_
_(EMNLP), pages 773–784._
Mrinmaya Sachan and Eric Xing. 2017. Learning
to solve geometry problems from natural language
demonstrations in textbooks. In Proceedings of the
_6th Joint Conference on Lexical and Computational_
_Semantics, pages 251–261._
Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia,
and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances
_in neural information processing systems (NeurIPS),_
pages 4967–4976.
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, and
Oren Etzioni. 2014. Diagram understanding in geometry questions. In Proceedings of the AAAI Con_ference on Artificial Intelligence (AAAI)._
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of Empirical Methods in Nat_ural Language Processing (EMNLP), pages 1466–_
1476.
Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter
Clark. 2020. Proofwriter: Generating implications,
proofs, and abductive statements over natural language. In Findings of the Association for Compu_tational Linguistics (ACL-IJCNLP)._
Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh
Hajishirzi, Yejin Choi, and Kyunghyun Cho. 2021.
Naturalproofs: Mathematical theorem proving in
natural language. In The 35th Conference on Neu_ral Information Processing Systems (NeurIPS)._
ZhiCheng Yang, Jinghui Qin, Jiaqi Chen, and Xiaodan
Liang. 2022a. Unbiased math word problems benchmark for mitigating solving bias. In Findings of the
_Association for Computational Linguistics: NAACL_
_2022, pages 1401–1408. Association for Computa-_
tional Linguistics.
Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin,
and Xiaodan Liang. 2022b. Logicsolver: Towards interpretable math word problem solving with
logical prompt-enhanced learning. _arXiv preprint_
_arXiv:2205.08232._
Zheng Ye, Shang-Ching Chou, and Xiao-Shan Gao.
2011. An introduction to java geometry expert. In
_International Workshop on Automated Deduction in_
_Geometry, pages 189–195. Springer._
Xinguo Yu, Mingshu Wang, Wenbin Gan, Bin He,
and Nan Ye. 2019a. A framework for solving explicit arithmetic word problems and proving
plane geometry theorems. _International Journal_
_of Pattern Recognition and Artificial Intelligence,_
33(07):1940005.
Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and
Qi Tian. 2019b. Deep modular co-attention networks for visual question answering. In Proceed_ings of the IEEE conference on computer vision and_
_pattern recognition (CVPR), pages 6281–6290._
Jipeng Zhang, Ka Wei LEE, Ee-Peng Lim, Wei Qin,
Lei Wang, Jie Shao, Qianru Sun, et al. 2020a.
Teacher-student networks with multiple decoders for
solving math word problem. In Proceedings of the
_Twenty-Ninth International Joint Conference on Ar-_
_tificial Intelligence (IJCAI)._
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graphto-tree learning for solving math word problems. In
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics (ACL), pages_
3928–3937.
Ming-Liang Zhang, Fei Yin, Yi-Han Hao, and ChengLin Liu. 2022. Plane geometry diagram parsing. In
_Proceedings of the Thirty-First International Joint_
_Conference on Artificial Intelligence (IJCAI), pages_
1636–1643.
-----
| [
"Pan, Lu",
"Jiaqi, Chen",
"Tong, Li",
"Jinghui, Qin",
"Liang, Lin",
"Chongyu, Chen",
"Xiaodan, Liang"
] | 2022-12-05T00:00:00 | EMNLP 2022 Main | true | 51 | 7 | null | http://arxiv.org/abs/2212.02746 | https://arxiv.org/abs/2212.02746 | https://www.semanticscholar.org/paper/72fce949725b20428e5f56247fef5c6bd1ce6154 |
Learning Knowledge Base Inference with Neural Theorem Provers | N/A | The NTP presented here is realized via a differentiable version of the backward chaining algorithm that operates on substitution representations and is able to learn complex logical dependencies from training facts of small knowledge bases. | # Learning Knowledge Base Inference with Neural Theorem Provers
**Tim Rockt¨aschel and Sebastian Riedel**
University College London
London, UK
_{t.rocktaschel,s.riedel}@cs.ucl.ac.uk_
**Abstract**
In this paper we present a proof-of-concept
implementation of Neural Theorem Provers
(NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable
with respect to representations of symbols in
a knowledge base and can thus learn representations of predicates, constants, as well as
rules of predefined structure. Furthermore,
they still allow us to incorporate domainknowledge provided as rules. The NTP presented here is realized via a differentiable version of the backward chaining algorithm. It
operates on substitution representations and
is able to learn complex logical dependencies
from training facts of small knowledge bases.
**1** **Introduction**
Current state-of-the-art methods for automated
knowledge base (KB) construction learn distributed
representations of fact triples (Nickel et al., 2012;
Riedel et al., 2013; Socher et al., 2013; Chang et
al., 2014; Neelakantan et al., 2015; Toutanova et al.,
2015). An open question is how to enable first-order
reasoning with commonsense knowledge (Nickel et
al., 2015). We believe a promising direction towards this goal is the integration of deep neural networks with the capabilities of theorem provers. Neural networks can learn to generalize well when ob
45
serving many input-output examples, but lack interpretability and straightforward ways of incorporating domain-specific knowledge. Theorem provers
on the other hand provide effective ways to reason
with logical knowledge. However, by operating on
discrete symbols they do not make use of similarities between predicates or constants in training data
(e.g., LECTURERAT ∼ PROFESSORAT, ORANGE ∼
LEMON, etc).
Recent neural network architectures such as Neural Turing Machines (Graves et al., 2014, NTMs),
Memory Networks (Weston et al., 2015b), Neural
Stacks/Queues (Grefenstette et al., 2015; Joulin and
Mikolov, 2015), Neural Programmer (Neelakantan
et al., 2016), Neural Programmer-Interpreters (Reed
and de Freitas, 2016) and Hierarchical Attentive
Memory (Andrychowicz and Kurach, 2016) replace
discrete functions and data structures by end-to-end
differentiable counterparts. As such, they can learn
complex behaviour from raw input-output examples
via gradient-based optimization.
NTMs and their relatives are capable of learning
programs and could in principle learn to emulate a
theorem prover. However, they might not be the
most efficient neural architecture for learning firstorder reasoning from input-output examples. Akin
to NTMs, which are end-to-end differentiable counterparts of Turing machines, we investigate Neural
_Theorem Provers (NTPs): end-to-end differentiable_
versions of automated theorem provers. A distinguishing property of NTPs is that they are differentiable with respect to symbol representations in a
knowledge base. This enables us to learn representations of symbols in ground atoms (predicates and
-----
constants) and parameters of first-order rules of predefined structure using backpropagation. Furthermore, NTPs can seamlessly reason with provided
domain-specific rules. As NTPs operate on distributed representations of symbols, a single handcrafted rule can be leveraged for many proofs of
queries with similar symbol representations. Finally,
NTPs allow for a high degree of interpretability by
providing such proofs.
Our contributions are threefold: (i) we present
the construction of an NTP based on differentiable
backward chaining and unification, (ii) we show that
when provided with rules this NTP can perform firstorder inference in vector space like a discrete theorem prover would do on symbolic representations,
and (iii) we demonstrate that NTPs can learn representations of symbols and first-order rules of predefined structure.
**2** **Related Work**
Combining neural and symbolic approaches for relational learning and reasoning has let to many
promising neural network architectures over the past
decades (Garcez et al., 2012). Early proposals for
neural-symbolic networks are limited to proposi_tional formulae (e.g., EBL-ANN (Shavlik and Tow-_
ell, 1989), KBANN (Towell and Shavlik, 1994)
and C-ILP (Garcez and Zaverucha, 1999)). Other
neural-symbolic approaches focus on first-order inference, but do not allow one to learn vector representations of symbols from training facts of a KB
(e.g., SHRUTI (Shastri, 1992), Neural Prolog (Ding,
1995), CLIP++ (Franc¸a et al., 2014) and Lifted
Relational Neural Networks (Sourek et al., 2015)).
Neural Reasoner (Peng et al., 2015) translates query
representations in vector space without rule representations and can thus not incorporate domainspecific knowledge. Rockt¨aschel et al. (2014),
Rockt¨aschel et al. (2015), Vendrov et al. (2016) and
Hu et al. (2016) regularize distributed representations via domain-specific rules, but do not learn such
rules from data and only support a restricted subset
of first-order rules. The NTP proposed here builds
upon differentiable backward chaining and is thus
related to Unification Neural Networks (Komendantskaya, 2011; H¨olldobler, 1990), but operates on
vector representations of symbols instead of scalar
46
values. Yin et al. (2015) and Andreas et al. (2016)
map queries to multiple differentiable modules that
can be used to retrieve answers from a KB. Clark
et al. (2014) extract common-sense knowledge from
textbooks in form of rules to improve KB inference
by soft-matching and non-recursive forward inference. Lee et al. (2016) propose a Tensor Product
Representation to answer Facebook bAbI (Weston
et al., 2015a) questions. Gu et al. (2015) traverse
KBs in vector space to answer queries. Socher et
al. (2012) and Bowman et al. (2015) demonstrate
that recursive neural networks can learn to evaluate
propositional logic expressions.
**3** **Differentiable Backward Chaining**
Backward chaining is a common method for automated theorem proving, and we refer the reader
to Russell and Norvig (1995) for details. Given a
goal/query (e.g. GRANDPARENTOF(X, Y)), backward chaining finds substitutions of free variables with constants of facts in a KB (e.g.
_{X/ABE, Y/BART})._ This is achieved by recursively iterating through rules that translate a
goal into sub-goals which it attempts to prove,
thereby exploring possible proofs. For example,
the KB could contain the following rule that can
be applied to find answers for the above goal:
_∀X, Y, Z : PARENTOF(X, Y) ∧_ PARENTOF(Y, Z) ⇒
GRANPARENTOF(X, Z). For the rest of the paper
we assume all free variables are universally quantified. Furthermore, we call the conjunction of atoms
before the implication symbol the left-hand side (or
body) of the rule and the atom after the implication
the right-hand side (or head) of the rule.
The proof exploration in backward-chaining is divided into two functions called OR and AND. The
former attempts to prove a goal by unifying it with
every rule’s right-hand side in a KB, yielding intermediate substitutions. For rules where this succeeds,
the left-hand side and substitution is passed to the
AND function. AND then attempts to prove every
atom in the body sequentially by first applying substitutions and subsequently calling OR. This is repeated recursively until unification fails, atoms are
proven by unification with facts in the KB, or a certain proof-depth is exceeded.
-----
sentation of a predicate symbol #1 and the first argument of the predicate #2. For example, the goal
GRANDPAOF(ABE, X) can be specified by G and
vector representations g = [vGRANDPAOF, vABE]. Furthermore, based on the structure G it is clear that
proofs of that goal will be substitutions for X (e.g.
**vBART). Akin to goals, we divide substitutions into**
structures and representations, as well as a scalar
score τ ∈ (0, 1) that measures the success of the
substitution. For example, proofs of goals of the
structure G as defined above will be substitutions
with the structure S = _X/#1_ accompanied by sub_{_ _}_
stitution representations (e.g. s = [vBART]).
With this divide we can now redefine operations
in backward chaining as follows. Operations that
concern variables and rules are mapping goal and
substitution structures (G and S) to new structures
that instantiate sub-networks. In contrast, operations
on symbols of predicates and constants can be computed in vector space in a differentiable manner. The
resulting recursively constructed NTP is end-to-end
differentiable. An overview of the model architecture with an example is given in Figure 1 and discussed in detail below.
**OR** The entry point to the NTP is an OR network (Figure 1a) that for a given goal and substitution structure (G and S) instantiates a sub-network
for each one of the N rules in a knowledge base
_T_ . The unification of the ith rule’s right-hand
side with a goal structure results in a new substitution structure Si. When provided with a goal
representation, a unification network is computing
the unification success in vector space. For example, assume at some proof-depth D in the NTP
we unify GRANDFATHEROF(ABRAHAM, Q) with
GRANDPAOF(ABE, LISA). This will result in a new
substitution structure Si[′] [=][ {][Q/][#][1][}][, representation]
**s = [vLISA] and success τ** _[D]_ that is passed further in
the network. In contrast to discrete unification that
checks for symbol equality, we calculate a soft unification from the previous unification success of the
outer network τ _[D][+1]_ and the similarity of predicate
and constant representations as follows:
_τpredicate = sigmoid(v[T]GRANDFATHEROF[v][GRANDPA][)][ (1)]_
_τarg1 = sigmoid(v[T]ABRAHAM[v][A][BE][)]_ (2)
_τ_ _[D]_ = min(τ _[D][+1], τpredicate, τarg1)_ (3)
|Col1|Col2|
|---|---|
|||
|||
|O|O . . .|OrD|
|---|---|---|
||||
||||
||||
|vLisa s|τ =vBart vHomer|0.7 0.9 S = {Q/# 1} 0.1|
|---|---|
|T G S OrD merge ... · · · vLisa 0.2 s|τ =vBart 0.9 s|τ =vLisa 0.7 vHomer 0.1 ∅ T AnG d1 DS1 · · · vv fp aa tr he en rt OO ff (( XY,, YZ )) lhs T A G nN d S DN s|τ = vLisa 0.7 s|τ = vAbraham 1.0 vgrandpaOf(vAbe, vLisa) rhs unify vgrandfatherOf(X, Z) rhs unify||
|G = # 1(# 2, Q) · · · S = { } g = [vgrandfatherOf, vAbraham] s|τ = 1.0||
|Col1|. . .|Col3|
|---|---|---|
||||
||||
||||
**vLisa** 0.2
**s** _τ =_ **vBart** 0.9 _S =_ _Q/#1_
_|_ _{_ _}_
**vHomer 0.1**
_S_ And[D] T _S_ merge And[D]
_G_
_T_ _G[′]_ _S[′]_
_T_ _GG[′]_ _SS[′]_ _TAndG[′]_ _[D]S[′]_ And[D]
**(b)** AndAnd[D] _· · ·_
_S[′]_ = {Q/Z, Y/#1}
## · · ·
_T_ _G[′]_ _S_
)] tail(subgoals) Or[D][−][1]
_X, Y )_ head(subgoals) _G[′]_ = #1(#2, Y )
**g = [vfatherOf, vAbraham]**
subst.
_S = {Q/Z, X/#1}_
**s|τ =** **vAbraham 1.0**
**(c)**
1.0
|Col1|Col2|Col3|Col4|
|---|---|---|---|
**(a)**
**vLisa** 0.2
**s** _τ =_ **vBart** 0.9
_|_
**vHomer 0.1**
_T_ _G_ _S_
_T_ _GG[′]_
And
_S[′]_ = {
|·|·|
|---|---|
_T_ _G[′]_ _S_
Or[D][−][1]
_G[′]_
**g**
subst.
_S = {Q/Z, X/#1}_
**s|τ =** **vAbraham 1.0**
|T G S|AndD|
|---|---|
[vparentOf(Y, Z)]
**vfatherOf(X, Y )**
**Figure 1: Overview of differentiable backward chaining.**
**Goal and Substitution Structures** The key idea
behind the proof-of-concept NTP presented here is
to recursively construct a neural network by replacing operations on symbols in backward chaining
with differentiable operations on distributed representations. To build such a network we separate
goals and substitutions into vector representations
of involved predicates and constants, and structures
that define the connections of a neural network.
For instance, G = #1(#2, X) is an example of a
structure of an entire class of goals. This structure
encodes that such goals encompass a vector repre
47
-----
r1 r2 r5 r6 r7NTP = Gold r1 r2 r5 r6 r7NTP = Infer NTP = Factorizer1 r2 r5 r6 r7 r1 r2 r5 r6 r7NTP = Induce
a, b
b, c
a, c
d, e
f, g
g, h
f, h
i, j
k, l
l, m
k, m
n, o
p, q
q, r
p, r
s, t
u, v
v, w
u, w
v, y
z,
,
,
,
,
,
**Figure 2: Predictions of different NTP modes on a toy KB**
where every column (within a subplot) represents a predicate
and every row an entity-pair. Training facts (red) and test
facts (blue) in the first subplot are consistent with two rules:
_r1(X, Y )∧r1(Y, Z) ⇒_ _r2(X, Z) and r3(X, Y )∧r4(X, Y ) ⇒_
_r5(X, Y ). The other three subplots show predictions between_
0 (white) and 1 (black) of the different modes discussed in text.
timized in the same way as symbol representations.
**4** **Experiments and Results**
We implemented an NTP with differentiable backward chaining in TensorFlow (Abadi et al., 2015).
Symbol representations are initialized randomly and
constrained to unit-length. During training we iterate over the set of known facts, and optimize negative log-likelihood of the proof success of every
fact based on all other facts (and rules) using Adam
(Kingma and Ba, 2015). Furthermore, for every
training fact we sample an unobserved fact for the
same predicate (but different entity-pair) and optimize its proof with a target success of zero.
Our NTP implementation is tested on toy KBs for
different scenarios shown in the four different subplots in Figure 2. Every column (within a subplot)
represents a predicate and every row an entity-pair.
First, we run the NTP with given ground-truth rules
without training symbol or rule representations, and
test whether it can act as a discrete theorem prover.
As expected, given rules the NTP can infer all test
**AND** The new substitution structure calculated by
unification instantiates an AND network (Figure 1c)
at depth D that attempts to sequentially prove the
left-hand side atoms of the rule given the current
substitutions. If the rule’s left-hand side structure
is empty (e.g. when the right-hand side represents
a fact in the KB) the AND network simply passes
the substitutions and their success through (Figure
1b). Otherwise, it applies the substitution on the first
atom of the left-hand side, resulting in a new goal
structure and representation, and instantiates an OR
network with that structure and the previous substitution.
For example, assume we have unified
the right-hand side of FATHEROF(X, Y ) _∧_
PARENTOF(Y, Z) _⇒_ GRANDFATHEROF(X, Z)
with the goal GRANDFATHEROF(ABRAHAM, Q).
The result is a unification success τ _[D]_ as calculated
in Eq. 3, as well as a new substitution structure
_S = {Q/Z, X/#1} where s = [vABRAHAM] becomes_
the input to the AND network. This network will
first apply the substitution to FATHEROF(X, Y ),
resulting in a new goal structure G[′] = #1(#2, Y ).
This structure now instantiates another NTP
(i.e. an OR module) of depth D − 1, which
attempts to prove the input goal representation
**g = [vFATHEROF, vABRAHAM].**
For every proof, i.e., every possible substitution of the structure S[′] = _Q/Z, Y/#1_, a new
_{_ _}_
AND module is instantiated that attempts to recursively prove the remainder of the left-hand side
(PARENTOF(Y, Z) in the example above). Finally,
the successes of all identical substitutions (i.e. substitutions to the same variables or representations of
constants) are merged by taking their max.
Note that given a KB, goal structure and depth,
the network structure of the NTP is fully specified
and many goals of the same structure can be used to
perform training and inference with the NTP.
**Trainable Rules** NTPs are not only differentiable
with respect to symbol representations in the KB,
but also latent symbol representations in first-order
rules of predefined structure. For instance, we could
assume that for some predicates in a KB a transitive relationship holds. We can define a rule template #1(X, Y ) #1(Y, Z) #2(X, Z) whose latent
_∧_ _⇒_
predicates v#1, v#2 are trainable parameters and op
48
-----
facts (2nd subplot in Figure 2). The third subplot
shows predictions when we let the NTP try to reconstruct training facts only with the help of other
facts by learning symbol representations (similar to
other representation learning approaches for KB inference). Finally, a core benefit of the NTP is visible once we provide few reasonable rule templates[1]
and optimize for rule representations that best explain observed facts (4th subplot). We found that
this can work remarkably well, but also noticed that
the quality of trained rules is varying with different
random initialization of the rule’s parameters. We
need to investigate in future work how the robustness of rule learning in NTPs can be improved.
**5** **Conclusion and Future Work**
We proposed neural theorem provers for knowledge
base inference via differentiable backward chaining,
which enables learning of symbol representations
and parameters of rules of predefined structure.
Our current implementation has severe computational limitations and does not scale to larger KBs
as it investigates all possible proof paths. However,
there are many possibilities to improve upon the presented architecture. For instance, one can batchunify all rules whose right-hand side have the same
structure and employ existing architectures such as
Memory Networks or hierarchical attention for this
task. Furthermore, it is possible to partition and
batch rules not only by their right-hand side but also
left-hand side structure to instantiate a single AND
module for every partition. To further speed-up the
prover, we want to investigate processing batches of
queries, as well as differentiable ways of maintaining only the N best instead of all possible substitution representations at every depth of the prover. In
addition, we will work on more flexible versions of
neural theorem provers, for instance, where unification, rule selection and application itself are trainable functions, or where facts in a KB and goals can
be natural language sentences.
**Acknowledgments**
We thank Isabelle Augenstein, Dirk Weissenborn,
Johannes Welbl and the reviewers for comments
1We use #1(X, Y ) ⇒ #2(X, Y ), #1(X, Y ) ∧ #2(X, Y ) ⇒
#3(X, Y ) and #1(X, Y ) ∧ #1(Y, Z) ⇒ #2(X, Z).
49
on drafts of this paper. This work was supported
by Microsoft Research through its PhD Scholarship
Programme and an Allen Distinguished Investigator
Award.
**References**
[Abadi et al.2015] Martın Abadi, Ashish Agarwal, Paul
Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu
Devin, et al. 2015. Tensorflow: Large-scale machine
learning on heterogeneous systems.
[Andreas et al.2016] Jacob Andreas, Marcus Rohrbach,
Trevor Darrell, and Dan Klein. 2016. Learning to
compose neural networks for question answering. In
_NAACL._
[Andrychowicz and Kurach2016] Marcin Andrychowicz
and Karol Kurach. 2016. Learning efficient algorithms with hierarchical attentive memory. _arXiv_
_preprint arXiv:1602.03218._
[Bowman et al.2015] Samuel R Bowman, Christopher
Potts, and Christopher D Manning. 2015. Recursive
neural networks can learn logical semantics. In CVSC.
[Chang et al.2014] Kai-Wei Chang, Wen-tau Yih, Bishan
Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In EMNLP.
[Clark et al.2014] Peter Clark, Niranjan Balasubramanian, Sumithra Bhakthavatsalam, Kevin Humphreys,
Jesse Kinkead, Ashish Sabharwal, and Oyvind
Tafjord. 2014. Automatic construction of inferencesupporting knowledge bases. In AKBC.
[Ding1995] Liya Ding. 1995. Neural prolog-the concepts, construction and mechanism. In 3rd Int. Con_ference Fuzzy Logic, Neural Nets, and Soft Comput-_
_ing._
[Franc¸a et al.2014] Manoel VM Franc¸a, Gerson Zaverucha, and Artur S dAvila Garcez. 2014. Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine learning,
94(1):81–104.
[Garcez and Zaverucha1999] Artur S d’Avila Garcez and
Gerson Zaverucha. 1999. The connectionist inductive learning and logic programming system. Applied
_Intelligence, 11(1):59–77._
[Garcez et al.2012] Artur S d’Avila Garcez, Krysia
Broda, and Dov M Gabbay. 2012. _Neural-_
_symbolic learning systems: foundations and applica-_
_tions. Springer._
[Graves et al.2014] Alex Graves, Greg Wayne, and Ivo
Danihelka. 2014. Neural turing machines. _arXiv_
_preprint arXiv:1410.5401._
-----
[Grefenstette et al.2015] Edward Grefenstette,
Karl Moritz Hermann, Mustafa Suleyman, and
Phil Blunsom. 2015. Learning to transduce with
unbounded memory. In NIPS.
[Gu et al.2015] Kelvin Gu, John Miller, and Percy Liang.
2015. Traversing knowledge graphs in vector space.
In EMNLP.
[H¨olldobler1990] S H¨olldobler. 1990. A structured connectionist unification algorithm. In AAAI.
[Hu et al.2016] Zhiting Hu, Xuezhe Ma, Zhengzhong Liu,
Eduard Hovy, and Eric Xing. 2016. Harnessing
deep neural networks with logic rules. arXiv preprint
_arXiv:1603.06318._
[Joulin and Mikolov2015] Armand Joulin and Tomas
Mikolov. 2015. Inferring algorithmic patterns with
stack-augmented recurrent nets. In NIPS.
[Kingma and Ba2015] Diederik Kingma and Jimmy Ba.
2015. Adam: A method for stochastic optimization.
_ICLR._
[Komendantskaya2011] Ekaterina Komendantskaya.
2011. Unification neural networks: unification by
error-correction learning. _Logic Journal of IGPL,_
19(6):821–847.
[Lee et al.2016] Moontae Lee, Xiaodong He, Wen-tau
Yih, Jianfeng Gao, Li Deng, and Paul Smolensky.
2016. Reasoning in vector space: An exploratory
study of question answering. In ICLR.
[Neelakantan et al.2015] Arvind Neelakantan, Benjamin
Roth, and Andrew McCallum. 2015. Compositional
vector space models for knowledge base completion.
In ACL.
[Neelakantan et al.2016] Arvind Neelakantan, Quoc V
Le, and Ilya Sutskever. 2016. Neural programmer:
Inducing latent programs with gradient descent. In
_ICLR._
[Nickel et al.2012] Maximilian Nickel, Volker Tresp, and
Hans-Peter Kriegel. 2012. Factorizing yago: scalable
machine learning for linked data. In WWW.
[Nickel et al.2015] Maximilian Nickel, Kevin Murphy,
Volker Tresp, and Evgeniy Gabrilovich. 2015. A
review of relational machine learning for knowledge
graphs: From multi-relational link prediction to automated knowledge graph construction. IEEE.
[Peng et al.2015] Baolin Peng, Zhengdong Lu, Hang Li,
and Kam-Fai Wong. 2015. Towards neural networkbased reasoning. In RAM.
[Reed and de Freitas2016] Scott Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In ICLR.
[Riedel et al.2013] Sebastian Riedel, Limin Yao, Andrew
McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal
schemas. In NAACL.
50
[Rockt¨aschel et al.2014] Tim Rockt¨aschel, Matko Bosnjak, Sameer Singh, and Sebastian Riedel. 2014. Lowdimensional embeddings of logic. In SP14.
[Rockt¨aschel et al.2015] Tim Rockt¨aschel, Sameer Singh,
and Sebastian Riedel. 2015. Injecting Logical Background Knowledge into Embeddings for Relation Extraction. In NAACL.
[Russell and Norvig1995] Stuart J Russell and Peter
Norvig. 1995. _Artificial Intelligence: Modern Ap-_
_proach. Prentice Hall, 3rd edition._
[Shastri1992] Lokendra Shastri. 1992. Neurally motivated constraints on the working memory capacity of
a production system for parallel processing: Implications of a connectionist model based on temporal synchrony. In Conference of the Cognitive Science Soci_ety. Psychology Press._
[Shavlik and Towell1989] Jude W Shavlik and Geoffrey G Towell. 1989. An approach to combining explanation-based and neural learning algorithms.
_Connection Science, 1(3):231–253._
[Socher et al.2012] Richard Socher, Brody Huval,
Christopher D Manning, and Andrew Y Ng. 2012.
Semantic compositionality through recursive matrixvector spaces. In EMNLP.
[Socher et al.2013] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning
with neural tensor networks for knowledge base completion. In NIPS.
[Sourek et al.2015] Gustav Sourek, Vojtech Aschenbrenner, Filip Zelezny, and Ondrej Kuzelka. 2015.
Lifted relational neural networks. _arXiv preprint_
_arXiv:1508.05128._
[Toutanova et al.2015] Kristina Toutanova, Danqi Chen,
Patrick Pantel, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of
text and knowledge bases. In EMNLP.
[Towell and Shavlik1994] Geoffrey G Towell and Jude W
Shavlik. 1994. Knowledge-based artificial neural networks. Artificial intelligence, 70(1):119–165.
[Vendrov et al.2016] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of
images and language. In ICLR.
[Weston et al.2015a] Jason Weston, Antoine Bordes,
Sumit Chopra, and Tomas Mikolov. 2015a. Towards
ai-complete question answering: A set of prerequisite
toy tasks. arXiv preprint arXiv:1502.05698.
[Weston et al.2015b] Jason Weston, Sumit Chopra, and
Antoine Bordes. 2015b. Memory networks. In ICLR.
[Yin et al.2015] Pengcheng Yin, Zhengdong Lu, Hang Li,
and Ben Kao. 2015. Neural enquirer: Learning to
query tables. arXiv preprint arXiv:1512.00965.
-----
| [
"Tim, Rocktäschel",
"Danqi, Chen",
"Sebastian, Riedel",
"Sameer, Singh",
"Tim, Rocktaschel",
"Jay, Pujara"
] | 2016-06-01T00:00:00 | null | false | 50 | 0 | null | https://aclanthology.org/W16-1309 | null | https://www.semanticscholar.org/paper/f9c74e8650b74d78e20a8912b7dec185e477c4d1 |
NaturalProver: Grounded Mathematical Proof Generation with Language Models | Theorem proving in natural mathematical language – the mixture of symbolic and natural language used by humans – plays a central role in mathematical advances and education, and tests aspects of reasoning that are core to intelligence. Yet it has remained underexplored with modern generative models. We study large-scale language models on two new generation tasks: suggesting the next step in a mathematical proof, and full proof generation. We develop NaturalProver, a language model that generates proofs by conditioning on background references (e.g. theorems and definitions that are either retrieved or human-provided), and optionally enforces their presence with constrained decoding. On theorems from the NaturalProofs benchmark, NaturalProver improves the quality of next-step suggestions and generated proofs over fine-tuned GPT-3, according to human evaluations from university-level mathematics students. NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time, which is to our knowledge the first demonstration of these capabilities using neural language models. | NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time, which is to the authors' knowledge the first demonstration of these capabilities using neural language models. | ## NATURALPROVER: Grounded Mathematical Proof Generation with Language Models
**Sean Welleck[1][,][2][∗], Jiacheng Liu[1][∗], Ximing Lu[2], Hannaneh Hajishirzi[1][,][2], Yejin Choi[1][,][2]**
1Paul G. Allen School of Computer Science & Engineering, University of Washington
2Allen Institute for Artificial Intelligence, ∗Equal contribution
```
[email protected]
```
**Abstract**
Theorem proving in natural mathematical language – the mixture of symbolic and
natural language used by humans – plays a central role in mathematical advances
and education, and tests aspects of reasoning that are core to intelligence. Yet
it has remained underexplored with modern generative models. We study largescale language models on two new generation tasks: suggesting the next step in a
mathematical proof, and full proof generation. We develop NATURALPROVER, a
language model that generates proofs by conditioning on background references
(e.g. theorems and definitions that are either retrieved or human-provided), and
optionally enforces their presence with constrained decoding. On theorems from
the NATURALPROOFS benchmark, NATURALPROVER improves the quality of
next-step suggestions and generated proofs over fine-tuned GPT-3, according to
human evaluations from university-level mathematics students. NATURALPROVER
is capable of proving some theorems that require short (2-6 step) proofs, and
providing next-step suggestions that are rated as correct and useful over 40% of
the time, which is to our knowledge the first demonstration of these capabilities
using neural language models.[1]
[Figure 1: NATURALPROVER proves Even Integer Plus 5 is Odd. At training time, NATURALPROVER](https://proofwiki.org/wiki/Even_Integer_Plus_5_is_Odd)
obtains background knowledge about references (e.g. theorems or definitions) via reference recon_struction: learning to map a reference’s title to its content. At test time, NATURALPROVER grounds_
its generations through in-context reference constraints that are retrieved or human-provided, and
[optionally enforced with stepwise constrained decoding. This theorem’s human-written proof in](https://proofwiki.org/wiki/Even_Integer_Plus_5_is_Odd/Proof_by_Contradiction)
[ProofWiki contains an error and differs substantially from NATURALPROVER’s correct proof.](https://proofwiki.org/wiki/Even_Integer_Plus_5_is_Odd/Proof_by_Contradiction)
[1Code and data available at https://github.com/wellecks/naturalprover.](https://github.com/wellecks/naturalprover)
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
-----
**1** **Introduction**
Constructing a rational argument that justifies a claim is a key aspect of explaining, verifying, and
communicating ideas in situations ranging from everyday interactions, to legal and political discourse,
to science and mathematics [Davis and Hersh, 1981, Voss and Means, 1991, Kaye, 1992]. Within
the latter context, a mathematical proof – a sequence of logical arguments expressed in a mixture of
symbolic and natural language – assumes this role by providing justification and insight into why a
claim is true [de Villiers, 1990]. Proofs operate on a relatively explicit and objective set of ground
knowledge, isolating a subset of reasoning that is desirable for models that form the foundation of
machine learning systems [Bommasani et al., 2021]. Moreover, we envision assistive systems that
provide suggested proofs or next-steps, analogous to language-model-based code suggestions (e.g.
GitHub CoPilot [Chen et al., 2021]) or formal proof assistants (e.g. GPT-f [Han et al., 2021a]),
which could make learning or using mathematics more productive and accessible.
To this end, we study the capabilities of large-scale language models (e.g. GPT-3 Brown et al. [2020])
on two new theorem proving tasks in natural mathematical language: next-step suggestion, in which
a model suggests the next step of a proof, and full-proof generation, in which a model fully proves a
claim. As proofs are grounded in knowledge from past results (e.g. theorems, definitions), analogous
to facts deployed in a conversation [Dinan et al., 2019], prior rulings used in a legal opinion [Erik G.
Jensen, 2014], or articles used to justify an answer [Nakano et al., 2021], we develop a methodology
for obtaining and using background knowledge to prove theorems with a generic language model.
We develop NATURALPROVER, a language model that generates proofs by conditioning on background references (e.g. theorems and definitions that are either retrieved or human-provided), and
optionally enforces their presence with a constrained decoding algorithm that leverages the multi-step
structure of proofs. On a collection of theorems from the NATURALPROOFS benchmark [Welleck
et al., 2021], NATURALPROVER improves the quality of next-step suggestions and generated proofs
over fine-tuned GPT-3, according to human evaluations from university-level mathematics students.
NATURALPROVER is capable of proving some theorems that require short (2-6 step) proofs, and
providing next-step suggestions that are rated as correct and useful more than 40% of the time, which
is to our knowledge the first demonstration of these capabilities using neural language models.
Along with these successes, we study deficiencies in our current models. We find that models can
struggle with logical coherence on longer proofs, with providing valid justifications, and with performing multi-step symbolic derivations. Taken together, our tasks, methodology, and evaluation show the
feasibility of language models as interactive aids in mathematics, along with open challenges.
**2** **NATURALPROOFS-GEN Dataset and Tasks**
We create a NATURALPROOFS-GEN dataset adapted from NATURALPROOFS [Welleck et al., 2021],
and use the dataset for two tasks: suggesting the next step of a proof, and fully proving a theorem.
**NATURALPROOFS-GEN. NATURALPROOFS-GEN adapts data from NATURALPROOFS, which**
contains theorem statements, proofs, definitions, and additional pages (e.g. axioms, corollaries)
[sourced from ProofWiki, an online compendium of community-contributed mathematical proofs. In](https://proofwiki.org/)
which are a mixture of text and LNATURALPROOFS-GEN, each exampleATE[X. Welleck et al. [2021] split the examples and reference sets into] (x, y) ∈D pairs a theorem x with a gold proof y, both of
training, dev, and test splits to ensure that no theorem in the dev or test splits was mentioned in the
training split. We adopt these splits of roughly 12.5k training, 1k validation, and 1k test examples, and
sampled core evaluation sets with 100 dev and 100 test theorems that are used for human evaluation.
The proofs contain additional structure, discussed next.
**Multi-step proof structure. Each proof has a multi-step structure, meaning that a proof y =**
(y1, . . ., y|y|) is a variable-length token sequence that is segmented into proof steps, where each step
_yt is itself a variable-length sequence of tokens (either text or Latex). The segmentation is largely_
determined by ProofWiki’s formatting and community standards for structuring proofs, and we
additionally merge steps to ensure that each step contains non-trivial semantic content. For example,
Figure 1 shows a 4-step (generated) proof with each step highlighted in green.
**References. Each proof mentions a variable-number of references** **r1, . . ., rRy** from a set of
_{_ _}_ _R_
roughly 33k theorems and definitions, analogous to how Wikipedia articles reference other pages.
-----
For example, Figure 1 shows a proof with reference mentions in blue. Each mention identifies a
reference by its title and provides a natural language surface form. For instance, in Figure 1, the
first proof step mentions the definition of even integer as even, which is formatted in the proof as
```
[[Definition:Even_Integer|even]] and tokenized along with the rest of the proof.
```
**Tasks.** We consider two tasks that are motivated by an assistive system that provides suggested
proofs or next-steps to a user. The full proof generation task is to generate a proof y given a theorem
**xproof history. The next-step suggestion y<t from a gold proof. In each case, we consider an additional task is to generate a set of next steps {yt[k][}]k[K]=1** [given theorem] provided reference[ x][ and]
setting where the model is also given the set of references {r[∗]1[, . . .,][ r][∗]Ry _[}][ from a gold proof of the]_
theorem. The next-step task simulates a human correctly proving the theorem up to a point, then
querying a system for suggested next-steps when stuck, while the provided reference setting simulates
a human specifying a plan for a system that writes a proof.
**3** **NATURALPROVER: Grounded Proof Generation via Language Modeling**
We describe NATURALPROVER, a language model which generates grounded proofs by conditioning
on references and optionally enforcing their presence with constrained decoding.
**Setup. Our objective is to generate correct proofs, ˆy = arg maxy correct(x, y). Unfortunately,**
evaluating proof correctness is costly, and is only done once at test time. A naive approach is to
approximate the objective, ˆy ≈ arg maxy log pθ(y|x), by fine-tuning a language model pθ on (x, y)
examples and using a decoding algorithm (e.g. greedy decoding). We instead investigate conditioning
on background knowledge in the form of reference documents, pθ(y **x, R), which is beneficial in**
_|_
related generation settings (e.g. Shuster et al. [2021]), and offers control over the generated proof. To
do so, NATURALPROVER uses in-context references and a reference reconstruction objective.
**In-context references. Language models have a limited context window that prevents conditioning**
on full documents. Instead, NATURALPROVER conditions on a set of reference titles, pθ(y|x, Rtitle).
Concretely, we fine-tune on (theorem, reference titles, proof) sequences of the form,
```
<theorem> <title> {theorem-title} </title> <content> {theorem-content} </content> </theorem>
```
`<ref> {ref-title-1} </ref> ...` `<ref> {ref-title-R} </ref>` `<proof> {proof} </proof>` (1)
with new-lines and {} tokens omitted, relevant strings inserted, and loss only on tokens after <proof>.
**Reference reconstruction. Reference titles do not capture all of the information contained in the**
reference documents. We learn a mapping between each reference title and its underlying document
with a reference reconstruction objective, pθ(r|rtitle) for references r in the training reference set.
Concretely, we fine-tune on additional (title, content) pairs of the form,
`<{type}> <title> {title} </title> <content> {content} </content> </{type}>,` (2)
where the {type} is theorem/definition/other, and the loss is only on tokens after <content>. Intuitively,
this lets the model associate each reference title with the reference’s underlying content.
**The joint objective.** For training, we minimize the joint loss,
_−_ log pθ(r|rtitle) _._ (3)
**r**
_∈RX[train]_ i
_L(θ) =_
_−_ log pθ(y|x, Rtitle) +
(x,yX)∈D[train]
_|D[train]| + |R[train]|_
**Evaluation-time references.** We consider two settings for evaluation-time references: (i) retrieved
references, from a retrieval model f (x) **r1, . . ., rk**, and (ii) human-provided references from
_→{_ _}_
the ground-truth proof. The retrieval setting simulates a fully automated proof assistant, while the
second simulates a human specifying a plan for an assistant that writes a proof, and acts as an upper
bound for a retrieval system optimized to predict references in a ground-truth proof.
**3.1** **Stepwise constrained decoding**
In the provided-reference setting, the conditioned references are known to be relevant to a correct
proof. We hypothesize that explicitly encouraging generated proofs to contain the references will
-----
improve correctness, by placing lexical constraints on the reference-titles at decoding time,
**yˆ ≈** arg maxy log pθ(y|x, Rtitle), subject to **rtitleX∈Rtitle** I [rtitle ∈ **y] = |Rtitle|,** (4)
where I [·] is an indicator function. To approximate this objective, we generate step-by-step by
sampling multiple proof-step candidates, retaining those with high value (reference coverage and
log-probability) in a beam, and continuing to the next step, which we call stepwise beam search.
**Value function. The search supports any function of the proof-so-far, v(y** _t)_ R. We use a value
_≤_ _→_
function that is a weighted combination of constraint satisfaction and log-probability,
_vα(y≤t) = αvconstraint(y≤t) + (1 −_ _α)vLM(y≤t),_ (5)
where vconstraint(y≤t) is the number of unique in-context reference-titles in y≤t, and vLM(y≤t) is
log pθ(y≤t). We normalize each term by dividing by the maximum absolute value among candidates.
**Stepwise beam search. The procedure generates a proof y = (y1, . . ., yT ) by iteratively sampling**
and pruning next-proof-step candidates yt. Each iteration expands a size-K beam of proofs-so-far,
_St_ 1 = _y<t[k]_ _[}][K]k=1[, by generating][ N][ next-step candidates,]_
_−_ _{_
_N_
_St[′]_ [=][ ∪][y]<t[∈][S]t−1 (y<t _yt[n][)][ |][ y]t[n]_ _n=1[,]_ (6)
_◦_ _[∼]_ _[q][(][·|][y][<t][,][ x][, R][title][)]_
where q is a decoding algorithm (e.g. temperature sampling) and is concatenation. The next
_◦_
iteration’s beam is formed by selecting the top scoring candidates, St = arg top-Ky≤t∈St′ _[v][α][(][y][≤][t][)][.]_
When a proof in the beam terminates, it is not expanded further. The search ends when the beam
consists of K terminated proofs. The highest value proof is returned as the final output.
**Stepwise++. We add two mechanisms for promoting exploration at each step. First, we expand each**
prefix in the beam (Eqn. 6) by sampling with multiple temperatures, _yt[n]_
_{τi}i[m]=1[}][, where][ q][τ][ is sampling with temperature][ τ]_ [. This relaxes the commitment to a single] { _[∼]_ _[q][τ]_ [(][·|][y][<t][,][ x][, R][title][)][ |][ τ][ ∈]
temperature for all proof steps, balancing exploration (higher τ ) with exploitation (lower τ ).
Second, rather than selecting the top-K candidates, we select clusters based on different value weights:
_St =_ _α_ _αj_ _ℓj=1_ [top][K][′] [(][S]t[α][)][, where][ S]t[α] [is the set of candidates scored with][ v][α][, and][ K] _[′][ =][ K/ℓ][. This]_
_∪_ _∈{_ _}_
interpolates between selecting steps based on likelihood (low α) and constraint satisfaction (high α).
**Full proof sampling and greedy decoding. An alternative is to sample full proofs and select the**
best one according to the value function. This can be viewed as expansion (Eqn. 6) done at the
full proof, rather than the step level. Moreover, greedy decoding corresponds to expanding only 1
candidate with temperature → 0. We formalize this in §D as a segment-level search that contains
stepwise++, full proof sampling, and greedy decoding as special cases.
**4** **Proof Evaluation**
A proof’s correctness is contingent on a variety of factors, including reasoning with past results,
performing symbolic derivations, and altogether providing sufficient evidence that the claim is true.
We design a human-evaluation schema that isolates these aspects at the proof-step level, along with a
full-proof summary. Table 1 summarizes the schema, which we overview below.
**References. First, proofs involve deploying statements from references, such as applying a definition**
or adapting it to fit the context. Deployments should be consistent with the reference, e.g. deploying
the definition of even integer as ‘...by definition, ∃k ∈ Z : x = 2k...’, rather than ‘...∃k ∈ Z : x =
2k + 1’, and are a common source of errors in student proofs [Edwards and Ward, 2004].
Second, proofs use references as justification for steps of reasoning; for instance, Real Addition is
Commutative provides justification for the statement x + y = y + x where x, y ∈ R, but not for
_xy = yx. This aspect is analogous to using an article to justify a claim (e.g. [Nakano et al., 2021])._
Finally, proofs should not hallucinate references, or ‘beg the question’ by self-referencing the current
theorem.
**Equations. Proofs contain a variety of multi-step derivations, ranging from simple arithmetic to**
more sophisticated derivations (e.g. see Table 17). A derivation should start with a valid equation
given the surrounding context (e.g. x + x = 2x in Table 1 versus x + x = 3x). Each subsequent step
should be a valid derivation from the previous step, e.g. stating = (2k + 6) − 1 after y = 2k + 5.
-----
**Error Type** **Example**
**Reasoning: Reference**
Invalid Deployment Since x is an even integer, ∃k ∈ Z : x = 2k + 1.
Invalid Justification E(X [2]) = _k=1_ _[k][2][Pr][(][X][ =][ k][)]_ Power Series for Exponential Function
|Power|of|Number|are|Irrational|√, 32 is irrational.|
|---|---|---|---|---|---|
|Col1|Self|Loo|p (Proving Pythagoras’s Theorem:) From Pythagoras’s Theorem, c2 = a2 + b2.|
|---|---|---|---|
**Reasoning: Equation**
|Col1|Invalid Invalid|Equatio Derivati|n ∀x ∈R, x + x = 3x. on (Since x is an even integer, x + 1 = 2r + 1) = 2(r + 1)|
|---|---|---|---|
**Reasoning: Other**
|Col1|Skips Steps (x ∈Z is not a multiple of 3.) Therefore, x3 ≡1 or 8(mod 9) Repetition (Let △ABC be a right triangle.) Then △ABC is a right triangle. Invalid (Other) (x is an even integer.) So, x + 1 is an even integer.|
|---|---|
|Col1|Language|Let c =pa2 \add b2 be the (|incomplete|statement|;|unknown|symbol|\add)|
|---|---|---|---|---|---|---|---|---|
|Col1|Symbolic|(Let x ∈R.) Let y = x◦x−1. (|undefined|operator|◦|for|real|numbers|)|
|---|---|---|---|---|---|---|---|---|---|
Table 1: Overview of human evaluation error schema. See Table 24 for the full schema. Reference.
Hallucinated reference . The necessary context (e.g. known conditions, prior steps).
|Hallucinate|d reference|
|---|---|
**Other reasoning, language, & symbolic errors. A proof should provide sufficient evidence**
that a claim is true to a human reader; it should not skip steps. Proof steps should make progress
towards proving the goal; in particular, they should not repeat known conditions in the theorem or
conclusions made in a prior step. Finally, our schema leaves room for any other reasoning errors, as
well as symbol errors (e.g. undefined symbols) and language errors (e.g. incomplete statements).
**Usefulness and correctness. To judge the potential utility of language models as assistive systems**
in natural mathematics, we measure whether generated next-steps and full proofs are potentially
useful hints for proving the theorem on one’s own. Additionally, we measure a summary judgment of
correctness. Note that an incorrect statement can still be helpful; for instance, it could give a hint for
the type of reference to use, derivation to perform, argument to make, etc.
**Human evaluation protocol. We measure these aspects through human annotation at a step-wise and**
an overall level. For a step-wise annotation, an annotator is presented with the theorem, proof-so-far,
and a generated next-step. The annotator labels the {0, 1} correctness, usefulness, and presence of
fine-grained errors outlined above. After labeling each step of a proof, the annotator rates the full
proof’s overall correctness and usefulness on a 0-5 scale. A rating of 4 or 5 is needed to be considered
as correct, and a rating of 3 or above is needed to be considered as useful.
**Automatic metrics: lexical content. As automatic proxies for quality, we compare each generated**
proof against its ground-truth counterpart using the sentence-level n-gram matching metric GLEU
[Mutton et al., 2007], and following work in knowledge-grounded dialogue [Shuster et al., 2021]
we use F1 overlap between generated and ground-truth tokens. Prior to computing the metrics, we
normalize the generated and ground-truth proofs by only keeping the surface form of references,
removing formatting characters with a MediaWiki parser, and collapsing any consecutive whitespace
into a single space.
**Automatic metrics: knowledge grounding. We define knowledge grounding as meaning that a**
generated proof contains the same references as those found in the ground-truth proof. To measure
this, we use precision, recall, and F1-score between the reference sets contained in the generated and
ground-truth proofs; i.e. m( ˆr1, . . ., ˆr ˆR[}][,][ {][r]1[∗][, . . .,][ r][∗]R
_{_ _∗_ _[}][)][, where][ m][(][·][)][ is precision, recall, or F1. We]_
also use Knowledge Token-F1 (kF1) ([Shuster et al., 2021]), the overlap of the generated proof’s
tokens with tokens contained in the references mentioned in the ground-truth proof.
**5** **Experiments**
We use the training and dev splits of NATURALPROOFS-GEN during fine-tuning, and the core
_evaluation sets consisting of 100 theorems from the validation set and 100 from the test set for_
-----
Reasoning Errs (↓) Lexical Errs (↓) Per-Step (↑) Full Proof (↑)
|Col1|Ref.|Col3|Eqn.|Col5|Other|Col7|Lang.|Col9|Sym.|Col11|Useful|Col13|Correct|Col15|Useful|Col17|Correct|Col19|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GPT-3 30.92 32.54 40.15 5.61 5.24 25.69 28.18 20% 13%
NATURALPROVERRETRIEVE 23.52 37.55 23.66 4.54 6.19 41.54 33.56 32% 24%
NATURALPROVER 25.84 35.93 25.23 8.41 5.35 39.60 26.30 35% 24%
NATURALPROVER++ 23.61 28.54 **18.45** 5.58 3.65 **46.57** **35.41** **45%** **32%**
Next-step (NATURALPROVER) 19.70 26.32 19.10 8.57 5.86 51.43 42.86 – –
Table 2: Human evaluation results on the core test set for full proof generation and next-step
suggestion (bottom row). All models are fine-tuned on NATURALPROOFS-GEN. Knowledge – either
retrieved or human provided – and constrained decoding improve proof generation, with 46% of
proof steps rated as useful and 35% correct according to university-level mathematics students.
evaluation (see §2). These theorems were selected by the authors such that by looking at the theorem
title each author could recall its content and sketch a proof. While this may shift the evaluation
towards an easier slice of the dataset, it was necessary to make human evaluation at a meaningful
scale feasible. We also use the core sets for explorations and ablations.
We finetune three GPT-3 [Brown et al., 2020] (Curie) models, using the OpenAI API (see Appendix E
for details):
1. Baseline GPT-3. We finetune a baseline GPT-3 model, pθ(y|x), on theorem-proof examples
_{(x, y)} from the training split. At test time, we condition the model on a test theorem._
2. NATURALPROVERRETRIEVE. We finetune GPT-3 with retrieved references, pθ(y|x, ˆr1, . . ., ˆr20).
We use a pretrained joint retrieval model f (x) → (r1, . . ., r|R|) from [Welleck et al., 2021],
which was trained to retrieve an input theorem’s ground truth references. At test time, the model
receives a theorem and the top-20 reference titles that are retrieved given the theorem.
3. NATURALPROVER. We finetune GPT-3 with human-provided references, pθ(y|x, r[∗]1[, . . .,][ r][∗]Ry [)][,]
where {r[∗]1[, . . .,][ r][∗]Ry _[}][ is the set of reference-titles in the ground-truth proof. We use reference-title]_
conditioned examples (Eqn. 1) and reference-reconstruction (Eqn. 2) on the training split/reference
set. At test time, the model receives a theorem and reference titles from its ground-truth proof.
For next-step suggestion we use the human-provided knowledge model (NATURALPROVER).
**Decoding. For full proof generation, we use stepwise++ decoding with the provided knowledge**
model, which we refer to as NATURALPROVER++, and otherwise use greedy decoding. We do not
use stepwise constrained decoding with retrieved references since these references introduce noisy
constraints, nor for next-step prediction since the algorithm is designed for multi-step proofs. See §E
for additional experimental details.
**Human evaluation setup. To evaluate the proofs generated by NATURALPROVER, we recruited**
15 students from the Department of Mathematics and Applied Mathematics at the University of
Washington, including undergraduate, masters, and Ph.D. students. The annotators were trained on
how to evaluate proof correctness and compensated according to IRB requirements; see §F.2. For
each task, we first reveal the theorem and its gold proof to the annotator. If they cannot understand
a theorem or its gold proof, they may skip evaluating it. Otherwise, they may proceed to see the
model-generated proof, one step at a time, and annotate each step under the step-wise evaluation
schema (outlined in §4). After all the steps are shown and evaluated, for the full-proof generation
task, the annotator is asked to annotate the entire proof under the overall evaluation schema.
**5.1** **Main Results**
**Our best method is capable of generating correct and useful proofs. According to human evalu-**
ation results (Table 2), our best method is NATURALPROVER with human-provided references and
stepwise++ inference. 32% of the proofs generated by this method are rated as correct and 45% are
rated as useful as an aid for human proof writers. On the per-step level, 35% of the proof steps are
correct and 47% are useful . Taking a more granular view, our best method makes significantly
less reference, equation, and other reasoning errors than other baselines. It makes very few
language and symbolic errors, meaning it produces mostly complete, well-formatted mathematical
statements, and defines and uses symbols accordingly. It mostly avoids hallucinating references or
creating circular proofs. On the other hand, despite improving over the other methods, the model
-----
**Theorem** **Singleton Set is not Dense-in-itself**
Let T = (S, τ ) be a topological space.
Let x ∈ _S._
Then the singleton set {x} is not dense-in-itself.
GPT-3 Let x ∈ _S._
|Clopen|Points|in|Topolog|ical|Space|
|---|---|---|---|---|---|
_{x} is open in T ._
|Point i|s Open|iff|it is|not|Dense|-in|-itself|
|---|---|---|---|---|---|---|---|
that {x} is not dense-in-itself.
NATURALPROVER++ From Singleton Point is Isolated, {x} has an isolated point.
Hence the result by definition of dense-in-itself.
Singleton Point is Isolated
Let T = (S, τ ) be a topological space.
Let x ∈ _S._
Then x is an isolated point of
the singleton set {x},
but not necessarily an isolated point of
_T ._
Dense-in-itself
Let T = S, τ be a topological space.
Let H ⊆ _S._
Then H is dense-in-itself iff it contains
no isolated points.
Table 3: GPT-3 hallucinates references, while the knowledge-grounded NATURALPROVER++ with
constrained decoding correctly uses references, resulting in a correct and useful proof.
often struggles with correctly deploying and using references (23.6% reference error rate), as well as
symbolic computations (28.5% equation error rate), especially multi-step derivations (21.9% invalid).
**What do the model’s correct proofs look like?** We inspected the proofs labeled as correct and
found three main categories: (1) reference-assembly proofs whose correctness is heavily determined
by reference statements (e.g. Table 18, Table 20); (2) template-adaptation proofs in which the model
adapts the structure and content of a training theorem’s proof to prove the unseen evaluation theorem
(e.g. Table 21, Table 22); (3) complex proofs that are not fully determined by reference statements and
differ significantly from training proofs (e.g. Figure 1, Table 3). In terms of techniques, our method
demonstrates some ability to produce direct proofs (Table 19), proofs by cases (Table 22), proofs by
induction (Table 23), utilize references (Table 20) and do symbolic computations (Table 21).
**Vanilla fine-tuned GPT-3 struggles with proof generation.** The vanilla fine-tuned GPT-3 model
yielded fewer useful and correct proofs, with more reference-based and other reasoning errors
than all three knowledge-grounded settings. The model showed severe reference hallucination (18%)
and repetition (23%). It also makes significantly more reasoning errors related to reference usage.
Language and symbolic error rates roughly stay the same. Overall, naively fine-tuning GPT-3 on
theorem-proof examples alone is suboptimal for proof generation.
**Human-provided knowledge improves proof generation.** Grounding the generations with
human-provided references significantly raises correctness and usefulness of the proofs in both
full-proof and per-step evaluation. It most substantially reduces reference errors, especially invalid
deployments and hallucinated references. For example, Table 3 shows the model grounding a proof
with information from the theorem Singleton Point is Isolated and the definition of Dense-in-itself, in
contrast to the vanilla GPT-3 model which hallucinates references.
**Retrieved knowledge also improves proof generation.** Retrieved knowledge also turns out to
be very helpful, and even comparable to human-provided knowledge in some metrics. Although
the retrieval model is far from perfect, the proof generation model is capable of narrowing down
the retrieved reference titles provided in its context, assembling proofs that are useful and correct more often than the no-knowledge model. Qualitatively, we found examples where grounding
in retrieved references eliminates repetition, enables multi-step derivations justified by references
(Table 21), and assembles references into a correct proof (Table 20). This paves a promising
path towards fully automated mathematical proof generation in natural mathematical language.
**Constrained decoding further improves proof**
**generation. Table 4 confirms that stepwise++ decod-**
ing approximates the constrained objective (Eqn. 4)
better than greedy search, yielding proofs with lower
perplexity and higher constraint satisfaction (Ref-F1).
This translates to generations that are correct and
useful more often according to the annotators. Intuitively, the constraints encourage the model to include
references that help prove the claim (e.g. Table 18).
In-context Stepwise++ PPL (↓) Ref-F1 (↑)
1.0639 26.33
1.0549 30.07
1.0644 89.43
1.0549 94.25
Table 4: Stepwise++ approximates the constrained objective better than greedy.
-----
Lexical Grounding
GLEU Token F1 kF1 Ref-P Ref-R Ref-F1 Halluc (↓)
GPT-3 24.40 49.96 49.30 29.93 24.73 23.69 17.92
NATURALPROVERRETRIEVE 26.58 53.02 55.88 38.17 28.48 27.10 2.25
NATURALPROVER 35.27 66.00 90.07 93.05 86.05 87.08 1.60
NATURALPROVER++ 34.49 65.61 96.39 94.66 95.00 93.92 1.71
Table 6: Automatic metrics on the core test set for full-proof generation, and correlation between
human metrics and automatic metrics on the core validation set.
**Next-step suggestion.** The next-step suggestion
task characterizes a model’s performance on making a single proof step given a correct proof-so-far.
In Table 2 we use the provided-knowledge model with greedy decoding for next-step suggestion, and
find that reasoning errors decrease and per-step usefulness and correctness improve compared to the
full proof setting, with 51% of the proof steps rated as useful and 43% correct. Although we used a
single suggestion in our human evaluation study, in Table 5 we simulate a user choosing from among
multiple suggestions by sampling 10 next-steps from our model and computing automatic metrics
on the sample with the best sum of metrics. Using 10 samples instead of greedily decoding a single
sequence substantially improves each metric, suggesting that utility might be increased further by
presenting multiple suggestions.
**How good are Automatic Metrics?** We study how well the
automatic lexical and grounding metrics introduced in (§4) can
reflect the real quality of proofs, as a guide for using them as
a proxy evaluation protocol for NATURALPROOFS-GEN. We
compute the Pearson correlation coefficient between each pair
of human and automatic metrics, with data from the four experiment settings for full-proof generation. Results are shown in
the lower part of Table 6, with error metrics negated, meaning
positive correlation is desired.
**Decoding** GLEU Ref-F1
Greedy 47.87 65.50
Temp (t=.6) 60.60 84.44
Temp (t=.8) 61.89 86.74
Temp (t=1.0) **62.12** **86.87**
the lower part of Table 6, with error metrics negated, meaning Table 5: _Next-step suggestion:_
positive correlation is desired. Sampling 10 suggestions improves
The lexical and grounding metrics positively correlate with full over a single greedy suggestion.
proof correctness and usefulness (≥ 0.8). At the step-level,
the metrics show (i) high correlation with step-level correctness and language errors ; (ii) varied, but
positive, correlations with aggregate reasoning errors; (iii) negative correlation with symbolic errors
(though symbolic errors are relatively low for all models). The results suggest that optimizing for
automatic metrics may be a viable strategy, albeit without guarantees on how finer-grained reasoning
aspects vary across proofs.
**5.2** **Ablations and error analysis.**
**Reference reconstruction. We fine-tune an additional GPT-**
3 model that is provided with in-context reference titles, but
without reference reconstruction. As seen in Table 7, reference reconstruction improves content and reference usage.
**Constrained decoding. First, Table 9 compares the step-**
|Recon.|Gleu Ref-F1 Halluc.|
|---|---|
|||
| |33.03 82.85 3.32 35.93 84.15 2.68|
level search in stepwise++ with searching at the full-proof Table 7: Effect of reference relevel through sampling multiple proofs and selecting the construction in NATURALPROVER
best with the NATURALPROVER value function (rerank (n)). (greedy decoding, full validation set).
Reranking 60 samples matches the cost of stepwise++ in
terms of number of decoded tokens. Full-proof reranking yields the best Gleu, though with lower
-----
Expand Select GLEU Ref-F1
40.62 (.84) 91.78 (.49)
41.12 (.58) 92.61 (.63)
39.14 (.55) 93.11 (.34)
40.11 (1.55) 94.13 (.45)
Table 8: Ablation of the stepwise++ expansion
and selection mechanisms. Mean (std) over 3
runs shown on the core dev set.
Decoding Gleu Ref-F1
Greedy 41.12 (–) 89.30 (–)
Rerank (10) 43.88 (.29) 91.72 (.28)
Rerank (60) 42.23 (.80) 93.16 (.27)
Stepwise++ 40.11 (1.55) 94.13 (.45)
Table 9: Stepwise versus full-proof search. Mean
(std) over 3 runs on the core dev set.
reference-F1. Second, Table 8 shows that the expansion and selection mechanisms together result in
the best reference matching, while holding Gleu at a similar level. Finally, Table 12 shows that both
terms in the NATURALPROVER value function αvconstraints + (1 − _α)vLM are needed: increasing the_
constraint weight α increases reference-matching, with a tradeoff in Gleu at high values.
**Language model comparison.** Table 10 varies the language model used to parameterize NATURALPROVER . The content and reference usage metrics improve with larger models. Separately, we
find that increasing inference-time compute closes the gap in reference-matching between GPT-2
and the larger GPT-3 model (Table 11): sampling 10 full-proofs from GPT-2 and selecting the best
using the NATURALPROVER value function achieves the same reference-F1 as GPT-3 with a single
greedily-decoded proof. However, Gleu remains much higher with the larger GPT-3 model.
**Challenge: Reasoning with references.** Although reference reasoning errors were decreased
through knowledge-grounding and constrained decoding, NATURALPROVER still commits a reference
error on 23.6% of test steps (27% dev), with 15% of steps containing invalid deployments and 10%
invalid justifications. For next-step prediction, the reference error rate remains nontrivial (19.7%
test, 13% dev)., meaning that the model can struggle to correctly deploy references or use them as
justification even in the absence of compounding errors from previous steps. Table 15 shows example
invalid deployments and justifications; the errors are at times subtle, and require reasoning about the
theorem statement, reference content, and proof context.
**Challenge: Equations and derivations.** NATURALPROVER commits an equation-related error
on 28.5% of test steps (22.8% dev), including invalid equations (9.4%) and derivations (21.9%).
Though an improvement over vanilla fine-tuned GPT-3 (32.5%), the errors occur frequently and
remain high for next-step prediction (26%). Table 17 shows representative errors, which range
from simple ‘commonsense’ mistakes (e.g. 24 = 2[3]) to making invalid steps with false justification within more sophisticated multi-step proofs. Investigating the role of pretraining, in-context
techniques [Nye et al., 2021], and autoformalization [Szegedy, 2020] is interesting future work.
**Challenge: Proof length.** Although NATURALPROVER demonstrates some ability to write long
proofs (e.g. Table 23), the 42% next-step correctness
suggests that compounding errors are likely as proof
length increases. Indeed, our best model’s full-proof
correctness is 48% on 1-4 step proofs (n = 102),
decreasing to 15.6% on proofs with 5 or more steps
(n = 64), with lower per-step usefulness and correctness at later steps (Figure 2). Our findings are
analogous to recent work on language modeling for
formal theorem proving [Polu et al., 2022], where
current models are typically limited to chaining 2 or
3 non-trivial steps of mathematical reasoning.
**5.3** **Additional discussion**
Figure 2: Per-step correctness and usefulness
as a function of step number, for full-proof
generation with NATURALPROVER++ and
next-step prediction with NATURALPROVER.
Finally, we provide higher-level comments on future work related to interactive systems, mathematical
assistants, and generating proofs in informal versus formal mathematics.
**Interactive & improving systems.** Currently, our tasks are at two ends of a spectrum: in next-step
generation, we always assume previous steps are from a human-written proof, while in full proof
-----
generation they are always from the model. Our results with multiple next-step suggestions suggest
that users might find some suggestion among the multiple returned useful at a high rate, pointing to
a middle ground: a human-in-the-loop NATURALPROVER, in which a human picks the next step
from among the returned suggestions, or writes one based on the suggestions. The selected or written
next-step could then be used as feedback to improve the system, enabling an iteratively improving
NATURALPROVER. This notion of a continuously improving, teachable system is an emerging (e.g.
Dalvi et al. [2022]) and interesting future direction.
**Assistants for mathematics.** Our tasks were motivated by an assistant that helps a user write a
proof, either from scratch or when stuck part of the way through. Our study here focuses on capability:
investigating whether neural language models are capable of performing the underlying mathematics
that would be expected from such an assistant. A further challenge is to also ensure reliability – a
user should have confidence that the model is not deceptive or incorrect, and is robust to changes
in domain, on nearby problems, and on alternative ways of expressing a problem. Even further, we
would like flexibility – human teachers can interact with a student flexibly through dialogue, natural
language, and diagrams, rather than the strict input-output format defined by a dataset. Our work
provides an initial step towards this larger vision.
**Informal and formalized mathematics.** Our work investigates theorem proving entirely in natural
mathematical language (i.e. ‘informal’ mathematics), as it reflects an interface that a student typically
uses when working with mathematics. An alternative is proving theorems in a formalized system,
in which proof steps are expressed in a programming language (e.g. Lean de Moura et al. [2015]).
Operating purely in a formalized system allows for verifying correctness – unlike our setting which
must be verified by a human – arguably at the cost of flexibility and interpretability, as the mathematics
is no longer expressed in natural language and must adhere to constraints of the formal system.
Investigating combinations of the two – e.g. expressing a theorem in natural language, receiving a
verified formal proof, then providing an interpretation in natural language – presents a wide range of
interesting directions for future work.
**6** **Related Work**
**Formalized mathematics with neural language models.** A large portion of work on machine
learning for mathematics focuses on formalized mathematics. Language models have been used for
interactive theorem proving, including in GPT-f [Polu and Sutskever, 2020, Polu et al., 2022], PACT
[Han et al., 2021a], and in Urban and Jakubuv [2020]. In these settings proof steps are expressed in a
programming language (e.g. Lean [de Moura et al., 2015]) and there is access to a verifier, which
differs from our setting of theorem proving in natural mathematical language.
**Informal mathematics with neural language models.** Previous work on theorem proving in natural mathematical language focuses on retrieving relevant premises (e.g. theorems, definitions)
[Ferreira and Freitas, 2020a,b, Welleck et al., 2021, Han et al., 2021b], or informal-to-formal translation [Wang et al., 2020], which differ from our setting of generating next-steps or full proofs. Outside
of theorem proving, various works use sequence models for problem solving, including benchmarking
language models on arithmetic [Saxton et al., 2019] or competition problems [Hendrycks et al., 2021],
symbolic mathematics [Lample and Charton, 2020, Welleck et al., 2022], augmenting LMs with
verifiers [Cobbe et al., 2021] or in-context rationales [Wei et al., 2022] for math word problems, or
using language models for math-related program synthesis [Austin et al., 2021, Drori et al., 2021] and
competitive programming [Li et al., 2022]. These settings focus on generating executable programs
or a numerical answer, which differ from our theorem proving setting, where the goal is to generate
sound and convincing arguments on a range of topics in natural mathematical language.
**Related areas in NLP.** Systematic reasoning in natural language (outside of math) has been studied
with synthetic proofs [Saha et al., 2020, Tafjord et al., 2021], single-step deductions [Bostrom et al.,
2021], or entailment trees [Dalvi et al., 2021], which differ from proving real-world mathematical
theorems. Augmenting LMs with knowledge reduces hallucinations in dialogue [Shuster et al.,
2021] which has an analogous step-wise structure, while [Nakano et al., 2021] use references within
long-form answers; these and related NLP findings differ from improving the utility of mathematical
proofs. Lexically-constrained decoding algorithms include variants of (token-level) beam search (e.g.
[Anderson et al., 2017, Hokamp and Liu, 2017, Lu et al., 2021b,a]) which assume access to per-token
-----
logits, and gradient-based decoding [Qin et al., 2022]; our segment-level decoding only assumes a
sampler that returns text and its log-probability, making it compatible with recent language model
API interfaces (e.g. the GPT-3 API).
**7** **Conclusion**
We described NATURALPROVER, a knowledge-grounded language model that generates mathematical
proofs by conditioning on background theorems and definitions, and optionally enforces their presence
with constrained decoding. Our system improves the quality of next-step suggestions and generated
proofs over fine-tuned GPT-3, demonstrating an ability to correctly prove theorems and provide
useful suggestions to human proof writers.
**Acknowledgments and Disclosure of Funding**
This work was funded in part by the Natural Sciences and Engineering Research Council of Canada
(NSERC) (funding reference number 401233309), DARPA MCS program through NIWC Pacific
(N66001-19-2-4031), and the Allen Institute for AI. We also thank Google Cloud Compute, as well
as OpenAI.
The authors would like to thank Alisa Liu, Julian Michael, Yuren (Rock) Pang, and Kaiming Cheng
for dogfooding and providing valuable feedback to our human evaluation system. We would also like
to thank James McGivern for developing an interactive demo for NaturalProver.
**References**
P. Anderson, B. Fernando, M. Johnson, and S. Gould. Guided open vocabulary image captioning
with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in
_Natural Language Processing, pages 936–945, Copenhagen, Denmark, Sept. 2017. Association_
[for Computational Linguistics. doi: 10.18653/v1/D17-1098. URL https://www.aclweb.org/](https://www.aclweb.org/anthology/D17-1098)
```
anthology/D17-1098.
```
J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry,
Q. Le, and C. Sutton. Program synthesis with large language models, 2021.
R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg,
A. Bosselut, E. Brunskill, E. Brynjolfsson, S. Buch, D. Card, R. Castellon, N. S. Chatterji, A. S.
Chen, K. Creel, J. Davis, D. Demszky, C. Donahue, M. Doumbouya, E. Durmus, S. Ermon,
J. Etchemendy, K. Ethayarajh, L. Fei-Fei, C. Finn, T. Gale, L. E. Gillespie, K. Goel, N. D.
Goodman, S. Grossman, N. Guha, T. Hashimoto, P. Henderson, J. Hewitt, D. E. Ho, J. Hong,
K. Hsu, J. Huang, T. F. Icard, S. Jain, D. Jurafsky, P. Kalluri, S. Karamcheti, G. Keeling, F. Khani,
O. Khattab, P. W. Koh, M. S. Krass, R. Krishna, R. Kuditipudi, A. Kumar, F. Ladhak, M. Lee,
T. Lee, J. Leskovec, I. Levent, X. L. Li, X. Li, T. Ma, A. Malik, C. D. Manning, S. P. Mirchandani,
E. Mitchell, Z. Munyikwa, S. Nair, A. Narayan, D. Narayanan, B. Newman, A. Nie, J. C. Niebles,
H. Nilforoshan, J. F. Nyarko, G. Ogut, L. Orr, I. Papadimitriou, J. S. Park, C. Piech, E. Portelance,
C. Potts, A. Raghunathan, R. Reich, H. Ren, F. Rong, Y. H. Roohani, C. Ruiz, J. Ryan, C. R’e,
D. Sadigh, S. Sagawa, K. Santhanam, A. Shih, K. P. Srinivasan, A. Tamkin, R. Taori, A. W.
Thomas, F. Tramèr, R. E. Wang, W. Wang, B. Wu, J. Wu, Y. Wu, S. M. Xie, M. Yasunaga, J. You,
M. A. Zaharia, M. Zhang, T. Zhang, X. Zhang, Y. Zhang, L. Zheng, K. Zhou, and P. Liang. On the
opportunities and risks of foundation models. ArXiv, abs/2108.07258, 2021.
K. Bostrom, X. Zhao, S. Chaudhuri, and G. Durrett. Flexible generation of natural language
deductions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Lan_guage Processing, pages 6266–6278, Online and Punta Cana, Dominican Republic, Nov. 2021._
Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.506. URL
```
https://aclanthology.org/2021.emnlp-main.506.
```
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. J. Henighan, R. Child, A. Ramesh,
D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess,
-----
J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models
are few-shot learners. ArXiv, abs/2005.14165, 2020.
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda,
N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint
_arXiv:2107.03374, 2021._
K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton,
R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems. arXiv
_preprint arXiv:2110.14168, 2021._
B. Dalvi, P. Jansen, O. Tafjord, Z. Xie, H. Smith, L. Pipatanangkura, and P. Clark. Explaining
answers with entailment trees. In Proceedings of the 2021 Conference on Empirical Methods in
_Natural Language Processing, pages 7358–7370, Online and Punta Cana, Dominican Republic,_
Nov. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.585.
[URL https://aclanthology.org/2021.emnlp-main.585.](https://aclanthology.org/2021.emnlp-main.585)
B. Dalvi, O. Tafjord, and P. Clark. Towards teachable reasoning systems. ArXiv, abs/2204.13074,
2022.
Davis and Hersh. The mathematical experience. Birkhauser, 1981.
L. M. de Moura, S. Kong, J. Avigad, F. van Doorn, and J. von Raumer. The lean theorem prover
(system description). In A. P. Felty and A. Middeldorp, editors, CADE, volume 9195 of Lecture
_Notes in Computer Science, pages 378–388. Springer, 2015. ISBN 978-3-319-21400-9. URL_
```
http://dblp.uni-trier.de/db/conf/cade/cade2015.html#MouraKADR15.
```
M. de Villiers. The role and function of proof in Mathematics. Pythagoras, 1990.
E. Dinan, S. Roller, K. Shuster, A. Fan, M. Auli, and J. Weston. Wizard of wikipedia: Knowledgepowered conversational agents. In International Conference on Learning Representations, 2019.
[URL https://openreview.net/forum?id=r1l73iRqKm.](https://openreview.net/forum?id=r1l73iRqKm)
I. Drori, S. Zhang, R. Shuttleworth, L. Tang, A. Lu, E. Ke, K. Liu, L. Chen, S. Tran, N. Cheng,
R. Wang, N. Singh, T. L. Patti, J. Lynch, A. Shporer, N. Verma, E. Wu, and G. Strang. A neural
network solves, explains, and generates university math problems by program synthesis and
[few-shot learning at human level, 2021. URL https://arxiv.org/abs/2112.15594.](https://arxiv.org/abs/2112.15594)
B. S. Edwards and M. B. Ward. Surprises from mathematics education research: Student (mis)use
of mathematical definitions. American Mathematical Monthly, 2004. ISSN 00029890. doi:
10.2307/4145268.
Erik G. Jensen. Thinking Like a Lawyer. In Thinking Like a Lawyer, chapter 2, "Forms. Stan[ford Law School, 2014. URL https://law.stanford.edu/wp-content/uploads/2018/](https://law.stanford.edu/wp-content/uploads/2018/04/ILEI-Forms-of-Legal-Reasoning-2014.pdf)
```
04/ILEI-Forms-of-Legal-Reasoning-2014.pdf.
```
D. Ferreira and A. Freitas. Natural language premise selection: Finding supporting statements for
mathematical text. In Proceedings of the 12th Language Resources and Evaluation Conference,
pages 2175–2182, Marseille, France, May 2020a. European Language Resources Association.
[ISBN 979-10-95546-34-4. URL https://www.aclweb.org/anthology/2020.lrec-1.266.](https://www.aclweb.org/anthology/2020.lrec-1.266)
D. Ferreira and A. Freitas. Premise selection in natural language mathematical texts. In Proceedings
_of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7365–7374,_
Online, July 2020b. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.
[657. URL https://www.aclweb.org/anthology/2020.acl-main.657.](https://www.aclweb.org/anthology/2020.acl-main.657)
L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite,
N. Nabeshima, S. Presser, and C. Leahy. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
J. M. Han, J. Rute, Y. Wu, E. W. Ayers, and S. Polu. Proof artifact co-training for theorem proving
with language models, 2021a.
-----
J. M. Han, T. Xu, S. Polu, A. Neelakantan, and A. Radford. Contrastive finetuning of generative
language models for informal premise selection. In AITP, 2021b.
D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the math dataset, 2021.
C. Hokamp and Q. Liu. Lexically constrained decoding for sequence generation using grid beam
search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics
_(Volume 1: Long Papers), pages 1535–1546, Vancouver, Canada, July 2017. Association for_
[Computational Linguistics. doi: 10.18653/v1/P17-1141. URL https://www.aclweb.org/](https://www.aclweb.org/anthology/P17-1141)
```
anthology/P17-1141.
```
D. H. Kaye. Penn State Law eLibrary Journal Articles Faculty Works 1992, Proof in Law and Science.
_[Jurimetrics J, 32, 1992. URL http://elibrary.law.psu.edu/fac_works.](http://elibrary.law.psu.edu/fac_works)_
G. Lample and F. Charton. Deep learning for symbolic mathematics. In International Conference on
_[Learning Representations, 2020. URL https://openreview.net/forum?id=S1eZYeHFDS.](https://openreview.net/forum?id=S1eZYeHFDS)_
Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago, et al. Competition-level code generation with alphacode. arXiv preprint
_arXiv:2203.07814, 2022._
X. Lu, S. Welleck, P. West, L. Jiang, J. Kasai, D. Khashabi, R. L. Bras, L. Qin, Y. Yu, R. Zellers, N. A.
Smith, and Y. Choi. Neurologic a*esque decoding: Constrained text generation with lookahead
heuristics. ArXiv, abs/2112.08726, 2021a.
X. Lu, P. West, R. Zellers, R. Le Bras, C. Bhagavatula, and Y. Choi. NeuroLogic decoding:
(un)supervised neural text generation with predicate logic constraints. In Proceedings of the
_2021 Conference of the North American Chapter of the Association for Computational Lin-_
_guistics: Human Language Technologies, pages 4288–4299, Online, June 2021b. Associa-_
tion for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.339. [URL https:](https://aclanthology.org/2021.naacl-main.339)
```
//aclanthology.org/2021.naacl-main.339.
```
A. Mutton, M. Dras, S. Wan, and R. Dale. Gleu: Automatic evaluation of sentence-level fluency. In
_Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages_
344–351, 2007.
R. Nakano, J. Hilton, S. A. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju,
W. Saunders, X. Jiang, K. Cobbe, T. Eloundou, G. Krueger, K. Button, M. Knight, B. Chess,
and J. Schulman. Webgpt: Browser-assisted question-answering with human feedback. ArXiv,
abs/2112.09332, 2021.
M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan,
A. Lewkowycz, M. Bosma, D. Luan, C. Sutton, and A. Odena. Show your work: Scratchpads for
intermediate computation with language models. ArXiv, abs/2112.00114, 2021.
S. Polu and I. Sutskever. Generative language modeling for automated theorem proving, 2020.
S. Polu, J. M. Han, K. Zheng, M. Baksys, I. Babuschkin, and I. Sutskever. Formal mathematics
statement curriculum learning, 2022.
L. Qin, S. Welleck, D. Khashabi, and Y. Choi. Cold decoding: Energy-based constrained text
generation with langevin dynamics. ArXiv, abs/2202.11705, 2022.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised
multitask learners. arXiv, 2019.
S. Saha, S. Ghosh, S. Srivastava, and M. Bansal. PRover: Proof generation for interpretable
reasoning over rules. In Proceedings of the 2020 Conference on Empirical Methods in Natural
_Language Processing (EMNLP), pages 122–136, Online, Nov. 2020. Association for Computational_
[Linguistics. doi: 10.18653/v1/2020.emnlp-main.9. URL https://aclanthology.org/2020.](https://aclanthology.org/2020.emnlp-main.9)
```
emnlp-main.9.
```
-----
D. Saxton, E. Grefenstette, F. Hill, and P. Kohli. Analysing mathematical reasoning abilities of
[neural models. In International Conference on Learning Representations, 2019. URL https:](https://openreview.net/forum?id=H1gR5iR5FX)
```
//openreview.net/forum?id=H1gR5iR5FX.
```
K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston. Retrieval augmentation reduces hallucination
in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021,
pages 3784–3803, Punta Cana, Dominican Republic, Nov. 2021. Association for Computational
[Linguistics. doi: 10.18653/v1/2021.findings-emnlp.320. URL https://aclanthology.org/](https://aclanthology.org/2021.findings-emnlp.320)
```
2021.findings-emnlp.320.
```
C. Szegedy, editor. A Promising Path Towards Autoformalization and General Artificial Intelligence,
2020.
O. Tafjord, B. Dalvi, and P. Clark. ProofWriter: Generating implications, proofs, and abductive
statements over natural language. In Findings of the Association for Computational Linguistics:
_ACL-IJCNLP 2021, pages 3621–3634, Online, Aug. 2021. Association for Computational Lin-_
[guistics. doi: 10.18653/v1/2021.findings-acl.317. URL https://aclanthology.org/2021.](https://aclanthology.org/2021.findings-acl.317)
```
findings-acl.317.
```
J. Urban and J. Jakubuv. First neural conjecturing datasets and experiments. In International
_Conference on Intelligent Computer Mathematics, pages 315–323. Springer, 2020._
J. F. Voss and M. L. Means. Learning to reason via instruction in argumentation. Learning and
_Instruction, 1991. ISSN 09594752. doi: 10.1016/0959-4752(91)90013-X._
Q. Wang, C. Brown, C. Kaliszyk, and J. Urban. Exploration of neural machine translation in
autoformalization of mathematics in Mizar. In CPP 2020 - Proceedings of the 9th ACM SIGPLAN
_International Conference on Certified Programs and Proofs, co-located with POPL 2020, 2020._
doi: 10.1145/3372885.3373827.
J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting
elicits reasoning in large language models, 2022.
S. Welleck, J. Liu, R. L. Bras, H. Hajishirzi, Y. Choi, and K. Cho. Naturalproofs: Mathematical
theorem proving in natural language. In Thirty-fifth Conference on Neural Information Processing
_[Systems Datasets and Benchmarks Track (Round 1), 2021. URL https://openreview.net/](https://openreview.net/forum?id=Jvxa8adr3iY)_
```
forum?id=Jvxa8adr3iY.
```
S. Welleck, P. West, J. Cao, and Y. Choi. Symbolic brittleness in sequence models: on systematic
generalization in symbolic mathematics. AAAI, abs/2109.13986, 2022.
-----
**A** **Additional Results**
**A.1** **Additional ablations**
Table 10 shows automatic metrics with various language models used to parameterize NATURALPROVER.
Table 11 shows results with the 774M parameter GPT-2 model with greedy decoding, and full-proof
sampling & reranking with 5 and 10 samples, compared to the 13B parameter GPT-3 with greedy
decoding. We use τ = 0.3 and α = 0.75 based on our full-proof sampling experiments with GPT-3.
Table 12 varies the value function parameter α (core dev set). We use full-proof sampling since
stepwise++ uses multiple values of α in its selection.
|Model Params|Gleu Ref-F1 Halluc|
|---|---|
|||
|GPT-Neo 125M GPT-2 774M GPT-J 6B GPT-3 13B|24.85 61.42 11.07 32.06 65.22 6.76 39.14 79.23 3.51 42.39 89.29 1.90|
Table 10: Varying the language model parameterization of NATURALPROVER (provided knowledge, greedy decoding, core dev set).
|Model Decoding|Gleu Ref-F1 Halluc|
|---|---|
|||
|GPT-2 Greedy GPT-2 Rerank (5) GPT-2 Rerank (10) GPT-3 Greedy|32.06 65.22 6.76 32.95 83.55 5.24 32.65 89.30 2.89 42.39 89.29 1.90|
Table 11: Increasing the inference-time compute
budget and reranking with the NATURALPROVER
value function closes the reference-matching gap
between GPT-2 (774M) and GPT-3 (13B).
_α_ **Gleu Ref-F1**
0.0 42.79 88.40
.25 42.05 90.81
.50 42.59 91.75
.75 42.17 93.19
1.0 41.90 93.60
Table 12: Effect of value function, from α : 0 (LM only) to α : 1.0 (constraint only), with full-proof
sampling (10).
Lexical Grounding
GLEU Token F1 kF1 Ref-P Ref-R Ref-F1 Halluc (↓)
Stepwise Stochastic Beam 41.0 68.89 90.33 91.43 82.04 84.21 4.60
Constrained Stepwise++ 40.4 68.90 97.24 95.05 94.85 94.15 2.00
Table 13: NaturalProver with a stepwise stochastic beam search baseline versus stepwise++ decoding.
The baseline search corresponds to using stepwise decoding with an LM-only value function (α : 0).
Constrained stepwise++ decoding substantially improves grounding metrics compared to stochastic
beam search, while keeping the lexical content metrics at a similar level. Core validation set.
**A.2** **Multiple next-step suggestions**
Table 14 shows next-step suggestion results with 10 sampled suggestions versus greedy decoding.
-----
Lexical Grounding
**Decoding** GLEU Token F1 kF1 Ref-P Ref-R Ref-F1 Halluc (↓)
Greedy 47.87 65.33 70.03 80.04 72.78 65.50 0.93
Nucleus (p=.5) 51.10 68.34 73.69 82.75 74.93 69.21 0.94
Nucleus (p=.7) 53.97 71.01 78.86 84.75 79.28 74.52 0.66
Nucleus (p=.9) 57.79 74.45 85.66 90.17 84.03 81.83 **0.22**
Temperature (t=.6) 60.60 76.43 87.46 91.03 87.48 84.44 0.62
Temperature (t=.8) 61.89 77.48 89.67 **93.19** 88.46 86.74 0.43
Temperature (t=1.0) **62.12** **77.60** **89.78** 93.05 **88.96** **86.87** 0.63
Table 14: Automatic metrics on the core dev set for next-step suggestion, with different decoding
strategies. Generations are from NATURALPROVER (provided knowledge). For stochastic decoding,
10 candidates are sampled. We compute metrics using the candidate with the highest sum of all
metrics (with Hallucination negated).
**B** **Qualitative Examples**
**B.1** **Error Analysis**
**B.1.1** **Reference errors**
**Theorem 1** **Geometric Mean of two Positive Real Numbers is Between them**
Let a, b ∈ R be real numbers such that 0 < a < b.
Let G(a, b) denote the geometric mean of a and b.
Then:
_a < G(a, b) < b_
NATURALPROVER++ Proof: correct deploy
By definition of geometric mean:
_G(a, b) =_ _[a][2][ +][ b][2]_
2
...
_Gold Proof_ **Proof:**
By definition of geometric mean:
_G(a, b) :=_ _√ab_
|√ ab|Col2|
|---|---|
where _√ab specifically denotes the positive square root of a and b._
...
_Reference_ Geometric Mean
The geometric mean ofLet x1, x2, . . ., xn ∈ xR1>, x02 be (strictly) positive real numbers., . . ., xn is defined as:
_Gn := ([Q][n]k=1_ _[x][k][)][1][/n][.]_
**Theorem 2** **Power Set of Transitive Set is Transitive**
Let S be a transitive set.
Then its power set P(S) is also a transitive set.
NATURALPROVER++ Proof: correct deploy
From Set is Transitive iff Subset of Power Set:
FromS ⊆P Power Set of Subset(S) :
...
|S) ⊆S|Col2|
|---|---|
_Gold Proof_ **Proof:**
Let S be transitive.
By Set is Transitive iff Subset of Power Set:
_S ⊆P(S)_
Then by Power Set of Subset:
...
|) ⊆P(P(S))|Col2|
|---|---|
_Reference_ Power Set of Subset
Let S ⊆ _T where S and T are both sets._
Then:
_P(S) ⊆P(T ) where P(S) denotes the power set of S._
-----
Table 15: Reference deployment errors. Theorem 1: NATURALPROVER incorrectly deploys the definition of geometric mean (G(a, b) = (ab)[1][/][2] would be correct). Theorem 2: An invalid deployment
of Power Set of Subset; a correct application would yield P(S) ⊆P(P(S)). All generated
proofs are with provided knowledge and stepwise++ decoding.
**Theorem 3** **Derivative at Point of Inflection**
Let f be a real function which is twice differentiable on the open interval (a . . b).
Let f have a point of inflection at ξ ∈ (a . . b).
Then:
_f_ _[′′]_ (ξ) = 0
where f _[′′]_ (ξ) denotes the second derivative of f at ξ.
NATURALPROVER++ Proof: correct deploy justif.
By definition of point of inflection:
where f (η) denotes the derivative of f with respect to η.
|) : f (η) = 0|Col2|
|---|---|
...
_Gold Proof_ **Proof:**
By definition of point of inflection,
|a local|maxi|mum|
|---|---|---|
|local|min|imum|
|---|---|---|
...
_Reference_ Point of Inflection
Let f be a real function which is differentiable on an interval I ⊆ R.
Let ξ ∈ I.
_f has a point of inflection at ξ iff the derivative f_ _[′]_ of f
has either a local maximum or a local minimum at ξ.
**Theorem 4** **Minus One is Less than Zero**
_−1 < 0_
NATURALPROVER++ Proof: correct useful justif.
0 < 1 Real Zero is Less than Real One
⇝ _−1 < 0_ Negative of Real Zero equals Zero
|Order|of|Real|Numbers|is|Dual|of|Order|of|their|Negatives|
|---|---|---|---|---|---|---|---|---|---|---|
_Gold Proof_ **Proof:**
0 < 1 Real Zero is Less than Real One
⇝ 0 > −1 Negative of Real Zero equals Zero
|Order|of|Real|Numbers|is|Dual|of|Order|of|their|Negatives|
|---|---|---|---|---|---|---|---|---|---|---|
⇝ _−1 < 0_ Dual Ordering
■
_Reference_ Order of Real Numbers is Dual of Order of their Negatives
_∀x, y ∈_ R : x > y ⇐⇒ (−x) < (−y)
Table 16: Reference justification errors. Theorem 3: NATURALPROVER makes an invalid inference
using the definition of point of inflection (given the theorem statement, f (ξ) = 0 holds, but not
necessarily f (η) = 0 for all η in the interval), which can be viewed as both incorrectly deploying
the definition and using it as invalid justification. Theorem 4: A subtle invalid justification that is
still useful to the human annotator: the reference provides justification for −0 > −1 rather than
_−1 < 0. See Table 17 for additional justification errors within derivations. All generated proofs are_
with provided knowledge and stepwise++ decoding.
-----
**B.1.2** **Equation errors**
**Theorem 1** **Sum of Squares of Divisors of 24 and 26 are Equal**
The sum of the squares of the divisors of 24 equals the sum of the squares of the divisors of
26:
_σ2(24) = σ2(26)_
where σα denotes the divisor function.
**Proof:** correct eqn.
NATURALPROVER++ 24 = 2[3]
26 = 2[2]
⇝ _σ2(24) = σ2(26)_
■
24 = 2[3]
26 = 2[2]
⇝ _σ2(24) = σ2(26)_
**Theorem 2** **Trace of Sum of Matrices is Sum of Traces**
Let A = [a]n and B = [b]n be square matrices of order n.
Let A + B denote the matrix entrywise sum of A and B.
Then: tr(A + B) = tr(A) + tr(B) where tr(A) denotes the trace of A.
**Proof:** correct eqn. ref.
|tr(C) A+B P tr(C) C=B Trace of Matrix|Trace of Matrix|
|---|---|
_tr(A + B) =_ **C=A+B** _[tr][(][C][)]_ Trace of Matrix
= **C=A** _[tr][(][C][) +][ P]C=B_ _[tr][(][C][)][ Sum of Summations equals Summation of Sum]_
= tr(A) + tr([P]B) Trace of Matrix
[P]
**Theorem 3** **Moment Generating Function of Discrete Uniform Distribution**
Let X be a discrete random variable with a discrete uniform distribution with parameter n
for some n ∈ N.
Then the moment generating function MX of X is given by:
_MX_ (t) = _[e]n[t][(1](1[ −] −_ _[e]e[nt][t])[)]_
**Proof:** correct eqn. ref.
NATURALPROVER++ From the definition of the discrete uniform distribution,
_X has probability mass function: Pr(X = k) =_
_n + k_
From the definition of a moment generating function:
_MX_ (t) = Ee[tX] = _k=0_ [Pr(][X][ =][ k][)][e][tk]
So:
_MX_ (t) = _k = 0_ [Pr(][P][X][n] [ =][ k][)][e][tk]
|Sum of Geometric Sequence|Col2|
|---|---|
= _n+n_ _k_ _nk = 0[((][e][tk][)]n_ _nj = 0−1_ [(][e] _n ))_
P [Q]
■
= _n+n_ _k_ _nj = 0[(][e]_ _n e[tj]_ )
= _n+n_ _k_ [(1]P[ −] _[e][nt][)]_ Discrete Uniform Distribution
= _[et]n[(1](1[−]−[ent]e[t]_ ) [)]
Table 17: Equation-related errors in full proof generation. NATURALPROVER can struggle with
invalid equations and derivations, including basic equalities (Theorem 1), and more sophisticated
settings (Theorems 2 and 3). Since derivations involve deploying definitions and rules from references
and using references as justification, equation-related errors are often coupled with reference errors .
-----
**B.2** **Examples**
**Theorem** **Metric Space is Perfectly Normal** _Gold Proof_
Let M = (A, d) be a metric space. By definition, a topological space is perfectly normal space iff it is:
Then M is a perfectly normal space. perfectly T4 space and T1 (Fréchet) space.
We have that:
_T2 (Hausdorff) Space is a T1 (Frechet) Space._
|Metric|Space|is|Perfectly|T4|
|---|---|---|---|---|
|Metric|Space|is|T2||
NATURALPROVER++ From:
_T2 (Hausdorff) Space is T1 Space_
|Metric|Space|is|Hausdorff|
|---|---|---|---|
it follows that M is a topological space which
|Metric|Space|is|Perfectly|T4|Col6|
|---|---|---|---|---|---|
is perfectly normal.
NATURALPROVER From:
_(without stepwise++)_ Metric Space is Perfectly
|om:|Col2|Col3|Col4|
|---|---|---|---|
|Metric|Space i|s Perfectly|T4|
|Metric|Space i|s T2||
|follows that M is a perfectly||normal space .||
Table 18: Example proof using provided in-context reference constraints. The key theorem T2
```
Space is T1 Space is provided as a constraint, but under greedy decoding the model ignores the
```
constraint, resulting in skipping steps . Stepwise++ decoding selects proof steps based on likelihood
and constraint satisfaction, resulting in better reference coverage and a correct proof.
**Theorem Title** `Equality of Complex Numbers`
**Theorem Content**
Gold Proof
NATURALPROVER
NATURALPROVER++
Table 19: A complex, direct proof. Without stepwise++ decoding, NATURALPROVER makes an
_invalid deployment error, continues with some nonsense, and prematurely terminates the proof. The_
NATURALPROVER++ proof is correct, thanks to stepwise++ decoding.
-----
**Theorem Title** `Compact Complement Topology is Connected`
**Theorem Content**
Gold Proof
GPT-3
NATURALPROVER
RETRIEVE
Table 20: A reference assembly proof. GPT-3’s proof is incorrect, possibly because it doesn’t know
to use the two references. NATURALPROVERRETRIEVE uses retrieved references (shown on the right)
to arrive at a correct proof.
-----
**Theorem Title** `Pointwise Addition on Real-Valued Functions is Associative`
**Theorem** **Con-**
**tent**
Gold Proof
GPT-3
NATURALPROVER
Table 21: A template adaptation proof, which is proved via symbolic derivations. NATURALPROVER
adapts the proof of a similar training theorem, Pointwise Addition on Complex-Valued Func```
tions is Associative, to prove the claim. Despite training on the same (theorem, proof) pairs,
```
vanilla GPT-3 fails to prove the claim.
-----
**Theorem Title** `Cosine in terms of Sine`
**Theorem** **Con-**
**tent**
Gold Proof
GPT-3
NATURALPROVER
Table 22: A template adaptation proof by cases. GPT-3’s proof goes completely derailed and it
does not know to use the reference Sum of Squares of Sine and Cosine. NATURALPROVER’s
proof is correct. The model adapts the proof of the mirroring theorem, Sine in terms of Cosine,
in the training set.
-----
**Theorem Title: Triangle Inequality/Complex Numbers/General Result**
**Theorem Content:**
Gold Proof NATURALPROVER
Table 23: A complex proof by induction. NATURALPROVER’s proof makes one hallucinated
reference error, one repetition error, and is otherwise correct. The model did not see a similar
proof during training: while there are more variants of the Triangle Inequality theorem in our
dataset (i.e. with Real Numbers and Geometry), they only discuss the 2-variable case and none of
them discuss the n-variable general result. So in this case, the model has learned the format of proof
by induction and can apply it in new context. (A proof-by-induction example in train set: Sum of
```
Sequence of Squares/Proof by Induction.)
```
-----
**C** **Dataset Details**
We provide an overview of NATURALPROOFS and its ProofWiki domain from which we build
NATURALPROOFS-GEN. Refer to [Welleck et al., 2021] for further details about NATURALPROOFS.
Our dataset is derived from NATURALPROOFS, a multi-domain corpus of theorem statements, proofs,
definitions, and additional pages (e.g. axioms, corollaries) in natural mathematical language. We
use the ProofWiki[2] domain, which provides broad-coverage of many subject areas (e.g. Set Theory,
Analysis) sourced from ProofWiki, an online compendium of community-contributed mathematical
proofs. PROOFWIKI contains ∼20k theorems, ∼20k proofs, ∼12k definitions, and ∼1k additional
pages (e.g. axioms, corollaries). The set of all ∼33k theorems, definitions, and additional pages form
the reference set R. Finally, ∼14.5k of the theorems x are paired with at least one proof y to form
_examples D = {(x, y)i}i[N]=1[. Welleck et al. [2021] split the reference sets and examples into training,]_
validation, and test splits to ensure that no theorem in the validation or test splits was mentioned in
the training split.
**D** **Segment-level Constrained Decoding**
In this section we present a generic segment-level decoding algorithm that contains stepwise++,
full-proof sampling, and greedy decoding as special cases. We generate a multi-step proof using a
value function v(·) that measures language quality and constraint satisfaction. Search can be done
at the step-level, in which candidate next-steps are generated and high-value steps are retained in
a beam, or at the proof-level, in which multiple proofs are generated and the highest-value proof
is selected. We formalize these into a generic segment-level search, where a segment st is either a
proof-step yt or a full proof y.
The search iteratively builds a multi-step proof y = (y1, . . ., yT ) by expanding, scoring, and selecting
a set of candidate segments:
- Expand: St−1 → _St[′]_ [extends segments][ S][t][−][1] [=][ {][s][≤][t][}][ into candidates][ S]t[′] [=][ {][(][s][≤][t][, s][t][)][}][.]
- Score : (s _t, v)_ R scores a candidate using a value function, v(s _t)_ R.
_≤_ _→_ _≤_ _→_
- Select : St[′] [prunes candidates][ S]t[′] [into segments][ S][t] [used in the next iteration.]
_[→]_ _[S][t]_
**Value function.** We score candidates based on constraint satisfaction and language quality,
_v(s≤t) = αvconstraint(s≤t) + (1 −_ _α)vLM(s≤t),_ (7)
where vconstraint(y≤t) is the number of unique in-context reference-titles in s≤t, and vLM(s≤t) is
log pθ(s≤t). We normalize each term by dividing by the maximum absolute value among candidates.
**Greedy search.** This baseline search defines a segment as a full proof, meaning s0 is an empty
sequence and s1 is a proof y. Expand samples one segment candidate with temperature 0. Score
and select are trivial since there is only one candidate. Greedy search costs T steps of tokens.
**Sample-and-rerank.** In this search, a segment is again full proof, but expand samples N candidates, S1[′] [=][ {][y][n][ ∼] _[q][(][·|][x][)][}]n[N]=1[, where][ q][ is a decoding algorithm (e.g. temperature sampling).]_
`Select takes the top scoring candidate, y = arg maxyn∈S1′` _[v][(][y][n][)][. The cost is][ NT][ steps of tokens.]_
**Step-wise stochastic beam search.** This search generates by iteratively sampling and re-ranking
next-step candidates. In this case, a segment is a proof step, yt, and each iteration starts with a
beam of proofs-so-far, St 1 = _y<t[k]_ _[}][K]k=1[, where][ K][ is the beam size.][ Expand][ samples][ N][ next-step]_
_−_ _{_
candidates for each proof-so-far in the beam,
_St[′]_ [=]
_N_
(y<t ◦ _yt[n][)][ |][ y]t[n]_ _[∼]_ _[q][(][·|][y][<t][,][ x][)]_ _n=1[,]_ (8)
_y<t∈St−1_
where q is a decoding algorithm (e.g. temperature sampling) and ◦ is concatenation. Select forms
the next beam using the top-K scoring candidates,
_St = arg top-K_ _v(y_ _t)._ (9)
_y≤t∈St[′]_ _≤_
2The ProofWiki domain of NATURALPROOFS dataset is under the CC BY-SA 4.0 license.
-----
When a proof in the beam terminates, it is not expanded further. The search ends when the beam
consists of K terminated proofs. The highest scoring proof is returned as the final output. The cost is
_NTK steps of tokens._
**Stepwise++.** At certain proof steps it is important to enumerate and explore options, while at others
(e.g. derivations) a single highly probable prediction is better. To this end, we expand by sampling
with multiple temperatures, meaning that we expand each prefix y<t in (6) using:
_yt[n]_ (10)
_{_ _[∼]_ _[q][τ]_ [(][·|][y][<t][,][ x][)][ |][ τ][ ∈{][τ][1][, . . ., τ][m][}}][,]
where qτ is sampling with temperature τ . This relaxes the commitment to a single temperature for all
proof steps, intuitively balancing exploration (higher τ ) with exploitation (lower τ ).
Second, during the search we want to balance selecting proof steps that satisfy constraints and proof
steps with high log-probability. To this end, we select clusters with different value weights,
_St = {y≤t ∈_ topK′ (Sα) | α ∈{α1, . . ., αℓ}}, (11)
where Sα means the set of candidates scored with v = αvconstraint + (1 − _α)vLM, and K_ _[′]_ = K/ℓ.
This interpolates between selecting steps with good language score (α small), constraint score (α
large), and balance (α : 0.5).
**E** **Implementation Details and Experimental Setup**
**Data preprocessing.** We automatically infer the boundaries of proof steps within the raw proof
contents, and merge contiguous lines into atomic proof steps when appropriate. Steps are separated
by the \n token (\\n in Python string), and lines within a step are separated by the newline token (\n
in Python string).
**Additional model details.** All GPT-3 models (including NATURALPROVER models) are fine-tuned
instances of the Curie engine, the second largest model available through the OpenAI API at the
time of writing.[3] The model’s performance on the EleutherAI evaluation harness[4] is between the
6.7B and 13B variants of the autoregressive transformer language model GPT-3 from [Brown et al.,
2020],[5] though further details of the Curie model are not publicly available.
Separately, we fine-tune GPT-J 6B,[6] a publicly available autoregressive transformer language model
trained on the Pile [Gao et al., 2020], GPT-2 [Radford et al., 2019], an autoregressive transformer
language model trained on scraped web documents, and GPT-Neo-125M,[7] a GPT-2 like causal
language model trained on the Pile.
Our retrieval model is the joint retrieval model from [Welleck et al., 2021] trained for reference
retrieval on ProofWiki using the same dataset splits as NaturalProver. We use the publicly-available
pretrained model from the GitHub repository of [Welleck et al., 2021] and do not update the model
further. We use the model to retrieve the top-20 references for each input theorem.
**Implementation details.** All GPT-3 models (including NATURALPROVER models) are fine-tuned
with the OpenAI API[8] for 4 epochs with a batch size of 64. Other models (GPT-2/J/Neo) are trained
on one Quadro RTX 8000 GPU. During inference, the prompt (up to <proof>) is truncated to 1024
tokens. For full proof generation, we allow a maximum of 1020 generated tokens. For next-step
suggestion, we truncate the proof-so-far to 900 tokens, and allow a maximum of 120 generated tokens
per step.
**Stepwise++ decoding.** For expansion with multiple temperatures, we use N = 10 candidates
sampled with (n, τ ) ∈{(1, 0.0), (3, 0.3), (3, 0.5), (3, 0.7)}. We also tried including τ = 1.0 which
resulted in very poor GLEU, and {(1,0.0), (5,0.3), (4,0.5)}. For selection, we use a beam size K = 9,
and three equally-sized clusters formed with α ∈{0.1, 0.5, 1.0}. We also tried {0.5, 0.75, 0.9}. We
use α = 0.75 to pick select the final sequence, based on our ablation with full-proof sampling.
3https://beta.openai.com/docs/guides/fine-tuning
[4https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
[5https://blog.eleuther.ai/gpt3-model-sizes/](https://blog.eleuther.ai/gpt3-model-sizes/)
[6https://huggingface.co/EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B)
[7https://github.com/EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo)
[8https://beta.openai.com/docs/guides/fine-tuning](https://beta.openai.com/docs/guides/fine-tuning)
-----
**Full proof sampling.** We use temperature τ = 0.3, selected based on a search over τ ∈
_{0.1, 0.3, 0.5, 0.7} using GLEU plus Ref-F1 on the core dev set._
**F** **Additional Evaluation Details**
**F.1** **Full Evaluation Schema**
Table 24 shows the full schema of human evaluation. The overall correctness and usefulness are
rated on a 0-5 scale. The step-wise correctness and usefulness are yes/no questions, while the error
types ask for a binary indicator for the existence of each error type.
**F.2** **Additional Human Evaluation Details**
**Process.** The authors conducted and moderated group sessions with the annotators. Each session
consisted of 30-minutes of training and a 1-hour working/Q&A period. After attending the session,
annotators could continue working on their assigned tasks for two weeks. Each annotator was
assigned 25 theorems (with 5 proofs per theorem, equaling 125 total tasks) and asked to complete
as many tasks as they would like. The evaluation guideline that the annotators referenced to can
[be found in the supplementary materials. The pre-recorded training video is available at https:](https://drive.google.com/file/d/1TRS5XRf_coLEkC4lqaizaqSwHHgBPrG2)
```
//drive.google.com/file/d/1TRS5XRf_coLEkC4lqaizaqSwHHgBPrG2.
```
**Interface.** We developed an interface that displays theorems and proofs in a rendered, humanreadable format and collects annotations. The interface is built on MediaWiki[9], which also powers
the ProofWiki website[10]. We also developed a web console that helps human annotators navigate
annotation tasks and track progress. Figure 3 shows screenshots of the interface.
**Payment.** Human annotators are paid based on the number of tasks they complete. Each task is
worth ($1.0+#steps×$0.4). We pay each annotator an additional $40 for attending the group session.
Annotators are guaranteed a minimal rate of $20/hour. The human evaluation costs approximately
$5,000.
**Ethics review.** The human evaluation study is approved by University of Washington under IRB
STUDY00014751. Consent was obtained from each human annotator by signing a consent form via
DocuSign prior to the beginning of study. The IRB approval letter and a template of the consent form
can be found in the supplementary materials. Minimal personally identifiable information (PII) was
collected, and removed prior to any data analysis.
**F.3** **Full results**
Table 25 shows the full results of human evaluation, including the error rates of fine-grained error
types.
**F.4** **Analyzing the Annotators**
**Inter-annotator agreement.** We compute inter-annotator agreement using proofs in the core dev
set that get an evaluation from two or more annotators. Overall, the annotators achieved fair agreement
(Fleiss kappa κ = 0.24). The level of agreement for each evaluation question is shown in Figure 4.
Fair to moderate agreement is reached for identifying coarse-grained error types, while the high-level
questions (i.e. correctness, usefulness) have relatively low agreement.
**Source diversity.** Figure 5 shows the largest proportion of evaluations covered by a fixed number
of annotators. The top-1 annotator contributes 20% of the total evaluations when counting by proofs
and 18% when counting by steps. 50% of the total evaluations is covered by roughly the top 3 or 4
annotators. Therefore, our human evaluation results have good source diversity and do not heavily
depend on a single annotator’s opinion.
[9https://www.mediawiki.org](https://www.mediawiki.org)
[10https://www.proofwiki.org](https://www.proofwiki.org)
-----
**Aspect / Error Type** **Definition**
OVERALL EVALUATION
**Correctness** Choose a rating below. Not every statement in each rating will apply to the proof
given the rating, but many statements will apply, and the general theme of the
rating will hold:
_◦_ 0: The proof is missing.
_◦_ 1: The proof makes no sense or is unrelated to the problem statement.
_◦_ 2: The proof contains serious logical flaws and lacks adequate justification or
explanation.
_◦_ 3: The proof has some gaps in reasoning.
_◦_ 4: The proof is correct or nearly correct and logically coherent.
_◦_ 5: The proof is correct and flows logically.
**Usefulness** Even if the proof is not perfect, would it be useful to you if you were to prove
this theorem?
_◦_ 0: The proof is missing.
_◦_ 1: Seeing this proof would not help with proving the theorem by myself at all.
_◦_ 2: Seeing this proof proof would slightly decrease the effort needed to prove
the theorem by myself.
_◦_ 3: Seeing this proof would make it substantially easier to prove the theorem by
myself.
_◦_ 4. The proof is almost correct, and only needs a few minor corrections.
_◦_ 5: The proof is correct and could be directly used as a solution.
STEP-WISE EVALUATION
**Correctness** Is this step correct?
_◦_ Yes
_◦_ No (check this if you identified any error in previous questions)
_◦_ Cannot determine (e.g. this step makes a valid progress, but it depends on an
invalid prior step)
_◦_ This is a meaningless step (e.g. QED)
**Usefulness** Could this step be a helpful hint for proving the theorem by myself?
_◦_ Yes
_◦_ No
**Reasoning: Reference**
|Col1|Invalid Deployment A statement deployed from a reference is not consistent with the reference. Invalid Justification A reference is used as invalid justification for a statement. Hallucinated Ref. A reference that does not exist is used. Self Loop The step refers to the theorem itself.|
|---|---|
**Reasoning: Equation**
|Col1|Invalid E Invalid D|quation A standalone equation or initial equation in a derivation is invalid. erivation An equation in a derivation does not follow from the preceding steps.|
|---|---|---|
**Reasoning: Other**
|Col1|Skips Ste Repetitio Invalid (|ps The step assumes unproven statements, or skips non-trivial steps. n The step is merely a repetition of known things. Other) The step’s reasoning is invalid for reasons not captured by the other categories.|
|---|---|---|
**Language**
|Col1|Incomplete The step is not a complete mathematical statement or equation. Misformatted Math A math expression is not properly formatted. Unknown There is a mis-spelled word, or unrecognized math symbol.|
|---|---|
**Symbolic**
|Col1|Undefined One of the symbols is undefined. Overloaded One of the symbols has overloaded meanings. Mistyped A symbol usage is not well-typed. Unconventional Unconventional notation is used.|
|---|---|
Table 24: Detailed description of the human evaluation schema.
-----
Figure 3: Human evaluation interface. The first screenshot is the web console for task navigation and
progress tracking. The next three screenshots show examples of qualification page, overall evaluation
page, and step-wise evaluation page.
-----
**Model** GPT-3 NPRETRIEVE NP NP++ NP
**Task** Full-proof Full-proof Full-proof Full-proof Next-step
OVERALL EVALUATION (0-5 scale)
Samples 90 88 90 92 –
|Correctness (↑) 1.94 2.49 2.41 2.68 Usefulness (↑) 1.80 2.34 2.43 2.75|– –|
|---|---|
|Col1|Correctness Usefulness (|(↑) 28.18 33.56 26.30 35.41 42.86 ↑) 25.69 41.54 39.60 46.57 51.43|Col4|
|---|---|---|---|
|Reasoning: Reference Errors (↓) 30.92 23.52 25.84 23.61 Invalid Deployment 14.71 13.48 18.04 15.24 Invalid Justification 17.96 13.62 13.30 10.30 Hallucinated Ref. 4.61 1.10 1.38 1.29 Self Loop 2.24 1.24 0.31 0.86|||19.70 13.68 9.62 1.05 0.75|
|Reasoning: Equation Errors (↓) 32.54 37.55 35.93 28.54 Invalid Equation 15.21 16.23 12.23 9.44 Invalid Derivation 24.56 27.10 27.37 21.89|||26.32 12.63 15.64|
|Reasoning: Other Errors (↓) 40.15 23.66 25.23 18.45 Skips Steps 2.87 3.03 2.29 4.51 Repetition 23.07 4.95 5.66 1.93 Invalid (Other) 15.21 16.37 18.35 12.02|||19.10 3.46 2.56 13.53|
|Language Errors (↓) 5.61 4.54 8.41 5.58 Incomplete 1.62 2.48 1.99 1.07 Misformatted Math 2.99 1.93 3.82 3.22 Unknown 1.62 0.69 3.98 1.72|||8.57 3.76 3.91 2.56|
|Symbolic Errors (↓) 5.24 6.19 5.35 3.65 Undefined 1.25 2.06 1.53 1.07 Overloaded 2.00 0.41 0.76 0.43 Mistyped 1.87 2.89 1.83 1.93 Unconventional 0.87 1.38 1.83 1.07|||5.86 2.11 0.60 3.01 1.05|
STEP-WISE EVALUATION (%)
Samples 802 727 654 466 665
Correctness (↑) 28.18 33.56 26.30 **35.41** 42.86
Table 25: Full human evaluation results on the core test set. NP = NATURALPROVER. Coarse-grained
error rates (e.g. Reasoning: Reference Errors ) are computed as the frequency of existence of any
fine-grained error under the respective bucket.
Figure 5: Source diversity of human annotations.
Figure 4: Inter-annotator agreement of human
evaluation.
-----
**G** **Ethical Considerations**
Our system may produce proofs of mathematical theorems that are fallacious or misleading, which
may have negative impact if deployed in real educational environments. We kindly remind potential
users that our system and models are experimental, and their outputs should be interpreted critically.
-----
| [
"Sean, Welleck",
"Jiacheng, Liu",
"Ximing, Lu",
"Hannaneh, Hajishirzi",
"Yejin, Choi"
] | 2022-05-01T00:00:00 | NeurIPS 2022 | true | 50 | 7 | null | http://arxiv.org/abs/2205.12910 | https://arxiv.org/abs/2205.12910 | https://www.semanticscholar.org/paper/d593b9b8d63426f0d6a795dd7f2294619bc03610 |
INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving | In learning-assisted theorem proving, one of the most critical challenges is to generalize to theorems unlike those seen at training time. In this paper, we introduce INT, an INequality Theorem proving benchmark designed to test agents’ generalization ability. INT is based on a theorem generator, which provides theoretically infinite data and allows us to measure 6 different types of generalization, each reflecting a distinct challenge, characteristic of automated theorem proving. In addition, provides a fast theorem proving environment with sequence-based and graph-based interfaces, conducive to performing learning-based research. We introduce base-lines with architectures including transformers and graph neural networks (GNNs)for INT. Using INT, we find that transformer-based agents achieve stronger test performance for most of the generalization tasks, despite having much larger out-of-distribution generalization gaps than GNNs. We further find that the addition of Monte Carlo Tree Search (MCTS) at test time helps to prove new theorems. | This paper introduces INT, an INequality Theorem proving benchmark, specifically designed to test agents' generalization ability, and evaluates the same agents augmented with Monte Carlo Tree Search at test time, and shows that MCTS can help to prove new theorems. | ## INT: AN INEQUALITY BENCHMARK FOR EVALUATING GENERALIZATION IN THEOREM PROVING
**Yuhuai Wu[∗], Albert Qiaochu Jiang[∗], Jimmy Ba & Roger Grosse**
University of Toronto & Vector Institute
{ywu, ajiang, jba, rgrosse}@cs.toronto.edu
ABSTRACT
In learning-assisted theorem proving, one of the most critical challenges is to generalize to theorems unlike those seen at training time. In this paper, we introduce INT,
an INequality Theorem proving benchmark designed to test agents’ generalization
ability. INT is based on a theorem generator, which provides theoretically infinite
data and allows us to measure 6 different types of generalization, each reflecting a
distinct challenge, characteristic of automated theorem proving. In addition, INT
provides a fast theorem proving environment with sequence-based and graph-based
interfaces, conducive to performing learning-based research. We introduce baselines with architectures including transformers and graph neural networks (GNNs)
for INT. Using INT, we find that transformer-based agents achieve stronger test
performance for most of the generalization tasks, despite having much larger outof-distribution generalization gaps than GNNs. We further find that the addition of
Monte Carlo Tree Search (MCTS) at test time helps to prove new theorems.
1 INTRODUCTION
Advances in theorem proving can catalyze developments in fields including formal mathematics (McCune, 1997), software verification (Darvas et al., 2005), and hardware design (Kern and Greenstreet,
1999). Following its recent success across other application domains, machine learning has significantly improved the performance of theorem provers (Bansal et al., 2019; Bridge et al., 2014;
Gauthier et al., 2018; Huang et al., 2019; Irving et al., 2016; Kaliszyk et al., 2018; Lee et al., 2020;
Loos et al., 2017; Urban et al., 2011; Wang and Deng, 2020; Yang and Deng, 2019; Li et al., 2020;
Rabe et al., 2020; Polu and Sutskever, 2020). Two key factors that make theorem proving particularly
challenging for ML are data sparsity and that it requires out-of-distribution generalization.
Firstly, due to the difficulty of formalizing mathematics for humans, manually generated formal
proofs are necessarily expensive. Typical formal mathematics datasets contain thousands (Huang
et al., 2019) to tens-of-thousands (Yang and Deng, 2019) of theorems — orders of magnitude smaller
than datasets that enabled breakthroughs in areas such as vision (Deng et al., 2009) and natural
language processing (Rajpurkar et al., 2016). Secondly, the assumption frequently made in machine
learning that each data point is identically and independently distributed does not hold in general for
theorem proving: interesting problems we want to prove are non-trivially different from those we
have proofs for. Hence, the out-of-distribution generalization ability is crucial.
Synthetic datasets that rely on procedural generation provide a potentially unlimited amount of data.
Well-designed synthetic datasets have been shown to help understand the capabilities of machine
learning models (Johnson et al., 2017; Ros et al., 2016; Weston et al., 2016). With the goal of
alleviating the data scarcity problem and understanding out-of-distribution generalization for theorem
proving, we introduce INT. INT is a synthetic INequality Theorem proving benchmark designed for
evaluating generalization. It can generate a theoretically unlimited number of theorems and proofs in
the domain of algebraic equalities and inequalities. INT allows tweaking of its problem distribution
along 6 dimensions, enabling us to probe multiple aspects of out-of-distribution generalization. It
is accompanied by a fast proof assistant with sequence and graph-based interfaces. A common
reservation to hold for synthetic datasets is one of realism: can synthetic data help to prove realistic
_∗: equal contribution_
-----
theorems? Polu and Sutskever (2020) adopted our generation method and showed that augmentation
of 1% of synthetic theorems in training helped to complete 2.3% more proofs on Metamath (Megill
and Wheeler, 2019). This demonstrates the usefulness of INT in real mathematics.
Time and memory requirements for the proof assistant have often been an obstacle for using theorem
provers as RL environments. Most existing proof assistants require a large software library to define
numerous mathematical theorems, leading to slow simulation. Therefore, a key design objective for
INT was to be lightweight and swift. Taking advantage of the limited scope of inequality theorems,
we load a minimal library and achieve fast simulation. Reducing the simulation overhead allows for
experimentation with planning methods such as MCTS which requires many calls to a simulator.
We summarize the contributions of this paper as follows:
1. We make, to the best of our knowledge, the first attempt to investigate an important question
in learning-assisted theorem proving research, i.e., can theorem provers generalize to different
problem distributions? We introduce INT for evaluating six dimensions of generalization.
2. We introduce and benchmark baseline agents for the six types of generalization tasks in INT. We
find that transformer-based agents’ generalization abilities are superior when training and test data
are drawn from the same distribution and inferior in out-of-distribution tasks in INT, compared
to GNN-based agents. Surprisingly, despite larger generalization gaps, transformer-based agents
have favorable test success rates over GNN-based ones in most cases.
3. We find that searching with MCTS at test time greatly improves generalization.
2 RELATED WORKS
**Automatic and Interactive Theorem Proving.** Modern Automatic Theorem Provers (ATPs) such
as E (Schulz, 2013) and Vampire (Kovács and Voronkov, 2013) represent mathematical theorems in
first-order logic and prove them with resolution-based proof calculi. On the other hand, Interactive
Theorem Provers (ITPs) allow human formalization of proofs. This perhaps makes them more suitable
for biologically inspired methods such as machine learning. Famous ITPs include Isabelle (Paulson,
1986), Coq (Barras et al., 1999), LEAN (de Moura et al., 2015), and HOL Light (Harrison, 1996).
**Learning-assisted Theorem Proving.** Theorem provers have been improved by supervised learning (Urban et al., 2011; Bridge et al., 2014; Irving et al., 2016; Loos et al., 2017; Wang et al., 2017;
Rocktäschel and Riedel, 2017; Bansal et al., 2019; Gauthier et al., 2018; Huang et al., 2019; Yang and
Deng, 2019; Kaliszyk and Urban, 2015; Polu and Sutskever, 2020; Li et al., 2020; Rabe et al., 2020;
Jakubuv and Urban, 2019; Olsák et al., 2020; Jakubuv et al., 2020; Kaliszyk et al., 2015; Gauthier and
Kaliszyk, 2015). Wang et al. (2017) used graph embeddings to represent logic formulas and achieved
state-of-the-art classification accuracy on the HolStep dataset (Kaliszyk et al., 2017). Reinforcement
learning (RL) was employed in (Zombori et al., 2019; Gauthier, 2019; 2020). Kaliszyk et al. (2018)
combined MCTS with RL to prove theorems with connection tableau. Notably, GPT-f (Polu and
Sutskever, 2020) adopts our INT generation method for dataset augmentation.
**Datasets for Theorem Proving.** There have been many formal mathematical libraries (Megill and
Wheeler, 2019; Rudnicki, 1992; Gauthier, 2019). Formalized mathematical theorems include the
Feit-Thompson theorem (Gonthier et al., 2013) and the Kepler Conjecture (Hales et al., 2017). The
largest human formal reasoning dataset is IsarStep (Li et al., 2020), where they mined the archive
of formal proofs and brought together 143K theorems in total. These works rely on human efforts
to formalize theorems, which leads to small to moderate-sized datasets. There have been studies
on synthesizing theorems (Urban, 2007; Urban et al., 2008; Piotrowski and Urban, 2018; Gauthier
et al., 2017; 2016; Chvalovsky et al., 2019; Lenat, 1976; Fajtlowicz, 1988; Colton, 2012; Johansson`
et al., 2014) It is worth mentioning that there have been a few approaches (Urban and Jakubv, 2020;
Wang and Deng, 2020) on neural theorem synthesizers. Our theorem generator INT is designed to
be capable of creating an infinite number of theorems, as well as benchmarking the generalization
ability of learning-assisted theorem provers.
3 THE INT BENCHMARK DATASET AND PROOF ASSISTANT
Our INT benchmark dataset provides mathematical theorems and a means to study the generalization
capability of theorem provers. For this purpose, we need control over the distribution of theorems:
-----
this is achieved by a highly customizable synthetic theorem generator. We used a set of ordered field
axioms (Dummit and Foote, 2004) to generate inequality theorems and a subset of it to generate
equality theorems. Details of the axiomization schemes can be found in Appendix A. The code
[for generating theorems and conducting experiments is available at https://github.com/](https://github.com/albertqjiang/INT)
[albertqjiang/INT.](https://github.com/albertqjiang/INT)
3.1 TERMINOLOGY
The axiom combination of a proof refers to the set of axioms used in constructing it. The sequence
of axioms applied in order in the proof is called the axiom order. For example, let A, B, C denote
three unique axioms, and their order of application in a proof be [B, B, A, C]. In this case, the
axiom combination is the set {A, B, C} and the axiom order is the sequence [B, B, A, C]. An initial
condition is a (usually trivial) logic statement (e.g. a = a) to initiate the theorem generation process.
The degree of an expression is the number of arithmetic operators used to construct it. For example,
degree(a) = 0 while degree(((a ∗ _c) ∗_ _b)[2]) = 3._
3.2 INT ASSISTANT
Original goal Goal 1 Goal 2 (trivial)
= Addition Commutativity = Addition Associativity =
+ + + + + +
+ c + b Step 1 c + + b Step 2 c + c +
a b c a a b c a a b a b
(c) INT, graph interface
(a) LEAN, rw stands for
rewrite
(b) INT, seq2seq interface
Figure 1: A proof of a + b + c = c + a + b in LEAN and INT, with seq2seq and graph interfaces.
We built a lightweight proof assistant to interact with theorem provers. It has two interfaces, providing
theorem provers with sequential and graph representations of the proof state, respectively.
A problem in INT is represented by a goal and a set of premises (e.g. a + 0 = a, ∅), which are
mathematical propositions. The INT assistant maintains a proof state composed of the goal and the
proven facts. The proof state is initialized to be just the goal and premises of the theorem. A proof is a
sequence of axiom-arguments tuples (e.g. [(AdditionZero, [a +0])]). At each step of the proof, a tuple
is used to produce a logical relation in the form of assumptions → conclusions (e.g. ∅ _→_ _a + 0 = a)._
Then, if the assumptions are in the proven facts, the conclusions are added to the proven facts; if the
conclusions include the goal, the unproven assumptions will become the new goal. The assistant
considers the theorem proven, if after all steps in the proof are applied, the goal is empty or trivial.
In Figure 1, we present the same proof in LEAN (de Moura et al., 2015) and INT assistants. They both
process proofs by simplifying the goal until it is trivial. The INT assistant’s seq2seq interface (Figure
1b) is very similar to that of LEAN (Figure 1a) with the rewrite tactic. An action is composed of
an axiom followed by argument names and their positions in the proof state. in obj indicates that
the arguments can be found in the objective. The graph interface (Figure 1c) of the INT assistant
allows theorem provers to chose axiom arguments from the computation graphs of the proof state by
node. We can view theorem proving with this interface as a graph manipulation task.
INT assistant provides fast simulation. To demonstrate this, we produced 10,000 typical proof steps
in both interfaces, 40-character-long on average. We executed them with HOL Light (Harrison, 1996)
and INT assistant. The average time it takes per step is 7.96ms in HOL Light and 1.28ms in INT,
resulting in a 6.2× speedup. The correctness of the proofs is ensured by a trusted core of fewer than
200 lines of code.
-----
3.3 THEOREM GENERATOR
One of the main contributions of this paper is to provide a generation algorithm that is able to
produce a distribution of non-trivial synthetic theorems given an axiom order. Generating theorems
by randomly sampling axiom and argument applications will often yield theorems with short proofs.
Instead, we write production rules for axioms in the form of transformation and extension rules. With
these production rules, we can find arguments and new premises required for longer proofs.
We provide the theorem generation algorithm in Algorithm 1. The general idea of the algorithm is to
morph a trivial logic statement into one that requires a non-trivial proof; we call this statement the
core logic statement. We initiate the core logic statement C0 to be one of the initial conditions. At
step t of the generation process, we are given an axiom at specified by the axiom order. We apply
the MORPH function associated with the axiom at to Ct 1 and derive a new logic statement Ct and
_−_
corresponding premises Pt. The key design idea in the MORPH function is to ensure that the newly
**Algorithm 1 Theorem Generator**
1: function GENERATE_THEOREM(initial conditions I, axiom order A)
2: Axiom order length L = len(A).
4:3: **forInitialize core logic statement t ←** 1 to L do _C0 ∼_ _Uniform(I), and the set of premises P = {C0}._
5:6: Get axiomGet new logic statement and premises: at ← _A[t]._ _Ct, Pt_ MORPH (at, Ct 1).
7: Add new premises to the set of all premises: P ← ← _P ∪_ _Pt._ _−_
8: **end for**
9: **return CL, P**
10: end function
generated logic statement and the premises form the implication Ct 1, at, Pt _Ct (see Appendix B_
for details). Therefore, we can chain the implications from all steps together to obtain a proof whose− _→_
length is the axiom order: C0, _at, Pt_ _t=1_
statement CL and its premises C {0, {Pt}}[L]t[L]=1 _[→][are returned as the theorem generated. Below we show a][C][L][, where][ L][ denotes the length. The last core logic]_
step-by-step example of how a theorem is generated with our algorithm.
A worked example
Use Algorithm 1 to generate a theorem with initial conditions I: {a = a, b = b, c = c, d = d, e = e}
and axiom order A: [AdditionAssociativity (AA), AdditionCommutativity (AC), EquivalenceImpliesDoubleInequality (EIDI), FirstPrincipleOfInequality (FPI)].
Core logic statementStep 1: a1 = AA. C1 C: a0 + ( ∼ _Uniformb + c) = ((aI +) : b a) + = c a, P._ 1 = ∅.
Step 2: a2 = AC. C2: a + (b + c) = (b + a) + c, P2 = ∅.
Step 3: a3 = EIDI. C3: a + (b + c) ≥ (b + a) + c, P3 = ∅.
Step 4: a4 = FPI. C4: (a + (b + c)) + d ≥ ((b + a) + c) + e, P4 = {d ≥ _e}._
Theorem generated: Given d ≥ _e, prove a + (b + c) + d ≥_ _b + a + c + e._
With recorded axiom and argument applications, we can synthesize proofs to the theorems. The proofs
can be used for behavior cloning. Appendix E shows statistics of the generated proofs, including the
distribution of length of theorems in characters, the distribution of axioms, and the distribution of the
number of nodes in proof state graphs.
4 EXPERIMENTS
Our experiments are intended to answer the following questions:
1. Can neural agents generalize to theorems: 1) sampled from the same distribution as training
data, 2) with different initial conditions, 3) with unseen axiom orders, 4) with unseen axiom
combinations, 5) with different numbers of unique axioms, 6) with shorter or longer proofs?
-----
Train Test
1.00 Transformer GNN
0.75
0.50
Success rate0.25
0.00
K3 L3 K3 L5 K3 L7 K5 L5 K5 L7 K3 L3 K3 L5 K3 L7 K5 L5 K5 L7
Degree
0 1 2
1.00 Transformer GNN
0.75
0.50
Success rate0.25
0.00
K3 L3 K3 L5 K3 L7 K5 L5 K5 L7 K3 L3 K3 L5 K3 L7 K5 L5 K5 L7
Figure 2: Proof success rates on problems generated with different K and L parameters. Left: When
the IID assumption holds, the success rate decreases as the two generation parameters K and L are
increased. Right: All agents are trained on degree-0 problems and evaluated against problems of
degree 0, 1, and 2. We find that transformer-based agents deteriorate in performance as the test
problems become more complex than training problems. For GNN-based agents, there are no obvious
trends as to how the proof success rate changes as the degree of the initial entities is varied.
2. How do different architectures (transformer vs. GNN) affect theorem provers’ in-distribution and
out-of-distribution generalization?
3. Can search at test time help generalization?
4.1 EXPERIMENT DETAILS
In the following experiments, we used the proofs generated by the INT generator to perform behavior
cloning. We then evaluated the success rates of trained agents in a theorem proving environment. We
denote the cardinality of an axiom combination as K and the length of a proof as L. In the worked
example, K = 4 and L = 4. For each theorem distribution, we first generated a fixed test set of 1000
problems, and then produced training problems in an online fashion, while making sure the training
problems were different from the test ones. For each experiment, we generated 1000 problems and
performed 10 epochs of training before generating the next 1000. We ran 1500 such iterations in
total, with 1.5 million problems generated. We used the Adam optimizer (Kingma and Ba, 2015). We
searched over the learning rates {10[−][5], 3 · 10[−][5], 10[−][4], 3 · 10[−][4]} in preliminary experiments and
found 10[−][4] to be the best choice, which was used for following experiments. We used one Nvidia
P100 or Tesla T4 GPU with 4 CPU cores for training. For each experiment, we ran 2 random seeds,
and picked the one with higher validation success rates for test evaluation. Since this paper focuses
on inequalities, all figures and tables in the main text are based on results from the ordered-field
axiomization. We also include results of GNN-based agents on equalities in Appendix G.
4.2 NETWORK ARCHITECTURES Architecture
In this section, we introduce four baselines built Transformer GNN TreeLSTM Bag of Words
on commonly used architectures: Transform- 1.0
ers (Vaswani et al., 2017), Graph Neural Networks (GNNs), TreeLSTMs (Tai et al., 2015) and
Bag-of-Words (BoWs). In preliminary experiments, we found Graph Isomorphism Networks
(GINs) (Xu et al., 2019) to have performed the
best among several representative GNN architec- Success rate
tures. So we used GIN as our GNN of choice. 0.2
Transformers interact with the INT proof assistant
through the seq2seq interface while the other base- 0.0
lines through the graph interface.
Architecture
Transformer GNN TreeLSTM Bag of Words
1.0
0.8
0.6
0.4
Success rate
0.2
0.0
K3 L3 K3 L5 K3 L7 K5 L5 K5 L7
For sequence-to-sequence training, we used a Figure 3: Proof success rates on test probcharacter-level transformer architecture with 6 en- lems generated with K and L settings. Transcoding layers and 6 decoding layers. We used former and GNN perform well; TreeLSTM has
512 embedding dimensions, 8 attention heads and mediocre performance; and Bag-of-Words per2048 hidden dimensions for position-wise feed- forms poorly: it cannot prove more than 5% of
forward layers. We used dropout with rate 0.1, problems.
label smoothing with coefficient 0.1, and a maximum 2048 tokens per batch. The library fairseq (Ott et al., 2019) was used for its implementation.
-----
Table 1: Left: Average success rates (in %) of agents trained on different numbers of axiom orders.
**Right: Average success rates (in %) of agents trained on different numbers of axiom combinations.**
# Axiom 100 500 2000 5000
orders Train Test Train Test Train Test Train Test
Transformer 93.2 10.0 93.4 **62.8** 93.6 **87.9** 93.7 **91.8**
GNN 87.6 **21.1** 86.6 53.6 79.0 70.4 75.7 74.7
# Axiom 25 100 200 300
combinations Train Test Train Test Train Test Train Test
Transformer 96.1 29.3 96.0 **71.8** 95.4 **88.4** 94.4 **91.3**
GNN 79.1 **47.5** 76.6 68.0 72.6 72.4 72.8 71.9
For data in the graph form, each node in computation graphs corresponds to a character in the formula.
We first used a learnable word embedding of dimension 512 to represent each node. We then used
6 GIN layers to encode graph inputs into vector representations, each with 512 hidden dimensions.
The graph representation was obtained by taking the sum of all the node embeddings. For the
TreeLSTM and the BoW baselines, we used a bidirectional TreeLSTM with 512 hidden dimensions
and a BoW architecture to compute the graph representation vectors from node embeddings. The
hyper-parameters used were found to be optimal in preliminary experiments. We then proposed
axioms conditioned on the graph representations, with a two-layer MLP of hidden dimension 256.
Conditioning on the graph representation and axiom prediction, the arguments are selected in an
autoregressive fashion. Namely, the prediction of the next node is conditioned on the previous ones.
For each argument prediction, we used a one-layer MLP with a hidden size of 256. We used graph
neural network libraries Pytorch Geometric (Fey and Lenssen, 2019) for the GIN implementation,
and DGL (Wang et al., 2019) for the TreeLSTM implementation.
We trained agents based on architectures mentioned above by behavior cloning on theorems of various
length (L) and number of axioms (K). The success rates for proving 1000 test theorems are plotted
in Figure 3. As the BoW architecture did not utilize the structure of the state, it failed miserably at
proving theorems, indicating the significance of the structural information. TreeLSTM performed
worse than the graph neural network baseline. The transformer and the GNN baselines perform the
best among the architectures chosen and they take inputs in sequential and graph forms, respectively.
Thus, we used these two architectures in the following experiments to investigate generalization.
4.3 BENCHMARKING SIX DIMENSIONS OF GENERALIZATION
**IID Generalization** In this experiment, the training and test data are independently and identically
distributed (IID). The performances of our transformer-based and GNN-based agents are displayed
on the left in Figure 2. As can be seen, the performance of agents examined on train and test problems
are very similar. The largest difference between train and test success rates is 2% (K3L7). Notably,
transformer-based agents complete 15.3% more test proofs than GNN-based agents on average.
**Initial Condition** Consider two theorems: (1) (a + b)[2] = a[2] + b[2] + 2ab and (2) (a + (b + c))[2] =
_a[2]_ + (b + c)[2] + 2a(b + c). The two problems take the same axioms and the same number of steps
to prove. However, the axiom argument complexities are different, which can be seen as a result of
varying initial conditions. Can agents trained on problems like (1) prove theorems like (2)?
For an initial condition of the form X = X, we use the degree of the entity X to determine the
complexity. In this experiment, we trained agents on problems with initial conditions made up of
entities of degree 0, and evaluated them on ones of degrees 1 and 2. The results are presented in
Figure 2 (b) with various K and L. For transformer-based agents, the success rate drops 25.6% on
degree-1 problems and 31.5% on degree-2 problems on average. However, for GNN-based agents,
the largest generalization gap between training and test success rates is 3% (K3L5). This shows that
GNN agents can generalize to problems of higher complexities while transformer agents struggle.
**Axiom Orders** Let A and B represent two different axioms. There are multiple orders in which
they can be applied in a K2L3 problem. O1 = [A, A, B] and O2 = [B, A, B] are two examples. Can
an agent trained on problems generated with O1 prove theorems generated with O2?
For both architectures, we investigated how well agents can generalize to problems with different
axiom orders than those in training. We generated 100, 500, 2000, and 5000 axiom orders to use in
the training set for different K and L settings. We evaluated the test success rates on 1000 unseen
axiom orders with the corresponding K and L settings and averaged them. The results averaged over
different K and L settings are shown on the left of Table 1 (See Appendix G.5 for the full results).
It can be observed in the table that the test success rates rise when we increase the number of axiom
orders in the training set. We notice that transformer-based agents have worse generalization than
-----
Trained on
K3 L7 K5 L7 K7 L7
Transformer GNN
1.00
0.75
0.50
Success rate0.25
0.00
K3 L7 K5 L7 K7 L7 K3 L7 K5 L7 K7 L7
Trained on
K3 L3 K3 L5 K3 L7
Transformer GNN
1.00
0.75
0.50
Success rate0.25
0.00
K3 L3 K3 L5 K3 L7 K3 L3 K3 L5 K3 L7
Figure 4: Proof success rates on problems generated with different parameters. Left: We keep L
the same and vary K. The success rate is likely to decrease when the test problems have different
_K from the training problems. Right: We keep K the same and vary L. For all agents, the proof_
success rate is lower on theorems that require longer proofs.
GNN-based ones, as their average generalization gap is larger. This is particularly true when the
number of axiom orders in the training set is 100: transformer-based agents can prove only 10.0% of
test theorems. Remarkably, they still manage to complete more proofs than GNNs when the number
of axiom orders in the training set exceeds 500.
**Axiom Combinations** Consider three problems provable in the ordered field axiomization (Appendix A): (1) a[2] _≥_ 0, (2) a ∗ (b + c) = b ∗ _a + a ∗_ _c, and (3) a[2]_ + b[2] _−_ 2ab ≥ 0. Solving (1)
requires axiom SquareGEQZero (SGEQZ). Solving (2) requires axiom AdditionMultiplicationDistribution (AMD) and axiom MultiplicationCommutativity (MC). Solving (3) requires axiom SGEQZ and
axiom AMD. Notice that all axioms used to prove (3) appear in the proofs of (1) and (2). We ask: can
an agent trained on theorems like (1) and (2) prove theorems like (3)?
In this set of experiments, we investigated how well theorem provers can generalize to problems with
different axiom combinations than those in training for both architectures. We used 25, 100, 200, and
300 axiom combinations to generate the training set with various K and L settings, and evaluated
the agents on test sets generated with 300 unseen combinations. The results averaged over different
_K and L settings are displayed on the right in Table 1 (See Appendix G.5 for full results). As the_
number of axiom combinations in training set increases, the generalization gap decreases and test
success rate improves. The transformer-based agents have larger generalization gaps than GNN-based
ones. This is particularly obvious when there are 25 axiom combinations: the generalization gap is
66.8% for transformers and 31.6% for GNNs. The test success rate of transformers is 18.2% lower
than that of GNNs in this setting. Yet when there are more than 100 axiom combinations in training,
transformers always perform better on the test sets, completing 3.8% − 19.6% more proofs. When
the data is diverse, transformers perform better; when it is insufficient, GNNs are better. This might
be due to the difference in the inductive bias used by both structures and might explain the choice of
neural architectures in deep learning practice.
**Number of Axioms** Here we investigated how well theorem provers could generalize to test
problems that were generated with a different number of axioms than at training time. For instance,
let A, B and C represent different axioms. Will agents trained on K2L3 axiom orders like [A, B, A]
and [C, C, B] be able to prove theorems generated with K3L3 axiom orders like [A, B, C]?
We trained the agents on problems that have the same proof length (L = 7) and varying Ks. The
results are on the left of Figure 4. It can be observed from the figure that in general, agents perform
the best on the K they were trained on and worse when K shifts away. Transformer-based agents
showed better performances in all K and L settings, completing 20.9% more proofs than GNN-based
ones on average. The success rates of transformer-based agents drop 5.6% on average when the test
_K is shifted away by 1 from the training K. For GNN-based agents, this averages to 5.1%. This_
shows that their generalization abilities to different number of axioms are similar.
**Proof Length** We tested the generalization ability of theorem provers over the dimension of proof
length of the theorems. To do this, we kept the cardinality of the axiom set to be the same (K = 3)
and varied the evaluated problems’ proof length (L = 3, 5, 7). The result is presented on the right of
Figure 4. For all of the agents trained, the success rate decreases as the length of the proof increases.
This is due to the natural difficulty of completing longer proofs. Observing the figure, we see that the
longer the training problems, the less they deteriorate in performance when proofs becomes longer:
agents trained on K3L3 problems complete 18.8% fewer proofs when L is increased by 1, while ones
trained on K3L7 complete 5.7% fewer. Furthermore, the performance of transformer-based agents
-----
Table 2: The behavior cloning (BC) agents versus the MCTS-assisted (search) agents. Left: The
average success rates (in %) of agents with and without MCTS over 1000 test theorems. Right: The
average length of successful proofs by agents with and without MCTS over 1000 test theorems. K
denotes the cardinality of the axiom combination of a proof, L denotes the length of the proof.
Train K3L3 K3L5 K3L7
Evaluation BC Search BC Search BC Search
K3 L3 92 **98** 91 97 81 96
K3 L5 50 64 80 **92** 70 **92**
K3 L7 25 40 64 78 58 **81**
Average 56 67 78 89 69 **90**
Train K3L3 K3L5 K3L7
Evaluation BC Search BC Search BC Search
K3 L3 3.83 **3.33** 4.00 3.52 5.00 3.67
K3 L5 7.54 6.82 6.2 **5.52** 6.84 5.56
K3 L7 9.05 8.54 8.01 7.53 8.39 **7.50**
Average 6.81 6.23 6.07 **5.52** 6.74 5.58
decreases by 12.2% when the test proof length increases by 1, compared to 10.7% for GNN-based
ones. This suggests that transformers have inferior proof length generalization abilities than GNNs.
4.4 GENERALIZING WITH SEARCH
We investigated whether performing search at test time can help agents generalize. Specifically,
we investigated the effectiveness of Monte-Carlo Tree Search (MCTS) in finding proofs for unseen
theorems with GNN-based agents. We chose GNN-based agents because they are better at outof-distribution generalization than transformer-based ones. Straightforward application of MCTS
is impractical: in our theorem proving environment, the action space can be as large as 1.3M in
size (see Appendix H). Hence, it would be infeasible to expand all possible actions when constructing
the MCTS trees. Thus, we only performed MCTS over the axiom space (18 distinct axioms in
total), and the arguments were proposed by the behavior cloning agents. Following AlphaGo
Zero/AlphaZero (Silver et al., 2017; 2018), we trained a value network to estimate the value of a
state. The value network is an MLP with two hidden layers of size 256, taking the GNN global
representations of graphs as input. It was trained on 1000 episodes of rollouts obtained by the
behavior cloning agents, with a learning rate of 3 · 10[−][6]. We also followed AlphaZero for the choice
of the upper confidence bound, and the way that actions are proposed using visit counts. We used
200 simulations for constructing MCTS trees. More details can be found in Appendix F. We took
the agents trained on "K3L3", "K3L5", and "K3L7" from section 4.3, and evaluated the agents’
performance when boosted by MCTS.
**Generalization** The average success rates on 1000 test theorems are presented on the left in Table
2. We can see that search greatly improved the generalization results. It helped to solve 21%
more problems on average for the agent trained on theorem distribution K3L7. Remarkably, when
evaluating on K3L7 theorems, search helped the K3L3 agent improve its success rate from 25%
to 40%: a relative improvement of 60%. It is interesting to see the K3L7 behavior cloning agent
solved 9% fewer problems on average than the K3L5 agent. But search brought about much larger
improvement to the K3L7 agent and helped it to solve the largest proportion of problems on average –
90%. This indicates that skills learned through behavior cloning can be better exploited by searching.
The average proof length for 1000 problems is presented on the right in Table 2 (we count those
unsolved problem as 15, the step limit of an episode). We can see that by performing search, we are
able to discover proofs of length closer to the ground truth proof length. For test theorems requiring
3-step proofs, the K3L3 agent was able to prove them in 3.33 steps on average, with a gap of 0.33
steps to the optimal value. Similarly, for test theorems requiring 5-step proofs, the K3L5 agent was
able to prove them in 5.52 steps on average, with a gap of 0.52 steps; and for theorems requiring
7-step proofs, K3L7 agent achieved a gap of 0.5 steps.
4.5 DISCUSSION
Experimental results suggested that transformer-based agents can complete more proofs in the IID
generalization scenario but have larger out-of-distribution generalization gaps than GNN-based ones.
The larger gap may be due to the lack of constraints in the sequence-to-sequence framework, in which
the model can propose sequences that are invalid actions, whereas the graph interface constrains the
model to propose valid actions only. However, we still see that transformers are able to complete
more proofs overall. This shows the superiority of transformers in model capacity when applied
to theorem proving. This insight motivates us to explore the possibility of taking the best from
both worlds, combining both graph structural information and the strong transformer architecture to
improve learning-assisted theorem proving. We leave it for future work.
-----
5 CONCLUSION
We addressed the problem of diagnosing the generalization weaknesses in learning-assisted theorem
provers. We constructed INT, a synthetic benchmark of inequalities, to analyze the generalization of
machine learning methods. We evaluated transformer-based and GNN-based agents and a variation
of GNN-based agents with MCTS at test time. Experiments revealed that transformer-based agents
generalize better when the IID assumption holds while GNN-based agents generalize better in outof-distribution scenarios. We also showed that search can boost the generalization ability of agents.
We stress that proving theorems in INT is not an end in itself. A hard-coded expert system might
perform well on INT but not generalize to real-world mathematical theorems. Therefore, INT should
be treated as instrumental when diagnosing generalization of agents. The best practice is to use INT
in conjunction with real mathematical datasets.
We believe our benchmark can also be of interest to the learning community, facilitating research in
studying generalization beyond the IID assumption. The agents’ abilities to reason and to go beyond
the IID assumption are essential in theorem proving, and studying how to acquire these abilities is at
the frontier of learning research. In other domains requiring out-of-distribution generalization, such as
making novel dialogs (Chen et al., 2017) or confronting unseen opponents in Starcraft (Vinyals et al.,
2019), the requirements for data and computation forbid a generally affordable research environment.
The INT benchmark provides practical means of studying out-of-distribution generalization.
ACKNOWLEDGEMENTS
We thank Jay McClelland, Han Huang and Yuanhao Wang for helpful comments and discussions.
We also thank anonymous reviewers for valuable and constructive feedbacks. We are grateful to the
Vector Institute for providing computing resources. YW was supported by the Google PhD fellowship.
AQJ was supported by a Vector Institute research grant.
REFERENCES
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. Holist:
An environment for machine learning of higher order logic theorem proving. In Kamalika
Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference
_on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97_
[of Proceedings of Machine Learning Research, pages 454–463. PMLR, 2019. URL http:](http://proceedings.mlr.press/v97/bansal19a.html)
[//proceedings.mlr.press/v97/bansal19a.html.](http://proceedings.mlr.press/v97/bansal19a.html)
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Yann Coscoy, David Delahaye,
Daniel de Rauglaudre, Jean-Christophe Filliâtre, Eduardo Giménez, Hugo Herbelin, et al. The
Coq proof assistant reference manual. INRIA, version, 6(11), 1999.
James P. Bridge, Sean B. Holden, and Lawrence C. Paulson. Machine learning for first-order theorem
proving. Journal of automated reasoning, 53(2):141–172, 2014.
Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. A survey on dialogue systems: Recent
advances and new frontiers. Acm Sigkdd Explorations Newsletter, 19(2):25–35, 2017.
Karel Chvalovsky, Thibault Gauthier, and Josef Urban. First experiments with data driven con-`
[jecturing. AITP 2019, 2019. URL http://aitp-conference.org/2019/abstract/](http://aitp-conference.org/2019/abstract/AITP_2019_paper_27.pdf)
[AITP_2019_paper_27.pdf.](http://aitp-conference.org/2019/abstract/AITP_2019_paper_27.pdf)
Simon Colton. Automated theory formation in pure mathematics. Springer Science & Business
Media, 2012.
Ádám Darvas, Reiner Hähnle, and David Sands. A theorem proving approach to analysis of secure
information flow. In International Conference on Security in Pervasive Computing, pages 193–209.
Springer, 2005.
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
Lean theorem prover (system description). In International Conference on Automated Deduction,
pages 378–388. Springer, 2015.
-----
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale
hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and
_Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248–255. IEEE_
[Computer Society, 2009. doi: 10.1109/CVPR.2009.5206848. URL https://doi.org/10.](https://doi.org/10.1109/CVPR.2009.5206848)
[1109/CVPR.2009.5206848.](https://doi.org/10.1109/CVPR.2009.5206848)
David Steven Dummit and Richard M Foote. Abstract algebra, volume 3. Wiley Hoboken, 2004.
Siemion Fajtlowicz. On conjectures of graffiti. In Annals of Discrete Mathematics, volume 38, pages
113–118. Elsevier, 1988.
Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. CoRR,
[abs/1903.02428, 2019. URL http://arxiv.org/abs/1903.02428.](http://arxiv.org/abs/1903.02428)
Thibault Gauthier. Deep reinforcement learning in HOL4. CoRR, abs/1910.11797, 2019. URL
[http://arxiv.org/abs/1910.11797.](http://arxiv.org/abs/1910.11797)
Thibault Gauthier. Deep reinforcement learning for synthesizing functions in higher-order logic. In
Elvira Albert and Laura Kovács, editors, LPAR 2020: 23rd International Conference on Logic for
_Programming, Artificial Intelligence and Reasoning, Alicante, Spain, May 22-27, 2020, volume 73_
[of EPiC Series in Computing, pages 230–248. EasyChair, 2020. URL https://easychair.](https://easychair.org/publications/paper/Tctp)
[org/publications/paper/Tctp.](https://easychair.org/publications/paper/Tctp)
Thibault Gauthier and Cezary Kaliszyk. Sharing HOL4 and HOL light proof knowledge. In
Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov, editors, Logic for
_Programming, Artificial Intelligence, and Reasoning - 20th International Conference, LPAR-_
_20 2015, Suva, Fiji, November 24-28, 2015, Proceedings, volume 9450 of Lecture Notes in_
_Computer Science, pages 372–386. Springer, 2015. doi: 10.1007/978-3-662-48899-7\_26. URL_
[https://doi.org/10.1007/978-3-662-48899-7_26.](https://doi.org/10.1007/978-3-662-48899-7_26)
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Initial experiments with statistical conjecturing
over large formal corpora. In Andrea Kohlhase, Paul Libbrecht, Bruce R. Miller, Adam Naumowicz,
Walther Neuper, Pedro Quaresma, Frank Wm. Tompa, and Martin Suda, editors, Joint Proceedings
_of the FM4M, MathUI, and ThEdu Workshops, Doctoral Program, and Work in Progress at the_
_Conference on Intelligent Computer Mathematics 2016 co-located with the 9th Conference on_
_Intelligent Computer Mathematics (CICM 2016), Bialystok, Poland, July 25-29, 2016, volume_
[1785 of CEUR Workshop Proceedings, pages 219–228. CEUR-WS.org, 2016. URL http:](http://ceur-ws.org/Vol-1785/W23.pdf)
[//ceur-ws.org/Vol-1785/W23.pdf.](http://ceur-ws.org/Vol-1785/W23.pdf)
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Tactictoe: Learning to reason with HOL4
tactics. In Thomas Eiter and David Sands, editors, LPAR-21, 21st International Conference
_on Logic for Programming, Artificial Intelligence and Reasoning, Maun, Botswana, May 7-_
_12, 2017, volume 46 of EPiC Series in Computing, pages 125–143. EasyChair, 2017. URL_
[https://easychair.org/publications/paper/WsM.](https://easychair.org/publications/paper/WsM)
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. Learning
[to prove with tactics. CoRR, abs/1804.00596, 2018. URL http://arxiv.org/abs/1804.](http://arxiv.org/abs/1804.00596)
[00596.](http://arxiv.org/abs/1804.00596)
Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot,
Stéphane Le Roux, Assia Mahboubi, Russell O’Connor, Sidi Ould Biha, et al. A machine-checked
proof of the odd order theorem. In International Conference on Interactive Theorem Proving,
pages 163–179. Springer, 2013.
Thomas Hales, Mark Adams, Gertrud Bauer, Tat Dat Dang, John Harrison, Hoang Le Truong, Cezary
Kaliszyk, Victor Magron, Sean McLaughlin, Tat Thang Nguyen, et al. A formal proof of the kepler
conjecture. In Forum of mathematics, Pi, volume 5. Cambridge University Press, 2017.
John Harrison. HOL Light: A tutorial introduction. In International Conference on Formal Methods
_in Computer-Aided Design, pages 265–269. Springer, 1996._
-----
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A learning environment
for theorem proving. In 7th International Conference on Learning Representations, ICLR 2019,
_[New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.](https://openreview.net/forum?id=r1xwKoR9Y7)_
[net/forum?id=r1xwKoR9Y7.](https://openreview.net/forum?id=r1xwKoR9Y7)
Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas Eén, François Chollet,
and Josef Urban. DeepMath - deep sequence models for premise selection. In
Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 29:_ _An-_
_nual Conference on Neural Information Processing Systems 2016, December 5-10, 2016,_
_Barcelona, Spain, pages 2235–2243, 2016._ [URL http://papers.nips.cc/paper/](http://papers.nips.cc/paper/6280-deepmath-deep-sequence-models-for-premise-selection)
[6280-deepmath-deep-sequence-models-for-premise-selection.](http://papers.nips.cc/paper/6280-deepmath-deep-sequence-models-for-premise-selection)
Jan Jakubuv and Josef Urban. Hammering mizar by learning clause guidance (short paper). In
John Harrison, John O’Leary, and Andrew Tolmach, editors, 10th International Conference on
_Interactive Theorem Proving, ITP 2019, September 9-12, 2019, Portland, OR, USA, volume 141 of_
_LIPIcs, pages 34:1–34:8. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019. doi: 10.4230/_
[LIPIcs.ITP.2019.34. URL https://doi.org/10.4230/LIPIcs.ITP.2019.34.](https://doi.org/10.4230/LIPIcs.ITP.2019.34)
Jan Jakubuv, Karel Chvalovský, Miroslav Olsák, Bartosz Piotrowski, Martin Suda, and Josef
Urban. ENIGMA anonymous: Symbol-independent inference guiding machine (system description). In Nicolas Peltier and Viorica Sofronie-Stokkermans, editors, Automated Rea_soning - 10th International Joint Conference, IJCAR 2020, Paris, France, July 1-4, 2020,_
_Proceedings, Part II, volume 12167 of Lecture Notes in Computer Science, pages 448–463._
[Springer, 2020. doi: 10.1007/978-3-030-51054-1\_29. URL https://doi.org/10.1007/](https://doi.org/10.1007/978-3-030-51054-1_29)
[978-3-030-51054-1_29.](https://doi.org/10.1007/978-3-030-51054-1_29)
Moa Johansson, Dan Rosén, Nicholas Smallbone, and Koen Claessen. Hipster: Integrating theory
exploration in a proof assistant. In International Conference on Intelligent Computer Mathematics,
pages 108–122. Springer, 2014.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and
Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual
reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 2901–2910, 2017.
Cezary Kaliszyk and Josef Urban. Learning-assisted theorem proving with millions of lemmas. J.
_[Symb. Comput., 69:109–128, 2015. doi: 10.1016/j.jsc.2014.09.032. URL https://doi.org/](https://doi.org/10.1016/j.jsc.2014.09.032)_
[10.1016/j.jsc.2014.09.032.](https://doi.org/10.1016/j.jsc.2014.09.032)
Cezary Kaliszyk, Josef Urban, and Jirí Vyskocil. Lemmatization for stronger reasoning in large
theories. In Carsten Lutz and Silvio Ranise, editors, Frontiers of Combining Systems - 10th
_International Symposium, FroCoS 2015, Wroclaw, Poland, September 21-24, 2015. Proceedings,_
volume 9322 of Lecture Notes in Computer Science, pages 341–356. Springer, 2015. doi: 10.1007/
[978-3-319-24246-0\_21. URL https://doi.org/10.1007/978-3-319-24246-0_](https://doi.org/10.1007/978-3-319-24246-0_21)
[21.](https://doi.org/10.1007/978-3-319-24246-0_21)
Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A machine learning dataset for
higher-order logic theorem proving. In 5th International Conference on Learning Representations,
_ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net,_
[2017. URL https://openreview.net/forum?id=ryuxYmvel.](https://openreview.net/forum?id=ryuxYmvel)
Cezary Kaliszyk, Josef Urban, Henryk Michalewski, and Miroslav Olšák. Reinforcement learning of
theorem proving. In Advances in Neural Information Processing Systems, pages 8822–8833, 2018.
Christoph Kern and Mark R Greenstreet. Formal verification in hardware design: a survey. ACM
_Transactions on Design Automation of Electronic Systems (TODAES), 4(2):123–193, 1999._
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua
Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations,
_ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL_
[http://arxiv.org/abs/1412.6980.](http://arxiv.org/abs/1412.6980)
-----
Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. In International
_Conference on Computer Aided Verification, pages 1–35. Springer, 2013._
Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, and Kshitij Bansal. Mathematical
reasoning in latent space. In 8th International Conference on Learning Representations, ICLR
_[2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://](https://openreview.net/forum?id=Ske31kBtPr)_
[openreview.net/forum?id=Ske31kBtPr.](https://openreview.net/forum?id=Ske31kBtPr)
Douglas B Lenat. Am: An artificial intelligence approach to discovery in mathematics as heuristic
search, sail aim-286. Artificial Intelligence Laboratory, Stanford University, 1976.
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Modelling high-level mathematical
[reasoning in mechanised declarative proofs. CoRR, abs/2006.09265, 2020. URL https://](https://arxiv.org/abs/2006.09265)
[arxiv.org/abs/2006.09265.](https://arxiv.org/abs/2006.09265)
Sarah M. Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided
proof search. In Thomas Eiter and David Sands, editors, LPAR-21, 21st International Conference
_on Logic for Programming, Artificial Intelligence and Reasoning, Maun, Botswana, May 7-_
_12, 2017, volume 46 of EPiC Series in Computing, pages 85–105. EasyChair, 2017._ URL
[https://easychair.org/publications/paper/ND13.](https://easychair.org/publications/paper/ND13)
William McCune. Solution of the Robbins’ problem. Journal of Automated Reasoning, 19(3):
263–276, 1997.
Norman Megill and David A Wheeler. Metamath: A Computer Language for Mathematical Proofs.
Lulu. com, 2019.
Miroslav Olsák, Cezary Kaliszyk, and Josef Urban. Property invariant embedding for automated
reasoning. In Giuseppe De Giacomo, Alejandro Catalá, Bistra Dilkina, Michela Milano, Senén
Barro, Alberto Bugarín, and Jérôme Lang, editors, ECAI 2020 - 24th European Conference on
_Artificial Intelligence, 29 August-8 September 2020, Santiago de Compostela, Spain, August_
_29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial_
_Intelligence (PAIS 2020), volume 325 of Frontiers in Artificial Intelligence and Applications,_
[pages 1395–1402. IOS Press, 2020. doi: 10.3233/FAIA200244. URL https://doi.org/10.](https://doi.org/10.3233/FAIA200244)
[3233/FAIA200244.](https://doi.org/10.3233/FAIA200244)
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier,
and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Waleed Ammar,
Annie Louis, and Nasrin Mostafazadeh, editors, Proceedings of the 2019 Conference of the
_North American Chapter of the Association for Computational Linguistics: Human Language_
_Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations, pages_
48–53. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-4009. URL
[https://doi.org/10.18653/v1/n19-4009.](https://doi.org/10.18653/v1/n19-4009)
Lawrence C. Paulson. Natural deduction as higher-order resolution. The Journal of Logic Program_ming, 3(3):237–258, 1986._
Bartosz Piotrowski and Josef Urban. Atpboost: Learning premise selection in binary setting with
ATP feedback. In Didier Galmiche, Stephan Schulz, and Roberto Sebastiani, editors, Automated
_Reasoning - 9th International Joint Conference, IJCAR 2018, Held as Part of the Federated Logic_
_Conference, FloC 2018, Oxford, UK, July 14-17, 2018, Proceedings, volume 10900 of Lecture_
_Notes in Computer Science, pages 566–574. Springer, 2018. doi: 10.1007/978-3-319-94205-6\_37._
[URL https://doi.org/10.1007/978-3-319-94205-6_37.](https://doi.org/10.1007/978-3-319-94205-6_37)
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_[CoRR, abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393.](https://arxiv.org/abs/2009.03393)_
Markus N Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical reasoning via
self-supervised skip-tree training. arXiv preprint arXiv:2006.04757, 2020.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100, 000+ questions for
machine comprehension of text. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings
_of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016,_
-----
_Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational_
[Linguistics, 2016. doi: 10.18653/v1/d16-1264. URL https://doi.org/10.18653/v1/](https://doi.org/10.18653/v1/d16-1264)
[d16-1264.](https://doi.org/10.18653/v1/d16-1264)
Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In Advances in Neural
_Information Processing Systems, pages 3788–3800, 2017._
Germán Ros, Laura Sellart, Joanna Materzynska, David Vázquez, and Antonio M. López. The
SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban
scenes. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016,
_Las Vegas, NV, USA, June 27-30, 2016, pages 3234–3243. IEEE Computer Society, 2016. doi:_
[10.1109/CVPR.2016.352. URL https://doi.org/10.1109/CVPR.2016.352.](https://doi.org/10.1109/CVPR.2016.352)
Piotr Rudnicki. An overview of the mizar project. In Proceedings of the 1992 Workshop on Types for
_Proofs and Programs, pages 311–330. Citeseer, 1992._
Stephan Schulz. System description: E 1.8. In International Conference on Logic for Programming
_Artificial Intelligence and Reasoning, pages 735–743. Springer, 2013._
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez,
Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go
without human knowledge. Nature, 550(7676):354–359, 2017.
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen
Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess,
shogi, and Go through self-play. Science, 362(6419):1140–1144, 2018. ISSN 0036-8075. doi:
[10.1126/science.aar6404. URL https://science.sciencemag.org/content/362/](https://science.sciencemag.org/content/362/6419/1140)
[6419/1140.](https://science.sciencemag.org/content/362/6419/1140)
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations
from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting
_of the Association for Computational Linguistics and the 7th International Joint Conference on_
_Natural Language Processing (Volume 1: Long Papers), pages 1556–1566, Beijing, China, July_
[2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1150. URL https:](https://www.aclweb.org/anthology/P15-1150)
[//www.aclweb.org/anthology/P15-1150.](https://www.aclweb.org/anthology/P15-1150)
Josef Urban. Malarea: a metasystem for automated reasoning in large theories. In Geoff Sutcliffe,
Josef Urban, and Stephan Schulz, editors, Proceedings of the CADE-21 Workshop on Empirically
_Successful Automated Reasoning in Large Theories, Bremen, Germany, 17th July 2007, volume_
[257 of CEUR Workshop Proceedings. CEUR-WS.org, 2007. URL http://ceur-ws.org/](http://ceur-ws.org/Vol-257/05_Urban.pdf)
[Vol-257/05_Urban.pdf.](http://ceur-ws.org/Vol-257/05_Urban.pdf)
Josef Urban and Jan Jakubv. First neural conjecturing datasets and experiments. arXiv preprint
_arXiv:2005.14664, 2020._
Josef Urban, Geoff Sutcliffe, Petr Pudlák, and Jirí Vyskocil. Malarea SG1- machine learner for
automated reasoning with semantic guidance. In Alessandro Armando, Peter Baumgartner, and
Gilles Dowek, editors, Automated Reasoning, 4th International Joint Conference, IJCAR 2008,
_Sydney, Australia, August 12-15, 2008, Proceedings, volume 5195 of Lecture Notes in Computer_
_[Science, pages 441–456. Springer, 2008. doi: 10.1007/978-3-540-71070-7\_37. URL https:](https://doi.org/10.1007/978-3-540-71070-7_37)_
[//doi.org/10.1007/978-3-540-71070-7_37.](https://doi.org/10.1007/978-3-540-71070-7_37)
Josef Urban, Jiˇrí Vyskoˇcil, and Petr Štˇepánek. MaLeCoP machine learning connection prover. In
_International Conference on Automated Reasoning with Analytic Tableaux and Related Methods,_
pages 263–277. Springer, 2011.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information
_processing systems, pages 5998–6008, 2017._
-----
Oriol Vinyals, Igor Babuschkin, Wojciech Marian Czarnecki, Michaël Mathieu, Andrew Joseph
Dudzik, Junyoung Chung, Duck Hwan Choi, Richard W. Powell, Timo Ewalds, Petko Georgiev,
Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai,
John P. Agapiou, Max Jaderberg, Alexander Sasha Vezhnevets, Rémi Leblond, Tobias Pohlen,
Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom Le Paine, Caglar Gulcehre,
Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina
McKinney, Oliver Smith, Tom Schaul, Timothy P. Lillicrap, Koray Kavukcuoglu, Demis Hassabis,
Chris Apps, and David Silver. Grandmaster level in StarCraft II using multi-agent reinforcement
learning. Nature, pages 1–5, 2019.
Mingzhe Wang and Jia Deng. Learning to prove theorems by learning to generate theorems. CoRR,
[abs/2002.07019, 2020. URL https://arxiv.org/abs/2002.07019.](https://arxiv.org/abs/2002.07019)
Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding. In Advances in Neural Information Processing Systems, pages 2786–2796,
2017.
Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang,
Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J.
Smola, and Zheng Zhang. Deep graph library: Towards efficient and scalable deep learning on
[graphs. CoRR, abs/1909.01315, 2019. URL http://arxiv.org/abs/1909.01315.](http://arxiv.org/abs/1909.01315)
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question
answering: A set of prerequisite toy tasks. In Yoshua Bengio and Yann LeCun, editors, 4th
_International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May_
_[2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1502.](http://arxiv.org/abs/1502.05698)_
[05698.](http://arxiv.org/abs/1502.05698)
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural
networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans,
_[LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?](https://openreview.net/forum?id=ryGs6iA5Km)_
[id=ryGs6iA5Km.](https://openreview.net/forum?id=ryGs6iA5Km)
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International
_Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research,_
[pages 6984–6994, Long Beach, California, USA, 09–15 Jun 2019. PMLR. URL http://](http://proceedings.mlr.press/v97/yang19a.html)
[proceedings.mlr.press/v97/yang19a.html.](http://proceedings.mlr.press/v97/yang19a.html)
Zsolt Zombori, Adrián Csiszárik, Henryk Michalewski, Cezary Kaliszyk, and Josef Urban. Towards
[finding longer proofs. CoRR, abs/1905.13100, 2019. URL http://arxiv.org/abs/1905.](http://arxiv.org/abs/1905.13100)
[13100.](http://arxiv.org/abs/1905.13100)
-----
APPENDIX A AXIOM SPECIFICATIONS
**Field axioms** Definition
AdditionCommutativity (AC) _→_ _a + b = b + a_
AdditionAssociativity (AA) _→_ _a + (b + c) = (a + b) + c_
AdditionSimplification (AS) _a = b →_ _a + (−b) = 0_
MultiplicatoinCommutativity (MC) _→_ _a · b = b · a_
MultiplicationAssociativity (MA) _→_ _a · (b · c) = (a · b) · c_
MultiplicationSimplification (MS) (a = 0) (a = b) 1 = a [1]b
_̸_ _∧_ _→_ _·_
AdditionMultiplicationLeftDistribution (AMLD) _→_ (a + b) · c = a · c + b · c
AdditionMultiplicationRightDistribution (AMRD) _→_ _a · (b + c) = a · b + a · c_
SquareDefinition (SD) _→_ _a[2]_ = a · a
MultiplicationOne (MO) _→_ _a · 1 = a_
AdditionZero (AZ) _→_ _a + 0 = a_
PrincipleOfEquality (POE) (a = b) ∧ (c = d) → _a + c = b + d_
EquMoveTerm(Helper axiom) (EMT) _a + b = c →_ _a = c + (−b)_
**Ordered field axioms** Definition
All field axioms
SquareGEQZero (SGEQZ) _a = b →_ _a · b ≥_ 0
EquivalenceImpliesDoubleInequality (EIDI) _a = b →_ (a ≥ _b) ∧_ (a ≤ _b)_
IneqMoveTerm (IMT) _a + b ≥_ _c →_ _a ≥_ _c + (−b)_
FirstPrincipleOfInequality (FPOI) (a ≥ _b) ∧_ (c ≥ _d) →_ _a + c ≥_ _b + d_
SecondPrincipleOfInequality (SPOI) (a ≥ _b) ∧_ (c ≥ 0) → _a · c ≥_ _b · c_
Table 3
-----
APPENDIX B THE MORPH FUNCTION
We detail the morphing of C at each step as follows. For each theorem a, we define two symbolic
patterns: La and Ra, each represented by an expression (see Appendix C for full details). For
example, if a is AdditionCommutativity, we use La = x1 + x2 to denote any formula that is a sum
of two terms (x1 and x2 can be arbitrary terms). We check if one of the nodes in the computation
graph of C has the structure defined by _a. If so, we then transform that node to a formula specified_
_L_
by _a. For example, if C is (p + q) + l = (p + (q + l)), p + q is a node that matches the pattern_
_R_
specified by La, in which x1 = p and x2 = q. Let Ra = x2 + x1. We hence transform the node
_p + q to q + p as specified by_ _a. As a result, C_ _[′]_ becomes (q + p) + l = (p + (q + l)). If there is no
_R_
node in the computation graph, we morph the core logic statement using the extension function E,
defined in Appendix D . We sample nodes in available computation graphs and combine them with
_C, coming up with C_ _[′]_ and optionally a non-empty set of new premises Pnew.
**Algorithm 2 Theorem Generator (complete)**
1: function GENERATE_THEOREM(initial conditions I, axiom order A)
2: Axiom order length L = len(A).
3: Initialize core logic statement C0 _Uniform(_ ), and the set of premises P = _C0_ .
4: **for t ←** 1 to L do _∼_ _I_ _{_ _}_
5: Get axiom at _A[t]._
7:6: Add new premises to the set of all premises:Get new logic statement and premises: ← _Ct, P Pt ←_ MORPHP _Pt (.at, Ct−1)._
_←_ _∪_
8: **end for**
9: **return CL, P**
10: end function
**function MORPH(axiom a, core logic statement C)**
2: Collect Nt = {n| n is a node in C and matches the pattern specified by La}
4: **if NSample nodet ̸= ∅** **then** _n_ _Uniform(_ _t)._
_∼_ _N_
Transform n into new node n[′] using the mapping from La to Ra.
6: _C_ _[′]_ _←_ Replace n with n[′] in the graph of C. Pnew ← ∅.
**else**
8: Collect N, the set of all nodes in the graphs.
Extend C and get the set of premises: C _[′], Pnew_ (a, C, ).
10: **end if** _←E_ _N_
**return C** _[′], Pnew._
12: end function
The reasons that we have two sets of rules for morphing are as follow: 1) Transformation rules can
only be applied when the axiom will produce an equality, while extension rules can be applied to any
axiom. So in order to generate theorems with all the axioms, we need the extension rules. 2) Almost
all the extension rules will complicate the core logic statement while none of the transformation rules
will. If we only have extension rules, the goal generated can be very complex even the proof is of
moderate length. In order to generate compact theorems (goal not too complicated) with long proofs,
the transformation rules are preferred. Therefore we only apply extension rules when transformation
rules are not applicable.
-----
APPENDIX C TRANSFORMATION RULES
The implementations of the transformation rules L and R.
Axiom (a) _a_ _a_
_L_ _R_
AdditionCommutativity _x1 + x2_ _x2 + x1_
AdditionAssociativity _x1 + (x2 + x3)_ (x1 + x2) + x3
AdditionSimplification _x1 + (−x1)_ 0
MultiplicatoinCommutativity _x1_ _x2_ _x2_ _x1_
_·_ _·_
MultiplicationAssociativity _x1_ (x2 _x3)_ (x1 _x2)_ _x3_
_·_ _·_ _·_ _·_
MultiplicationSimplification _x1_
_·_
_x1_
AdditionMultiplicationLeftDistribution (x1 + x2) · x3 _x1 · x3 + x2 · x3_
AdditionMultiplicationRightDistribution _x1 · (x2 + x3)_ _x1 · x2 + x1 · x3_
SquareDefinition _x[2]1_ _x1_ _x1_
_·_
MultiplicationOne _x1_ 1 or 1 _x1_ _x1_
_·_ _·_
AdditionZero _x1 + 0 or 0 + x1_ _x1_
SquareGEQZero NA NA
PrincipleOfEquality NA NA
EquMoveTerm NA NA
EquivalenceImpliesDoubleInequality NA NA
IneqMoveTerm NA NA
FirstPrincipleOfInequality NA NA
SecondPrincipleOfInequality NA NA
Table 4
-----
APPENDIX D EXTENSION FUNCTION
For these axioms, the core logic statement C needs to be of the form LHS(C) = RHS(C).
Axiom (a) Extension function E(C, a, N )
Sample node n Uniform ( )
AdditionCommutativity _∼_ _N_
**return RHS(C) +n = n+ LHS(C), ∅**
AdditionAssociativity Sample nodesreturn RHS(C n)+(1, nn21 ∼ + nUniform2) =LHS( ( NC)+) _n1 + n2, ∅_
AdditionSimplification **return 0 =LHS(C)+(−RHS(C)), ∅**
Sample node n Uniform ( )
MultiplicatoinCommutativity _∼_ _N_
**return RHS(C)·n = n·LHS(C), ∅**
MultiplicationAssociativity Sample nodesreturn RHS(C n)·1(, nn12 · ∼ n2Uniform) =LHS(C () N·n1) · n2, ∅
MultiplicationSimplification **return 1 =LHS(C)·**
1
RHS(C) [,][ ∅]
Sample nodes n1, n2 Uniform ( )
AdditionMultiplicationLeftDistribution **return (n1 + n2) · RHS ∼** (C) = _N_
_n1 · LHS(C) + n2 · LHS(C), ∅_
Sample nodes n1, n2 Uniform ( )
AdditionMultiplicationRightDistribution **return RHS(C) · (n1 ∼ + n2) =** _N_
LHS(C) · n1 + LHS(C) · n2, ∅
SquareDefinition **return LHS(C) · RHS(C) = LHS(C)[2], ∅**
**return Uniform ( {LHS(C)** 1 = RHS(C),
MultiplicationOne _·_
1 · LHS(C) = RHS(C) } ), ∅
**return Uniform ( {LHS(C) + 0 = RHS(C),**
AdditionZero
0 + LHS(C) = RHS(C) } ), ∅
SquareGEQZero **return LHS(C) · RHS(C) ≥** 0, ∅
Sample nodes n1, n2, where n1 = n2
PrincipleOfEquality **return LHS(C) + n1 ∼N = RHS(C) + n2, {n1 = n2}**
Only execute when LHS(C) is of the form x + y
EquMoveTerm
**return x = RHS(C) + (−y), ∅**
EquivalenceImpliesDoubleInequality **return LHS(C) ≥** RHS(C), ∅
Table 5
-----
For these axioms, the core logic statement C needs to be of the form LHS(C) ≥ RHS(C).
Axiom (a) Extension function E(C, a, N )
Only execute when LHS(C) is of the form x + y
IneqMoveTerm
**return x ≥** RHS(C) + (−y), ∅
FirstPrincipleOfInequality Sample nodesreturn LHS(C n) +1, n n21 ∼N ≥ RHS(, whereC) + n n12 ≥, {nn12 ≥ _n2}_
Sample node n, where n 0
SecondPrincipleOfInequality _∼N_ _≥_
**return LHS(C) ·n ≥** RHS(C) ·n, {n ≥ 0}
Table 6
-----
APPENDIX E DATASET STATISTICS
APPENDIX E.1 THEOREM LENGTH
We compare the length of the theorems generated in characters and plot their distributions in Figure
5. The length of the theorem in characters is a measure for how complicated it is. As is expected,
the more complicated the theorem is, the longer the proof(bigger L). It is also worth noting that as
_L becomes bigger, the distribution of theorem length becomes less concentrated. This is likely a_
consequence of a more spread-out theorem length range.
|5 0 5|Col2|Col3|K|3 L3|
|---|---|---|---|---|
||||K K|3 L5 3 L7|
||||||
||||||
||||||
Field axioms Ordered-field axioms
K3 L3
0.015 0.015 K3 L5
K3 L7
0.010 0.010
0.005 0.005
Probablity density
0.000 0.000
0 50 100 150 200 0 50 100 150 200
Problem length in characters Problem length in characters
Figure 5: The distribution of theorem length in characters for field axioms(left) and ordered-field
axioms(right) generated with parameters K3L3, K3L5, and K3L7. As the length of the proof
is increased, so is the number of characters in the theorem, while the distribution of latter is less
concentrated.
-----
APPENDIX E.2 AXIOM DISTRIBUTIONS
The frequency at which each axiom is applied influences the distribution of theorems our generator
is able to produce. In Figure 6, we present the proportions of axioms that are applied in generating
10,000 theorems. Their frequencies are a measure of how easy it is to satisfy the conditions to apply
them. For the field axioms, the PrincipleOfEquality axiom is the most frequently used(9.30%) and
the EquMoveTerm axiom is the most rarely used(2.38%). EquMoveTerm has a strict condition for
application: the left hand side of the core logic statement has to be of the form x + y, therefore not
frequently applied. For the ordered-field axioms, the EquivalenceImpliesDoubleInequality axiom
is the most frequently used(10.18%). Since we start with a trivial equality in generation and want
to end up with an inequality, a transition from equality to inequality is needed. Among the ways of
transitioning, this conditions to apply this axiom is easiest to satisfy. Its popularity is followed by the
group of Field axioms, from MultiplicationCommutativity(4.69%) to AdditionAssociativity(5.98%).
The rest are ordered-field axioms which define the properties of inequalities, proportions ranging
from IneqMoveTerm(1.14%) to FirstPrincipleOfInequality(5.74%).
EquMoveTerm 2.38% MultiplicationCommutativity 5.56% SquareDefinition 5.92% AdditionCommutativity 6.10% AdditionMultiplicationRightDistribution 6.44% AdditionMultiplicationLeftDistribution 6.49% MultiplicationAssociativity 6.51% MultiplicationLeftOne 7.17% MultiplicationRightOne 7.17% MultiplicationSimplification 7.24% AdditionAssociativity 7.35% AdditionSimplification 7.35% AdditionLeftZero 7.50% AdditionRightZero 7.50% PrincipleOfEquality 9.30%
IneqMoveTerm 1.14% EquMoveTerm 1.54% SquareGEQZero 2.31% SecondPrincipleOfInequality 2.68% MultiplicationCommutativity 4.69% FirstPrincipleOfInequality 4.74% AdditionCommutativity 5.05% SquareDefinition 5.26% MultiplicationAssociativity 5.37% AdditionMultiplicationLeftDistribution 5.46% MultiplicationSimplification 5.49% AdditionRightZero 5.58% AdditionLeftZero 5.58% AdditionMultiplicationRightDistribution 5.66% AdditionSimplification 5.76% MultiplicationLeftOne 5.78% MultiplicationRightOne 5.78% PrincipleOfEquality 5.96% AdditionAssociativity 5.98% EquivalenceImpliesDoubleInequality 10.18%
(a) Field axiom distribution
(b) Ordered-field axiom distribution
Figure 6
-----
APPENDIX E.3 NUMBER OF NODES
Since an action in the MDP consists of an axiom and a list of nodes as its arguments and the number
of axioms is fixed, the number of nodes available determines the size the action space. Therefore
it is interesting to investigate how many nodes are available in a proof. In Figure 7 we present the
average number of nodes in proofs of different length. It can be told from the figure that the longer
the proofs, the more nodes there will be, as expected. Comparing the axiom sets used, we find that
the average number of nodes for ordered-field axioms is larger than that of field axioms. This is
likely the consequence of ordered-field axioms, in generation, being more capable of producing new
premises(e.g. First Principle of Inequality will produce an inequality premise(see Table 6), thus
adding more nodes in the graphs).
2 3 4 5 6
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
|||||||||||||||||||||
Length
40
35
30
25
20
15
10
Figure 7
-----
APPENDIX F MORE EXPERIMENTAL DETAILS FOR GENERALIZATION WITH
SEARCH
We give more experimental details for the use of MCTS. Following (Silver et al., 2017), in the
selection step of the MCTS tree construction, we use the following formula to select the next action,
_b_ _[N]_ [(][s, b][)]
1 + N (s, a)
pP
_a[∗]_ = argmaxa
_Q(s, a) + cpuctP_ (s, a)
where Q(s, a) represents the action value function, N (s, a) denotes the visit counts, P (s, a) is the
prior probability, and cpuct is a constant hyperparameter. In all of our experiments, we used the
behavior cloning policy for computing P (s, a), and we used cpuct = 1. After the MCTS tree is built,
1
the action is sampled from the policy distribution π(a|s) = N (s, a) _τ, where τ is a hyperparameter_
and was chosen as 1 in our experiments.
-----
APPENDIX G MORE TRAINING AND EVALUATION RESULTS
APPENDIX G.1 LEARNING CURVES OF GNN-BASED AGENTS
Field axioms Ordered-field axioms
1.0
0.8
0.6 Trained on
K3 L3
0.4 K3 L5
Success rate K3 L7
0.2
0.0
0.0 0.5 1.0 1.5 0.0 0.5 1.0 1.5
Datapoints 1e7 Datapoints 1e7
Figure 8: Proof success rates for field axioms(left) and ordered-field axioms(right) of GNN-based
agents trained on different K and L parameters. We keep the K the same and vary the L. The agents
converge slower and to a lower success rate when the proof length is increased. Also, the agents on
field axioms are easier to train than those on ordered-field axioms.
APPENDIX G.2 PERFORMANCE VARIATION OF TRAINED AGENTS
To verify that the experimental results are statistically significant, we ran the experiments on proof
length generalization in subsection 4.3 with 5 random seeds and tabled the results.
Table 7: Success rates of agents trained and tested on problems of different parameters (mean ± std)
in percentage.
Transformers Tested on K3 L3 K3 L5 K3 L7
K3 L3 97.6 ± 0.9 31.5 ± 1.6 10.9 ± 1.0
Trained on K3 L5 97.2 ± 0.7 88.3 ± 1.2 59.5 ± 1.6
K3 L7 96.6 ± 1.2 87.0 ± 1.6 75.1 ± 1.2
GNNs Tested on K3 L3 K3 L5 K3 L7
K3 L3 91.5 ± 0.5 45.6 ± 1.7 16.5 ± 0.8
Trained on K3 L5 86.4 ± 0.9 77.8 ± 0.9 58.4 ± 1.5
K3 L7 82.0 ± 1.3 71.4 ± 1.1 56.5 ± 1.5
APPENDIX G.3 GNN-BASED AGENTS ON IID GENERALIZATION
Train Test
Field axioms Ordered-field axioms
1.0
0.8
0.6
0.4
Success rate
0.2
0.0
K3 L3 K3 L5 K3 L7 K5 L5 K5 L7 K3 L3 K3 L5 K3 L7 K5 L5 K5 L7
Figure 9: Proof success rates on problems generated with different K and L parameters (K denotes
the cardinality of the axiom combination of a proof, L denotes the length of the proof). When the IID
assumption holds, the success rate decreases as the two generation parameters K and L are increased.
-----
APPENDIX G.4 GNN-BASED AGENTS ON INITIAL CONDITION GENERALIZATION
Degree
0 1 2
Field axioms Ordered-field axioms
1.0
0.8
0.6
0.4
Success rate
0.2
0.0
K3 L3 K3 L5 K3 L7 K5 L5 K5 L7 K3 L3 K3 L5 K3 L7 K5 L5 K5 L7
Figure 10: Proof success rates on problems generated with different K and L parameters (K denotes
the cardinality of the axiom combination of a proof, L denotes the length of the proof). When
generalizing to different initial conditions, there are no obvious trends as to how the proof success
rate changes as the degree of the initial entities is varied.
-----
APPENDIX G.5 FULL RESULTS ON AXIOM ORDERS AND COMBINATIONS GENERALIZATION
Table 8: Top: Proof success rates (in %) of agents trained on different numbers of axiom orders.
**Bottom: Proof success rates (in %) of agents trained on different numbers of axiom combinations.**
_K denotes the cardinality of the axiom combination of a proof, L denotes the length of the proof._
Axiom 100 500 2000 5000
Architecture
orders Train Test Train Test Train Test Train Test
K3 L3 98.4 32.6 99.5 90.0 98.8 98.7 97.6 97.6
K3 L5 95.3 6.3 94.0 56.3 94.0 94.9 96.5 94.9
K3 L7 87.8 3.8 88.0 46.4 88.3 77.5 88.4 85.5
K5 L5 94.7 5.6 97.0 72.9 97.4 93.1 97.5 96.9
K5 L7 89.7 1.8 88.6 48.6 89.3 75.2 88.6 84.0
**Average** 93.2 10.0 93.4 62.8 93.6 87.9 93.7 91.8
K3 L3 84.3 38.6 94.4 73.9 93.7 89.0 90.5 92.3
K3 L5 92.7 17.1 86.3 60.0 84.4 72.9 77.7 77.1
K3 L7 82.4 14.1 82.4 33.8 68.6 57.7 70.2 63.5
K5 L5 91.0 23.0 89.7 61.2 81.8 75.0 78.3 80.8
K5 L7 87.5 12.9 80.2 39.0 66.5 57.4 61.6 60.0
**Average** 87.6 21.1 86.6 53.6 79.0 70.4 75.7 74.7
Transformer
GNN
Axiom 25 100 200 300
Architecture
combos Train Test Train Test Train Test Train Test
K3 L3 99.2 34.1 99.0 72.8 99.5 96.1 98.6 98.2
K3 L5 97.8 29.3 98.6 66.3 97.5 89.5 94.3 90.4
K3 L7 93.6 25.0 91.9 55.9 91.5 80.0 91.9 85.9
K5 L5 98.5 27.4 98.4 87.6 97.0 93.6 97.3 94.9
K5 L7 91.2 30.5 92.2 76.3 91.7 82.9 90.0 87.0
**Average** 96.1 29.3 96.0 71.8 95.4 88.4 94.4 91.3
K3 L3 96.3 61.6 96.0 90.1 92.7 91.2 95.3 92.0
K3 L5 82.1 43.4 80.3 68.9 78.5 74.9 76.5 76.1
K3 L7 72.1 34.3 68.1 57.2 62.3 63.7 62.5 62.0
K5 L5 77.8 61.6 78.9 71.0 74.5 78.4 72.8 74.9
K5 L7 67.2 36.8 59.7 52.7 54.9 54.0 56.7 54.5
**Average** 79.1 47.5 76.6 68.0 72.6 72.4 72.8 71.9
Transformer
GNN
-----
APPENDIX G.6 GNN-BASED AGENTS ON AXIOM NUMBER GENERALIZATION
Trained on
K3 L7 K5 L7 K7 L7
Field axioms Ordered-field axioms
1.0
0.8
0.6
0.4
Success rate
0.2
0.0
K3 L7 K5 L7 K7 L7 K3 L7 K5 L7 K7 L7
Figure 11: Proof success rates on problems generated with different parameters ((K denotes the
cardinality of the axiom combination of a proof, L denotes the length of the proof). We keep
parameter L the same and vary parameter K. The success rate is likely to decrease when an agent is
evaluated on problems that have different K than the problems it is trained on.
APPENDIX G.7 GNN-BASED AGENTS ON PROOF LENGTH GENERALIZATION
Trained on
K3 L3 K3 L5 K3 L7
Field axioms Ordered-field axioms
1.0
0.8
0.6
0.4
Success rate
0.2
0.0
K3 L3 K3 L5 K3 L7 K3 L3 K3 L5 K3 L7
Figure 12: Proof success rates on problems generated with different parameters ((K denotes the
cardinality of the axiom combination of a proof, L denotes the length of the proof). We keep
parameter K the same and vary parameter L. For all agents, the proof success rate is lower on
theorems that require longer proofs. The best-performing agent for problems of a given length is
usually the agent trained on problems of the same length.
-----
APPENDIX H THEOREM PROVING AS A MARKOV DECISION PROCESS (MDP)
We model theorem proving as a Markov Decision Process. A state s in the MDP is the proof state
maintained by the assistant, namely, the goal, the premises and the proven facts, represented by
computation graphs. An action a is a tuple of an axiom and a sequence of arguments. We denote the
axiom space as X and the argument space, the set of all the nodes in available computation graphs, as
_N_ . The maximum number of arguments for one axiom within our axiomizations is 3, therefore the
action space is A = X × N [3]. The assistant ignores redundant arguments if fewer than 3 are needed
for the axiom considered. We show in Appendix E.3 the distribution of the number of nodes for proofs
of different length. The size of the discrete action space can be as large as 18 × 42[3] _≈_ 1.33 × 10[6].
The deterministic state transition function P (s, a) is implicitly determined by the proof assistant.
When the proof assistant deems the proof complete and the theorem proven, the episode terminates
and a reward of one is given. Otherwise, the reward is zero at each step. When the step limit for a
proof is exhausted, the episode terminates with a reward of zero. For experiments in this paper, we
used a step limit of 15.
-----
APPENDIX I EXAMPLE PROBLEMS
### Equality theorems
**Theorem 1**
Goal: ((0 · 1) · ((−(a[2])) · c)) = (((−(a[2])) · ((a · a) + (−(a[2])))) · c)
**Theorem 2**
Goal: (((((0 + c) + a) · a) · 1) · (b · (0 + c))) = ((((c · a) + (a · a)) · (0 + c)) · b)
**Theorem 3**
Goal: 0 = ((((c + 0) · (a + a)) · (
1
((c _a)+(c_ _a))_ [)) + (][−][(0 + 1)))]
_·_ _·_
**Theorem 4**
Premises: (b + d) = b
Goal: (1 + (−((b + b) · (
1
((b+(b+d)) 1) [)))) = (0 + 0)]
_·_
**Theorem 5**
Premises: (a + d) = b
Goal: 1 = (((d · ((a + d) + ((c + (a + d)) + 0))) · ((d · (a + d)) + (d · (c + b)))) · ( ((d·((a+d)+((c1+(a+d))+0)))[2]) [))]
**Theorem 6**
Premises: ((b · b) + d) = (b · b)
Goal: (0 + ((b · b) + d)) = (((1 · ((b + b) · b)) + (−(((b · b) + (b · b)) · 1))) + (b · b))
**Theorem 7**
Goal: ((a · (a + 0)) + ((−(0 + a)) · (a + 0))) = ((a · 0) + (0 · 0))
**Theorem 8**
Goal: (((c · c) + c) · ((c[2]) · 1)) = (((c · c) · (0 + (c · c))) + (c · (0 + (c · c))))
**Theorem 9**
Goal: 1 = ((((a · c) + ((b · (a · b)) · c)) · (a + (a · c))) · (
1
((((a+((b _a)_ _b))_ _c)_ (a _c))+(((a+((b_ _a)_ _b))_ _c)_ _a))_ [))]
_·_ _·_ _·_ _·_ _·_ _·_ _·_ _·_ _·_
**Theorem 10**
Goal: ((((b · c) + (c · c)) + (−(0 + ((b + c) · c)))) · (c · c)) = ((c[2]) · 0)
**Theorem 11**
Goal: (1 · (b + a)) = ((0 + (a + b)) + 0)
**Theorem 12**
Goal: (((−c) · (−c)) + (((−c) · c) + ((−c) · (−c)))) = (((−c) · (−c)) + (0 · (−c)))
**Theorem 13**
Goal: (((a[2]) · (a · (a + 0))) + (a · (a · (a + 0)))) = ((((a[2]) · (a[2])) + (a · (a[2]))) + 0)
**Theorem 14**
Goal: ((((b · 1) · (a · c)) · (b · a)) + (((b · 1) · (a · c)) · (b · a))) = (((((b · a) · c) · (b · a)) + (((b · a) · c) · (b · a))) · 1)
**Theorem 15**
Goal: 1 = (( ((
**Theorem 16**
1
1
(b+0) [)][·][b][)] [)][ ·][ 1)]
-----
Goal: 0 = ((0 + (−((a · b) + (−(b · a))))) + (−(0 · 1)))
**Theorem 17**
Premises: (a + d) = c; ((b + c) + e) = (a + d)
Goal: (((b · d) + (b · (b + (a + d)))) + ((b + c) + e)) = ((((b · d) + (b · (b + c))) · 1) + (a + d))
**Theorem 18**
Goal: (((( [1]b [)][ ·][ b][)][ ·][ b][)][ ·][ 1) = ((][b][ ·][ 1)][ ·][ 1)]
**Theorem 19**
Goal: (((1 · (b · (c + a))) + (b · a)) + 1) = (1 · ((1 · ((b · c) + (b · a))) + ((b · a) + 1)))
**Theorem 20**
Premises:Goal: (((a ( + (b +b d +) = d)) c ·; ((1 ( ((1 ·· aa1)+) +c) e[)) + ((1]) = a _[ ·][ a][) +][ e][)) = ((1][ ·][ 1) +][ a][)]_
**Theorem 21**
Goal: ((((c[2]) · ((c[2]) · c)) + (−(((c · c) · (c[2])) · c))) + (b + b)) = ((1 · ((0 + b) + b)) + (−0))
**Theorem 22**
Premises: (b + d) = (a · b)
Goal: (1·((((c+c)·(((a·b)·c)+(c+c)))+((c+c)·(c+c)))+(a·b))) = (((((c+c)·((((a·(b·c))+c)+c)+(c+c)))+(b+d))·1)+0)
**Theorem 23**
Premises: ((0 · 1) + d) = (1 · 0)
Goal: (((((a+(0·1))·(1·0))+(−b))+(1·0))+(1·0)) = (((((a·(1·0))+((b+(−b))·(1·0)))+((−b)+(1·0)))+((0·1)+d))+0)
**Theorem 24**
Premises: (a + d) = (1 + c)
Goal: (((((1·b)+(c·b))+(1+c))[2])·((1+c)·b)) = ((((((1·b)+(c·b))+(1+c))·(((b·(1+(1·c)))+(a+d))·1))·(1+c))·b)
**Theorem 25**
Premises: (a + d) = (b · 1)
Goal: 0 = ((b + (a + d)) + (−((b · 1) + (b · 1))))
**Theorem 26**
Premises: (c + d) = a
Goal: (0 + ((((a + a) · 1) + a) · 1)) = (1 · ((1 · ((a + (c + d)) + 0)) + (1 · a)))
**Theorem 27**
Premises: (c + d) = (b + c)
Goal: (1 · ((((b + c) · c) + (b · (b + c))) + (c + d))) = ((((b + c)[2]) + (b + c)) · 1)
**Theorem 28**
Premises: ((1 · b) + d) = b
Goal: (((((1 · b) + b) · (a · 1)) · (((b + ((1 · b) + d)) · a) · 1)) + 0) = ((((b + (1 · b)) · (a · 1))[2]) · 1)
**Theorem 29**
Goal: (((b · 1) + 0) · (1 · 0)) = (((b · 1) · ((−(0 + b)) + (1 · b))) + (0 · ((−(0 + b)) + (1 · b))))
**Theorem 30**
-----
Goal: (1 · 1) = (((((a · (c + c)) + 0) · (b · (c + c))) · (
**Theorem 31**
Goal: ((1 · (b · b)) · b) = (1 · (0 + (((0 + b) · b) · b)))
**Theorem 32**
Goal: (((c · (c · 1)) + 0) · 1) = (((c · c) + 0) · 1)
1
((((a _c)+(a_ _c))_ _b)_ (c+c)) [)) + 0)]
_·_ _·_ _·_ _·_
**Theorem 33**
Goal: 1 = (1 · (
1
1
((b·( [1]b [))+0)][ ))] [))]
((1+0)·(
**Theorem 34**
Goal: (((((((c + a) · a) · (c + a)) · c) · (a + c)) · (c + a)) · (c + a)) = (((((((a + c) · (c + a)) · a) · c) · (a + c)) · (c + a)) · (c + a))
**Theorem 35**
Goal: 0 = ((−(1 · 0)) + ((−(c + c)) + ((1 · c) + c)))
**Theorem 36**
Goal: 1 = (1 · (
1
1
(((a+c)+a)+(−(c+a))) [))] [))]
(a·(
**Theorem 37**
Premises: (a + d) = a; (( [1]c [) +][ e][) =][ b]
Goal: (((1 · (1 · ( (c·(1[1]c [))] [))) +][ a][) +][ b][) = (1][ ·][ (((1][ ·][ 1) + (][a][ +][ d][)) + ((][ 1]c [) +][ e][)))]
**Theorem 38**
Goal: 0 = ((b · (b + (−b))) + (−(((0 + 0) · b) + 0)))
**Theorem 39**
Goal: (((1 · c) + (−(1 · (c · 1)))) · 1) = ((0 · 1) · 1)
**Theorem 40**
Goal: ((a + b) · (1 · ((b · c) + (c · c)))) = ((a · ((c · c) + (b · c))) + (b · ((c · c) + (b · c))))
**Theorem 41**
Goal: (0 + ((0 + ((c + c) · c)) · (a · b))) = (0 + ((((c · c) · a) + ((c · c) · a)) · b))
**Theorem 42**
Premises: (0 + d) = 1
Goal: ((((1 · 0) + (a + (a · 1))) + 0) + d) = (((((1 · a) + (−(a · 1))) + a) + (a · 1)) + 1)
**Theorem 43**
Premises: (b + d) = 0
Goal: 0 = ((((((0 + b) · 0) + ((0 + b) · b)) · 1) + 0) + (−((((b · 0) + (b · b)) + (b + d)) · 1)))
**Theorem 44**
Goal: ((0 + c) · ((−c) + (((c · 1) + 0) + (−c)))) = (((0 + c) · (−c)) + ((0 + c) · 0))
**Theorem 45**
Goal: 0 = (0+(−(((0·0)+(a·0))+(−(((((((a·b)+(a·b))+((b+b)+b))+(−(((a·(b+b))+(b+b))+b)))+a)·0)·1)))))
**Theorem 46**
-----
Premises: ((a + b) + d) = (a + b); (b + e) = a
Goal: (a · a) = (1 · (a · a))
**Theorem 47**
Premises: (c + d) = c
Goal: ((b · (1 + 0)) + (b · (c + d))) = (0 + (b · ((0 + (1 · (
1
(b+c) [))] [))) + (][c][ +][ d][))))]
((b+(c+d))·(
**Theorem 48**
Goal: ((b + ((((a + a) · 1) · a) + 0)) · a) = ((b · a) + ((((a · a) · 1) + (a · (a · 1))) · a))
**Theorem 49**
Goal: (((1 + b) · (((a · ((c · 1) + (c[2]))) + 1) + b)) + ((1 + b) · (((a · ((c · 1) + (c[2]))) + 1) + b))) = ((((1 + b) + (1 + b)) ·
((((a · (c · 1)) + (a · (c · (c · 1)))) + (1 + b)) · 1)) · 1)
**Theorem 50**
Goal: 0 = (((((0+(((c·c)+(c·c))+((c+c)+(c·c))))+0)+c)·a)+(−(((0+((((c+c)·c)+(c+c))+(c·c)))·a)+(c·a))))
### Inequality theorems
**Theorem 1**
Premises: (1 + d) ≥ 0; (b + e) ≥ 0
Goal: ((((1 + 1) · (a · ( _a[1]_ [)))][ ·][ (1 +][ d][)) + (][b][ +][ e][))][ ≥] [((((1][ ·][ 1) + (1][ ·][ 1))][ ·][ (1 +][ d][)) + 0)]
**Theorem 2**
Goal: (b[2]) ≥ (0 + (b · (1 · b)))
**Theorem 3**
Premises: ((c + 0) + d) ≥ 0; (d + e) ≥ _b_
Goal: ((c · ((c + 0) + d)) + (d + e)) ≥ (((0 + c) · ((c + 0) + d)) + b)
**Theorem 4**
Goal: (b + 0) ≥ ((((0 + b) + c) + c) + (−(c + c)))
**Theorem 5**
Premises: (1 + d) ≥ 0
Goal: ((((c · c) + c) + a) · (1 + d)) ≥ ((((c[2]) + (c + a)) · 1) · (1 + d))
**Theorem 6**
Premises: (b + d) = b
Goal: 1 ≥ ((((a + b) + (−(b + d))) · ((a + b) + b)) · (
**Theorem 7**
Premises: ((0 + a) + d) = 0
Goal: (((0 + a) · a) + ((0 + a) + d)) ≥ ((a[2]) + 0)
1
((a (a+b))+(a _b))_ [))]
_·_ _·_
**Theorem 8**
Premises: (b + d) = a
Goal: ((c · b) + (b · b)) ≥ (1 · ((((c + a) + (−(b + d))) + b) · b))
-----
**Theorem 9**
Goal: (1 · ((((b · ( [1]b [))][ ·][ a][)][ ·][ (1][ ·][ 1)) +][ a][))][ ≥] [((1][ ·][ ((1][ ·][ 1)][ ·][ (][a][ ·][ (1][ ·][ 1)))) + (1][ ·][ a][))]
**Theorem 10**
Premises: (c + d) ≥ 0
Goal: (b · (c + d)) ≥ ((((b + b) + 0) + (−b)) · (c + d))
**Theorem 11**
Goal: (((b + 0) + (b + c)) + 0) ≥ (((b + b) + c) + 0)
**Theorem 12**
Goal: ((c · (c + 0)) + 0) ≥ ((c[2]) + 0)
**Theorem 13**
Goal: (1 · (b · 1)) ≥ ((1 · b) · 1)
**Theorem 14**
Goal: 1 ≥ ((((b · ( [1]b [)) + (][ 1]b [)) + 1)][ ·][ (]
1
((1+( [1]b [))+1)] [))]
**Theorem 15**
Goal: 1 ≥ (( ((c·a)·(
1
(a·c) [))] [)][ ·][ 1)]
**Theorem 16**
Goal: ((c · (a · a)) + (((a · a) + (c · a)) · (a · a))) ≥ (0 + ((c + (0 + ((a + c) · a))) · (a · a)))
**Theorem 17**
Goal: (((c · b) + a) · ((c · b) + (c · b))) ≥ ((a · ((c · b) + (c · b))) + ((c · b) · ((c · b) + (c · b))))
**Theorem 18**
Goal: ((a · b) · 1) ≥ ((((a · 1) · b) · 1) · 1)
**Theorem 19**
Goal: a ≥ ((a + c) + (−c))
**Theorem 20**
Goal: ((c · b) · b) ≥ (b · (b · c))
**Theorem 21**
Premises: (a + d) = a; ((a + d) + e) ≥ 0; (b + f ) ≥ (0 · 0)
Goal: ((((((c _·_ 0)+(0 _·_ 0))+(a + _d))_ _·_ ((0+((c +0) _·_ (a +(−a))))+ _a))_ _·_ ((a + _d)+_ _e))+(b_ + _f_ )) ≥ ((0 _·_ ((a + _d)+_ _e))+(0_ _·_ 0))
**Theorem 22**
Premises:Goal: ((((((0 + ( (c + d)c ≥ + (0;− ((0 + 0) +c))) · (−c)) e ·) ≥ ( ((0(0 + 0)·(−c))+(01 _·(−c)))_ [))][ ·][ (0 + 1))][ ·][ (][c][ +][ d][)) + ((0 + 0) +][ e][))][ ≥] [((0][ ·][ (][c][ +][ d][)) + (0 + 0))]
**Theorem 23**
Premises: ((a[2]) + d) ≥ 0
Goal: ((((a · a) + c) · (0 + (1 · (a · a)))) · ((a[2]) + d)) ≥ ((((a · a) · ((a[2]) + 0)) + (c · ((a[2]) + 0))) · ((a[2]) + d))
**Theorem 24**
Premises: (c + d) = c; ((0 + a) + e) ≥ _a_
-----
Goal: ((((a + b) · (((a + (−a)) + (a + b)) + (c + d))) · (((((0 + a) + b) + c) · (a + b)) · 1)) + ((0 + a) + e)) ≥ (0 + a)
**Theorem 25**
Goal: 1 ≥ ((a · (c + b)) · (
1
((a _c)+(a_ _b))_ [))]
_·_ _·_
**Theorem 26**
Premises: (a + d) ≥ _b_
Goal: ((0·((((((a+c)+a)·(a·c))·(a·c))+((a·c)·(a·c)))+(−((((((a+(c+a))·a)·c)+(a·c))·(a·c))+0))))+(a+d)) ≥ (0+b)
**Theorem 27**
Premises: ((c · b) + d) = (b · b); ((b · b) + e) ≥ _a_
Goal: ((((b + b) + (b + b)) · ((((c · (b · b)) + b) + b) + (b[2]))) + ((b · b) + e)) ≥ ((((b + b) · ((((c · b) · b) + (b + b)) + ((c · b) +
_d))) + ((b + b) · ((((c · b) · b) + (b + b)) + ((c · b) + d)))) + a)_
**Theorem 28**
Premises: ((b · 0) + d) ≥ _c_
Goal: ((((b + (((0 + c) + (0 + c)) + 0)) · 0) · ((b · 0) + (((0 + c) + (0 + c)) · 0))) + ((b · 0) + d)) ≥ (0 + c)
**Theorem 29**
Premises: (a + d) ≥ 0
Goal: ((0 · ((((c · c) + (c · 0)) · a) + (−(((c + 0) · ((c + 0) · a)) · 1)))) + (a + d)) ≥ (0 + 0)
**Theorem 30**
Premises: (a + d) ≥ _c_
Goal: (((b · (b · 1)) + (b · c)) + (a + d)) ≥ ((0 + (b · ((b · 1) + c))) + c)
**Theorem 31**
Goal: (0 + (0 + (c + b))) ≥ (0 + ((b + c) + 0))
**Theorem 32**
Goal: (a + (a + 0)) ≥ ((((0 + a) + 0) + a) + 0)
**Theorem 33**
Premises: ((c + c) + d) ≥ _a; (d + e) ≥_ 0; ((c + c) + f ) ≥ (0 + a); (b + g) ≥ 0
Goal: (((((((c+c)+(c+c))·((c+c)+(c+c)))+((c+c)+d))+(d+e))+((c+c)+f ))+(b+g)) ≥ ((((0+a)+0)+(0+a))+0)
**Theorem 34**
Goal: (((0 + b) + c) + a) ≥ (0 + (0 + (b + (c + a))))
**Theorem 35**
Premises: (a + d) ≥ 0; (a + e) ≥ (c · c); (e + f ) ≥ 0; (c + g) ≥ 0; (c + h) ≥ (c + g); (c + i) ≥ 0
Goal: (((((((c · c) · (a + d)) + (a + e)) · (e + f )) · (c + g)) + (c + h)) · (c + i)) ≥ ((((((0 · (a + d)) + (c · c)) · (e + f )) · (c +
_g)) + (c + g)) · (c + i))_
**Theorem 36**
Goal: (1 · (1 · (1 · a))) ≥ (1 · ((a + 0) + 0))
**Theorem 37**
Premises: (b + d) ≥ _b; ((c + b) + e) ≥_ _c; (b + f_ ) ≥ _a; (e + g) ≥_ (b + f )
Goal: (((c + (b + d)) + (b + f )) + (e + g)) ≥ (((((c + b) + c) + (−((c + b) + e))) + a) + (b + f ))
-----
**Theorem 38**
Goal: ((a + (((b + c) · (b + c)) + ((c + b) · b))) · ((c + b) + (c + b))) ≥ ((((((b + c) · (c + b)) + ((b + c) · b)) + a) · (c + b)) +
(((((b + c) · (c + b)) + ((b + c) · b)) + a) · (c + b)))
**Theorem 39**
Premises: (c + d) = b; ((c + b) + e) = (c + d); (a + f ) ≥ 0; (0 + g) ≥ 0; (g + h) ≥ 0; (d + i) ≥ 0
Goal: ((((((c+(c+d))+((c+b)+e))·(a+f ))·(0+g))·(g+h))·(d+i)) ≥ ((((((c+b)+(c+d))·(a+f ))·(0+g))·(g+h))·(d+i))
**Theorem 40**
Goal: ((((c + a) · b) · b) + (a + c)) ≥ ((a + c) + (((a + c) · b) · b))
**Theorem 41**
Goal: (((c + b) + (a + (c + b))) · (
1
((((1 _c)+b)+a)+(c+b))_ [))][ ≥] [(1][ ·][ 1)]
_·_
**Theorem 42**
Premises: (c + d) = b
Goal: (((((c·b)+(c[2]))·((b+c)·(c·b)))+(c+d))·(((((c·(b+c))·(b+c))·c)·b)+b)) ≥ (((((c·b)+(c[2]))·((b+c)·(c·b)))+(c+d))[2])
**Theorem 43**
Goal:Premises: ((1 ( · (ac + + d f) =)) · b ((; (b +d + b e) +) = g a)); ≥ (c((((( + f )b ≥ + b0) +; (( ab +) · b () +(0+(( g)b ≥+(a0+1d))+(d+e))) [))][ ·][ (][c][ +][ f] [))][ ·][ ((][b][ +][ b][) +][ g][))]
**Theorem 44**
Goal: (((((a · 1) · a) · 1) · b) + (((a · 1) · (a · 1)) · (a · a))) ≥ (1 · ((((a · a) · 1) · b) + (((a · a) · 1) · (a · a))))
**Theorem 45**
Premises: ((c + 0) + d) ≥ _b; (1 + e) ≥_ _a_
Goal: ((0 + ((c + 0) + d)) + (1 + e)) ≥ (((0 + (−((c · 1) + (−(c + 0))))) + b) + a)
**Theorem 46**
Premises: (c + d) ≥ (a · c)
Goal: (((1 · (1 · (a · (a · c)))) · ((1 · ((a · a) · c)) + 0)) + (c + d)) ≥ (0 + (a · c))
**Theorem 47**
Premises: (c + d) ≥ _c_
Goal: ((c · (0 + c))[2]) ≥ (((0 + ((c · (0 + c)) · (c[2]))) + c) + (−(c + d)))
**Theorem 48**
Premises: (a + d) = b
Goal: (1 · ((b + b) + (−(1 · (b + (a + d)))))) ≥ (1 · (0 · 1))
**Theorem 49**
Premises: ((c · b) + d) = a; ((c · b) + e) ≥ _b_
Goal: (((b · b) · (a · (c · b))) + ((c · b) + e)) ≥ ((((b · b) · a) · (c · b)) + b)
**Theorem 50**
Goal: (((a + c) · (c + a)) + ((a · (c + a)) + ((c · c) + (c · a)))) ≥ (((a + c) · ((c + a) + (c + a))) · 1)
-----
| [
"Albert Q., Jiang",
"Yuhuai, Wu",
"Jimmy, Ba",
"Roger, Grosse"
] | 2020-07-01T00:00:00 | ICLR 2021 | true | 49 | 17 | null | https://arxiv.org/abs/2007.02924 | https://arxiv.org/abs/2007.02924 | https://www.semanticscholar.org/paper/016863a86189c4e8ccecf9a36c4406c439a8a84c |
Math Word Problem Solving with Explicit Numerical Values | In recent years, math word problem solving has received considerable attention and achieved promising results, but previous methods rarely take numerical values into consideration. Most methods treat the numerical values in the problems as number symbols, and ignore the prominent role of the numerical values in solving the problem. In this paper, we propose a novel approach called NumS2T, which enhances math word problem solving performance by explicitly incorporating numerical values into a sequence-to-tree network. In addition, a numerical properties prediction mechanism is used to capture the category and comparison information of numerals and measure their importance in global expressions. Experimental results on the Math23K and APE datasets demonstrate that our model achieves better performance than existing state-of-the-art models. | A novel approach called NumS2T is proposed, which enhances math word problem solving performance by explicitly incorporating numerical values into a sequence-to-tree network. | # Math Word Problem Solving with Explicit Numerical Values
**Qinzhuo Wu, Qi Zhang, Zhongyu Wei, Xuanjing Huang[∗]**
Shanghai Key Laboratory of Intelligent Information Processing,
School of Computer Science, Fudan University, Shanghai, China
(qzwu17,qz,zywei,xjhuang)@fudan.edu.cn
In recent years, math word problem solving
has received considerable attention and
achieved promising results, but previous
methods rarely take numerical values into
consideration. Most methods treat the
numerical values in the problems as number
symbols, and ignore the prominent role
of the numerical values in solving the
problem. In this paper, we propose a novel
approach called NumS2T, which enhances
math word problem solving performance by
explicitly incorporating numerical values into
a sequence-to-tree network. In addition, a
numerical properties prediction mechanism is
used to capture the category and comparison
information of numerals and measure
their importance in global expressions.
Experimental results on the Math23K and
APE datasets demonstrate that our model
achieves better performance than existing
state-of-the-art models. [1]
**1** **Introduction**
Taking a math word problem as input, the math
word problem solving task aims to generate a corresponding solvable expression and answer. With
the advancements in natural language processing,
math word problem solving has received growing
attention in recent years (Roy and Roth, 2015;
Mitra and Baral, 2016; Ling et al., 2017; Huang
et al., 2018). Many methods have been proposed
that use sequence-to-sequence (seq2seq) models
with an attention mechanism (Bahdanau et al.,
2014) for math word problem solving (Wang et al.,
2017b, 2018b, 2019). To better utilize expression
structure information, some methods use sequenceto-tree (seq2tree) models to generate expressions
_∗_ Corresponding author.
1Code is available at [https://github.com/](https://github.com/ qinzhuowu/NumS2T/)
[qinzhuowu/NumS2T/](https://github.com/ qinzhuowu/NumS2T/)
**Problem: A school purchased several pairs of new**
desks and chairs for grade students. Each desk 𝑣1
15 1 10
9.9 3 8.3
6.5 7 8
is worth $ , and each chair is worth $ . 𝑣2 𝑣3
The price difference between the tables and the
chairs is $ . There are more students 𝑣4 𝑣5
100 25
64 20%
18 (1/3)
than chairs. How many students are there?
**Expression 1:** 𝑣4 / ( 𝑣2 - 𝑣3 ) + 𝑣5
**Numerical expression: 100 / ( 15 – 10 ) + 25**
**Expression 2:** 𝑣4 / ( 𝑣2 - 𝑣3 ) * ( 1 + 𝑣5 )
**Numerical expression: 64 / (9.9 – 8.3) * (1+20%)**
**Expression 3:** 𝑣4 / ( 𝑣3 - 𝑣2 ) * ( 1 + 𝑣5 )
**Numerical expression: 18 / (8 – 6.5) * (1 + (1/3))**
Figure 1: Example of a math word problem. The
same problem with different numerical values may
correspond to different math expressions. Without
numerical value information, the model can hardly
determine which expression is correct.
and have achieved promising results (Liu et al.,
2019; Xie and Sun, 2019; Wu et al., 2020). These
methods convert the target expression into a binary
tree, and generate a pre-order traversal sequence of
this expression tree based on the parent and sibling
nodes of each node.
Although promising results have been achieved,
previous methods rarely take numerical values into
consideration, despite the fact that in math word
problem solving, numerical values provide vital
information. As an infinite number of numerals can
appear in math word problems, it is impossible to
list them all in the vocabulary. Previous methods replace all the numbers in the problems with number
symbols (e.g., v1, v2) in order in the preprocessing
stage. These replaced problems are used as input
5859
-----
to directly generate expressions containing number
symbols. The number symbols in the expressions
are then replaced with the numerical values in the
original problems to obtain executable expressions.
As shown in Figure 1, taking the problem with
numerical values _v2=15, v3=10, v4=100, v5=25_
_{_ _}_
as input, the target expression of the problem would
be “v4/(v2 _v3) + v5”. However, if the number_
_−_
symbol v5 = 20%, the target expression for the
same problem would be “v4/(v2 _v3)_ (1 + v5)”.
_−_ _∗_
Similarly, without numerical value information, the
model can hardly determine whether the number
gap between the table and the chair should be
_v2 −_ _v3 or v3 −_ _v2. As such, it will incorrectly_
generates the same expression for problems with
different numerical values.
To address these problems, we propose a novel
approach called NumS2T to better capture numerical value information and utilize numerical
properties. Specifically, the proposed model uses a
sequence-to-tree network with a digit-to-digit number encoder that explicitly incorporates numerical
values into the model and captures number-aware
problem representations. In addition, we designed
a numerical properties prediction mechanism to further utilize the numerical properties. NumS2T predicts the comparative relationship between paired
numerical values, determines the category of each
numeral, and measures their importance for generating the final expression. With the category
and comparison information, the model can better
identify the interactive relationship between the
numerals, and thus generate better results. With
consideration of the importance of the numerals,
the model can capture the global relationship
between the numerals and target expressions rather
than simply focusing on the local relationship
between numeral pairs.
The main contributions of this paper can be
summarized as follows:
_• We explicitly incorporate numerical value_
information into math word problem solving
tasks.
_• We propose a numerical properties prediction_
mechanism to utilize numerical properties. To
incorporate the local relationship between numerals and the global relationship associated
with the final expression, NumS2T compares
the paired numerical values, determines the
category of each numeral, and then measures
whether they should appear in the final expression.
_• We conducted experiments on two large-_
scale Math23K and Ape210K datasets to
verify the effectiveness of our NumS2T model.
The results show that our model achieved
better performance than existing state-of-theart methods.
**2** **Models**
In this section, we present details regarding our
proposed NumS2T model. As shown in Figure 2,
we use an attention-based sequence-to-tree model
with a problem encoder (Section 2.2) and a treestructured decoder to generate math expressions
(Section 2.4). In addition, we explicitly incorporate
numerical values to obtain number-aware problem
representations (Section 2.3). Finally, we propose
a numerical properties prediction mechanism to
further utilize the numerical properties (Section
2.5).
**2.1** **Problem Definition**
A math word problem X = (x1, x2, . . ., xm) is a
sequence of m words. Our goal is to generate a
math expression Y = (y1, y2, . . ., yn), where Y is
the pre-order traversal sequence of a binary math
expression tree, which can be executed to produce
the answer to problem X.
Here, we replace all of the numbers in the problem X with a list of number symbols based on their
order of appearance. Let Vc = (v1, v2, . . ., vK)
be the K numbers that appear in problem X.
The numerical value of the k-th number vk is
a sequence of l characters (vk[1][, v]k[2][, . . ., v]k[l] [)][. The]
generated vocabulary Vg is composed of several
common numbers (e.g., 1,100,π) and several math
operators (e.g., +,-,*,/). At each time step during
decoding, the NumS2T model either copies a
number from Vc or generates a number from Vg.
**2.2** **Problem Encoder**
We use a two-layer bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997) network as the encoder, which encodes the math
word problem X into a sequence of hidden states
5860
-----
|ncode e|ncode|encod|e encode|encode|
|---|---|---|---|---|
|𝑥2 𝑥𝑖 𝑥𝑖+1 𝑥𝑚|Col2|
|---|---|
|Embedding Layer||
|Bi-LSTM||
|ph Attention Network||
|𝐡𝟐𝐱 𝐡𝐢𝐱 𝐡 𝐡𝐤𝐠 𝐡𝐤𝐠 𝐡𝐤 𝟐 𝐢 𝐢 𝐡𝟐𝐧 𝐡𝐢𝐧 𝐡𝐢𝐧 𝐡𝟐𝐜𝐧 𝐡𝐢𝐜𝐧 𝐡𝐢𝐜 𝐡𝟐 𝐡𝐢 𝐡𝐢|𝐢𝐱 𝐡𝐦𝐱 +𝟏 𝐠 𝐡𝐤𝐠 +𝟏 𝐦 𝐡𝐦𝐧 +𝟏 +𝐧 𝐡𝐦𝐜𝐧 𝟏 +𝟏 𝐡𝐦|
|ention & Aggregation Copy Mechanism||
|Decimal Fraction|Percentage|
|---|---|
|−|𝑔6.5 < 𝑔7|𝑔8 ≥𝑔7|𝑔18 ≥𝑔7|𝑔(1/3) < 𝑔|
|---|---|---|---|---|
|𝑔 ≥𝑔|−|𝑔 ≥𝑔 𝑔|≥𝑔|𝑔(1/3) ≥𝑔6.|
|7 6.5 𝑔7 < 𝑔8|𝑔6.5 < 𝑔8|8 6.5 −|18 6.5 𝑔18 ≥𝑔8|𝑔(1/3) < 𝑔|
|𝑔7 < 𝑔18 𝑔|6.5 < 𝑔18|𝑔8 < 𝑔18|−|𝑔(1/3) < 𝑔1|
|𝑔7 ≥𝑔(1/3) 𝑔|6.5 ≥𝑔(1/3)|𝑔8 ≥𝑔(1/3)𝑔|18 ≥𝑔(1/3)|−|
Math Word Problem Numerical Values digit-to-digit encode
7 6.5 8 18 (1/3)
𝑥1 𝑥2 𝑥𝑖 𝑥𝑖+1 𝑥𝑚 ( 1 / 3 )
Embedding Layer 𝑣1 𝑣2 𝑣𝑘 𝑣𝑘+1 𝑣𝐾 𝑣𝑘1 𝑣𝑘2 𝑣𝑘𝑗 𝑣𝑘𝑗+1 𝑣𝑘𝑙
encode encode encode encode encode
Bi-LSTM
Graph Attention Network Self 𝐡𝐧𝐯𝟏 𝐡𝐧𝐯𝟐 𝐡𝐧𝐯𝐤 𝐡𝐧𝐯𝐤+𝟏 𝐡𝐧𝐯𝐊 𝐡𝐧𝐯𝐤,𝟏 𝐡𝐧𝐯𝐤,𝟐 𝐡𝐧𝐯𝐤,𝐣 𝐡𝐧𝐯𝐤,𝐣+𝟏 𝐡𝐧𝐯𝐤,𝐥
𝐡𝐡𝐤𝐠𝟏𝟏𝐱 𝐡𝐡𝐤𝐠𝟐𝟐𝐱 𝐡𝐡𝐤𝐠𝐢 𝐢𝐱 𝐡𝐡𝐤𝐠𝐢+𝟏𝐢+𝟏𝐱 𝐡𝐡𝐤𝐠𝐦𝐦𝐱 Concatenateattention 𝐡𝐜𝐧𝐯𝟏 𝐡𝐜𝐧𝐯𝟐 𝐡𝐜𝐧𝐯𝐤 𝐡𝐜𝐧𝐯𝐤+𝟏 𝐡𝐜𝐧𝐯𝐊 Average Pooling Layer
𝐡𝐧𝟏 𝐡𝐧𝟐 𝐡𝐧𝐢 𝐡𝐧𝐢+𝟏 𝐡𝐧𝐦 Replacing
𝐡𝐜𝐧𝟏 𝐡𝐜𝐧𝟐 𝐡𝐜𝐧𝐢 𝐡𝐜𝐧𝐢+𝟏 𝐡𝐜𝐧𝐦 Number-aware problem states
𝐡𝟏 𝐡𝟐 𝐡𝐢 𝐡𝐢+𝟏 𝐡𝐦 Representations of all numerals 𝐡𝐯𝐤 (b) Numerical Values Encoder
7 6.5 8 18 (1/3) 7 6.5 8 18 (1/3)
Attention & Aggregation & Copy Mechanism 𝐡𝐯𝟏 𝐡𝐯𝟐 𝐡𝐯𝒌 𝐡𝐯𝒌+𝟏 𝐡𝐯𝑲 𝐡𝐯𝟏 𝐡𝐯𝟐 𝐡𝐯𝒌 𝐡𝐯𝒌+𝟏 𝐡𝐯𝑲
Integer Decimal Fraction Percentage Category
𝑦1 𝑦2 𝑦3 𝑦4 Pairwise Comparison Score
- / 𝑣4 _-_ Numeral Categories Loss
Generated expression − 𝑔6.5 < 𝑔7 𝑔8 ≥𝑔7 𝑔18 ≥𝑔7 𝑔(1/3) < 𝑔7
𝑦1 𝑦𝑦22𝑦1 𝑦𝑦22𝑦1 𝑔𝑔77 ≥𝑔 < 𝑔6.58 𝑔6.5 < 𝑔− 8 𝑔8 ≥𝑔− 6.5 𝑔𝑔1818 ≥𝑔 ≥𝑔6.58 𝑔𝑔(1/3)(1/3) ≥𝑔 < 𝑔6.58 𝐡7 6.5 8 18 (1/3) 𝐯𝟏 𝐡𝐯𝟐 𝐡𝐯𝒌 𝐡𝐯𝒌+𝟏 𝐡𝐯𝑲
𝑔7 < 𝑔18 𝑔6.5 < 𝑔18 𝑔8 < 𝑔18 − 𝑔(1/3) < 𝑔18 𝑔𝑣′ 1 𝑔𝑣′ 2 𝑔𝑣′ 𝑘 𝑔𝑣′ 𝑘+1 𝑔𝑣′ 𝐾 Scalar
𝑦* * / * / 1 𝑦𝑦2 𝑦3 𝑣4 - / 𝑦3 𝑣4 -𝑦4 𝑔7 ≥𝑔(1/3) 𝑔6.5 ≥𝑔(1/3) 𝑔8 ≥𝑔(1/3) 𝑔18 ≥𝑔(1/3) − Target expression Value 𝑔𝑣′ 𝑘
Generated partial expression tree
Pairwise Comparison Loss Global Relationship Loss
(a) Sequence-to-tree Model (c) Numerical Properties Prediction Mechanism
𝐡𝐢𝐱 Problem hidden states 𝐡𝐤𝐠𝐢 Knowledge-aware problem hidden states
𝐡𝐧𝐢 Numeral hidden states 𝐡𝐜𝐧𝐢 Contextual numeral hidden states 𝐡𝐢 Number-aware problem representations
Figure 2: Main structure of our NumS2T model. Given a math word problem sequence, we use (a) an
attention-based sequence-to-tree model to generate its math expression. To explicitly incorporate numerical value
information, we use (b) a numerical values encoder to obtain the number-aware problem states h[num]i, which are
then concatenated with the problem hidden states in (a) to obtain number-aware problem representations hi. In
addition, we propose (c) a numerical properties prediction mechanism for comparing the paired numerical values,
determining the category of each numeral, and measuring whether they should appear in the target expression.
H=(h[x]1[,][ h]2[x][, . . .,][ h]m[x] [)][ ∈] [R][m][×][2][d][ as follows:]
**h[x]i** [= []h[−]→[x]i _[,][ ←]h−[x]i_ []][,]
_−h→[x]i_ [= BiLSTM(][E][(][x][i][)][,][ −]h−[x]i→1[)][,]
_−_
_←h−[x]i_ [= BiLSTM(][E][(][x][i][)][,][ ←]h[x]i−−1[)][.]
_−_
to their categories. See Wu et al. (2020) for more
details.
The knowledge-aware problem states h[kg]i are
obtained from a two-layer graph attention network
(Veliˇckovi´c et al., 2018) on the entity graph:
_αij = softmax_ h [[W][x][h]i[x] [: W][x][h]j[x][]))][,]
Aij=1 [(][ f][ (w][T]
(2)
**h[kg]i** = _t=1||,...,T_ _σ(_ Aij=1 _[α][ij][W][k][h]j[x][)][,]_
X
(1)
Here, word embedding vectors E(xi) are obtained
via a wording embedding layer E(·). _d is the_
dimension of the hidden state and h[x]i is the
concatenation of the forward and backward LSTM
hidden states.
Following Wu et al. (2020), we enrich the problem representations with common-sense knowledge information from external knowledge bases.
The words in problem sequences X and their categories in external knowledge bases are constructed
as an entity graph. In this entity graph, each word
is related to its neighbor in the problem. If there
are two nouns belonging to the same category in
the knowledge base, these two nouns are related
where wh[T][,][ W][x][,][ W][k][ are weight vector and ma-]
trices. || and [:] are concatenation functions. f (·)
and σ are the LeakyRelu and sigmoid activation
functions. T is the number of heads in GAT layer.
If the i-th word is related to the j-th word, the score
of the adjacent matrix Aij is set to 1, otherwise it is
set to 0.
5861
-----
**2.3** **Number-aware Problem Representations**
To solve the issues mentioned in the introduction
section, we need to incorporate explicit numerical
value information into NumS2T. However, there
are an infinite number of numerals that can appear
in math word problems. For example, among the
18,529 problems in the training set of Math23K,
there are 3,058 different numerical values. Therefore, rather than list all these numerals in the
vocabulary, we encode each numeral value digit
by digit.
All the digits in the numerical value vk are
treated as a sequence (vk[1][, v]k[2][, . . ., v]k[l] [) and embed-]
ded via the embed layer E(·). Take a 5-digit value
_vk = (1/3) as an example, we have E(vk) ∈_
R[5][×][d][emb]. Similar to the architecture shown in
Equation 1, we use a BiLSTM network to encode
the numeral values and obtain the numeral hidden
states hvk with an average pooling layer:
**h[n]vk,j** [= BiLSTM(][E][(][v]k[j] [)][,][ h]v[n]k,j−1[)][,]
_l_ (3)
**h[n]vk** [= 1]l _j=1_ **[h]v[n]k,j[.]**
X
**2.4** **Tree Structured Decoder**
Previous works (Xie and Sun, 2019; Liu et al.,
2019; Wu et al., 2020) have confirmed that a
sequence-to-tree model can better represent the
expression structures than a sequence-to-sequence
model, because a tree structured decoder can
capture the global expression information and
focus on the features of adjacent nodes.
The tree structured decoder takes the final
number-aware problem representations hi as input
and generates the target expression from top to
bottom. The target expression can be regarded as a
pre-order traversal of a binary tree, with operators
as internal nodes and numbers as leaf nodes. The
decoder is a one-layer LSTM, which updates its
states as follows:
**st+1 = LSTM([E(yt) : ct : rt], st).** (7)
At time step t+1, the decoder uses the last generated
word embedding E(yt), the problem context state
**ct and the expression context state rt to update its**
previous hidden state st.
The problem context state ct is computed via
attention mechanism as follows:
_αti = softmax(tanh(Whhi_ +Ws[st : rt])),
To capture the relations and dependency between
numeral pairs, we use a self-attention mechanism
(Wang et al., 2017a) on the hidden state of all
the numerals H[n]v [=][ {][h]v[n]k[}]k[K]=1 [to compute the]
contextual numeral hidden states h[cn]vk[:]
_αvk = softmax( (H[n]v[)][T][W][h][h][n]vk[)][,]_
(4)
**h[cn]vk** [=][ α][v]k **v[,]**
_[·][ H][n]_
where αvk is the attention distribution of vk on all
the numerals in the problem X.
Combining the numeral hidden states h[n]vk[,][ h]v[cn]k
with the original problem hidden states h[x]i [,][ h]i[kg][,]
we have number-aware problem states h[num]i enhanced with explicit numeral value information:
[hnvk [:][ h]v[cn]k[]] _xi = vk_
**h[num]i** = [h[x]i [:][ h]i[kg][]] _xi is not a number_ (5)
The final number-aware problem representations
are obtained by concatenating the problem hidden
states h[x]i [, the knowledge-aware problem states][ h]i[kg]
and the number-aware problem states h[num]i :
**hi = [h[x]i** [:][ h][kg]i : h[num]i ]. (6)
_m_ (8)
_αtihi,_
_i=1_
X
**ct =**
where Wh, Ws are weight matrices. αti is the
attention distribution on the number-aware problem
representations hi.
The expression context state rt is computed
via a state aggregation mechanism (Wu et al.,
2020). It describes the global representation of
the partial expressions y<t = (y1, y2, . . ., yt 1)
_−_
being generated by the decoder. At time step t,
the decoder aggregates each node’s context state
with its neighbor nodes in the generated partial
expression tree. The aggregation functions are as
follows:
**r[0]t** [=][ s][t][,]
(9)
**r[η]t[+][1]** = σ(Wr[r[η]t [:][ r]t[η],p [:][ r]t[η],l [:][ r]t[η],r[])][,]
where σ is the sigmoid function and Wr is a weight
matrix. r[0]t [is initialized with decoder hidden state]
**st when η = 0,. rt,p, rt,l, rt,r are the context state**
of the parent node, the left child node, and the
right child node of yt in the expression tree. rt[η][+][1]
represents the expression context state updated with
5862
-----
global information from all nodes in the generated
partial expression.
Lastly, the decoder can generate a word from
a given vocabulary Vg. It can also generate a
number symbol in Vc, and use it to copy a number
from the problem X. The final distribution is the
combination of the generated probability and copy
probability:
**Numeral categories.** In the sentence “the
number of apples is 5 more than the number of
pears,” replacing the numeral 5 with the integer 100
may not affect the structure of the target expression,
but replacing the numeral 5 with 20% may change
the structure from “+5” to “*(1 + 20%)”. We
roughly divide all numbers into four categories:
_{integer, decimal, fraction, percentage}, and assign_
a category label C = {1,2,3,4}, respectively. Given
the number-aware problem representations hvk for
each numeral vk, we calculate the category score
distribution P(Cvk **hvk) and then minimize the**
_|_
negative log likelihood:
P(Cvk **hvk)=softmax(Wchvk),**
_|_
_K_ (12)
_LCA = −_ _K[1]_ log P(Cvk _|hvk)._
_k=1_
X
**Hv = {hvk}k[K]=1[,]**
_pc = σ(Wz[st : ct : rt] + WvHv),_
Pc(yt) = softmax(f ([st : ct : rt : Hv])),
Pg(yt) = softmax(f ([st : ct : rt])),
P(yt _y<t,X) = pcPc(yt) + (1_ _pc)Pg(yt)._
_|_ _−_
(10)
Here, Hv are the number-aware problem representations of all the numerals vk in X. Wz, Wv are the
weight matrices. f (·) is a perception layer. pc is
the probability that the current word is a number
copied from the problem.
**2.5** **Numerical Properties Prediction**
**Mechanism**
Our NumS2T model explicitly incorporates numerical values information. Furthermore, utilize the
numerical properties to the degree possible through
a numerical properties prediction mechanism. We
consider three numerical properties to be useful for
solving math word problems:
**Pairwise Numeral Comparison.** If we consider the question “What is the difference between
_v1 and v2,” the comparative relationship between_
these two numerals can help the model decide
whether to generate v1 − _v2 or v2 −_ _v1._ In
this paper, we compare each numeral vk in the
question with the other numerals. Then, we
calculate the pairwise comparison scores zkj based
on their number-aware problem representations,
and we optimize the pairwise comparison loss
to assign numerals with larger numerical values
higher pairwise comparison scores. The pairwise
comparison loss LCR is calculated as follows:
_gvk = σ(Whhvk),_
**Global relationship with target expressions.**
Current models tend to focus on the local relationship between numerals, while sometimes these
numerals are not related to the target expression.
Given “3 bags of rice weighing 60 kg,” the numeral
3 is highly correlated with 60. However, if the
problem relates to the total price of the rice rather
than the weight of each bag of rice, the numeral
3 is not so important for generating the target
expression. The NumS2T model predicts a scalar
value gv[′] _k_ [for each numeral that denotes whether]
this numeral will be used in a math expression.
The importance label avk =1 when vk is used in
the ground truth math expression, otherwise avk =0.
The supervised loss is defined by:
_gv[′]_ _k_ [=][ σ][(W][g][h][v]k[)][,]
_GR =_
_L_ _−_ _K[1]_
_ai log gv[′]_ _k[+(1][−][a][i][) log (1][−][g]v[′]_ _k_ [)][.]
_k=1_
X
(13)
**2.6** **Training**
During training, for each question–expression pair
(X, Y), we first train the NumS2T by optimizing
the maximum likelihood estimation (MLE) loss
_Ll on the probability distribution P(yt|y<t, X))._
Then, the final loss function L is a combination of
the MLE loss and three numerical properties loss
functions:
_zkj =_ _max(0, gvj −_ _gvk_ ) if vk ≥ _vj_
(max(0, gvk _gvj_ ) if vk < vj
_−_
_K_ _K_
_CR =_ _zkj,_
_L_ _−_ _K[1][2]_
_k=1_ _j=1_
X X
(11)
5863
_l =_
_L_ _−_ _n[1]_
log P(yt _y<t, X)),_
_|_
_i=1_
X
(14)
_L = Ll + β1LCR + β2LCA + β3LGR._
-----
|Models|Math23K|APE210K|
|---|---|---|
|DNS DNS-Retrieval S2S RecursiveNN Tree-Decoder GTS KA-S2T|58.1% 64.7% 66.7% 68.7% 69.0% 74.3% 76.3%|- - 56.6% - 66.5% 67.7% 68.7%|
|NumS2T|78.1%|70.5%|
Table 1: Answer accuracy of our model and other
state-of-the-art models on the Math23K and APE210K
datasets.
**3.3** **Baselines**
We compare our proposed NumS2T model with
the following baseline models: DNS (Wang et al.,
2017b) is a seq2seq model with a two-layer GRU
as an encoder and a two-layer LSTM as a decoder.
**DNS-Retrieval is a variant of DNS that combines**
a retrieval model. **S2S (Wang et al., 2018a)**
is a standard bidirectional LSTM-based seq2seq
model with an attention mechanism. RecursiveNN
(Wang et al., 2019) uses a recursive neural network
on the predicted tree structure templates Tree**Decoder (Liu et al., 2019) is a seq2tree model with**
a tree structured decoder. The decoder generates
each node based on its parent node and its sibling
node. GTS (Xie and Sun, 2019) generates each
node based on its parent node and its left sibling
subtree embedding. The subtree embedding is
obtained by merging the embedding of the subtree
from bottom to top. KA-S2T (Wu et al., 2020) is
a seq2tree model with external knowledge and a
state aggregation mechanism. The decoder use a
two-layer GCN to recursively aggregate neighbors
of each node in the partial expression tree.
**3.4** **Results Analysis**
The main evaluation results are presented in Table
1. Compared with baseline methods, our model
obtains the highest answer accuracy of 78.1% in
the Math23K dataset and 70.5% in the APE210K
dataset, which is significantly better than other
state-of-the-art methods. The experimental results
provide the following observations:
1) The methods with a tree-structured decoder
(Tree-Decoder, GTS, KA-S2T) perform better than
methods with a sequence-structured decoder (DNS,
S2S). These methods treat the math expression as
a binary tree and directly use adjacent nodes in the
Here, β1, β2, β3 are hyper-parameters.
**3** **Experiment**
**3.1** **Dataset**
We present the experimental results of math word
problem solving using our proposed models on
the Math23K (Wang et al., 2017b) and Ape210K
(Zhao et al., 2020)[2] datasets. Following Xie and
Sun (2019), we removed the problems that the
corresponding expressions could not be executed
to obtain the given answers and the problems that
omit intermediate calculation expressions. For
Math23K, following previous studies (Xie and
Sun, 2019; Wu et al., 2020), we randomly split
the dataset into a training set, a development set
and a test set with 18,529, 2,316, 2,316 problems.
For Ape210K, we use the official data partition.
There are 166,270, 4,157, and 4,159 problems
in our training set, development set and test set,
respectively.
We report answer accuracy as the main evaluation metrics of the math word problem solving
task.
**3.2** **Implementation Details**
In this paper, we truncate the problem to a max
sequence length of 150, and the expression to
a max sequence length of 50. We select 4,000
words that appear most frequently in the training
set of each dataset as the vocabulary, and replace
the remaining words with a special token UNK.
We initialize the word embedding with the pretrained 300-dimension word vectors[3]. The problem
encoder used two external knowledge bases: Cilin
(Mei, 1985) and Hownet (Dong et al., 2010). The
number of heads T in GAT is 8. The hidden
size is 512 and the batch size is 64. We use
the Adam optimizer (Kingma and Ba, 2014) to
optimize the models an the learning rate is 0.001.
We compute the final loss function with β1, β2, β3
of 0.5. Dropout (Srivastava et al., 2014) is set to 0.5.
Models are trained in 80 epoches for the Math23K
dataset and 50 epoches for the Ape210K dataset.
During testing, the beam size is set to 5. Once all
internal nodes in the expression tree have two child
nodes, the decoder stops generating the next word.
The hyper-parameters are tuned on the valid set.
2https://github.com/yuantiku/ape210k
3https://github.com/Embedding/Chinese-Word-Vectors
5864
-----
|Models|Math23K|APE210K|
|---|---|---|
|KA-S2T|76.3%|68.7%|
|NumS2T w/o Symbols NumS2T w/o Numerals NumS2T w/o SelfAtt|75.4% 76.6% 77.3%|64.4% 69.2% 69.8%|
|NumS2T|78.1%|70.5%|
Table 2: Ablation study on reducing the numerical
values incorporated into the model.
|Models|Math23K|APE210K|
|---|---|---|
|KA-S2T|76.3%|68.7%|
|NumS2T-base NumS2T-base + CR NumS2T-base + CA NumS2T-base + GR|77.0% 77.7% 77.4% 77.3%|69.6% 70.1% 70.0% 69.8%|
|NumS2T|78.1%|70.5%|
Table 3: Ablation study on reducing the numerical
properties used in the numerical properties prediction
mechanism. CR, CA and GR respectively indicate
pairwise numeral comparison, numeral category and
global relationship with the target expression.
of the numerical properties prediction mechanism.
From the table we can observe that:
1) NumS2T-base is the variant of NumS2T
without the numerical properties prediction mechanism. Without numerical properties, the answer
accuracy in the Math23K and APE210K datasets
are reduced to 77.0% and 69.6%, which show
that the numerical properties prediction mechanism
contributes considerably to improving performance.
In addition, NumS2T-base still outperforms the
state-of-the-art baseline KA-S2T, which once again
proves the effectiveness of explicitly incorporating
numerical values.
2) The use of pairwise numeral comparison,
numeral category and global relationship with a
target expression can improve accuracy by approximately 0.6%, 0.4% and 0.3%, respectively.
Their combination achieves further improvements
in model performance. These results show the
effectiveness of the numerical properties prediction
mechanism because it enables the model to further
utilize numerical properties.
**Model performance on problems with a differ-**
**ent number of numerals: Table 4 shows the**
results for how accuracy changes as the number of
numerals in the problem increases. The NumS2T
model outperforms the best-performing baseline
with respect to problems with a different number of
tree instead of the previous word in the sequence
to generate the next word. In this way, the model
can better capture the structure information of the
math expressions.
2) The KAS2T model with external knowledge
performs better than GTS, which proves that
external knowledge enables the model to obtain
better interaction between words.
3) NumS2T outperforms all the other baselines.
This result shows the effectiveness of the explicitly
incorporated numerical values and use of a numerical properties prediction mechanism.
**3.5** **Ablation Study**
**Effect of explicitly incorporating numerical val-**
**ues: We designed several NumS2T variants that**
reduce the numerical values incorporated in the
model. Here, “NumS2T w/o Numerals” means
that we remove the character-level numeric value
encoder. An input example is “Alan bought
_v1 apples for $ v2”._ “NumS2T w/o Symbols”
means that we not only remove the character-level
numeric value encoder, but also replace the math
symbols in math problems with character-level
numeric values. An input example is “Alan bought
2 5 apples for $ 1 5 0”.
Table 2 shows the results of these different
variants, from which we can see:
1)The experimental results show that model
performance of “NumS2T w/o Symbols” is significantly reduced in both datasets. We believe this
is because directly replacing the number symbols
will make it difficult for the model to obtain the
overall representation of each number.
2) The use of a self-attention mechanism significantly improves the accuracy by 0.8% in Math23K
and 0.7% in APE210K. This is because the same
numerical value may describe different information
in different problems. Therefore, the self-attention
mechanism combines numerical values with other
numerical values in the problem, which helps to
model numerical information and the relations
between these numerals.
3) Without numerical values, the answer accuracy of “NumS2T w/o Numerals” would be
reduced to 76.6% and 69.2%. The results show
the benefit of explicitly incorporating numerical
values.
**Effect of the numerical properties prediction**
**mechanism: Table 3 shows the results of several**
NumS2T variants designed to measure the effect
5865
-----
|Col1|Math23K|
|---|---|
|Num.|Prop. KA-S2T NumS2T Imp.( ) ↑|
|1 ≤ 2 3 4 5 6 7 ≥|2.0% 80.9% 83.0% 2.1% 36.8% 84.6% 85.1% 0.5% 46.1% 77.4% 78.4% 1.0% 11.4% 58.3% 60.6% 2.3% 2.8% 45.2% 54.9% 9.7% 0.7% 33.3% 46.7% 13.4% 0.3% 12.5% 37.5% 25.0%|
|5 6 ≥7|2.8% 45.2% 54.9% 9.7% 0.7% 33.3% 46.7% 13.4% 0.3% 12.5% 37.5% 25.0% APE210K|
|---|---|
|Num.|Prop. KA-S2T NumS2T Imp.( ) ↑|
|1 ≤ 2 3 4 5 6 7 ≥|9.1% 67.9% 71.4% 3.5% 34.4% 74.6% 75.5% 0.9% 36.9% 72.2% 75.6% 3.4% 12.7% 53.2% 57.4% 4.2% 3.6% 30.1% 43.7% 4.6% 1.4% 40.7% 54.2% 13.5% 1.9% 19.0% 27.9% 8.9%|
Table 4: Model performance on problems with a
different number of numerals. Prop. denotes the
proportion of these problems in the dataset. Imp.
denotes the accuracy improvement between NumS2T
and KA-S2T.
numerals. In addition, as the number of numerals
in the problems increase, the performance gap
between NumS2T and KAS2T also increases. This
is because with more numerals in the problem,
NumS2T, which explicitly incorporate numerical
value information, is able to more readily achieve
better performance. Meanwhile, NumS2T also
achieved a considerable improvement on problems
with only one numeral. This further demonstrates
the effect of utilizing numerical category information and global relationship information.
**3.6** **Case Study**
Table 5 shows three cases generated by KA-S2T
(Wu et al., 2020) and our NumS2T model. In the
first problem, without numerical values, KA-S2T
incorrectly uses the smaller value to subtract the
larger value when calculating the price difference
between footballs and basketballs. This case
requires the model to choose the larger value
between two numerals. Our NumS2T model
can better handle this problem. In the second
problem, KA-S2T replaces all of the numerals in
the problems with number symbols (v1, v2) and
does not know that v2=20% is not an integer. Our
proposed method can capture numerical values
and numeral category information to generate
|Problem:|Each football is worth $ 76, and each basketball is worth $ 45. The school bought the same number of basketballs and footballs, with a price difference of $ 248. How many footballs did the school buy?|
|---|---|
|KA-S2T: NumS2T:|248/(45-76) 248/(76-45)|
|Problem:|There are 250 pear trees in the orchard, 25% more than peach trees. There are 3 times as many orange trees as pear trees. How many more orange trees are there than peach trees?|
|KA-S2T: NumS2T:|(250*3)-(250-25%) (250*3)-250/(1-25%)|
|Problem:|The concert was held in a hall with 80 seats. 52 tickets have been sold, each priced at $ 25. How much is the ticket revenue?|
|KA-S2T: NumS2T:|(80-52)*25 52*25|
Table 5: Three cases of generated expressions by KAS2T (Wu et al., 2020) and NumS2T.
correct results. In the third problem, 80 seats
and 52 tickets are strongly semantically related,
so KA-S2T generates the sub-expression “80-52”.
However, this problem is about the fares that have
already been sold rather than how many tickets are
left. With numerical properties, NumS2T is able to
realize that 80 is not related to the target expression
and should not appear in the generated result.
**4** **Related Work**
**Math Word Problem Solving: In recent years,**
Seq2Seq (Sutskever et al., 2014) has been widely
used in math word problem solving tasks (Ling
et al., 2017; Wang et al., 2017b, 2018a). To better
utilize expression structure information, recent
studies have used Seq2Tree models (Liu et al.,
2019; Zhang et al., 2020a). Xie and Sun (2019)
proposed a tree structured decoder that uses a
goal-driven approach to generate expression trees.
Wu et al. (2020) proposed a knowledge-aware
Seq2Tree model with a state aggregation mechanism that incorporates common-sense knowledge
from external knowledge bases. Recently, several
methods have attempted to use the contextual
information of the numbers in the problem. Li
et al. (2019) propose a group attention mechanism
to extract quantity-related features and quantitypair features. Zhang et al. (2020b) connects each
5866
-----
**References**
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
[Bengio. 2014. Neural machine translation by jointly](https://arxiv.org/abs/1409.0473)
[learning to align and translate.](https://arxiv.org/abs/1409.0473) _arXiv preprint_
_arXiv:1409.0473._
Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura,
[and Hsin-Hsi Chen. 2019. Numeracy-600K: Learn-](https://doi.org/10.18653/v1/P19-1635)
[ing numeracy for detecting exaggerated information](https://doi.org/10.18653/v1/P19-1635)
[in market comments.](https://doi.org/10.18653/v1/P19-1635) In Proceedings of the 57th
_Annual Meeting of the Association for Computa-_
_tional Linguistics, pages 6307–6313, Florence, Italy._
Association for Computational Linguistics.
Zhendong Dong, Qiang Dong, and Changling Hao.
[2010. HowNet and its computation of meaning. In](https://www.aclweb.org/anthology/C10-3014)
_Coling 2010: Demonstrations, pages 53–56, Beijing,_
China. Coling 2010 Organizing Committee.
Heng Gong, Wei Bi, Xiaocheng Feng, Bing Qin,
[Xiaojiang Liu, and Ting Liu. 2020. Enhancing con-](https://doi.org/10.18653/v1/2020.findings-emnlp.262)
[tent planning for table-to-text generation with data](https://doi.org/10.18653/v1/2020.findings-emnlp.262)
[understanding and verification. In Findings of the](https://doi.org/10.18653/v1/2020.findings-emnlp.262)
_Association for Computational Linguistics: EMNLP_
_2020, pages 2905–2914, Online. Association for_
Computational Linguistics.
[Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long](https://doi.org/10.1162/neco.1997.9.8.1735)
[short-term memory. Neural computation, 9:1735–](https://doi.org/10.1162/neco.1997.9.8.1735)
80.
Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu
Zhou, and Jian Yin. 2018. [Using intermediate](https://doi.org/10.18653/v1/P18-1039)
[representations to solve math word problems.](https://doi.org/10.18653/v1/P18-1039) In
_Proceedings of the 56th Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 419–428, Melbourne, Australia._
Association for Computational Linguistics.
[Diederik P Kingma and Jimmy Ba. 2014. Adam: A](https://arxiv.org/abs/1412.6980v5)
[method for stochastic optimization. arXiv preprint](https://arxiv.org/abs/1412.6980v5)
_arXiv:1412.6980._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
Bing Tian Dai, and Dongxiang Zhang. 2019.
[Modeling intra-relation in math word problems](https://doi.org/10.18653/v1/P19-1619)
[with different functional multi-head attentions. In](https://doi.org/10.18653/v1/P19-1619)
_Proceedings of the 57th Annual Meeting of the_
_Association for Computational Linguistics, pages_
6162–6167, Florence, Italy. Association for Computational Linguistics.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil
Blunsom. 2017. [Program induction by rationale](https://doi.org/10.18653/v1/P17-1015)
[generation: Learning to solve and explain algebraic](https://doi.org/10.18653/v1/P17-1015)
[word problems. In Proceedings of the 55th Annual](https://doi.org/10.18653/v1/P17-1015)
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pages 158–167,_
Vancouver, Canada. Association for Computational
Linguistics.
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke
Kawahara. 2019. [Tree-structured decoding for](https://doi.org/10.18653/v1/D19-1241)
[solving math word problems. In Proceedings of the](https://doi.org/10.18653/v1/D19-1241)
_2019 Conference on Empirical Methods in Natural_
number in the problem with nearby nouns to enrich
the problem representations.
However, these methods rarely take numerical
values into consideration. They replace all the
numbers in the problems with number symbols
and ignore the vital information provided by the
numerical values in math word problem solving.
As such, these methods will incorrectly generates
the same expression for problems with different
numerical values.
**Numerical Value Representations:** Some recent studies have explored the numerical value
representations in language models (Naik et al.,
2019; Chen et al., 2019; Wallace et al., 2019).
Spithourakis and Riedel (2018) investigated several
of the strategies used for language models for their
possible application to model numerals. Gong et al.
(2020) proposed the use of contextual numerical
value representations to enhance neural content
planning by helping models to understand data
values. To incorporate numerical value information
into math word solving tasks, we use a digit-todigit numerical value encoder to obtain the numberaware problem representations. To further utilize
the numerical properties, we propose a numerical
properties prediction mechanism.
**5** **Conclusion**
In this study, we proposed a novel approach called
NumS2T, that better captures numerical value
information and utilizes numerical properties. In
this model, we use a digit-to-digit numerical value
encoder to explicitly incorporate numerical values.
In addition, we designed a numerical properties
prediction mechanism that compares the paired
numerical values, determines the category of each
numeral, and measures whether they should appear
in the final expression. Experimental results show
that our proposed NumS2T model outperforms
other state-of-the-art baseline methods.
**Acknowledgments**
The authors wish to thank the anonymous reviewers
for their helpful comments. This work was partially
funded by China National Key R&D Program
(No. 2018YFB1005100), National Natural Science
Foundation of China (No. 62076069, 61976056),
Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103).
5867
-----
_Language Processing and the 9th International_
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 2370–2379, Hong Kong,_
China. Association for Computational Linguistics.
Jiaju Mei. 1985. _Tongyi ci cilin._ Shangai cishu
chubanshe.
[Arindam Mitra and Chitta Baral. 2016. Learning to use](https://doi.org/10.18653/v1/P16-1202)
[formulas to solve simple arithmetic problems.](https://doi.org/10.18653/v1/P16-1202) In
_Proceedings of the 54th Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 2144–2153, Berlin, Germany._
Association for Computational Linguistics.
Aakanksha Naik, Abhilasha Ravichander, Carolyn
[Rose, and Eduard Hovy. 2019. Exploring numeracy](https://doi.org/10.18653/v1/P19-1329)
[in word embeddings.](https://doi.org/10.18653/v1/P19-1329) In Proceedings of the 57th
_Annual Meeting of the Association for Computa-_
_tional Linguistics, pages 3374–3380, Florence, Italy._
Association for Computational Linguistics.
Subhro Roy and Dan Roth. 2015. [Solving general](https://doi.org/10.18653/v1/D15-1202)
[arithmetic word problems.](https://doi.org/10.18653/v1/D15-1202) In Proceedings of
_the 2015 Conference on Empirical Methods in_
_Natural Language Processing, pages 1743–1752,_
Lisbon, Portugal. Association for Computational
Linguistics.
Georgios Spithourakis and Sebastian Riedel. 2018.
[Numeracy for language models:](https://doi.org/10.18653/v1/P18-1196) Evaluating and
[improving their ability to predict numbers.](https://doi.org/10.18653/v1/P18-1196) In
_Proceedings of the 56th Annual Meeting of the_
_Association for Computational Linguistics (Volume_
_1:_ _Long Papers), pages 2104–2115, Melbourne,_
Australia. Association for Computational Linguistics.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky,
Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
[Dropout: A simple way to prevent neural networks](http://jmlr.org/papers/v15/srivastava14a.html)
[from overfitting.](http://jmlr.org/papers/v15/srivastava14a.html) _Journal of Machine Learning_
_Research, 15(56):1929–1958._
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014.
[Sequence to sequence learning with neural networks.](https://proceedings.neurips.cc/paper/2014/file/a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf)
In Advances in Neural Information Processing
_Systems, volume 27. Curran Associates, Inc._
Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova,
Adriana Romero, Pietro Li`o, and Yoshua Bengio.
[2018. Graph attention networks. In International](https://openreview.net/forum?id=rJXMpikCZ)
_Conference on Learning Representations._
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh,
and Matt Gardner. 2019. [Do NLP models know](https://doi.org/10.18653/v1/D19-1534)
numbers? [probing numeracy in embeddings.](https://doi.org/10.18653/v1/D19-1534) In
_Proceedings of the 2019 Conference on Empirical_
_Methods in Natural Language Processing and the_
_9th International Joint Conference on Natural_
_Language Processing (EMNLP-IJCNLP), pages_
5307–5315, Hong Kong, China. Association for
Computational Linguistics.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. [Translating a math](https://doi.org/10.18653/v1/D18-1132)
[word problem to a expression tree. In Proceedings](https://doi.org/10.18653/v1/D18-1132)
_of the 2018 Conference on Empirical Methods in_
_Natural Language Processing, pages 1064–1069,_
Brussels, Belgium. Association for Computational
Linguistics.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b.
[Mathdqn: Solving arithmetic word problems via](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16749)
[deep reinforcement learning.](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16749)
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing
Xu, Lianli Gao, Bing Tian Dai, and Heng Shen.
[2019. Template-based math word problem solvers](https://doi.org/10.1609/aaai.v33i01.33017144)
[with recursive neural networks. Proceedings of the](https://doi.org/10.1609/aaai.v33i01.33017144)
_AAAI Conference on Artificial Intelligence, 33:7144–_
7151.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang,
and Ming Zhou. 2017a. [Gated self-matching](https://doi.org/10.18653/v1/P17-1018)
[networks for reading comprehension and question](https://doi.org/10.18653/v1/P17-1018)
[answering.](https://doi.org/10.18653/v1/P17-1018) In Proceedings of the 55th Annual
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pages 189–198,_
Vancouver, Canada. Association for Computational
Linguistics.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017b.
[Deep neural solver for math word problems.](https://doi.org/10.18653/v1/D17-1088) In
_Proceedings of the 2017 Conference on Empirical_
_Methods in Natural Language Processing, pages_
845–854, Copenhagen, Denmark. Association for
Computational Linguistics.
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuanjing
[Huang. 2020. A knowledge-aware sequence-to-tree](https://doi.org/10.18653/v1/2020.emnlp-main.579)
[network for math word problem solving. In Proceed-](https://doi.org/10.18653/v1/2020.emnlp-main.579)
_ings of the 2020 Conference on Empirical Methods_
_in Natural Language Processing (EMNLP), pages_
7137–7146, Online. Association for Computational
Linguistics.
[Zhipeng Xie and Shichao Sun. 2019. A goal-driven](https://doi.org/10.24963/ijcai.2019/736)
[tree-structured neural model for math word prob-](https://doi.org/10.24963/ijcai.2019/736)
[lems.](https://doi.org/10.24963/ijcai.2019/736) In Proceedings of the Twenty-Eighth Inter_national Joint Conference on Artificial Intelligence,_
_IJCAI-19, pages 5299–5305. International Joint_
Conferences on Artificial Intelligence Organization.
Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei
Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a.
[Teacher-student networks with multiple decoders](https://doi.org/10.24963/ijcai.2020/555)
[for solving math word problem.](https://doi.org/10.24963/ijcai.2020/555) In Proceedings
_of the Twenty-Ninth International Joint Conference_
_on Artificial Intelligence, IJCAI-20, pages 4011–_
4017. International Joint Conferences on Artificial
Intelligence Organization. Main track.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
[Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graph-](https://doi.org/10.18653/v1/2020.acl-main.362)
[to-tree learning for solving math word problems.](https://doi.org/10.18653/v1/2020.acl-main.362)
In Proceedings of the 58th Annual Meeting of the
_Association for Computational Linguistics, pages_
5868
-----
3928–3937, Online. Association for Computational
Linguistics.
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and
[Jingming Liu. 2020. Ape210k: A large-scale and](https://github.com/Chenny0808/ape210k)
[template-rich dataset of math word problems. arXiv](https://github.com/Chenny0808/ape210k)
_preprint arXiv:2009.11506._
5869
-----
| [
"Qinzhuo, Wu",
"Qi, Zhang",
"Chengqing, Zong",
"Zhongyu, Wei",
"Fei, Xia",
"Wenjie, Li",
"Xuanjing, Huang",
"Roberto, Navigli"
] | 2021-08-01T00:00:00 | ACL 2021 Long Papers | true | 49 | 8 | null | https://aclanthology.org/2021.acl-long.455 | null | https://www.semanticscholar.org/paper/ec15ff1fc5c9780fd91902f286a9e3fcd00b890d |
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | In this paper, we present an innovative process-oriented math process reward model called Math-shepherd, which assigns a reward score to each step of math problem solutions. The training of Math-shepherd is achieved using automatically constructed process-wise supervision data, breaking the bottleneck of heavy reliance on manual annotation in existing work. We explore the effectiveness of Math-shepherd in two scenarios: 1) Verification: Math-shepherd is utilized for reranking multiple outputs generated by Large Language Models (LLMs); 2) Reinforcement Learning (RL): Math-shepherd is employed to reinforce LLMs.With Math-shepherd, a series of open-source LLMs demonstrates exceptional performance. For instance, process RL with Math-shepherd significantly enhances Mistral-7B (77.9%→84.1% on GSM8K and 28.6%→33.0% on MATH).The accuracy can be further improved to 89.1% and 43.5% on two benchmarks with verification of Math-shepherd.We believe that automatic process supervision holds significant potential for the future evolution of LLMs. | An innovative process-oriented math process reward model called Math-Shepherd, which assigns a reward score to each step of math problem solutions, which holds significant potential for the future evolution of LLMs. | # Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
**Peiyi Wang[1]** **Lei Li[3]** **Zhihong Shao[4]** **Runxin Xu[2]** **Damai Dai[1]** **Yifei Li[5]**
**Deli Chen[2]** **Yu Wu[2]** **Zhifang Sui[1]**
1State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University.
2DeepSeek-AI 3The University of Hong Kong
4Tsinghua University 5The Ohio State University
{wangpeiyi9979, nlp.lilei}@gmail.com
[email protected] [email protected]
**Abstract**
In this paper, we present an innovative processoriented math process reward model called
**MATH-SHEPHERD, which assigns a reward**
score to each step of math problem solutions.
The training of MATH-SHEPHERD is achieved
using automatically constructed process-wise
supervision data, breaking the bottleneck of
heavy reliance on manual annotation in existing
work. We explore the effectiveness of MATHSHEPHERD in two scenarios: 1) Verification:
MATH-SHEPHERD is utilized for reranking
multiple outputs generated by Large Language
Models (LLMs); 2) Reinforcement Learning
_(RL): MATH-SHEPHERD is employed to rein-_
force LLMs. With MATH-SHEPHERD, a series of open-source LLMs demonstrates exceptional performance. For instance, process RL
with MATH-SHEPHERD significantly enhances
Mistral-7B (77.9%→84.1% on GSM8K and
28.6%→33.0% on MATH). The accuracy can
be further improved to 89.1% and 43.5% on
two benchmarks with verification of MATHSHEPHERD. We believe that automatic process
supervision holds significant potential for the
future evolution of LLMs.
**1** **Introduction**
Large language models (LLMs) have demonstrated
remarkable capabilities across various tasks (Park
et al., 2023; Kaddour et al., 2023; Song et al.; Li
et al., 2023a; Wang et al., 2023a; Chen et al., 2023;
Zheng et al., 2023; Wang et al., 2023c), However,
even the most advanced LLMs face challenges in
complex multi-step mathematical reasoning problems (Lightman et al., 2023; Huang et al., 2023).
To address this issue, prior research has explored
different methodologies, such as pre-training (Azerbayev et al., 2023), fine-tuning (Luo et al., 2023;
Yu et al., 2023b; Wang et al., 2023b), prompting
(Wei et al., 2022; Fu et al., 2022), and verification
(Wang et al., 2023d; Li et al., 2023b; Zhu et al.,
2023; Leviathan et al., 2023). Among these tech
niques, verification has recently emerged as a favored method. The motivation behind verification
is that relying solely on the top-1 result may not
always produce reliable outcomes. A verification
model can rerank candidate responses, ensuring
higher accuracy and consistency in the outputs of
LLMs. In addition, a good verification model can
also offer invaluable feedback for further improvement of LLMs (Uesato et al., 2022; Wang et al.,
2023b; Pan et al., 2023).
The verification models generally fall into the
outcome reward model (ORM) (Cobbe et al., 2021;
Yu et al., 2023a) and process reward model (PRM)
(Li et al., 2023b; Khalifa et al., 2023; Uesato et al.,
2022; Lightman et al., 2023; Ma et al., 2023). The
ORM assigns a confidence score based on the entire
generation sequence, whereas the PRM evaluates
the reasoning path step-by-step. PRM is advantageous due to several compelling reasons. One major benefit is its ability to offer precise feedback by
identifying the specific location of any errors that
may arise, which is a valuable signal in reinforcement learning and automatic correction. Besides,
The PRM exhibits similarities to human behavior
when assessing a reasoning problem. If any steps
contain an error, the final result is more likely to
be incorrect, mirroring the way human judgment
works. However, gathering data to train a PRM can
be an arduous process. (Uesato et al., 2022) and
(Lightman et al., 2023) utilize human annotators to
provide process supervision annotations, enhancing
the performance of PRM. Nevertheless, annotation
by humans, particularly for intricate multi-step reasoning tasks that require advanced annotator skills,
can be quite costly, which hinders the advancement
and practical application of PRM.
To tackle the problem, in this paper, we propose an automatic process annotation framework.
Inspired by Monte Carlo Tree Search (Kocsis &
Szepesvári, 2006; Coulom, 2006; Silver et al.,
2016; Swiechowski et al.[´], 2023), we define the
9426
-----
**2** **Related Works**
**Improving and eliciting mathematical reasoning**
**abilities of LLMs.** Mathematical reasoning tasks
are one of the most challenging tasks for LLMs.
Researchers have proposed various methods to improve or elicit the mathematical reasoning ability
of LLMs, which can be broadly divided into three
groups: 1) pre-training: The pre-training methods
(OpenAI, 2023; Anil et al., 2023; Touvron et al.,
2023; Azerbayev et al., 2023) pre-train LLMs on
a vast of datasets that are related to math problems, such as the Proof-Pile and ArXiv (Azerbayev
et al., 2023) with a simple next token prediction
objective. 2) fine-tuning: The fine-tuning methods (Yu et al., 2023b; Luo et al., 2023; Yue et al.,
2023; Wang et al., 2023b; Gou et al., 2023) can
also enhance the mathematical reasoning ability
of LLMs. The core of fine-tuning usually lies in
constructing high-quality question-response pair
datasets with a chain-of-thought reasoning process.
and 3) prompting: The prompting methods (Wei
et al., 2022; Zhang et al., 2023; Fu et al., 2022;
Bi et al., 2023) aim to elicit the mathematical reasoning ability of LLMs by designing prompting
strategy without updating the model parameters,
which is very convenient and practical.
**Mathematical reasoning verification for LLMs.**
Except for directly improving and eliciting the
mathematical reasoning potential of LLMs, the
reasoning results can be boosted via an extra verifier for selecting the best answer from multiple
decoded candidates. There are two primary types
of verifiers: the Outcome Reward Model (ORM)
and the Process Reward Model (PRM). The ORM
allocates a score to the entire solution while the
PRM assigns a score to each individual step in
the reasoning process. Recent findings by (Lightman et al., 2023) suggest that PRM outperforms
ORM. In addition to verification, reward models
can offer invaluable feedback for further training
of generators (Uesato et al., 2022; Pan et al., 2023).
Compared to ORM, PRM provides more detailed
feedback, demonstrating greater potential to enhance generator (Wu et al., 2023). However, training a PRM requires access to expensive humanannotated datasets (Uesato et al., 2022; Lightman
et al., 2023), which hinders the advancement and
practical application of PRM. Therefore, we aim to
build a PRM for mathematical reasoning without
human annotation, and we explore the effective
quality of an intermediate step as its potential to
deduce the correct final answer. By leveraging the
correctness of the answer, we can automatically
gather step-wise supervision. Specifically, given
a math problem with a golden answer and a stepby-step solution, to achieve the label of a specific
step, we utilize a fine-tuned LLM to decode multiple subsequent reasoning paths from this step. We
further validate whether the decoded final answer
matches with the golden answer. If a reasoning
step can deduce more correct answers than another,
it would be assigned a higher correctness score.
We use this automatic way to construct the
training data for MATH-SHEPHERD, and verify our ideas on two widely used mathematical
benchmarks, GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021). We explore the effectiveness of MATH-SHEPHERD in two scenarios:
1) verification: MATH-SHEPHERD is utilized for
reranking multiple outputs generated by LLMs; 2)
reinforcement learning: MATH-SHEPHERD is employed to reinforce LLMs with step-by-step PPO.
With the verification of MATH-SHEPHERD, a series
of open-source LLMs from 7B to 70B demonstrates
exceptional performance. For instance, the step-bystep PPO with MATH-SHEPHERD significantly improves the accuracy of Mistral-7B (77.9%→84.1%
on GSM8K and 28.6%→33.0% on MATH). The
accuracy can be further enhanced to 89.1% and
43.5% on GSM8K and MATH with verification.
DeepSeek 67B (DeepSeek, 2023) achieves accuracy rates of 93.3% on the GSM8K dataset and
48.1% on the MATH dataset with verification of
MATH-SHEPHERD. To the best of our knowledge,
these results are unprecedented for open-source
models that do not rely on additional tools.
Our main contributions are as follows: 1) We
propose a framework to automatically construct
process supervision datasets without human annotations for math reasoning tasks; 2) We evaluate
our method on both step-by-step verification and
reinforcement learning scenarios. Extensive experiments on two widely used mathematical benchmarks - GSM8K and MATH, in addition to a series
of LLMs ranging from 7B to 70B, demonstrate the
effectiveness of our method; 3) We empirically analyze the key factors for training high-performing
process reward models, shedding light on future
directions toward improving reasoning capability
with automatic step-by-step verification and supervision.
9427
-----
ness of the automatic PRM with both verification
and reinforcement learning scenarios.
**3** **Methodology**
In this section, we first present our task formulation to evaluate the performance of reward models
(§3.1). Subsequently, we outline two typical categories of reward models, ORM and PRM(§3.2).
Then, we introduce our methodology to automatically build the training dataset for PRM(§3.3),
breaking the bottleneck of heavy reliance on manual annotation in existing work (Uesato et al., 2022;
Lightman et al., 2023).
**3.1** **Task Formulation**
We evaluate the performance of the reward model
in two scenarios:
**Verification** Following (Lightman et al., 2023),
we consider a best-of-N selection evaluation
paradigm. Specifically, given a problem p in the
testing set, we sample N candidate solutions from a
generator. These candidates are then scored using
a reward model, and the highest-scoring solution is
selected as the final answer. An enhanced reward
model elevates the likelihood of selecting the solution containing the correct answer, consequently
raising the success rate in solving mathematical
problems for LLMs.
**Reinforcement learning** We also use the automatically constructed PRM to supervise LLMs
with step-by-step RL. In this scenario, we evaluate the accuracy of the LLMs’ greedy decoding
output. An enhanced reward model is instrumental
in training higher-performing LLMs.
**3.2** **Reward Models**
**ORM** Given a mathematical problem p and its
solution s, ORM (P ×S → R) assigns a single realvalue to s to indicate whether s is correct. ORM
is usually trained with a cross-entropy loss (Cobbe
et al., 2021; Li et al., 2023b):
_LORM = −(ys log rs + (1 −_ _ys) log(1 −_ _rs)),_ (1)
where ys is the golden answer of the solution s,
_ys = 1 if s is correct, otherwise ys = 0. rs is the_
sigmoid score of s assigned by ORM. The success
of the reward model hinges on the effective construction of the high-quality training dataset. As the
math problem usually has a certain answer, we can
automatically construct the training set of ORM by
two steps: 1) sampling some candidate solutions
for a problem from a generator; 2) assigning the label to each sampling solution by checking whether
its answer is correct. Although false positives solutions that reach the correct answer with incorrect
reasoning will be misgraded, previous studies have
proven that it is still effective for training a good
ORM (Lightman et al., 2023; Yu et al., 2023a).
**PRM** Take a step further, PRM (P × S → R[+])
assigns a score to each reasoning step of s, which
is usually trained with:
_ysi log rsi + (1 −_ _ysi_ ) log(1 − _rsi_ ), (2)
_i=1_
X
_LP RM = −_
where ysi is the golden answer of si (the i-th step
of s), rsi is the sigmoid score of si assigned by
PRM and K is the number of reasoning steps for
_s._ (Lightman et al., 2023) also conceptualizes
the PRM training as a three-class classification
problem, in which each step is classified as either
‘good’, ‘neutral’, or ‘bad’. In this paper, we found
that there is not much difference between the binary
and the three classifications, and we regard PRM
training as the binary classification. Compared to
ORM, PRM can provide more detailed and reliable
feedback (Lightman et al., 2023). However, there
are currently no automated methods available for
constructing high-quality PRM training datasets.
Previous works (Uesato et al., 2022; Lightman
et al., 2023) typically resort to costly human annotations. The annotation cost invariably impedes
both the development and application of PRM.
**3.3** **Automatic Process Annotation**
In this section, we propose an automatic process
annotation framework to mitigate the annotation
cost issues associated with PRM. We first define the
quality of a reasoning step, followed by the introduction of our solution that obviates the necessity
for human annotation.
**3.3.1** **Definition**
Inspired by Monto Carlo Tree Search (Kocsis &
Szepesvári, 2006; Coulom, 2006; Silver et al.,
2016; Swiechowski et al.[´], 2023), we define the
quality of a reasoning step as its potential to deduce
_the correct answer. This criterion stems from the_
primary objective of the reasoning process, which
essentially is a cognitive procedure aiding humans
or intelligent agents in reaching a well-founded
outcome (Huang & Chang, 2023). Therefore, a
9428
-----
**Problem: Let 𝑝𝑝(𝑥𝑥) be a monic polynomial of degree 4. Three**
**Golden Answer: 24**
of the roots of p(x) are 1, 2, and 3. Find p(0) + p(4).
**Solution: 𝑺𝑺= 𝒔𝒔𝟏𝟏, 𝒔𝒔𝟐𝟐, 𝒔𝒔𝟑𝟑, ⋯,** 𝒔𝒔𝑲𝑲 **Answer: 20** ✗ **(a) Outcome Annotation: 𝒚𝒚𝑺𝑺** = 𝟎𝟎
**Problem: ….** 𝒔𝒔𝟐𝟐,𝟏𝟏 𝒔𝒔𝟑𝟑,𝟏𝟏 ⋯ 𝒔𝒔𝑲𝑲𝟏𝟏,𝟏𝟏 **Answer: 24** ✓
𝒔𝒔𝟏𝟏: Since three of the
roots of p(x) are 1, 2, and 𝒔𝒔𝟐𝟐,𝟐𝟐 𝒔𝒔𝟑𝟑,𝟐𝟐 ⋯ 𝒔𝒔𝑲𝑲𝟐𝟐,𝟐𝟐 **Answer: 24** ✓
3, we can write : p(x) =
(x - 1)(x - 2)(x - 3)(x - r). 𝒔𝒔𝟐𝟐,𝟑𝟑 𝒔𝒔𝟑𝟑,𝟑𝟑 ⋯ 𝒔𝒔𝑲𝑲𝟑𝟑,𝟑𝟑 **Answer: 20✗**
**(b): Process Annotation: 𝒚𝒚𝑺𝑺𝑺𝑺𝒔𝒔𝟏𝟏** = [𝟐𝟐]𝟑𝟑[ ; 𝒚𝒚]𝑯𝑯𝑯𝑯[𝒔𝒔][𝟏𝟏] = 𝟏𝟏
𝒔𝒔𝒊𝒊: the i-th step of the solution 𝑺𝑺. 𝒔𝒔𝒊𝒊,𝒋𝒋: the i-th step of the j-th finalized solution.
Figure 1: Comparison for previous automatic outcome annotation and our automatic process annotation. (a):
automatic outcome annotation assigns a label to the entire solution S, dependent on the correctness of the answer;
(b) automatic process annotation employs a ‘completer’ to finalize N reasoning processes (N=3 in this figure) for an
intermediate step (s1 in this figure), subsequently use hard estimation (HE) and soft estimation (SE) to annotate this
step based on all decoded answers.
step that has the potential to deduce a well-founded
result can be considered a good reasoning step.
Analogous to ORM, this definition also introduces
some degree of noise. Nevertheless, we find that it
is beneficial for effectively training a good PRM.
**3.3.2** **Solution**
**Completion** To quantify and estimate the po_tential for a give reasoning step si, as shown_
in Figure 1, we use a ‘completer’ to finalize N
subsequent reasoning processes from this step:
_{(si+1,j, · · ·, sKj_ _,j, aj)}j[N]=1[, where][ a][j][ and][ K][j][ are]_
the decoded answer and the total number of steps
for the j-th finalized solution, respectively. Then,
we estimate the potential of this step based on the
correctness of all decoded answers A = {aj}j[N]=1[.]
**Estimation** In this paper, we use two methods to
estimate the quality ysi for the step si, hard estimation (HE) and soft estimation (SE). HE supposes
that a reasoning step is good as long as it can reach
the correct answer a[∗]:
_ys[HE]i_ = 10 _∃aj ∈_ _A, aOtherwisej = a[∗]_ (3)
SE assumes the quality of a step as the frequency
with which it reaches the correct answer:
our automatic process annotation framework defines the quality of a step as its potential to deduce
the correct answer and achieve the label of each
step by completion and estimation.
**3.4** **Ranking for Verification**
Following (Lightman et al., 2023), we use the minimum score across all steps to represent the final
score of a solution assigned by PRM. We also explore the combination of self-consistency and reward models following (Li et al., 2023b). In this
context, we initially classify solutions into distinct
groups according to their final answers. Following that, we compute the aggregate score for each
group. Formally, the final prediction answer based
on N candidate solutions is:
I(ai = a) · RM (p, Si). (5)
_i=1_
X
_asc+rm = arg max_
Where RM (p, Si) is the score of the i-th solution
assigned by ORM or PRM for problem p.
**3.5** **Reinforcement Learning**
Upon achieving PRM, we employ reinforcement
learning to train LLMs. We implement Proximal
Policy Optimization (PPO) in a step-by-step manner. This method differs from the conventional outcome RL with ORM, which only offers a reward
at the end of the response (Ouyang et al., 2022).
Conversely, process RL offers rewards at the end
of each reasoning step. Formally, for the response
_N_
_j=1_ [I][(][a][j][ =][ a][∗][)]
_N_
P
_ys[SE]i_
(4)
Once we gather the label of each step, we can train
PRM with the cross-entropy loss. In conclusion,
9429
-----
with n tokens, Outcome RL provides the reward at
the last token:
GSM8K and 270k solutions for MATH. For verification, we choose LLaMA2-70B and LLemma34B as the base models to train reward models for
GSM8K and MATH, respectively. For reinforcement learning, we choose Mistral-7B as the base
model to train reward models and use it to supervise LLama2-7B and Mistral-7B generators. The
reward model is trained in 1 epoch with a learning rate 1e-6. For the sake of convenience, we
train the PRM using the hard estimation version
because it allows us to utilize a standard language
modeling pipeline by selecting two special tokens
to represent ‘has potential’ and ‘no potential’ labels, thereby eliminating the need for any specific
model adjustments. In reinforcement learning, the
learning rate is 4e-7 and 1e-7 for LLaMA2-7B and
Mistral-7B, respectively. The Kullback-Leibler coefficient is set to 0.04. We implement a cosine
learning rate scheduler, employing a minimal learning rate set to 1e-8. We use HAI-LLM (High-flyer,
2023) to train all models with the max sequence
length of 512.
**Baselines and Metrics** In the verification scenario, following (Lightman et al., 2023), we evaluate the performance of our reward model by comparing it against the Self-consistency (majority voting) and outcome reward model. The accuracy of
the best-of-N solution is utilized as the evaluation
metric. For PRM, the minimum score across all
steps is adopted to represent the final score of a
solution. In the reinforcement scenario, we compare our step-by-step supervision with the outcome
supervision provided by ORM, and Rejective Sampling Fine-tuning (RFT) (Yuan et al., 2023), we
sample 8 responses for each question in MetaMATH for RFT. We use the accuracy of LLMs’
greedy decoding output to assess the performance.
**4.1** **Main Results**
**MATH-SHEPHERD as verifier** Table 1 presents
the performance comparison of various methods on
GSM8K and MATH. We find that: 1) As the verifier, MATH-SHEPHERD consistently outperforms
self-consistency and ORM on two datasets with
all generators. Specifically, enhanced by MATHSHEPHERD, DeepSeek-67B achieves 93.3% and
48.1% accuracy on GSM8K and MATH; 2) In comparison to GSM8K, PRM achieves a greater advantage over ORM on the more challenging MATH
dataset; This outcome aligns with the findings in
(Uesato et al., 2022) and (Lightman et al., 2023).
_t_ = n 0
_̸_ (6)
_t = n_ _rORM_ _[,]_
_rt =_
while process RL provides rewards for each step:
_t /_ EOS 0
_∈_ _,_ (7)
_t ∈_ EOS _rPRMt_
_rt =_
where EOS denotes the set of indices that correspond to the end of each step.
**4** **Experiments**
**Datasets** We conduct our experiments using two
widely used math reasoning datasets, GSM8K
(Cobbe et al., 2021) and MATH (Hendrycks et al.,
2021). For the GSM8K dataset, we leverage the
whole test set in both verification and reinforcement learning scenarios. For the MATH dataset, in
the verification scenario, due to the computation
cost, we employ a subset MATH500 that is identical to the test set of Lightman et al. (2023). The
subset consists of 500 representative problems, and
we find that the subset evaluation produces similar
results to the full-set evaluation. To assess different
verification methods, we generate 256 candidate solutions for each test problem. We report the mean
accuracy of 3 groups of sampling results. In the
reinforcement learning scenario, we use the whole
test set to evaluate the model performance. We
train LLMs with MetaMATH (Yu et al., 2023b).
**Parameter Setting** Our experiments are based
on a series of large language models, LLaMA27B/13B/70B (Touvron et al., 2023), LLemma7B/34B (Azerbayev et al., 2023), Mistral-7B (Jiang
et al., 2023) and DeepSeek-67B (DeepSeek, 2023).
We train the generator and completer for 3 epochs
on MetaMATH. We train the Mistral-7B with a
learning rate of 5e-6. For other models, The learning rates are set to 2e-5, 1e-5, and 6e-6 for the
7B/13B, 34B, and 67B/70B LLMs, respectively.
To construct the training dataset of ORM and PRM,
we train 7B and 13B models for a single epoch
on the GSM8K and MATH training sets. Subsequently, we sample 15 solutions per problem
from each model for the training set. Following
this, we eliminate duplicate solutions and annotate
the solutions at each step. We use LLemma-7B
as the completer with the decoded number N=8.
Consequently, we obtain around 170k solutions for
9430
-----
**Models** **Verifiers** **GSM8K** **MATH500**
Self-Consistency 88.0 39.4
ORM 91.8 40.4
Self-Consistency + ORM 92.0 42.0
MATH-SHEPHERD (Ours) 93.2 44.5
Self-Consistency + MATH-SHEPHERD (Ours) 92.4 45.2
Self-Consistency 82.6 44.2
ORM 90.0 43.7
Self-Consistency + ORM 89.6 45.4
MATH-SHEPHERD (Ours) 90.9 46.0
Self-Consistency + MATH-SHEPHERD (Ours) 89.7 47.3
Self-Consistency 88.2 45.4
ORM 92.6 45.3
Self-Consistency + ORM 92.4 47.0
MATH-SHEPHERD (Ours) **93.3** 47.0
Self-Consistency + MATH-SHEPHERD (Ours) 92.5 **48.1**
LLaMA2-70B: MetaMATH
LLemma-34B: MetaMATH
DeepSeek-67B: MetaMATH
Table 1: Performances of different LLMs on GSM8K and MATH with different verification strategies. The reward
models are trained based on LLama2-70B and LLemma-34B on GSM8K and MATH, respectively. The verification
is based on 256 outputs. We report the mean accuracy of 3 groups of sampling results.
**Models** **Verifiers** **GSM8K** **MATH500**
Self-Consistency 83.9 35.1
ORM 86.2 36.4
Self-Consistency + ORM 86.6 38.0
MATH-SHEPHERD (Ours) 87.1 37.3
Self-Consistency + MATH-SHEPHERD (Ours) 86.3 38.3
Mistral-7B: MetaMATH
Self-Consistency 87.4 42.3
Mistral-7B: MetaMATH
ORM 87.6 41.3
Self-Consistency + ORM 89.0 43.1
+Process RL (Ours)
MATH-SHEPHERD (Ours) 88.4 41.1
Self-Consistency + MATH-SHEPHERD (Ours) **89.1** **43.5**
Table 2: Results of reinforcement learning and verification combination. The reward models are trained based on
Mistral-7B. The verification is based on 256 outputs. We report the mean accuracy of 3 groups of sampling results.
The former discovers that PRM and ORM yield
similar results on GSM8K, whereas the latter shows
that PRM significantly outperforms ORM on the
MATH dataset. This could be attributed to the relative simplicity of the GSM8K dataset compared
to MATH, i.e., the GSM8K dataset necessitates
fewer steps for problem-solving. As a result, ORM
operates efficiently when handling this particular
dataset; 3) In GSM8K, when combined with selfconsistency, there’s a drop in performance, whereas
in MATH, performance improves. These results
indicate that if the reward model is sufficiently powerful for a task, combining it with self-consistency
may harm the verification performance.
**MATH-SHEPHERD as reward model on rein-**
**forcement learning** Table 3 presents the performance of different LLMs with greedy decoding outputs. As is shown: 1) RL with process supervision
significantly improves the performance of two supervised fine-tuned models. For example, Mistral7B with Process RL achieves 84.1% and 33.0% on
the GSM8K and MATH datasets, respectively; 2)
RFT only slightly improves the model performance,
we believe this is because MetaMATH already has
conducted some data augmentation strategies like
RFT; 3) Outcome RL can also enhance the model
performance. However, it does not perform as well
as the Process RL with MATH-SHEPHERD, demonstrating the potential of our method.
**MATH-SHEPHERD as both reward models and**
**verifiers** We also combine RL and verification.
As shown in Table 2: 1) RL and verification are
complementary. For example, in MATH, Mistral7B utilizing process RL outperforms supervised
fine-tuning Mistral-7B 7.2% accuracy with selfconsistency as the verifier; The performance gap
9431
-----
MATH500
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||SC ORM PRM800K||
|||||SHEPHERD||
4 16 64 256
SC
ORM
PRM800K
SHEPHERD
N = number of solutions per problem
GSM8K
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|||||||||
|||||||||
||||||SC ORM|||
||||||SHEPHERD|||
4 16 64 256
SC
ORM
SHEPHERD
N = number of solutions per problem
45
40
92.5
90.0
87.5
85.0
82.5
80.0
35
30
Figure 2: Performance of LLaMA2-70B using different verification strategies across different numbers of solution
candidates on GSM8K and MATH.
annotated PRM800K (Lightman et al., 2023). We
ascribe this superiority to the distribution gap and
the data quantity. Specifically, PRM800K is annotated based on the output from GPT-4, and consequently, a discrepancy arises for the output of opensource LLaMA models fine-tuned on MetaMATH.
Furthermore, when considering the quantity of data,
our automated reward model data exhibits both
high scalability and a reduced labeling cost. Consequently, our dataset is four times larger than that
provided in PRM800K. Overall, these results further underscore the effectiveness and potential of
our method. We also explore the performance of
different verification strategies on different sizes
of generators and verifiers from 7B to 70B. Please
refer to Appendix A for details.
**5.2** **Quality of Automatic Process Annotations**
In this section, we explore the quality of our automatic PRM dataset. To achieve this, we manually
annotate 160 steps sampled from the training set of
GSM8K and use different completers to infer from
each step to achieve their label. We find that:
**Automatic process annotation exhibits satisfac-**
**tory quality.** Figure 3(a) demonstrates that utilizing LLaMA2-70B trained on MetaMATH as the
completer, the accuracy of the hard estimation (HE)
reaches 86% when N equals 4. This suggests that
our automatically constructed dataset is of high
quality. However, we observed a decline in the
accuracy of the constructed dataset with further
increases in N. Our analysis indicates that larger
values for N may lead to false positives.
Figure 3(b) shows the cross-entropy loss between SE and HE labels compared to the humanannotated distribution: as N increases, SE progres
**Models** **GSM8K** **MATH**
LLaMA2-7B: MetaMATH 66.6 19.2
+ RFT 68.5 19.9
+ Outcome RL 70.8 20.8
+ Process RL 73.2 21.6
Mistral-7B: MetaMATH 77.9 28.6
+ RFT 79.0 29.9
+ Outcome RL 81.8 31.3
+ Process RL **84.1** **33.0**
Table 3: Performances of different models with greedy
decoding. We use the questions in MetaMATH for RFT
and RL. Both LLaMA2-7B and Mistral-7B are supervised by Mistral-7B-ORM and -MATH-SHEPHERD.
is even larger than that of greedy decoding results,
i.e., 4.4%; 2) after RL, the vanilla verification methods with only reward models are inferior to selfconsistency, we think the reason is that the initial
reward model is not sufficient to supervise the RLenhanced model. These results can show the potential of iterative RL, which we leave for future
work.
**5** **Analysis**
**5.1** **Performance with Different Number of**
**Candidate Solutions**
Figure 2 illustrates the performance comparison of
various strategies when applied to different numbers of candidates ranging from 1 to 256 on two
benchmarks. The key observations are as follows:
1) PRM exhibits consistent superior performance
when compared to both ORM and majority voting,
with the degree of this superiority becoming more
pronounced as N escalates. 2) In MATH, our automatically annotated datasets outperform the human
9432
-----
3.0
2.5
2.0
1.5
1.0
86
84
82
80
4 16 64 256
|Col1|Col2|Col3|Col4|Col5|7B 13B 70B|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
7B
13B
70B
N = number of decoded path
4 16 64 256
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
||7B: 13B|Soft :Soft||||
||70B 70B|:Soft :Hard||||
7B:Soft
13B:Soft
70B:Soft
70B:Hard
N = number of decoded path
4 16 64 256
|Col1|Col2|Col3|Col4|N W A|ormal eak ugmented|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
Normal
Weak
Augmented
N = number of decoded path
Figure 3: Quality of process annotation on GSM8K. (a): Accuracy of the process annotation using different
completer; (b): Loss of the process annotation using different completer; (c): Loss of the process annotation using
the same completer with different training data.
completers trained on MetaMath. The results indicate that a larger completer is adept at generating
superior-quality datasets. Figure 3(c) depicts the
cross-entropy loss of LLaMA2-70B trained with
different datasets. ‘Normal’ denotes the original
GSM8K training dataset; ‘Weak’ refers to the Normal set excluding examples whose questions are
in our 160 evaluation set; while ‘Augmented’ symbolizes MetaMath, an augmented version of the
Normal set. The findings suggest that high-quality
training sets allow the model to operate more proficiently as a completer. Importantly, the ‘Weak’ set
exhibits a markedly larger loss than other datasets.
This insight drives us to infer that LLMs should
acquire the questions in advance to enhance their
performance as completers. We can also conjecture
that a stronger foundational model, coupled with
superior training data, could further enhance the
quality of automatic annotation.
**5.3** **Influence of the Number of Data**
**Methods** **Models** **Acc.**
DIVERSE-NLI DeBERTa 61.3
DIVERSE-NLI LLaMA2-13B 75.6
DIVERSE-Rule - 75.0
MATH-SHEPHERD LLaMA2-13B (N = 4) 85.0
Table 4: Performance of MATH-SHEPHERD and
NLI/Rule-based annotation methods (Li et al., 2023b).
sively aligns closer to the standard distribution,
in contrast to HE which does not exhibit similar
behavior. It is essential to note that at N=4, HE
achieves an accuracy of 86%. We can theoretically
attain higher quality data exceeding 86% accuracy
by utilizing SE. However, we discovered that the
performance of the verifier exhibits no substantial
divergence whether trained with either SE or HE.
This may be attributable to the already high-quality
annotations provided by HE.
Furthermore, we also delve into other automatic
process annotation methodologies. For instance,
(Li et al., 2023b) employs a natural language inference (NLI) model and a string match rule to annotate a given step. The NLI-based method annotates
a step as correct if it is entailment with any step
in the reference solutions. The Rule-based method
annotates a step as correct if its support number
precisely matches that of any steps in the reference
solutions. As demonstrated in Table 4, our annotation strategy exhibits substantial superiority over
the two approaches.
**The ability of the LLM completer plays an im-**
**portant role in the data quality.** We employ a
completer to finalize multiple subsequent reasoning processes for a given step. Therefore, we investigate the impact of the LLM completer. Figure
3(b) presents the cross-entropy loss across diverse
We delve deeper into the analysis of PRM and
ORM by utilizing varying quantities of training
data. As depicted in Figure 4(a), it is clear that
PRM exhibits superior data efficiency. Specifically, it outperforms ORM by approximately 4%
accuracy when applying a modestly sized training
dataset (i.e., 10k instances). Furthermore, PRM
seems to have a higher potential ceiling than ORM.
These observations highlight the efficacy of PRM
for verification purposes.
**5.4** **Out-of-distribution Performance**
To further demonstrate the effectiveness of our
method, we conduct an out-of-distribution evaluation on the Hungarian national final exam[1], which
[1https://huggingface.co/datasets/](https://huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam)
[keirp/hungarian_national_hs_finals_exam](https://huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam)
9433
-----
70
60
92
90
50
40
88
30
10k 20k 40k 80k 160k
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
|||||SC ORM SHEPHER|D|
SC
ORM
SHEPHERD
Number of training solutions
Figure 4: (a): Performance of different reward models using different numbers of training data; (b) performance of
different verification strategies on the out-of-distribution Hungarian national exam.
consists of 33 questions. The total score of these
questions is 100. We use the LLemma-34B trained
on MetaMATH to serve as the generator and generate 256 candidate solutions for each question.
We use LLemma-34B-ORM and LLemma-34BPRM to select the solution for each question. As
shown in Figure 4(b): 1) both LLemma-34BORM and LLemma-34B-PRM outperform the origin LLemma-34B, showing the reward model can
generalize to other domains; 2) PRM outperforms
ORM 9 scores, further demonstrating the superiority of PRM. We also conduct a case study to
intuitively demonstrate the effectiveness of MATHSHEPHERD. Please refer to Appendix B for details.
**6** **Conclusion**
increases, so does the quality of automatic annota
|63.0|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
|54.0|||||||
||||||||
|46.0|||||||
||||||||
||||||||
|ffe ar|Greedy rent nu ian natio|mbers nal e|ORM Methods of traini xam.|ng dat|SHEPHERD a; (b) pe|rf|
tions. However, this completion process demands
a lot of computing resources, potentially imposing
a limitation on the usage of our method. Despite
this limitation, the cost remains significantly lower
than human annotation. Furthermore, we are optimistic that advancements in efficient inference
techniques such as speculative decoding (Xia et al.,
2022; Leviathan et al., 2023) and vLLM (Kwon
et al., 2023) could mitigate this limitation.
**The automatic process annotation consists of**
**noise.** Similar to the automatic outcome annotation, our automatic process annotation also has
noise. Despite this, our experiments verify the efficacy of our method for training a PRM. In particular, the PRM trained on our dataset outperforms the
human-annotated PRM800K dataset. However, a
noticeable gap remains between PRM800K and the
candidate responses generated by the open-source
models utilized in this study, which may result in
the invalidation of PRM800K. As a result, the impact of this potential noise on PRM performance
is still undetermined. A comprehensive comparison between human and automated annotations is
envisaged for future studies. Furthermore, we assert that integrating human and automated process
annotations could play a vital role in constructing
robust and efficient process supervision.
In this paper, we introduce a process-oriented math
verifier called MATH-SHEPHERD, which assigns
a reward score to each step of the LLM’s outputs on math problems. The training of MATHSHEPHERD is achieved using automatically constructed process-wise supervision data, thereby
eradicating the necessity for labor-intensive human
annotation. Remarkably, this automatic methodology correlates strongly with human annotations.
Extensive experiments in both verification and reinforcement learning scenarios demonstrate the effectiveness of our method.
**Limitations**
**Acknowledgements**
This paper is supported by the National Key
Research and Development Program of China
2020AAA0106700. The contact author is Zhifang
Sui.
Our paper has some limitations, which we leave for
future work:
**The computational cost of the completion pro-**
**cess.** To determine the label of each reasoning
step, we utilize a ‘completer’ to decode N subsequent reasoning processes. We observe that as N
9434
-----
**References**
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. Palm 2 technical report. arXiv preprint
_arXiv:2305.10403, 2023._
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q
Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
Llemma: An open language model for mathematics.
_arXiv preprint arXiv:2310.10631, 2023._
Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng,
Guozhou Zheng, and Huajun Chen. When do
program-of-thoughts work for reasoning? _arXiv_
_preprint arXiv:2308.15452, 2023._
Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao,
Zefan Cai, Yuchi Wang, Peiyi Wang, Tianyu Liu, and
Baobao Chang. Towards end-to-end embodied decision making via multi-modal large language model:
Explorations with gpt4-vision and beyond. arXiv
_preprint arXiv:2310.02071, 2023._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. Training verifiers to solve math word
problems. arXiv preprint arXiv:2110.14168, 2021.
Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International
_conference on computers and games, pp. 72–83._
Springer, 2006.
DeepSeek. Deepseek llm: Let there be answers.
[https://github.com/deepseek-ai/](https://github.com/deepseek-ai/DeepSeek-LLM)
[DeepSeek-LLM, 2023.](https://github.com/deepseek-ai/DeepSeek-LLM)
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
Tushar Khot. Complexity-based prompting for multistep reasoning. arXiv preprint arXiv:2210.00720,
2022.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang,
Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora:
A tool-integrated reasoning agent for mathematical
problem solving. arXiv preprint arXiv:2309.17452,
2023.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874, 2021._
High-flyer. Hai-llm: Efficient and lightweight training
[tool for large models, 2023. URL https://www.](https://www.high-flyer.cn/en/blog/hai-llm)
[high-flyer.cn/en/blog/hai-llm.](https://www.high-flyer.cn/en/blog/hai-llm)
Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. In Anna
Rogers, Jordan Boyd-Graber, and Naoaki Okazaki
(eds.), Findings of the Association for Computational
_Linguistics: ACL 2023, pp. 1049–1065, Toronto,_
Canada, July 2023. Association for Computational
Linguistics. doi: 10.18653/v1/2023.findings-acl.
67. [URL https://aclanthology.org/](https://aclanthology.org/2023.findings-acl.67)
[2023.findings-acl.67.](https://aclanthology.org/2023.findings-acl.67)
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models
cannot self-correct reasoning yet. _arXiv preprint_
_arXiv:2310.01798, 2023._
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b.
_arXiv preprint arXiv:2310.06825, 2023._
Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy.
Challenges and applications of large language models. arXiv preprint arXiv:2307.10169, 2023.
Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, and Lu Wang. Grace:
Discriminator-guided chain-of-thought reasoning. In
_Findings of the Association for Computational Lin-_
_guistics: EMNLP 2023, pp. 15299–15328, 2023._
Levente Kocsis and Csaba Szepesvári. Bandit based
monte-carlo planning. In European conference on
_machine learning, pp. 282–293. Springer, 2006._
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory
management for large language model serving with
pagedattention. In Proceedings of the 29th Sympo_sium on Operating Systems Principles, pp. 611–626,_
2023.
Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast
inference from transformers via speculative decoding.
In International Conference on Machine Learning,
pp. 19274–19286. PMLR, 2023.
Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi
Wang, Shuhuai Ren, Mukai Li, Yazheng Yang,
Jingjing Xu, Xu Sun, et al. M3it: A large-scale
dataset towards multi-modal multilingual instruction
tuning. arXiv preprint arXiv:2306.04387, 2023a.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Anna Rogers, Jordan Boyd-Graber, and
Naoaki Okazaki (eds.), Proceedings of the 61st An_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pp. 5315–_
5333, Toronto, Canada, July 2023b. Association for
Computational Linguistics. doi: 10.18653/v1/2023.
[acl-long.291. URL https://aclanthology.](https://aclanthology.org/2023.acl-long.291)
[org/2023.acl-long.291.](https://aclanthology.org/2023.acl-long.291)
9435
-----
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
Schulman, Ilya Sutskever, and Karl Cobbe. Let’s
verify step by step. arXiv preprint arXiv:2305.20050,
2023.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large
language models via reinforced evol-instruct. arXiv
_preprint arXiv:2308.09583, 2023._
Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan,
Pengfei Liu, Yang You, and Hongxia Yang. Let’s
reward step by step: Step-level reward model
as the navigators for reasoning. _arXiv preprint_
_arXiv:2310.10080, 2023._
OpenAI. GPT-4 technical report. _CoRR,_
abs/2303.08774, 2023. doi: 10.48550/arXiv.
2303.08774. [URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2303.08774)
[48550/arXiv.2303.08774.](https://doi.org/10.48550/arXiv.2303.08774)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
Training language models to follow instructions with
human feedback. Advances in Neural Information
_Processing Systems, 35:27730–27744, 2022._
Sarah Pan, Vladislav Lialin, Sherin Muckatira, and
Anna Rumshisky. Let’s reinforce step by step. arXiv
_preprint arXiv:2311.05821, 2023._
Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of
human behavior. In Proceedings of the 36th Annual
_ACM Symposium on User Interface Software and_
_Technology, pp. 1–22, 2023._
David Silver, Aja Huang, Chris J Maddison, Arthur
Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the
game of go with deep neural networks and tree search.
_nature, 529(7587):484–489, 2016._
Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li,
Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. corr, abs/2306.06624, 2023.
doi: 10.48550. arXiv preprint arXiv.2306.06624.
Maciej Swiechowski, Konrad Godlewski, Bartosz Saw-[´]
icki, and Jacek Ma´ndziuk. Monte carlo tree search: A
review of recent modifications and applications. Arti_ficial Intelligence Review, 56(3):2497–2562, 2023._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288,
2023.
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell,
Geoffrey Irving, and Irina Higgins. Solving math
word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint
_arXiv:2305.16291, 2023a._
Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai
Lin, Yunbo Cao, Tianyu Liu, and Zhifang Sui. Making large language models better reasoners with alignment. arXiv preprint arXiv:2309.02144, 2023b.
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai
Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui.
Large language models are not fair evaluators. arXiv
_preprint arXiv:2305.17926, 2023c._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves
chain of thought reasoning in language models. In
_The Eleventh International Conference on Learning_
_Representations, ICLR 2023, Kigali, Rwanda, May_
_[1-5, 2023. OpenReview.net, 2023d. URL https:](https://openreview.net/pdf?id=1PL1NIMMrw)_
[//openreview.net/pdf?id=1PL1NIMMrw.](https://openreview.net/pdf?id=1PL1NIMMrw)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le,
and Denny Zhou. Chain-of-thought prompting elicits
reasoning in large language models. In NeurIPS,
2022.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane
Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari
Ostendorf, and Hannaneh Hajishirzi. Fine-grained
human feedback gives better rewards for language
model training. arXiv preprint arXiv:2306.01693,
2023.
Heming Xia, Tao Ge, Furu Wei, and Zhifang Sui.
Lossless speedup of autoregressive translation with
generalized aggressive decoding. _arXiv preprint_
_arXiv:2203.16487, 2022._
Fei Yu, Anningzhe Gao, and Benyou Wang. Outcomesupervised verifiers for planning in mathematical reasoning. arXiv preprint arXiv:2311.09724, 2023a.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath:
Bootstrap your own mathematical questions for large
language models. arXiv preprint arXiv:2309.12284,
2023b.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. _arXiv preprint_
_arXiv:2308.01825, 2023._
9436
-----
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu
Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653, 2023._
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew
Chi-Chih Yao. Cumulative reasoning with large language models. _arXiv preprint arXiv:2308.04371,_
2023.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena.
_arXiv preprint arXiv:2306.05685, 2023._
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang,
Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and
Yujiu Yang. Solving math word problems via cooperative reasoning induced language models. In
Anna Rogers, Jordan Boyd-Graber, and Naoaki
Okazaki (eds.), Proceedings of the 61st Annual
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pp. 4471–4485,_
Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.
[acl-long.245. URL https://aclanthology.](https://aclanthology.org/2023.acl-long.245)
[org/2023.acl-long.245.](https://aclanthology.org/2023.acl-long.245)
9437
-----
**A** **Influence of the Model Size for**
**Verification**
To conduct an exhaustive evaluation of MATHSHEPHERD’s effectiveness, we performed a diverse
range of experiments using model sizes 7B, 13B,
and 70B. Figures 5(a), 5(b), and 2(a) display the results from the 7B, 13B, and 70B generators paired
with equal-sized reward models, respectively. It becomes evident that PRM exhibits superiority over
self-consistency and ORM across all sizes of base
models. Moreover, bigger reward models prove to
be more robust; for instance, the accuracy of the
70B reward models escalates as the number of candidate solutions rises, while the 7B reward models
show a decreasing trend.
Figure 5(c) and 5(d) presents the performance
of 7B and 70B generators interfaced with differentsized reward models. The findings illustrate that
utilizing a larger reward model to validate the output of a smaller generator significantly enhances
performance. Conversely, when a smaller reward
model is employed to validate the output of a larger
generator, the verification process adversely impacts the model’s performance compared to SC.
These results substantiate that we should utilize a
more potent reward model for validating or supervising the generator.
**B** **Case Study**
As outlined in Table 5, when presented with a question from the Hungarian national final exam, our
MATH-SHEPHERD accurately selected the correct
solution from a pool of 256 potential solutions,
which ORM failed. Moreover, MATH-SHEPHERD
displayed superior discernment by precisely identifying incorrect steps within the solutions selected
by ORM. Notably, it recognized errors in Step 2,
Step 6, and Step 9 and so on, and subsequently assigned them lower scores relative to those for steps
present in the correct solutions.
9438
-----
Generator:13B; Verifier:13B
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||SC O|RM|
||||SH|EPHERD|
4 16 64 256
SC
ORM
SHEPHERD
(b) N = number of solutions per problem
Generator:70B; Verifier:7B
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||SC O|RM|
||||SH|EPHERD|
4 16 64 256
SC
ORM
SHEPHERD
(c) N = number of solutions per problem
Generator:7B; Verifier:70B
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||SC O|RM|
||||SH|EPHERD|
4 16 64 256
SC
ORM
SHEPHERD
(d) N = number of solutions per problem
Generator:7B; Verifier:7B
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||SC||
||||O SH|RM EPHERD|
1 4 16 64 256
SC
ORM
SHEPHERD
(a) N = number of solutions per problem
88
87
86
85
84
83
82
81
85
80
75
70
65
80
78
76
74
72
70
68
74
72
70
68
66
64
62
Figure 5: Performance of different verification strategies on different sizes of generators and verifiers.
|Question|The first term of an arithmetic sequence is 18. The sum of the first six terms of the sequence is equal to the sum of the first seven terms. Show that the sum of the first 13 terms of the sequence is 0 and calculate the 13th term of the sequence.|
|---|---|
|M ATH-S H EPH ER D|Step 1: Let the common difference of the arithmetic sequence be d. MATH-SHEPHERD: 0.99. Step 2: The sum of the first six terms of the sequence is 18 + (18 + d) + (18 + 2d) + (18 + 3d) + (18 + 4d) + (18 + 5d) = 108 + 30d. MATH-SHEPHERD: 0.85. ...... Step 6: Therefore, d = −3. MATH-SHEPHERD: 0.85. ...... Step 8: The 13th term of the sequence is 18 −36 = −18. MATH-SHEPHERD: 0.93. ORM Score: 0.73.|
|---|---|
|O R M|Step 1: Let the common difference of the arithmetic sequence be d. MATH-SHEPHERD: 0.99. Step 2: The sum of the first six terms is 18 + (18 + d) + (18 + 2d) + (18 + 3d) + (18 + 4d) + (18 + 5d) = 90 + 15d. MATH-SHEPHERD: 0.53. ...... Step 6: Dividing by −6, we find that d = −2. MATH-SHEPHERD: 0.38. ...... Step 9: The 13th term of the sequence is 18 −26 = −8. MATH-SHEPHERD: 0.38. ORM Score: 0.84.|
|---|---|
Table 5: A case study from the Hungarian national exam. Red text denotes the mistake that ORM fails to detect.
9439
-----
| [
"Yifei, Li",
"Lei, Li",
"Zhihong, Shao",
"Peiyi, Wang",
"Vivek, Srikumar",
"Runxin, Xu",
"Zhifang, Sui",
"Lun-Wei, Ku",
"Yu, Wu",
"Andre, Martins",
"Damai, Dai",
"Deli, Chen"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 49 | 4 | null | https://aclanthology.org/2024.acl-long.510 | https://arxiv.org/abs/2312.08935 | https://www.semanticscholar.org/paper/4ba57555bef02f988f2ed3bab2f102733dc55221 |
Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks | Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. Herein, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers. Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning: a) self-supervised number prediction task predicting both number quantity and number locations; b) commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required; c) program consistency checker computing the semantic loss between predicted equation and target equation to ensure reasonable equation mapping; d) duality exploiting task exploiting the quasi-duality between symbolic equation generation and problem’s part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new largescale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples. Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods. | The Neural-Symbolic Solver (NS-Solver) is proposed to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. | # Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks
**Jinghui Qin[1], Xiaodan Liang[1,2], Yining Hong[3], Jianheng Tang[1]** **and Liang Lin[1,2][∗]**
1
Sun Yat-sen University
2
Dark Matter AI Inc.
3
University of California, Los Angeles
[email protected],{xdliang328,sqrt3tjh}@gmail.com,
[email protected],[email protected]
**Abstract**
Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic
constraints, leading to unexplainable and unreasonable predictions. Herein, we propose
Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks.
Our NS-Solver consists of a problem reader to
encode problems, a programmer to generate
symbolic equations, and a symbolic executor
to obtain answers. Along with target expression supervision, our solver is also optimized
via 4 new auxiliary objectives to enforce different symbolic reasoning: a) self-supervised
number prediction task predicting both number quantity and number locations; b) commonsense constant prediction task predicting
what prior knowledge (e.g. how many legs
a chicken has) is required; c) program consistency checker computing the semantic loss
between predicted equation and target equation to ensure reasonable equation mapping;
d) duality exploiting task exploiting the quasi
duality between symbolic equation generation
and problem’s part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and
scalable solver, we also construct a new largescale MWP benchmark CM17K consisting of
4 kinds of MWPs (arithmetic, one-unknown
linear, one-unknown non-linear, equation set)
with more than 17K samples. Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods[1].
_∗Corresponding Author_
1The code and the new CM17k dataset are available at
[https://github.com/QinJinghui/NS-Solver.](https://github.com/QinJinghui/NS-Solver)
**1** **Introduction**
Deep neural networks have achieved remarkable
successes in natural language processing recently.
Although neural models have demonstrated performance superior to humans on some tasks, e.g.
reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.), it still lacks the ability
of discrete reasoning, resulting in low accuracy on
math reasoning. Thus, it is hard for pure neural
network approaches to tackle the task of solving
math word problems (MWPs), which requires a
model to be capable of natural language understanding and discrete reasoning. MWP solving
aims to automatically answer a math word problem by understanding the textual description of the
problem and reasoning out the underlying answer.
A typical MWP is a short story that describes a partial state of the world and poses a question about
an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities
need to be identified from the text. Furthermore,
the correct operators along with their computation
order among these quantities need to be determined.
Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural
semantic parsing (Liang et al., 2017a) and reading
comprehension (Chen et al., 2019), we address this
problem by neural-symbolic computing.
Recently, many researchers (Wang et al., 2017;
Huang et al., 2018; Wang et al., 2018b, 2019; Xie
and Sun, 2019; Chiang and Chen, 2019), inspired
by an encoder-decoder framework (Cho et al.,
2014), apply neural networks to solve MWPs by
learning the mapping function between problems
and their corresponding equations, and achieve remarkable successes. The encoder uses a neural network to represent a problem as a real-valued vector,
and the decoder uses another neural network to
5870
-----
generate an equation or expression token by token.
The main difference among previous methods is the
way to decode expressions or equations. However,
they only follow the encoder-decoder paradigm but
lacking the ability to explicitly incorporate essential math symbolic constraints (e.g. commonsense
constants, formulation regularization), leading to
unexplainable and unreasonable predictions. Besides, most of them only focus on arithmetic MWPs
without any unknown, preventing them from generalizing to various types of MWPs, such as equation
set problems.
To address the above issues, we propose a novel
**Neural-Symbolic Solver (NS-Solver), which ex-**
plicitly and seamlessly incorporates different levels of symbolic constraints by auxiliary learning
tasks. Our NS-Solver consists of three main components, a problem reader to encode the math word
problems into vector representations, a program_mer to generate the symbolic grounded equations,_
which are executed to produce answers, and a sym_bolic executor to obtain final results. In addition_
to the supervised training objective between generated symbolic grounded equations and groundtruth equations, our solver is also optimized by
four novel auxiliary objectives that enforce four
levels of problem understanding and symbolic reasoning. First, we apply number prediction task
to predict both the number quantity and number location in the problem in a self-supervised manner.
Second, we deploy commonsense constant pre**diction task to predict what prior commonsense**
knowledge (e.g. how many legs a chicken has) is required for our solver. Third, we propose program
**consistency checker to compute the semantic loss**
between the predicted program and ground-truth
equation to ensure reasonable equation mapping.
Finally, we also propose a novel duality exploit**ing task that exploits the quasi duality between**
symbolic grounded equation generation and the
problem’s part-of-speech generation to enhance the
understanding ability of our solver. There are some
key advantages of our solution. First of all, the
above four auxiliary tasks can produce additional
training signals, which improves the data efficiency
in training and makes our solver more robust. Second, using the predicted constant to constrain the
target symbolic table can reduce the search space
greatly, which means that our solver can generate
correct symbolic grounded equations easier and
better. Third, the auxiliary tasks have been proven
to help reduce the domain gap between seen and
unseen MWPs (Sun et al., 2019, 2020), thus improving the reasoning ability of our solver.
Besides, beyond the current large-scale highquality MWP benchmark that only includes one
type of problems, we also construct a large-scale
challenging Chinese MWPs dataset CM17K, which
contains 4 types of MWPs (arithmetic MWPs, oneunknown linear MWPs, one-unknown non-linear
MWPs, equation set problems) with more than 17K
samples, to provide a more realistic and challenging benchmark for developing a universal and scalable math solver. Extensive experiments on public
Math23K and our proposed CM17k demonstrate
the superiority of our NS-Solver compared to stateof-the-art methods in predicting final results while
ensuring intermediate equation rationality.
**2** **Related Work**
**Deep learning-based MWP Solvers.** Numerous methods have been proposed to tackle the
MWP solving task, ranging from rule-based methods (Bakman, 2007; Yuhui et al., 2010), statistical
machine learning methods (Kushman et al., 2014;
Zhou et al., 2015; Roy and Roth, 2015, 2016; Mitra and Baral, 2016; Huang et al., 2016; Roy and
Roth, 2018), semantic parsing methods (Shi et al.,
2015; Koncelkedziorski et al., 2015; Huang et al.,
2017; Liang et al., 2018a), to deep learning methods (Ling et al., 2017; Wang et al., 2017, 2018b;
Huang et al., 2018; Wang et al., 2018a; Xie and Sun,
2019; Wang et al., 2019; Zhang et al., 2020a,b; Qin
et al., 2020; Shen and Jin, 2020; Wu et al., 2020;
Chen et al., 2021; Hong et al., 2021a,b). However,
most deep learning-based methods only follow the
encoder-decoder framework without explicitly incorporating essential math symbolic constraints,
resulting in some unexplainable and unreasonable
predictions. Besides, most of them only focus on
arithmetic MWPs, preventing them from generalizing to various types, such as equation set problems.
**Neural-Symbolic Computing. Neural-symbolic**
computing has greatly promoted the development
of semantic parsing. Jia and Liang (2016); Dong
and Lapata (2016); Zhong et al. (2017) applied
neural sequence-to-sequence and sequence-to-tree
models to semantic parsing with full supervision.
Liang et al. (2017b, 2018b) have advanced the stateof-the-art in weakly supervised semantic parsing
on knowledge graphs and tabular databases. Al
5871
-----
|𝑥|4|
|---|---|
to produce answers by the executor. In our NS-Solver, we apply four auxiliary tasks to enhance its problem
**Math Word Problem** **Auxiliary Tasks**
Today there are chickens and rabbits
in the same cage, with a total of 26 **Commonsense Constant** **Number Quantity** **Number Location**
heads and 82 feet. How many **Prediction** **Prediction** **Prediction**
chickens and rabbits are there?
Today there are chickens … 26 heads and 82 … 26 heads and 82
and rabbits … feet ... feet....
Number mapping 𝑛1 26
𝑛2 82 Chicken_legs, 4
Problem Reader Rabbit_legs, 2 Quality of numbers: 2 Location: 15;18
Programmer {2, 4} ℒ𝐶𝐶𝑃 ℒ𝑁𝑄𝑃 ℒ𝑁𝐿𝑃
**Duality Exploration**
equation tree ; Dual Constrains Problem POS tagging
= =
Today/NN there/EX are/VBP
+ 𝑛1 + 𝑛2 Encoder Decoder POS Result chickens/NNS and/CC rabbits/NNS
𝑥 𝑦 ∗ ∗ …
ቊ2𝑥+ 4𝑦= 𝑛[𝑥+ 𝑦= 𝑛][1] 2 2 𝑥 4 𝑦 ℒ𝑑𝑢𝑎𝑙
Executor Consistency Checker GT Equation Tree
Chickens: 11, Rabbits: 15 ℒ𝑃𝐶𝐶
Figure 1: An overview of our NS-Solver. When a problem preprocessed by number mapping and replacement is
entered, our problem reader encodes the problem text into context representation. Then our programmer generates
a tree-structured symbolic grounded program explicitly. Finally, a symbolic grounded program will be executed
understanding and symbol reasoning ability for generating better programs.
though most of the successes of semantic parsing
are limited to structured data sources, it is not expensive for MWPs since it is easy to crawl lots of
problems with annotated equations and answers.
Therefore, MWP solving can benefit from supervised neural-symbolic computing.
**Self-Supervised Learning. Self-supervised auxil-**
iary tasks have been widely used in the fields of
natural language understanding (Devlin et al., 2019;
Lan et al.). Devlin et al. (2019) applied two selfsupervised auxiliary tasks, masked LM and next
sentence prediction, to improve the understanding
ability of BERT by pretraining. ALBERT (Lan
et al.) introduces sentence-order prediction task to
address the ineffectiveness of the next sentence prediction task in BERT. Hendrycks et al. (2019) show
that self-supervised learning can improve model
robustness and uncertainty.
**Dual Learning.** Dual learning, first proposed
by He et al. (2016), is a reinforcement training process that jointly trains a primal task and its dual task.
Then Xia et al. (2017) considered it as a way of supervised learning and designed a probabilistic regularization term to exploit the duality. It has been
widely applied in various fields, such as machine
translation (He et al., 2016), sentiment classification (Xia et al., 2017), question answering (Tang
et al., 2017), visual question answering (Li et al.,
2018), machine reading comprehension (Xiao et al.,
2018), and code generation (Wei et al., 2019). To
the best of our knowledge, we are the first to ex
ploit the duality in MWPs. Different from previous
works, we design a quasi dual learning method between symbolic grounded equation generation and
problem’s part-of-speech generation to enhance the
understanding ability by easing the difficulty of
generating problems from symbolic equations.
**3** **Neural-Symbolic Solver**
In this section, we present the design of the proposed NS-Solver. Its backbone mainly consists of
a problem reader that encodes the math word problems into vector representations, a programmer to
generate the symbolic grounded programs in prefix
order, and a symbolic executor to obtain final results. The overview of our NS-Solver is visualized
in Fig. 1. We first introduce the backbone of our
NS-Solver in section 3.1, and then we introduce
other auxiliary tasks in section 3.2.
**3.1** **Backbone**
**Problem Reader.** Given a problem text P =
_{xi}i[n]=1_ [processed by number template replace-]
ment which maps numeric values in a problem
to number templates (e.g., 26 and 82 to n1 and
_n2 in Fig. 1), the problem reader encodes each to-_
ken xi in the problem text into an embedding ei.
In this work, we deploy a two-layer bidirectional
GRU to encode each token xi into an embedding
_ei =_ _→hi_ +[←]h−i where _→hi and_ **h−i are from forward and**
_[−]_ _[−]_ _[←]_
backward GRUs, respectively. Besides, our prob
5872
-----
lem encoder also outputs a problem representation
**g0 =** **h→n +** **h−0 as the initial hidden state of our**
_[−]_ _[←]_
programmer, where **h→n and** **h−0 are the last hidden**
_[−]_ _[←]_
state of forward and backward GRUs, respectively.
**Programmer. The programmer takes the output**
of the problem reader as input and the problem
representation as the initial hidden state, and then
decodes a problem as a sequence of tokens _yi_ _i=1_
_{_ _}[m]_
which are organized as a prefix equation tree. In
this work, we deploy a tree-structured decoder (Xie
and Sun, 2019) with attention mechanism (Bahdanau et al., 2015) as the backbone of our programmer and modify them with UET representation (Qin et al., 2020) to support more symbols
for multiple types of MWPs. In our programmer,
the symbolic table consists of four parts. For each
problem, the problem-specific symbolic table contains math operators (+, −, ∗, /,ˆ, =, ;), unknown
variable (x and y), a series of commonsense constants (1, 3.14, etc) predicted by the Commonsense
Constant Prediction Task in 3.2, and the problemspecific number templates (n1, n2, n3, etc). It
should be noticed that ; is a special operator with
the lowest priority to integrate multiple equation
trees as an ensemble equation tree, so that equation
set problems can be handled as simple as arithmetic
problems.
**Executor. We deploy sympy[2], which is a python**
library for symbolic mathematics, as our symbolic
executor for obtaining final results by solving generated equations.
**3.2** **The Design of Auxiliary Tasks**
The MWP solving task remains challenging since
previous methods did not take full advantage of the
rich semantics contained in a problem and lacking
the ability to explicitly incorporate essential math
symbolic constraints. In this section, we introduce
four auxiliary learning tasks to exploit additional
training signals obtained from different tasks and
exploit the result of the commonsense constant
prediction task to explicitly constrain the constant
symbolic table, which can reduce the search space
for symbolic generation and ease the difficulty of
generating correct constant.
**Self-supervised** **Number** **Prediction** **(SNP)**
**Tasks. If a solver can fully understand the problem**
semantics, it should be able to identify the quantity
of numbers in a problem (i.e., to count how
many numeric values are in the problem) and
2https://www.sympy.org/
their corresponding locations in the problem
text accurately. For example, if the solver can
understand the problem in Fig. 1, it should be able
to predict there are two numbers(26 and 82) in
the problem, and their positions are 15 and 18,
respectively. Thus, number quantity prediction
and number location prediction are two critical
self-supervised tasks to help the problem reader
fully understand the problem semantics and
measure the ability of problem understanding of a
solver. Both two number prediction tasks take the
mean of the problem encoder’s outputs {ei}i[n]=1 [as]
their input and apply a single-layer feed-forward
neural network to compute the distribution of
number quantity and number locations. The
training objectives of two tasks for each problem
are formulated as:
_LNQP = −_
_LNLP = −_
_qti log p (qi|P_ ),
_i=1_
X
_L_
_lti log p (li|P_ ) .
_i=1_
X
(1)
where LNQP and LNLP denote the loss for the
Number Quantity Prediction (NQP) task and Number Location Prediction (NLP) task, respectively.
_Q and L are the maximum possible quantities of_
number and maximum possible number locations
for a problem at the dataset level. qti and lti represent the ground-truth value on i-th index of the
output probability distribution of NQP and NLP,
respectively.
**Commonsense** **Constant** **Prediction** **(CCP)**
**Task. Commonsense constants are important for**
solving some MWPs while most previous methods
only consider the constants 1 and 3.14, which are
not enough for a solver to solve problems that need
other commonsense constants. However, attaching
a lot of constants to the problem-specific symbolic
table will enlarge the search space, increasing the
difficulty of generating rational symbolic equations.
Therefore, we propose a commonsense constant
prediction task to predict what prior commonsense
knowledge (e.g. a chicken has 2.0 legs and a rabbit
has 4.0 legs for the problem in Fig. 1) is required
for the solver to solve a problem according to
the problem context. In this way, we can reduce
the search space greatly, thus improving the
performance of our solver. Similar to the number
prediction tasks, the commonsense constant
prediction task takes the mean of the problem
5873
-----
encoder’s output {ei}i[n]=1 [as their input and apply]
a single-layer feed-forward neural network to
compute the distribution of number quantity and
number locations The training objective for each
problem is formulated as:
generation. Given a pair of a problem and its corresponding equations (P,T ), and P _[′]_ is the part-ofspeech of P [3], the training objective of the duality
exploiting task is formulated as:
_Ldual =_ log ˆp(P _[′]) + log p (T_ _|P_ ) −2 (4)
log ˆp(T ) log p _P_ _T_ _._
_−_ _[′]|_
where ˆp(P _[′]) and ˆp(T_ ) are marginal distributions,
which can be modeled by their LSTM (Hochreiter
and Schmidhuber, 1997)-based language models,
respectively. Besides, we deploy a tree-structure
encoder inspired by GTS (Xie and Sun, 2019) to
encode equations in prefix for POS generation.
**3.3** **Training Objective**
Given the training dataset D={(P _[i], T_ [1]), (P [2], T [2]),
_· · ·,(P_ _[N]_ _, T_ _[N]_ ) }, where T _[i]_ is the universal expression tree of problem P _[i], we minimize the following_
loss function for our NS-Solver:
_ctj log p (ci|P_ ) . (2)
_i=1_
X
_LCCP = −_
where C is the total number of constants in the
symbolic table and cti represents the true value
on i-th index of the output probability distribution.
Since it is impossible for the commonsense constant prediction task to achieve 100% accuracy, in
addition to the predicted constants, we add three
extra constants that are not predicted but with the
highest probability into the symbolic table, making
a better trade-off between the size of the search
space and prediction accuracy.
**Program Consistency Checker (PCC). Although**
a problem can be solved by multiple equivalent but
different equations, the predicted equations should
be consistent with label equations as much as possible in the supervised learning setting. Therefore,
we propose a program consistency checker to check
the symbolic program consistency and regularize
the model by computing semantic loss between
the predicted symbolic program and ground-truth
equation to ensure the reasonable symbolic equation mapping. Let ˆyi and yi represent the predicted
symbol and ground-truth symbol, pi represents the
probability of ˆyi, the semantic loss is obtained by
computing a distance between the predicted distribution and ground-truth distribution as:
_L =_ [Lent1 + λ1 ∗Ldual + λ2 ∗LPCC
(P,TX)∈D
+λ3 ∗ (LNQP + LNLP ) + λ4 ∗LCCP ] .
(5)
where
prob(yt _P_ ) (6)
_|_
_t=1_
Y
_Lent1 = −_ log
where m denotes the size of T, and yt denotes the
t-th output. {λi}i[4]=1 [are empirical values that will]
be detailed in Section 4.2.
For the duality exploiting task, there is another
loss for training the branch of the problem’s partof-speech generation:
_LPCC = −log_
_pi_
_yˆi=yi_
Y
(1 − _pi) ._ (3)
_yˆYi≠_ _yi_
_POS =_ [ _ent2+λ5_ _dual+λ6_ _PCC′]._
_L_ _L_ _∗L_ _∗L_
(P _[′]X,T_ )∈D
(7)
where
**Duality Exploiting (DE) Task. Many previous**
works (He et al., 2016; Xia et al., 2017; Xiao et al.,
2018; Wei et al., 2019) have shown promising results by dual learning framework. Although intuitively, MWP solving and MWP generation are
related to each other, i.e., the input of MWP solving
is the output of MWP generation, and vice versa,
it is very hard for the MWP generation task to
generate good enough problems only by the equations without any topic information. Therefore, we
propose a duality exploiting task to enhance the
understanding ability of our solver by exploiting
the quasi duality between symbolic grounded equation generation and the problem’s part-of-speech
prob(xt _T_ ) (8)
_|_
_t=1_
Y
_Lent2 = −_ log
where n denotes the size of P, and xt denotes the
t-th output. _PCC′ is the semantic loss between_
_L_
predicted POS and the ground-truth POS. _λi_ _i=5_
_{_ _}[6]_
are empirical values that will also be detailed in
Section 4.2.
3We use Jieba (https://github.com/fxsjy/jieba) to generate
the POS of a problem.
5874
-----
**4** **Experiments**
**4.1** **CM17K Dataset**
Most public MWPs datasets are quite small such
as ALG514 or exist some incorrect labels such as
Dolphin18K. An exception is the Math23K dataset,
which contains 23161 problems labeled well with
structured equations and answers. However, it only
contains one-unknown linear math word problems,
which is not sufficient to validate the ability of a
math solver about solving multiple types of MWPs.
Therefore, we introduce a new high-quality math
word problems dataset, called CM17K, to validate
the universality of a solver and provide a more realistic and challenging benchmark for developing
a universal and scalable math solver. We collect
CM17K from two education websites[4]. These problems are oriented grades 6-12, containing 4 types
of MWPs with more than 17K samples, including
6215 arithmetic MWPs, 5193 one-unknown linear
MWPs, 3129 one-unknown non-linear MWPs, and
2498 equation set problems. It should be noticed
that our dataset is sufficient for validating the universality of math word problem solvers since these
problems can cover most cases about MWPs. We
label our data with structured equations and answers following Math23K (Wang et al., 2017). We
split our CM17K into train/valid/test sets at a ratio
of 8:1:1.
|1.|Col2|Col3|
|---|---|---|
||Math23K|CM17K|
|# Avg PL|28.015|54.365|
|# Avg EL|6.853|13.853|
|# Avg TS|5.554|11.834|
|# Avg Num|2.821|6.383|
|# Avg SNI|2.668|4.111|
|# Avg Ops|3.943|4.852|
|# Avg Constants|0.270|0.327|
Math23K CM17K
# Avg PL 28.015 54.365
# Avg EL 6.853 13.853
# Avg TS 5.554 11.834
# Avg Num 2.821 6.383
# Avg SNI 2.668 4.111
# Avg Ops 3.943 4.852
# Avg Constants 0.270 0.327
Table 1: Statistics of Math23K and CM17K. PL, EL,
TS, Num, SNI, Ops, and Constants represent problem
length, equation length, equation tree size, number of
quantities in problems, number of quantities occurred
in both problems and their corresponding equations,
number of operators in equations, and number of constants only occurred in equations, respectively.
The data statistics of Math23K and CM17K are
shown in Table 1. From the statistics, we can
see that all statistics of CM17K are larger than
Math23K. This shows that our dataset is more challenging and difficult for math word problem solvers.
Besides, since CM17K contains more types of
MWPs than Math23K, CM17K is more suitable
[4http://www.zxxk.com/ and http://www.jyeoo.com/](http://www.zxxk.com/)
for validating the reasoning ability of a solver than
Math23K.
**4.2** **Experimental Setup and Training Details**
**4.2.1** **Datasets, Baselines, and Metric**
We conduct experiments on Math23K and our
CM17K. The main state-of-the-arts to be compared
are as follows: DNS (Wang et al., 2017) is a universal solver based on the seq2seq model with significant number identification (SNI). GTS (Xie and
Sun, 2019) is a goal-driven tree-structured MWP
solver. StackDecoder (Chiang and Chen, 2019)
is an universal semantically-aligned math word
problems solver. (Zhang et al., 2020a) is an enhanced GTS with teacher-student distillation and
multi-decoder ensemble. Again, following prior
works (Wang et al., 2017; Chiang and Chen, 2019;
Xie and Sun, 2019), we use answer accuracy as
the evaluation metric: if the calculated value of the
predicted equation tree equals to the true answer, it
is thought as correct since the predicted expression
is equivalent to the target expression.
**4.2.2** **Implementation Details**
We use Pytorch[5] to implement our model on Linux
with an NVIDIA RTX2080Ti GPU card. All those
words with fewer than 5 occurrences are converted
into a special token UNK. The size of word embeddings and all hidden states for other layers are set as
128 and 512, respectively. Our model is optimized
by ADAM optimizor (Kingma and Ba, 2015) with
_β1 = 0.9, β2 =0.999, and ϵ = 1e[−][8]. The mini-batch_
size is set as 32. The initial learning rate is set as
1e[−][3] and then decreases to half every 40 epochs.
To prevent overfitting, we set dropout rate as 0.5
and weight decay as 1e[−][5]. Finally, we conduct
greedy search to generate symbolic equation trees.
We set λ1, λ2, λ3, λ5, and λ6 as 0.0005, 0.01, 1.0,
0.005, and 0.1 for both datasets, respectively. We
set λ4 as 0.000001 for Math23K while we set λ4 as
1.0 for CM17K. All constants are extracted from
the training set. In each epoch, all training data is
shuffled randomly and then cut into mini-batches.
**4.3** **Answer Accuracy**
Following prior works (Wang et al., 2017; Chiang
and Chen, 2019; Xie and Sun, 2019), we conduct 5fold cross-validation on Math23K. For CM17K, we
evaluate the performance on the test set. The results
are shown in Table 2. From Table 2, we can observe
5http://pytorch.org
5875
-----
that benefiting from the four new auxiliary tasks
and neural-symbolic paradigm, our NS-Solver outperforms the baselines on both datasets in terms of
answer accuracy. Specifically, for Math23K and
CM17K, the accuracy gains of NS-Solver over
GTS are 1.37% and 5.93%, respectively. Comparing with TSN-MD, our solver outperforms it by
about 0.6% on Math23K. It shows that our model is
more feasible for solving multiple types of MWPs.
It also shows that our NS-Solver is more effective
than other state-of-the-art models on the real-world
scenario that needs to solve various MWPs with a
|ed solver.|Col2|Col3|
|---|---|---|
|Model|Math23K|CM17K|
|DNS (Wang et al., 2017)|58.1%|15.93%|
|StackDecoder (Chiang and Chen, 2019)|66.0%|37.24%|
|GTS (Xie and Sun, 2019)|74.3%|47.12%|
|TSN-MD (Zhang et al., 2020a)|75.1%|-|
|NS-Solver (Ours)|75.67%|54.05%|
Model Math23K CM17K
DNS (Wang et al., 2017) 58.1% 15.93%
StackDecoder (Chiang and Chen, 2019) 66.0% 37.24%
GTS (Xie and Sun, 2019) 74.3% 47.12%
TSN-MD (Zhang et al., 2020a) 75.1%
NS-Solver (Ours) **75.67%** **54.05%**
Table 2: Model comparison on answer accuracy
**4.4** **Comparisons on different subsets**
We drill down to analyze the generalization of DNS,
GTS, and NS-Solver on different types of MWPs in
the test subset of CM17K. Their answer accuracy
on different types of MWPs is shown in Table 3.
We can observe that our NS-Solver outperforms the
other two models by a large margin on all subsets.
Specifically, the accuracy gains of our NS-Solver
over GTS on four subsets are 3.87%, 9.12%, 6.99%,
and 9.44%. This shows that with the help of four
auxiliary tasks, our NS-Solver obtains better generalization ability on multiple types of MWPs than
|baselines.|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||arithmetic|one-unknown linear|one-unknown non-linear|equation set|
|Number||619|526|315|244|
|DNS|Correct|23|49|67|132|
||Accuracy|3.7%|9.32%|21.27%|54.1%|
|GTS|Correct|255|220|201|128|
||Accuracy|41.20%|41.83%|63.80%|52.45%|
|NS-Solver (Ours)|Correct|279|268|223|151|
||Accuracy|45.07%|50.95%|70.79%|61.89%|
one-unknown one-unknown
arithmetic linear non-linear equation set
Number 619 526 315 244
Correct 23 49 67 132
DNS
Accuracy 3.7% 9.32% 21.27% 54.1%
Correct 255 220 201 128
GTS
Accuracy 41.20% 41.83% 63.80% 52.45%
Correct 279 268 223 151
NS-Solver (Ours)
Accuracy **45.07%** **50.95%** **70.79%** **61.89%**
Table 3: Answer accuracy on CM17K’s test subset.
**4.5** **Performance on Tree Length**
Intuitively, the size of the symbolic equation tree is
proportional to the complexity of the mathematical
relationship in the problem. The more complex
the mathematical relationship is, the more difficult
it is to solve the problem. Here, we compare our
proposed NS-Solver with GTS on CM17K to show
the superiority of our NS-Solver on different equation tree sizes. The answer accuracies for different
sizes of expression trees on CM17K test subset are
shown in Fig. 2. We can see that there is a tendency
Figure 2: Answer accuracies for different sizes of symbolic equation trees on CM17K.
for answer accuracy to degrade with the growth of
the problem complexity measured as the size of the
equation tree, and our NS-Solver outperforms GTS
on most cases of different equation tree sizes. This
shows our NS-Solver can better model the mathematical relationships of the problem than GTS. It
can also be noticed that the improvement of our
NS-Solver over the GTS is increasing when the
problems become more complex.
However, although our model outperforms other
methods, there still has room for improvement in
semantic understanding and symbolic reasoning
since longer equations often match with more complex MWPs which entail more complex math relationships.
**4.5.1** **Ablation on different auxiliary tasks**
We study the contribution of different auxiliary
tasks of our NS-Solver. For this purpose, we consider five different combinations: 1) only the backbone [NS-Solver - CCP - SNP - PCC - DE]; 2) backbone + duality exploiting task [NS-Solver - CCP SNP - PCC]; 3) backbone + duality exploiting task
+ program consistent checker [NS-Solver - CCP SNP]; 4) backbone + duality exploiting task + program consistent checker + number prediction tasks
[NS-Solver - CCP]; and 5) the proposed NS-Solver
[NS-solver]. For each of these combinations, each
model was trained for 80 epochs on CM17K and
validated on its test subset. The learning rate decreased to half every 20 epochs. The results are
provided in Fig. 4.
As one can see, all four auxiliary tasks can improve performance. Specifically, the accuracy gains
of DE, PCC, SNP, and CCP are 1.00%, 1.41%,
1.11%, and 1.12%, respectively. Besides, the binary
accuracies of the two SNP tasks are 97% (number
quantity prediction) and 96.8% (number location
prediction). Moreover, the accuracy of our CCP
5876
-----
**Problem** **Generated Symbolic Equation** **Auxiliary Task**
**SNS-solver - CCP - NP - PCC - DE**
**Case 1: 学校买来** NUM(n0 [5]) 盒羽毛球,每盒 NUM (n1 [12]) 个,共用 NUM(n2 [240]) **(Ours):**
元,平均每个羽毛球多少元钱?(The school bought NUM(n0 [5]) boxes of badminton, x=n2/n1 (error) +Duality
each box of NUM (n1 [12]), sharing NUM(n2 [240]) yuan, how much is the average price of Exploiting (DE)
each badminton ?) **SNS-solver - CCP - NP - PCC (Ours):**
x=n2/(n0*n1) (correct)
**Case 2: 小杰与同学们去南岳山玩,每人车票费是** NUM(n0 [22]) 元,他们总共花 **SNS-solver - CCP - NP - PCC (Ours):**
了 NUM(n1 [154]) 元车费,他们买了几张票?(Xiaojie went to Nanyueshan with his x=n1/(n0*1.0) (correct) +Program
classmates. The ticket per person was NUM(n0 [22]) yuan. They spent a total of NUM(n1 Consistency
[154]) yuan. How many tickets were they bought?) **SNS-solver - CCP - NP (Ours):** Checker (PCC)
**Groundtruth: x=n1/n0** x=n1/n0 (correct)
**Case 3: 妈妈想给** NUM (n0 [1]) 间长 NUM (n1 [7]) 米,宽NUM(n2 [4]) 米的房间铺上地 **SNS-solver - CCP - NP (Ours):**
砖,每平方米的地砖价钱是 NUM(n3 [60]) 元,那么铺好地砖至少要花多少钱? x=n3*n1*n2/10000 (error) +Number
(Mother wants to lay a floor tile in and a width of NUM(n2 [4]) meters. The price per square meter of floor tiles is NUM(n0 [1]) room with a length of NUM(n1 [7]) NUM(nmeters 3 [60]) **SNS-solver - CCP (Ours):** Prediction (NP)
yuan. So how much does it cost to lay the floor tiles?) x=n3*n1*n2 (correct)
**Case 4: 小胖家装修新房了,准备在客厅铺上地砖,客厅是长方形的地面,长**
NUM (n地砖,他至少要购买多少块这样的地砖?(The chubby family has renovated a new 0 [5]) 米,宽 NUM(n1 [6]) 米,他选中了边长为 NUM (n2 [40]) 厘米的正方形 **SNS-solver - CCP (Ours): x=n0*n1/((n2/100)*(n2/10)) (error)** +Commonsens
house and is ready to lay floor tiles in the living room. The living room is a rectangular floor e Constant
with a length of floor tile with a side length of buy at least?) NUM(n0 [5]) meters and a width of NUM(n2 [40]) cm. How many pieces of floor tiles should he NUM (n1 [6]) meters. He chose a square **SNS-solver (Ours): x=n0*n1/((n2/100)*(n2/100)) (correct)** Prediction (CCP)
**Case 5: 千米每小时,慢车速度为甲、乙** NUM(n0 [2]) 地相距NUM(n3NUM(n [80]) 千米每小时,慢车从甲地出发,快车从1 [200]) 千米,快车速度为 NUM(n2 [120]) **GTS:**
乙地出发。如果 NUM(n4 [2]) 车同时出发,相向而行,出发后几时 NUM(n5 [2]) 车 n2*x=n1+n3*x (error) +All above
相遇? (The distance between the speed of express train is NUM(nNUM(n2 [120]) 0 [2]) locations A and B is kilometers per hour, and the speed of slow train NUM(n1 [200]) kilometers, **SNS-solver (Ours):** four tasks
is travel towards each other, when will the NUM(n3 [80]) kilometers per hour. If the NUM(nNUM(n5 [2]) 4 [2]) cars meet after departure?)cars depart at the same time \\ and n2*x+n3*x=n1 (correct)
Figure 3: Typical cases. Note that the results are represented as infix order which is more readable than prefix
order. The programs generated by NS-Solver are also translated into human-readable equations. Constants and
number symbols are labelled in red and cyan, respectively.
tency checker (PCC) that effectively regularizes
the model’s output by constraining the distance
between predicted symbols and ground-truth symbols during training, [NS-solver - CCP - SNP]
can generate more consistent equations with the
ground-truth than [NS-solver - CCP - SNP - PCC],
as shown in Case 2. With self-supervised number prediction (SNP), [NS-solver - CCP] can generate better results and avoid generating symbols
that do not belong to the problem, as shown in
**Case 3. With commonsense constant prediction**
(CCP), our NS-Solver manages to choose correct
constants by constraining the constant symbolic
table using predicted results of CCP. As shown in
**Case 4, [NS-solver - CCP] chooses error constant**
10 while NS-solver chooses two correct constants.
Besides, although GTS and NS-Solver generate the
same symbols sometimes, our NS-Solver generates
correct equations with the help of our four auxiliary objectives, as shown in Case 5. Overall, all
four auxiliary tasks can improve our NS-Solver’s
understanding and reasoning ability.
|Model|BERT + Tree Decoder (Xie and Sun, 2019)|NS-Solver + BERT|
|---|---|---|
|CM17K|55.0%|60.68%|
Model BERT + Tree Decoder (Xie and Sun, 2019) NS-Solver + BERT
CM17K 55.0% **60.68%**
Table 4: Generalization to different backbone
accuracy.
5877
Figure 4: Ablation Study on different auxiliary compo-
nents. ‘-’ represents we remove the component.
task is 97.8%. This shows that our auxiliary tasks
can enhance our NS-Solver to enforce better prob-
lem understanding and symbol reasoning. Overall,
our proposed NS-Solver achieves the best answer
**Case Study**
We also present the results of our NS-Solver with
different combinations of four auxiliary tasks in
. Benefiting from explicitly exploiting the
probabilistic correlation between two quasi dual
tasks to regularize the training process in our du-
ality exploiting (DE) task, our [NS-solver - CCP
- SNP - PCC] can generate correct equations by
understanding the problem better while [NS-solver
- CCP - SNP - PCC - DE] generates error equations,
as shown in Case 1. With the program consis-
-----
**4.7** **Extends to other backbone**
To show that our auxiliary tasks can be adapted to
other backbones, we replace GTS’s encoder with
BERT (BERT + Tree Decoder) and NS-Solver’s
encoder with BERT (NS-Solver + BERT), where
we adopt a Chinese BERT-base pre-trained with
whole word masking (Cui et al., 2020). We conduct
experiments on CM17K. The results are shown
in Table 4. We can observe that with auxiliary
tasks, our NS-Solver + BERT still can outperform
**BERT + Tree Decoder, which shows that our aux-**
iliary tasks’ strong generalization.
**5** **Conclusion**
In this work, we propose Neural-Symbolic Solver
(NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by four
auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to
generate a symbolic grounded program, and a symbolic executor to obtain final results. In addition
to supervised learning with target expression, our
solver is also optimized via four new auxiliary objectives that enforce four levels of symbolic reasoning. Besides, we also construct a new dataset
CM17K containing 4 types of MWPs with more
than 17K samples, which provides a more realistic
and challenging benchmark for developing a universal and scalable math solver. Extensive experiments on Math23K and CM17K demonstrate the
superiority of our NS-Solver compared to state-ofthe-art methods in answer accuracy while ensuring
intermediate equation rationality.
**6** **Ethical Impact**
We collected CM17K from two online education
websites, which is only used for academic research,
and the copyright belongs to the original websites.
This work may inspire research in the field of numerical reasoning.
**Acknowledgements** This work was supported
in part by National Key R&D Program of China
under Grant No.2020AAA0109700, National
Natural Science Foundation of China (NSFC)
under Grant No.U19A2073, No.61976233 and
No. 61836012, the Natural Science Foundation of Guangdong Province under Grant
No. 2017A030312006, Guangdong Province
Basic and Applied Basic Research (Regional
Joint Fund-Key) Grant No.2019B1515120039,
Shenzhen Fundamental Research Program
(Project No.RCYX20200714114642083 and
No.JCYJ20190807154211365), Zhijiang Lab’s
Open Fund (No.2020AA3AB14), CSIG Young Fellow Support Fund, and Guangdong Provincial Key
Laboratory of Information Security Technology.
**References**
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly
learning to align and translate. In 3rd Inter_national Conference on Learning Representations,_
_ICLR 2015, San Diego, CA, USA, May 7-9, 2015,_
_Conference Track Proceedings._
Yefim Bakman. 2007. Robust understanding of word
problems with extraneous information. Computing
_Research Repository, arXiv:math/0701393._
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan
Liang, Lingbo Liu, Eric P. Xing, and Liang Lin.
2021. GeoQA: A geometric question answering
benchmark towards multimodal numerical reasoning. arXiv preprint arXiv:2105.14517.
Xinyun Chen, Chen Liang, Adams Wei Yu, Denny
Zhou, Dawn Song, and Quoc V Le. 2019. Neural
symbolic reader: Scalable integration of distributed
and symbolic representations for reading comprehension. In International Conference on Learning
_Representations._
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-aligned equation generation for
solving and reasoning math word problems. In
_Proceedings of the 2019 Conference of the North_
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
_Volume 1 (Long and Short Papers), pages 2656–_
2668. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. 2014. [Learning](https://doi.org/10.3115/v1/D14-1179)
[phrase representations using RNN encoder–decoder](https://doi.org/10.3115/v1/D14-1179)
[for statistical machine translation. In Proceedings of](https://doi.org/10.3115/v1/D14-1179)
_the 2014 Conference on Empirical Methods in Nat-_
_ural Language Processing (EMNLP), pages 1724–_
1734, Doha, Qatar. Association for Computational
Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shi[jin Wang, and Guoping Hu. 2020. Revisiting pre-](https://www.aclweb.org/anthology/2020.findings-emnlp.58)
[trained models for Chinese natural language process-](https://www.aclweb.org/anthology/2020.findings-emnlp.58)
[ing. In Proceedings of the 2020 Conference on Em-](https://www.aclweb.org/anthology/2020.findings-emnlp.58)
_pirical Methods in Natural Language Processing:_
_Findings, pages 657–668, Online. Association for_
Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. [BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
5878
-----
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing.](https://doi.org/10.18653/v1/N19-1423) In Proceedings of the 2019 Conference
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
[Li Dong and Mirella Lapata. 2016. Language to logi-](https://doi.org/10.18653/v1/P16-1004)
[cal form with neural attention. In Proceedings of the](https://doi.org/10.18653/v1/P16-1004)
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
33–43, Berlin, Germany. Association for Computational Linguistics.
Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu,
Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in neural
_information processing systems, pages 820–828._
Dan Hendrycks, Mantas Mazeika, Saurav Kadavath,
and Dawn Song. 2019. Using self-supervised learning can improve model robustness and uncertainty.
In Advances in Neural Information Processing Sys_tems, pages 15637–15648._
Sepp Hochreiter and J¨urgen Schmidhuber. 1997.
Long short-term memory. _Neural computation,_
9(8):1735–1780.
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang,
and Song-Chun. Zhu. 2021a. Learning by fixing:
Solving math word problems with weak supervision.
In Thirty-Fifth AAAI Conference on Artificial Intelli_gence._
Yining Hong, Qing Li, Ran Gong, Daniel Ciao, Siyuan
Huang, and Song-Chun. Zhu. 2021b. Smart: A situation model for algebra story problems via attributed
grammar. In The Thirty-Fifth AAAI Conference on
_Artificial Intelligence, AAAI-21._
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
2018. Neural math word problem solver with reinforcement learning. In Proceedings of the 27th Inter_national Conference on Computational Linguistics,_
pages 213–223. Association for Computational Linguistics.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian
Yin. 2017. Learning fine-grained expressions to
solve math word problems. In Proceedings of the
_2017 Conference on Empirical Methods in Natural_
_Language Processing, pages 805–814. Association_
for Computational Linguistics.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset
construction and evaluation. In Proceedings of the
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
887–896. Association for Computational Linguistics.
[Robin Jia and Percy Liang. 2016. Data recombination](https://doi.org/10.18653/v1/P16-1002)
[for neural semantic parsing. In Proceedings of the](https://doi.org/10.18653/v1/P16-1002)
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
12–22, Berlin, Germany. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In international
_conference on learning representations._
Rik Koncelkedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52th Annual Meeting of the Association for Compu-_
_tational Linguistics, volume 1, pages 271–281._
Zhenzhong Lan, Mingda Chen, Sebastian Goodman,
Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised
learning of language representations. In 8th Inter_national Conference on Learning Representations,_
_ICLR 2020, Addis Ababa, Ethiopia, April 26-30,_
_2020._
Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli
Ouyang, Xiaogang Wang, and Ming Zhou. 2018. Visual question generation as dual task of visual question answering. In Proceedings of the IEEE Confer_ence on Computer Vision and Pattern Recognition,_
pages 6116–6124.
Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin,
[and Keh-Yih Su. 2018a. A meaning-based statistical](https://doi.org/10.18653/v1/N18-1060)
[English math word problem solver. In Proceedings](https://doi.org/10.18653/v1/N18-1060)
_of the 2018 Conference of the North American Chap-_
_ter of the Association for Computational Linguistics:_
_Human Language Technologies, Volume 1 (Long Pa-_
_pers), pages 652–662, New Orleans, Louisiana. As-_
sociation for Computational Linguistics.
Chen Liang, Jonathan Berant, Quoc Le, Kenneth D.
[Forbus, and Ni Lao. 2017a. Neural symbolic ma-](https://doi.org/10.18653/v1/P17-1003)
[chines: Learning semantic parsers on Freebase with](https://doi.org/10.18653/v1/P17-1003)
[weak supervision. In Proceedings of the 55th An-](https://doi.org/10.18653/v1/P17-1003)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 23–33,_
Vancouver, Canada. Association for Computational
Linguistics.
Chen Liang, Jonathan Berant, Quoc Le, Kenneth D.
[Forbus, and Ni Lao. 2017b. Neural symbolic ma-](https://doi.org/10.18653/v1/P17-1003)
[chines: Learning semantic parsers on Freebase with](https://doi.org/10.18653/v1/P17-1003)
[weak supervision. In Proceedings of the 55th An-](https://doi.org/10.18653/v1/P17-1003)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 23–33,_
Vancouver, Canada. Association for Computational
Linguistics.
5879
-----
Chen Liang, Mohammad Norouzi, Jonathan Berant,
Quoc V Le, and Ni Lao. 2018b. Memory augmented
policy optimization for program synthesis and semantic parsing. In Advances in Neural Information
_Processing Systems, pages 9994–10006._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167. Associa-_
tion for Computational Linguistics.
Arindam Mitra and Chitta Baral. 2016. Learning to
use formulas to solve simple arithmetic problems.
In Proceedings of the 54th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 2144–2153. Association for_
Computational Linguistics.
Jinghui Qin, Lihui Lin, Xiaodan Liang, Rumin Zhang,
and Liang Lin. 2020. Semantically-aligned universal tree-structured solver for math word problems.
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 3780–3789.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
[Percy Liang. 2016. SQuAD: 100,000+ questions for](https://doi.org/10.18653/v1/D16-1264)
[machine comprehension of text. In Proceedings of](https://doi.org/10.18653/v1/D16-1264)
_the 2016 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 2383–2392, Austin,_
Texas. Association for Computational Linguistics.
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752. Association_
for Computational Linguistics.
Subhro Roy and Dan Roth. 2016. Unit dependency
graph and its application to arithmetic word problem
solving. In Thirtieth AAAI Conference on Artificial
_Intelligence, pages 3082–3088._
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transac_tions of the Association for Computational Linguis-_
_tics, 6:159–172._
[Yibin Shen and Cheqing Jin. 2020. Solving math word](https://doi.org/10.18653/v1/2020.coling-main.262)
[problems with multi-encoders and multi-decoders.](https://doi.org/10.18653/v1/2020.coling-main.262)
In Proceedings of the 28th International Conference
_on Computational Linguistics, pages 2924–2934,_
Barcelona, Spain (Online). International Committee
on Computational Linguistics.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Processing,_
pages 1132–1142. Association for Computational
Linguistics.
Yu Sun, Eric Tzeng, Trevor Darrell, and Alexei A Efros.
2019. Unsupervised domain adaptation through selfsupervision. arXiv preprint arXiv:1909.11825.
Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller,
Alexei Efros, and Moritz Hardt. 2020. Test-time
training with self-supervision for generalization under distribution shifts. In International Conference
_on Machine Learning, pages 9229–9248. PMLR._
Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and
Ming Zhou. 2017. Question answering and question generation as dual tasks. _arXiv preprint_
_arXiv:1706.02027._
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018a. Translating a math word
problem to a expression tree. In Proceedings of the
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 1064–1069. Associa-_
tion for Computational Linguistics.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Thirty-Second AAAI Con_ference on Artificial Intelligence, pages 5545–5552._
Lei Wang, Dongxiang Zhang, Zhang Jipeng, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers
with recursive neural networks. In Thirty-Third
_AAAI Conference on Artificial Intelligence, pages_
7144–7151.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854. Association for Computational Linguistics.
Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, and Zhi Jin. 2019.
Code generation as a dual task of code summarization. In Advances in Neural Information Processing
_Systems, pages 6559–6569._
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuanjing
[Huang. 2020. A knowledge-aware sequence-to-tree](https://doi.org/10.18653/v1/2020.emnlp-main.579)
[network for math word problem solving. In Proceed-](https://doi.org/10.18653/v1/2020.emnlp-main.579)
_ings of the 2020 Conference on Empirical Methods_
_in Natural Language Processing (EMNLP), pages_
7137–7146, Online. Association for Computational
Linguistics.
Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai
Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In Proceedings of the 34th International Confer_ence on Machine Learning-Volume 70, pages 3789–_
3798. JMLR. org.
Han Xiao, Feng Wang, Jianfeng Yan, and Jingyao
Zheng. 2018. Dual ask-answer network for machine reading comprehension. _arXiv preprint_
_arXiv:1809.01997._
5880
-----
Zhipeng Xie and Shichao Sun. 2019. A goal-driven
tree-structured neural model for math word problems. In Proceedings of the Twenty-Eighth In_ternational Joint Conference on Artificial Intelli-_
_gence, IJCAI-19, pages 5299–5305. International_
Joint Conferences on Artificial Intelligence Organization.
Ma Yuhui, Zhou Ying, Cui Guangzuo, Ren Yun, and
Huang Ronghuai. 2010. Frame-based calculus of
solving arithmetic multi-step addition and subtraction word problems. In International Workshop on
_Education Technology and Computer Science, vol-_
ume 2, pages 476–479.
Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei
Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a.
[Teacher-student networks with multiple decoders for](https://doi.org/10.24963/ijcai.2020/555)
[solving math word problem. In Proceedings of the](https://doi.org/10.24963/ijcai.2020/555)
_Twenty-Ninth International Joint Conference on Ar-_
_tificial Intelligence, IJCAI-20, pages 4011–4017. In-_
ternational Joint Conferences on Artificial Intelligence Organization. Main track.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graphto-tree learning for solving math word problems. In
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 3928–_
3937.
Victor Zhong, Caiming Xiong, and Richard Socher.
2017. Seq2sql: Generating structured queries
from natural language using reinforcement learning.
_arXiv preprint arXiv:1709.00103._
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using
quadratic programming. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 817–822. Association for_
Computational Linguistics.
5881
-----
| [
"Yining, Hong",
"Jinghui, Qin",
"Chengqing, Zong",
"Jianheng, Tang",
"Liang, Lin",
"Fei, Xia",
"Wenjie, Li",
"Roberto, Navigli",
"Xiaodan, Liang"
] | 2021-08-01T00:00:00 | ACL 2021 Long Papers | true | 49 | 7 | null | https://aclanthology.org/2021.acl-long.456 | https://arxiv.org/abs/2107.01431 | https://www.semanticscholar.org/paper/7654dbd372d8b65e730e3bd477ff9fec96c16dc5 |
Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems | Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings. | This paper investigates how a neural network understands patterns only from semantics, and proposes a contrastive learning approach, where the neural network perceives the divergence of patterns. | ## Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems
**Zhongli Li[1][∗], Wenxuan Zhang[2][∗][†], Chao Yan[2][†], Qingyu Zhou[1],**
**Chao Li[1], Hongzhi Liu[2], Yunbo Cao[1]**
1Tencent Cloud Xiaowei
2Peking University
{neutrali,qingyuzhou,diegoli,yunbocao}@tencent.com
{zwx980624@stu,cyan@stu,liuhz@ss}.pku.edu.cn
**Abstract**
Math Word Problem (MWP) solving needs
to discover the quantitative relationships over
natural language narratives. Recent work
shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at
this issue and argue that the cause is a lack of
overall understanding of MWP patterns. We
first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations like n1 +
_n2 are the same, most problems get closer_
representations and those representations apart
from them or close to other prototypes tend
to produce wrong solutions. Inspired by it,
we propose a contrastive learning approach,
where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation
into a tree and seeking similar tree structures.
The solving model is trained with an auxiliary
objective on the collected examples, resulting
in the representations of problems with similar prototypes being pulled closer. We conduct
experiments[1] on the Chinese dataset Math23k
and the English dataset MathQA. Our method
greatly improves the performance in monolingual and multilingual settings.
**Prob. B: Joyce starts with**
75 apples. She gives 52 to
Larry. How many apples
does Joyce end with?
**Prob. A: Norma has 88**
cards. She loses 70. How
many cards will Norma
have ?
n1 / n2
**Eq: 75 - 52**
**Eq: 88 - 70**
**Prob. C: A bee has**
6 legs. How many
legs do 2 bees have?
**Eq: 2 * 6**
**Prob. D: 2 bee have**
12 legs. How many
legs does a bee have?
**Eq: 12 / 2**
n1 + n2 n1 − n2 n1 * n2
Figure 1: The visualization of the problem representations by T-SNE. "Prob." and "Eq" are short for the math
word problem and its solution equation. The problem
A and B are in the same prototype equation "n1 _n2"._
_−_
The problem C and D are semantically similar.
𝑛1 + 𝑛2 𝑛1 −𝑛2
𝑛1 × 𝑛2
not merely about numbers (𝑛1 ÷ 𝑛2 (𝑛1 + 𝑛2) × 𝑛Council3,(𝑛 19891 + 𝑛). Mathe-2) ÷ 𝑛3
matically excellent students explore patterns, not
just memorize procedures (Schoenfeld, 1992). Recently, Patel et al. (2021) mention that existing
MWP models (Xie and Sun, 2019; Zhang et al.,
2020) rely on shallow heuristics to generate equations. These models can predict solutions well even
if leaving only narratives without questions, which
suggests that neural networks learn to solve MWPs
by memorizing the lexical input like rote learning.
Thus, existing models get stuck in memorize procedures. We look at this issue and hypothesize
it is because they focus on text understanding or
equation generation for one problem. The same
quantitative relationship corresponds to many problems of different themes and scenarios, but previ
**1** **Introduction**
A Math Word Problem (MWP) is described as a
natural language narrative with a math question.
The MWP solver is required to generate a solution
equation, which can be calculated to get the numerical answer, by understanding the contextual
problem description.
In teaching, students are encouraged to recognize that mathematics is really about patterns and
_∗_ Zhongli Li and Wenxuan Zhang contributed equally.
Qingyu Zhou is the corresponding author.
_† Contribution done during internship at Tencent Cloud_
Xiaowei.
[1The code is available at https://github.com/](https://github.com/zwx980624/mwp-cl)
[zwx980624/mwp-cl.](https://github.com/zwx980624/mwp-cl)
-----
ous methods overlook the outlining and distinction
of MWP patterns.
In this work, we first investigate how a neural network understands MWP patterns only from semantics. We adopt the widely used encoder-decoder
model structure (Cho et al., 2014). BERT (Devlin et al., 2019) is employed as the semantic encoder, and a tree decoder (Xie and Sun, 2019) is
adopted to generate equations. We probe the problem representations in BERT. The visualization by
T-SNE (van der Maaten and Hinton, 2008) in Figure 1 shows that, through the semantic encoder,
most representations of problems with the same
prototype equation are pulled closer, even if their
narratives are semantically different. We also analyze the representations in different BERT layers,
and the results show the lexical semantics mainly
affects the problem-solving in lower layers. Besides, for each prototype equation, those problem
representations far away from its center representation tend to produce incorrect solutions.
Inspired by it, we propose a contrastive learning
approach that seeks similar prototypes to support
model to better understand patterns and perceive
the divergence of patterns. When collecting contrastive examples, we follow Xie and Sun (2019) to
convert the prototype equation to a tree. Given an
equation tree, the positive examples are retrieved if
their trees or subtrees have the same structure, and
the negative examples are collected from the rest
in terms of the operator types and the size of the
tree. The solving model is first jointly optimized
by an equation generation loss and a contrastive
learning loss on the collected examples, and then,
is further trained on the original dataset. While
the generation loss empowers the model to memorize procedures from the semantics, the contrastive
learning loss brings similar patterns closer and disperses the different patterns apart.
We conduct experiments on the Chinese dataset
Math23k (Wang et al., 2017) and the English
dataset MathQA (Amini et al., 2019) in monolingual and multilingual settings. To support constructing multilingual contrastive examples, we follow Tan et al. (2021) to adapt MathQA as the counterpart of Math23k. Experimental results show
that our method achieves consistent gains in monolingual and multilingual settings. In particular,
our method allows the model to improve the performance in one language using data in another
language, which suggests that MWP patterns are
language-independent. Furthermore, we verify that,
through our contrastive learning, the representations that previously generate wrong solutions get
closer to their centers, and several problems are
solved well.
To summarize, the contributions of this paper
include: i) An analysis of the MWP model showing that the semantic encoder understands lexical
semantics in lower layers and gathers the prototype
equations in higher layers. ii) A contrastive learning approach helping the model to better understand MWP patterns and perceive the divergence
of patterns. iii) Applications in the multilingual
setting suggesting that we can further improve
the model performance using data in different languages.
**2** **Related Work**
**2.1** **Math Word Problem Solving**
Given a natural language narrative with a mathematical question, the task is to generate a solution
equation to answer the question. The methods can
be divided into four categories: rule-based methods (Fletcher, 1985; Bakman, 2007), statistical machine learning methods (Kushman et al., 2014; Hosseini et al., 2014), semantic parsing methods (Shi
et al., 2015; Koncel-Kedziorski et al., 2015) and
deep learning methods (Wang et al., 2017; Huang
et al., 2018a,b; Xie and Sun, 2019; Zhang et al.,
2020).
Deep learning methods have achieved significant improvement on MWP solving. Wang et al.
(2017) first attempt to use recurrent neural networks to build a seq2seq solving model. Xie and
Sun (2019) propose a tree-structured decoder to
generate an equation tree. Syntactically correct
equations can be generated through traversing the
equation tree. Zhang et al. (2020) apply graph
convolutional networks to extract relationships of
quantities in math problems. Recently, unsupervised pretraining of language models (Devlin et al.,
2019; Yang et al., 2019a) has provided informative contextual representations for text understanding, and fine-tuning techniques (Cui et al., 2019;
Li et al., 2021) have brought further performance
gains. Several works (Kim et al., 2020; Tan et al.,
2021; Cobbe et al., 2021) based on pretrained language models enhance the ability of problem understanding.
-----
Epoch 1 Epoch 10 Epoch 20 Epoch 43
Layer 2 Layer 6 Layer 9 Layer 12
n1 + n2 n1 − n2 n1 * n2 n1 / n2 (n1 + n2) * n3 (n1 + n2) / n3
Figure 2: The T-SNE visualization of problem representations in different epochs and different layers. Different
colors represent different prototype equations. The model achieves the highest accuracy at the training epoch 43.
**3** **Semantic Encoder Gathers Prototypes**
In this section, we explore how a neural network
understands patterns from semantics. We adopt the
encoder-decoder model structure to solve problems,
and perform analyses on the problem representations. The observation is that the semantic encoder
understands lexical semantics at lower layers and
gathers the prototype equations at higher layers.
**3.1** **Experimental Setup**
**3.1.1** **Datasets**
We perform analyses on two widely used datasets
Math23k (Wang et al., 2017) and MathQA (Amini
et al., 2019). The Math23k dataset is composed
of 23k MWPs in elementary education, and the
MathQA has 37k MWPs with multiple choices and
equations.
**3.1.2** **Model Architecture**
**Semantic Encoder** The pre-trained language
model BERT (Devlin et al., 2019) is employed
as the semantic encoder. The unsupervised pretraining on large corpora renders the model to learn
linguistic knowledge, which provides rich textual
representations.
**2.2** **Contrastive Learning**
Contrastive learning is a method of representation
learning, which is first designed by Hadsell et al.
(2006). By pulling semantically similar embeddings together and pushing semantic different ones
apart, contrastive learning can provide more effective representations. In NLP, similar approaches
have been explored in many fields. Bose et al.
(2018) develop a sampler to find harder negative
examples, which forces the model to learn better
word and graph embeddings. Yang et al. (2019b)
use contrastive learning to reduce word omission
errors in neural machine translation. Clark et al.
(2020) train a discriminative model on contrastive
examples to obtain more informative language representations. Gao et al. (2021) advance the performance of sentence embeddings by using contrastive
learning in supervised and unsupervised settings.
Yu et al. (2021) develop a contrastive self-training
to help language model fine-tuning and label denoising in weak supervision.
To the best of our knowledge, this is the first
work to adopt contrastive learning to MWP solving.
With the supervision of contrastive learning, we
seek similar MWP patterns to pull them closer, and
collect confusing patterns to push them apart.
-----
**0.8**
**0.8**
**0.6**
**0.8**
**0.6**
**0.4**
**Cosine similarity**
**0.2**
**0.4**
**0.2**
|1 0.8 0.6 0.4 0.2|SimilarS simemilaanr tsiecsmantics Same PSaarmadei gPmaradigm|
|---|---|
|0.8 prediction 0.6 0.4 correct 0.2 of ate|Col2|
|---|---|
**0** **1** **02** **13** **24** **35** **46** **57** **68** **9 10 11 127** **8** **9 10 11 12**
**Layer indexLayer index**
Figure 3: Similarities of problem representations in different BERT layers. The blue polyline corresponds to
the semantically similar problems. The red polyline
corresponds to problems with same prototype equation.1
**10**
**108**
**Interval indexInterval index**
Figure 4: Model performance in each distance interval.
The interval index x indicates the cosine distances are
in the interval [0.1 × (x − 1), 0.1 × x). The dotted line
is computed by polynomial least squares fitting.
**Train set** **Test set**
**Train set** **Test set**
**index**
**0.6**
**0**
**1** **mBERT-TD w/o CL**
**mBERT-TD w/o CL**
**0.8** **mBERT-TD w CLmBERT-TD w CL**
**Equation Decoder0.6** A tree decoder (Xie and Sun
) is adopted to generate solution equations. We
**0.4**
use the BERT-encoded representation of [CLS]
token to initialize the root node when decoding. Re-0.2
cursively, the decoder generates the embedding of[0.5,0.6)0 **[0.6,0.7)** **[0.7,0.8)** **[0.8,0.9)** **[0.9,1.0)**
**[0.5,0.6)** **[0.6,0.7)** **[0.7,0.8)** **[0.8,0.9)** **[0.9,1.0)**
each node, and predicts the probabilities of numberIntervals of cosine distance
and operator candidates.Intervals of cosine distance
For brevity, we denote our model as BERT-TD.
The model takes the textual problem description
as the input and is optimized by minimizing the
negative log-likelihoods of node probabilities for
predicting the ground-truth equation tree.
**189**
**3.3** **Semantics and Prototype Equation10870** **10870**
**189**
**mBERT**
**w CL**
**65**
From the visualizations, we can not see how seman-2504
tics affects problem-solving. To this end, we collect
**65**
20 problem pairs with similar lexical semantics but
**2504**
exactly different prototypes, and 20 problem pairsCalinski-Harabasz
**Calinski-Harabasz**
with the same prototype but in different themes or
**mBERT …** **mBERT …**
scenarios. Not like taking themBERT w/o…mBERT **mBERT [CLS]mBERT w/o… representa-mBERT**
tion in Section 3.2w/o CL, we average the representationsw CL **w/o CL**
over all words in one problem. The cosine similarities of the averaged representations are calculated
for these problem pairs in different BERT layers.
The averaged similarities are shown in Figure 3. The semantically similar problems obtain
higher values in lower layers but the similarity
gradually decreases as the model deepens. Meanwhile, with the increase of the model depth, although in different semantics, the problems with
the same prototype equation achieve higher similarity. This demonstrates that lexical semantics
affects problem-solving at lower layers, and the
model further extracts prototypes from the semantics at higher layers.
**3.4** **Clustering and Solving Ability**
**3.2** **Shifts of Problem Representation**
To explore how the neural model learns MWP
patterns during training, we first extract BERTencoded representations of [CLS] token in different epochs and different layers. Then we perform the T-SNE visualization (van der Maaten and
Hinton, 2008) shown in Figure 2. The representations of different epochs are picked from the top
layer of BERT, and the representations of different
layers are picked from the best trained model. It
can be seen that, as the training goes on, the representations with the same prototype equation are
gathering. Besides, with the increase of the depth
of encoder layers, the gathering tendency becomes
more and more obvious.
Intuitively, the prototype equation exhibits the essential relationship between the quantities in MWP.
These results also verify that the patterns learned
by the neural model are directly associated with the
prototype equations.
With the above observation, we attempt to discover
the relationship between prototype clustering and
model performance. For each prototype equation,
we first average the representations of the corresponding problems to obtain its center point, and
then calculate the cosine distances between representations and its centers. A higher cosine distance
means the representation is closer to its center. We
split the cosine distance into several intervals and
compute the proportion of correct predictions for
each interval. The results are shown in Figure 4,
-----
Problem Prototype Equation
Larry starts with n1 cards. n2 are
eaten by a hippopotamus. How
many cards does Larry end with?
Frank made n1 dollars mowing lawns over the summer. If
he spent n2 dollars buying new
mower blades, how many n3 dollar games could he buy with the
money he had left?
_n1_ _n2_
_−_
(n1 _n2)/n3_
_−_
Table 1: Math word problems with the same quantitative relationship, i.e. the subtraction of numerics n1
and n2. The same prototype equations are in red color.
which suggests that the representations apart from
centers tend to produce wrong solutions.
**4** **Contrastive Learning**
In this section, we propose a contrastive learning
approach to help the model to perceive the divergence of MWP patterns. One drawback of existing
deep learning methods is that they overlook the outlining and distinction of MWP patterns. In contrast,
we seek similar prototype equations from various
problems to support model to understand patterns,
and collect easily confused patterns for model to
distinguish.
**4.1** **Data Collection**
We construct contrastive MWP triples (p, p[+], p[−])
containing a basic problem p and its positive and
negative examples {p[+], p[−]}.
**Positive Example** One direct way is to collect
problems whose prototype equation is completely
the same as the given problem p. However, the
same quantitative relationship in p also exists in
other problems. As shown in Table 1, for the second problem, before answering "How many games
could he buy?", another hidden question is "How
much money does he have?" whose solving equation is in the same prototype as the first problem.
Thus, we parse the prototype equation to tree structure by following Xie and Sun (2019) and consider
its sub-equations and subtrees. The problem p[+] is
taken as a positive example if its tree or subtree
has the same structure as p, such as "tree" and the
subtree of "tree[+]" in Figure 5.
**Negative Example** Bose et al. (2018) and Kalantidis et al. (2020) stress the importance of hard
negative examples in contrastive learning. If we
choose p[−] whose prototype is totally different from
|Col1|Col2|
|---|---|
Contrastive Problem Triple
ℒ𝑒𝑞 ℒ𝑒𝑞
𝑒𝑞[−] 𝑒𝑞 𝑒𝑞[+]
𝑡𝑟𝑒𝑒[−] 𝑡𝑟𝑒𝑒 𝑡𝑟𝑒𝑒[+]
× + × + 𝑒𝑒𝑒[+][−] ℒ𝑐𝑙
Tree Decoder subtree
Transformer
Block L
ℎ[−]
ℎ
ℎ[+]
...
Transformer
BERT Encoder Block 2
Transformer
Block 1
𝑝[−]
𝑝
𝑝[+]
Figure 5: An overview of our model.
_p, the original MWP model can easily distinguish_
them apart. Thus, in this work, the problem p[−] is
chosen as a hard negative example if its tree has the
same number of nodes but different operator node
types, such as "tree" and "tree[−]" in Figure 5. With
the training on hard negative examples, our model
can distinguish more subtle differences from various prototypes, and further grasp the inner pattern
of MWP.
**4.2** **Training Procedure**
We train the model on our contrastive problem
triples. As shown in Figure 5, the problems are
first encoded by BERT, and then the tree decoder
predicts the nodes of the equation tree.
During contrastive learning, the triple z =
(p, p[+], p[−]) are input to the model together to predict equation trees. Owing to the decoding manner
of Xie and Sun (2019), each node embedding represents the whole subtree information rooted in it.
The root node embeddings of the problem p and
its negative problem p[−] are picked for model to
distinguish. For its positive problem p[+], we find
the root node of the tree or subtree containing the
same structure as p, and pull its embedding closer
to that of p. For brevity, we denote these node embeddings as (e, e[+], e[−]) and the contrastive learning
loss becomes:
max(0, η + sim(e, e[−])
(1)
_−sim(e, e[+])),_
_Lcl =_
-----
as a counterpart of Math23k. Table 2 shows data
statistics. We report the accuracy of equation generation, namely as "Acc (eq)", that the problem is
solved well if the generated equation is equal to the
annotated formula. Considering several equations
satisfy the problem solution, we report the accuracy of answer value, namely as "Acc (ans)", to
see whether the value calculated by the generated
equation is equal to the target value.
**Implementation** We conduct our contrastive
learning in the monolingual and multilingual perspectives. In the monolingual setting, we construct
contrastive triples inside each dataset. In the multilingual setting, for each problem, the positive
and negative examples are from different sources.
Specifically, given a Chinese MWP in Math23k,
we collect positive examples from MathQA and
negative examples from Math23k. We adopt BERTbase (Devlin et al., 2019) as the problem encoder,
and follow Xie and Sun (2019) to build the treedecoder for solution generation. The hidden size
of the decoder is set to 768. Multilingual BERT
is used in the multilingual setting. The max input
length is set to 120 and the max output length is
set to 45. The loss margin η is set to 0.2. The
weight α of contrastive learning loss is set to 5. We
use AdamW (Loshchilov and Hutter, 2017) as our
optimizer, and perform grid search over the sets
of the learning rate as {5e-5, 1e-4} and the number of epochs as {30, 50} for each training stage.
The batch size is fixed to 16 to reduce the search
space, and we evaluate models for every epoch. We
use the dropout of 0.5 to prevent over-fitting and
perform a 3-beam search for better generations.
**5.2** **Baselines**
To verify the effectiveness of the proposed method,
we directly train our model on original datasets
without contrastive learning. In particular, the
multilingual baseline model is trained by mixing
Math23k and the adapted MathQA. In addition to
comparing with BERT, we also investigate the following approaches:
**GroupAttention[2]** (Li et al., 2019) develop an
attention mechanism to capture the quantity-related
and question-related information.
**GTS[3]** (Xie and Sun, 2019) generate equation
[2https://github.com/lijierui/](https://github.com/lijierui/group-attention)
[group-attention](https://github.com/lijierui/group-attention)
[3https://github.com/ShichaoSun/math_](https://github.com/ShichaoSun/math_seq2tree)
[seq2tree](https://github.com/ShichaoSun/math_seq2tree)
Dataset #Train #Dev #Test
Math23k 21,162 1,000 1,000
MathQA 29,837 4,475 2,985
MathQA[†] 23,703 3,540 2,410
Table 2: Statistics of the used datasets. The
"MathQA[†]" is the adapted MathQA dataset by following Tan et al. (2021).
where sim(·) is the cosine similarity, and the η is
a margin hyper-parameter.
The basics of a MWP solving model is to generate a solution equation to answer the math question.
We transform the target equation y into Polish notation as [y1, y2, ..., ym], where m is the equation
length. The tree decoder generates k-node token
_yk recursively, and the loss of generating equation_
is computed as:
(yk _p)_ (2)
_P_ _|_
_k=1_
Y
_−_ log P(y|p) (3)
_P(y|p) =_
_Leq =_
X
The final training objective is to minimize the
equation loss and contrastive loss as follows:
_L = Leq + α · Lcl_ (4)
where α is a hyper-parameter that represents the
importance of the contrastive learning.
However, not all problems have positive examples, such as those problems whose solution is one
value without any operator. With this in mind, we
develop the two-stage training strategy. The MWP
solver is first trained on our contrastive triples at
stage I, and then further trained on the original
dataset at stage II.
**5** **Experiments**
We evaluate our method on two widely used
datasets (Wang et al., 2017; Amini et al., 2019),
and demonstrate its effectiveness in monolingual
and multilingual settings.
**5.1** **Configuration**
**Data and Metrics** We collect problems from the
Chinese dataset Math23k (Wang et al., 2017) and
the English dataset MathQA (Amini et al., 2019).
As the formula formats of the two datasets are different, we follow Tan et al. (2021) to adapt MathQA
-----
Math23k MathQA[†]
Models Acc (eq) Acc (ans) Acc (eq) Acc (ans)
_Monolingual Setting_
GroupAttention (Li et al., 2019) - 69.5 63.3[∗] 70.4[∗]
GTS (Xie and Sun, 2019) - 75.6 68.9[∗] 71.3[∗]
Graph2Tree (Zhang et al., 2020) - 77.4 70.0[∗] 72.0[∗]
BERT-TD w/o CL 71.2 82.4 73.5 75.1
BERT-TD w CL **71.8** **83.2** **74.4** **76.3**
_Multilingual Setting_
mBERT-TD w/o CL 67.8 80.5 72.0 73.5
mBERT-TD w CL **70.9** **83.9** **74.2** **76.3**
Table 3: Main results on Math23k and the adapted MathQA test sets. "Acc(eq)" is the equation accuracy and
"Acc(ans)" is the answer accuracy. "∗" means our reimplementation based on released codes. "CL" is short for the
contrastive learning. "mBERT" is short for the multilingual BERT.
|Margin η 0.05 0.1 0.15 0.2 0.3|Col2|
|---|---|
|Math23k MathQA†|82.6 83.7 83.4 83.9 81.8 76.1 76.2 76.1 76.3 76.0|
Table 5: Results (answer accuracy) of using different
loss margin η in the multilingual setting.
|Col1|Acc (eq) Acc (ans)|
|---|---|
|Baseline|71.2 82.4|
|---|---|
|Stage I CL (α = 1) Stage II|70.1 81.5 70.5 83.0|
|---|---|
|Stage I CL (α = 5) Stage II|70.6 82.5 71.8 83.2|
|---|---|
Table 6: Results of using different loss weight α on
Math23k in the monolingual setting. Two-stage results
are reported.
**Multilingual Results** We adapt our model to the
multilingual setting by using multilingual BERT
and mixing two train sets. The contrastive learning
improves Math23k answer accuracy to 83.9 (3.4
absolute improvements) and MathQA answer accuracy to 76.3 (2.8 absolute improvements), which
are competitive with the monolingual results. This
demonstrates that the model can learn similar patterns in different languages.
**5.4** **Analysis**
We conduct ablations to better understand the
contributions of different components in our contrastive learning method.
|Col1|Pos. Neg.|Math23k MathQA†|
|---|---|---|
|Baseline|- -|80.5 73.5|
|CL|Same Ours Ours Rand Ours Ours|82.3 75.5 82.3 75.8 83.9 76.3|
|---|---|---|
Table 4: Results (answer accuracy) of different strategies collecting examples. "Pos." and "Neg." are corresponding to positive and negative examples. "Same"
indicates the positive examples have exactly the same
prototype equations. "Rand" indicates the negative examples are randomly selected from the rest.
trees through a tree structure decoder in a goaldriven mannner.
**Graph2Tree[4]** (Zhang et al., 2020) design a
graph-based encoder for representing the relationships and order information among the quantities.
**5.3** **Main Results**
Experimental results are shown in Table 3. Training the MWP solver with our proposed contrastive
learning outperforms the baseline models on all
datasets.
**Monolingual Results** Compared to previous
methods, the pretrained linguistic knowledge in
BERT can help the MWP solver improve performance greatly. With our proposed contrastive learning method, our model achieves consistent gains on
Math23k and the adapted MathQA. This suggests
that seeking patterns with supervision benefits the
model to solve MWPs.
[4https://github.com/2003pro/Graph2Tree](https://github.com/2003pro/Graph2Tree)
-----
**0.8**
**0.6**
**0.4**
**0.2**
**ntics**
**gm**
**11 12**
**9,1.0)**
|0.8 predictions 0.6 0.4 correct 0.2 of ate 0|Col2|Col3|
|---|---|---|
**mBERT-TD w/o CL**
**mBERT-TD w CL**
**0.8**
**0.6**
**0.4**
**0.2**
**Rate of correct predictions** **0**
**[0.5,0.6)** **[0.6,0.7)** **[0.7,0.8)** **[0.8,0.9)** **[0.9,1.0)**
**1** **2** **3** **4** **5** **6** **7** **8** **9** **10**
mBERT w/o CL mBERT w CL
**Interval index**
**Intervals of cosine distance**
Figure 8: Equation accuracy in each distance interval
with and without our contrastive learning.
Figure 6: T-SNE visualization of the problem representation with and without our contrastive learning.
**Train set** **Test set**
**Output (w/o CL): 100 / (3 / 2)**
**Output (w CL): 100 / (3 + 2)**
**Input:**
**189**
**Output (w CL):**
**Input:**
A boatman selling a boat along river flow. If he
sell boat in steal water at 3 m/sec and flow of river is 2
m/sec, how much time he will take to sell 100 m.
100 / (3 / 2)
100 / (3 + 2)
A pipe can fill the tank in 30 minutes and pipe b
can empty the tank in 90 minutes. How long it will take
to fill the tank if both pipes are operating together?
**Output (w/o CL): 1 / ((1 / 30) + (1 / 90))**
**Output (w CL): 1 / ((1 / 30) - (1 / 90))**
**Input: If 20 liters of chemical x are added to 80 liters**
of a mixture that is 25% chemical x and 75% chemical y, then what percentage of the resulting mixture is
chemical x?
**Output (w/o CL): 1 + ((25 / 100) * 5)**
**Output (w CL): 20 + ((25 / 100) * 80)**
Table 7: Examples of the problem input and equation
output of MWP solvers.
**10870**
**mBERT**
**w CL**
**65**
**mBERT**
**w/o CL**
**2504**
**mBERT**
**w/o CL**
**mBERT**
**w CL**
Figure 7: Calinski-Harabasz index on the train/test set
with and without our contrastive learning.
**5.4.1** **Effects of Data Collection**
The contrastive examples consist of positive examples with similar patterns and negative examples
with exactly different patterns. In this work, we
investigate different strategies of collecting positive and negative examples. As well as our strategy,
we attempt to collect MWPs containing the same
prototype equation to be the positive examples, and
randomly select negative examples from the rest.
Table 4 shows that our strategy achieves better
performance on all datasets. In addition to the
problems with the same prototype equations, our
collected examples include more problems having
the same equation subtree structures. It can be seen
that the model can benefit from these examples. For
the negative examples, we take the problems with
the same number of operators but different operator
types. If performing random selection, the model
performance drops, which suggests that our collected examples can support the model to disperse
the different patterns. No matter which strategy we
use, compared to the baseline without contrastive
learning, our method advances MWP solving and
gives one way to improve the performance by using
data in different languages.
**5.4.2** **Effects of Hyperparameters**
We train the "mBERT-TD" model with several loss
margins (0.05, 0.1, 0.15, 0.2 and 0.3) to disperse
the different patterns. As shown in Table 5, the
margin 0.2 can help the model achieve the best
performance but lower margins 0.1 and 0.15 also
perform well.
As introduced in Section 4.2, we train our model
in two stages and the loss weight α represents the
importance of the contrastive learning. Table 6
shows the results of using different weights in each
stage. It can be seen that the higher weight achieves
better performance, and at stage II, training on all
examples further improves the performance.
**5.4.3** **Visualization and Statistics**
We perform the T-SNE visualization shown in Figure 6. The problem representations with the same
prototype equation are more gathered through our
contrastive learning. To measure this variation,
we calculate the Calinski-Harabasz index (Cali´nski
and Harabasz, 1974). Figure 7 shows that our
-----
method supports the model to gain higher clustering scores.
The above results illustrate that, for each prototype equation, the representations are pulled closer
to its centers. We re-compute the proportion of
correct predictions as described in Section 3.4. The
results are shown in Figure 8. We observe the
accuracy increases in most intervals, which also
verifies the effectiveness of contrastive learning. In
particular, our model also performs well in lower
intervals such as [0.6,0.7) and [0.7,0.8), which indicates those problems a little far away from their
centers are not easily confused with other problems of different patterns, and our model disperses
different patterns apart indeed.
Besides, we show few examples in Table 7. It
can be seen that the contrastive learning method
helps the model capture the quantitative relationships exactly.
**6** **Conclusion**
In this paper, we find the neural network generates incorrect solutions due to the non-distinction
of MWP patterns. To this end, we propose a contrastive learning approach to support the model
to perceive divergence of patterns. We seek similar patterns in terms of the equation tree structure
and collect easily confused patterns for our model
to distinguish. Our method outperforms previous
baselines on Math23k and MathQA in monolingual
and multilingual settings.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. [MathQA: Towards interpretable](https://doi.org/10.18653/v1/N19-1245)
[math word problem solving with operation-based](https://doi.org/10.18653/v1/N19-1245)
[formalisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics.
Yefim Bakman. 2007. Robust understanding of
word problems with extraneous information. arXiv
_preprint math/0701393._
Avishek Joey Bose, Huan Ling, and Yanshuai Cao.
[2018. Adversarial contrastive estimation. In Pro-](https://doi.org/10.18653/v1/P18-1094)
_ceedings of the 56th Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 1: Long_
_Papers), pages 1021–1032, Melbourne, Australia._
Association for Computational Linguistics.
[T. Cali´nski and J Harabasz. 1974. A dendrite method](https://doi.org/10.1080/03610927408827101)
[for cluster analysis. Communications in Statistics,](https://doi.org/10.1080/03610927408827101)
3(1):1–27.
Kyunghyun Cho, Bart van Merrienboer, Çaglar
Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Hol[ger Schwenk, and Yoshua Bengio. 2014. Learning](https://doi.org/10.3115/v1/d14-1179)
[phrase representations using RNN encoder-decoder](https://doi.org/10.3115/v1/d14-1179)
[for statistical machine translation. In Proceedings of](https://doi.org/10.3115/v1/d14-1179)
_the 2014 Conference on Empirical Methods in Nat-_
_ural Language Processing, EMNLP 2014, October_
_25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a_
_Special Interest Group of the ACL, pages 1724–1734._
ACL.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and
Christopher D. Manning. 2020. [ELECTRA: pre-](http://arxiv.org/abs/2003.10555)
[training text encoders as discriminators rather than](http://arxiv.org/abs/2003.10555)
[generators. CoRR, abs/2003.10555.](http://arxiv.org/abs/2003.10555)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse,
and John Schulman. 2021. [Training verifiers to](http://arxiv.org/abs/2110.14168)
[solve math word problems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
[National Research Council. 1989. Everybody Counts:](https://doi.org/10.17226/1199)
_[A Report to the Nation on the Future of Mathematics](https://doi.org/10.17226/1199)_
_[Education. The National Academies Press, Wash-](https://doi.org/10.17226/1199)_
ington, DC.
Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei
Zhang. 2019. [Fine-tune BERT with sparse self-](https://doi.org/10.18653/v1/D19-1361)
[attention mechanism.](https://doi.org/10.18653/v1/D19-1361) In Proceedings of the
_2019 Conference on Empirical Methods in Natu-_
_ral Language Processing and the 9th International_
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 3548–3553, Hong Kong,_
China. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. [BERT: pre-training of](https://doi.org/10.18653/v1/n19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/n19-1423)
[standing.](https://doi.org/10.18653/v1/n19-1423) In Proceedings of the 2019 Conference
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, NAACL-HLT 2019, Minneapolis, MN,_
_USA, June 2-7, 2019, Volume 1 (Long and Short Pa-_
_pers), pages 4171–4186. Association for Computa-_
tional Linguistics.
Charles R Fletcher. 1985. Understanding and solving
arithmetic word problems: A computer simulation.
_Behavior Research Methods, Instruments, & Com-_
_puters, 17(5):565–571._
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006.
Dimensionality reduction by learning an invariant
mapping. In 2006 IEEE Computer Society Confer_ence on Computer Vision and Pattern Recognition_
_(CVPR’06), volume 2, pages 1735–1742. IEEE._
-----
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533._
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
[2018a. Neural math word problem solver with rein-](https://aclanthology.org/C18-1018)
[forcement learning. In Proceedings of the 27th Inter-](https://aclanthology.org/C18-1018)
_national Conference on Computational Linguistics,_
pages 213–223, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu
[Zhou, and Jian Yin. 2018b. Using intermediate rep-](https://doi.org/10.18653/v1/P18-1039)
[resentations to solve math word problems. In Pro-](https://doi.org/10.18653/v1/P18-1039)
_ceedings of the 56th Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 1: Long_
_Papers), pages 419–428, Melbourne, Australia. As-_
sociation for Computational Linguistics.
Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion,
Philippe Weinzaepfel, and Diane Larlus. 2020. Hard
negative mixing for contrastive learning. _arXiv_
_preprint arXiv:2010.01028._
Bugeun Kim, Kyung Seo Ki, Donggeon Lee, and Gahgene Gweon. 2020. Point to the expression: Solving algebraic word problems using the expressionpointer transformer model. In Proceedings of the
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 3768–3779._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. Transactions of the Association for Computa_tional Linguistics, 3:585–597._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281.
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
[Bing Tian Dai, and Dongxiang Zhang. 2019. Model-](https://doi.org/10.18653/v1/P19-1619)
[ing intra-relation in math word problems with differ-](https://doi.org/10.18653/v1/P19-1619)
[ent functional multi-head attentions. In Proceedings](https://doi.org/10.18653/v1/P19-1619)
_of the 57th Annual Meeting of the Association for_
_Computational Linguistics, pages 6162–6167, Flo-_
rence, Italy. Association for Computational Linguistics.
Zhongli Li, Qingyu Zhou, Chao Li, Ke Xu, and Yunbo
Cao. 2021. [Improving BERT with syntax-aware](https://doi.org/10.18653/v1/2021.findings-acl.57)
[local attention.](https://doi.org/10.18653/v1/2021.findings-acl.57) In Findings of the Association
_for Computational Linguistics: ACL-IJCNLP 2021,_
pages 645–653, Online. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. [Fixing](http://arxiv.org/abs/1711.05101)
[weight decay regularization in adam.](http://arxiv.org/abs/1711.05101) _CoRR,_
abs/1711.05101.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems?](https://doi.org/10.18653/v1/2021.naacl-main.168) In Proceedings of the 2021
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
A. Schoenfeld. 1992. Learning to think mathematically: Problem solving, metacognition, and sense
making in mathematics (reprint). _Journal of Edu-_
_cation, 196:1 – 38._
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Processing,_
pages 1132–1142.
Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing
[Jiang. 2021. Investigating math word problems us-](http://arxiv.org/abs/2105.08928)
[ing pretrained multilingual language models.](http://arxiv.org/abs/2105.08928)
Laurens van der Maaten and Geoffrey Hinton. 2008.
[Visualizing data using t-SNE. Journal of Machine](http://www.jmlr.org/papers/v9/vandermaaten08a.html)
_Learning Research, 9:2579–2605._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://doi.org/10.18653/v1/D17-1088)
_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854, Copenhagen, Denmark. Association for Computational Linguistics.
[Zhipeng Xie and Shichao Sun. 2019. A goal-driven](https://doi.org/10.24963/ijcai.2019/736)
[tree-structured neural model for math word prob-](https://doi.org/10.24963/ijcai.2019/736)
[lems.](https://doi.org/10.24963/ijcai.2019/736) In Proceedings of the Twenty-Eighth In_ternational Joint Conference on Artificial Intelli-_
_gence, IJCAI-19, pages 5299–5305. International_
Joint Conferences on Artificial Intelligence Organization.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le.
2019a. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint
_arXiv:1906.08237._
Zonghan Yang, Yong Cheng, Yang Liu, and Maosong
Sun. 2019b. Reducing word omission errors in neural machine translation: A contrastive learning approach. In Proceedings of the 57th Annual Meet_ing of the Association for Computational Linguistics,_
pages 6191–6196.
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo
Zhao, and Chao Zhang. 2021. [Fine-tuning pre-](https://doi.org/10.18653/v1/2021.naacl-main.84)
[trained language model with weak supervision: A](https://doi.org/10.18653/v1/2021.naacl-main.84)
[contrastive-regularized self-training approach.](https://doi.org/10.18653/v1/2021.naacl-main.84) In
_Proceedings of the 2021 Conference of the North_
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
pages 1063–1077, Online. Association for Computational Linguistics.
-----
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
[Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-to-](https://doi.org/10.18653/v1/2020.acl-main.362)
[tree learning for solving math word problems. In](https://doi.org/10.18653/v1/2020.acl-main.362)
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 3928–_
3937, Online. Association for Computational Linguistics.
-----
| [
"Zhongli, Li",
"Wenxuan, Zhang",
"Chao, Yan",
"Qingyu, Zhou",
"Chao, Li",
"Hongzhi, Liu",
"Yunbo, Cao"
] | 2022-03-10T00:00:00 | ACL 2022 Findings | false | 49 | 11 | null | http://arxiv.org/abs/2110.08464 | https://arxiv.org/abs/2110.08464 | https://www.semanticscholar.org/paper/39df1a17da84f02bfcb8751de8965b798653a5ee |
Baldur: Whole-Proof Generation and Repair with Large Language Models | Formally verifying software properties is a highly desirable but labor-intensive task. Recent work has developed methods to automate formal verification using proof assistants, such as Coq and Isabelle/HOL, e.g., by training a model to predict one proof step at a time, and using that model to search through the space of possible proofs. This paper introduces a new method to automate formal verification: We use large language models, trained on natural language text and code and fine-tuned on proofs, to generate whole proofs for theorems at once, rather than one step at a time. We combine this proof generation model with a fine-tuned repair model to repair generated proofs, further increasing proving power. As its main contributions, this paper demonstrates for the first time that: (1) Whole-proof generation using transformers is possible and is as effective as search-based techniques without requiring costly search. (2) Giving the learned model additional context, such as a prior failed proof attempt and the ensuing error message, results in proof repair and further improves automated proof generation. (3) We establish a new state of the art for fully automated proof synthesis. We reify our method in a prototype, Baldur, and evaluate it on a benchmark of 6,336 Isabelle/HOL theorems and their proofs. In addition to empirically showing the effectiveness of whole-proof generation, repair, and added context, we show that Baldur improves on the state-of-the-art tool, Thor, by automatically generating proofs for an additional 8.7% of the theorems. Together, Baldur and Thor can prove 65.7% of the theorems fully automatically. This paper paves the way for new research into using large language models for automating formal verification. | This paper introduces a new method to automate formal verification that uses large language models, trained on natural language and code and fine-tuned on proofs, to generate whole proofs at once and demonstrates that whole-proof generation using transformers is possible and is as effective but more efficient than search-based techniques. | ## Baldur: Whole-Proof Generation and Repair with Large Language Models
[Emily First](https://orcid.org/0000-0002-2896-2928)
University of Massachusetts
Amherst, MA, USA
[email protected]
Markus N. Rabe
Google, Inc.
CA, USA
[email protected]
Talia Ringer
University of Illinois Urbana-Champaign
IL, USA
[email protected]
**ABSTRACT**
Formally verifying software properties is a highly desirable but
labor-intensive task. Recent work has developed methods to automate formal verification using proof assistants, such as Coq and
Isabelle/HOL, e.g., by training a model to predict one proof step
at a time, and using that model to search through the space of
possible proofs. This paper introduces a new method to automate
formal verification: We use large language models, trained on natural language text and code and fine-tuned on proofs, to generate
whole proofs for theorems at once, rather than one step at a time.
We combine this proof generation model with a fine-tuned repair
model to repair generated proofs, further increasing proving power.
As its main contributions, this paper demonstrates for the first time
that: (1) Whole-proof generation using transformers is possible and
is as effective as search-based techniques without requiring costly
search. (2) Giving the learned model additional context, such as a
prior failed proof attempt and the ensuing error message, results
in proof repair and further improves automated proof generation.
(3) We establish a new state of the art for fully automated proof
synthesis. We reify our method in a prototype, Baldur, and evaluate
it on a benchmark of 6,336 Isabelle/HOL theorems and their proofs.
In addition to empirically showing the effectiveness of whole-proof
generation, repair, and added context, we show that Baldur improves on the state-of-the-art tool, Thor, by automatically generating proofs for an additional 8.7% of the theorems. Together, Baldur
and Thor can prove 65.7% of the theorems fully automatically. This
paper paves the way for new research into using large language
models for automating formal verification.
[Yuriy Brun](https://orcid.org/0000-0003-3027-7986)
University of Massachusetts
Amherst, MA, USA
[email protected]
As a result, recent research has focused on automated proof synthesis, which can lead to fully automating formal verification.
There are two promising approaches for automating proof synthesis. The first is to use hammers, such as Sledgehammer [63]
for the Isabelle proof assistant. Hammers iteratively apply known
mathematical facts using heuristics. The second is to use searchbased neural theorem provers, such as DeepHOL [4], GPT-f [65],
TacticZero [90], Lisa [33], Evariste [41], Diva [19], TacTok [21],
and ASTactic [95]. Given a partial proof and the current proof state
(which consists of the current goal to prove and the list of known
assumptions), these tools use neural networks to predict the next
individual proof step. They use the proof assistant to evaluate the
proposed next proof steps, which returns a new set of proof states.
Neural theorem provers rely on diverse neural architectures, such
as Wavenet [4, 83], graph neural networks [61], short long-term
memory models [19], and language models with the transformer
architecture [26, 65].
In this paper, we propose Baldur, a different, simpler approach to
proof synthesis. We show that using large language models (LLMs),
fine-tuned on proofs, can produce entire proofs for theorems. LLMs
are scaled-up transformer models trained on a large amount of text
data, including natural language and code, that have proven to be
remarkably effective across a wide variety of applications, including
question answering, and text and code generation [6, 13]. Here, we
show their remarkable effectiveness for whole proof generation.
The main contributions of our work are:
- We develop Baldur, a novel method that generates
whole formal proofs using LLMs, without using hammers or computationally expensive search.
- We define a proof repair task and demonstrate that
repairing incorrectly generated proofs with LLMs further improves Baldur’s proving power when the LLM
is given access to the proof assistant’s error messages.
- We demonstrate empirically on a large benchmark that
Baldur, when combined with prior techniques, significantly improves the state of the art for theorem proving.
We design Baldur to be able to work with any LLM internally,
but we evaluate our implementation using two versions of Minerva [47], one with 8 billion parameters and another with 62 billion
parameters. By contrast, existing tools that use (L)LMs for theorem
**1** **INTRODUCTION**
Formal software verification — proving software correctness and
other properties — is one of the most challenging tasks software
engineers can undertake. It is highly effective at producing high
quality software. For example, CompCert, a C compiler verified
using the Coq interactive theorem prover [80], was the only compiler on a list including the ubiquitous GCC and LLVM, in which a
comprehensive study found no bugs [96]. Similarly, the seL4 project
resulted in an highly reliable operating system microkernel [39].
However, the cost of manual formal verification — writing the
proofs — is often prohibitive. For example, the proof of the C compiler is more than three times as long as the compiler code itself [46].
-----
**2** **THE BALDUR APPROACH**
Prior approaches to proof synthesis employ a neural model to predict the next proof step given the current proof state. The proof step
predictions then guide a search strategy, such as best-first search
or depth-first search. Throughout the search, the proof assistant
needs to check each proof step prediction to determine whether it is
valid. This means that existing proof synthesis tools require a tight
interaction between the neural network and the proof assistant. As
we move to using LLMs, this results in complex systems, as LLMs
need to run on specialized hardware (GPUs or TPUs), while proof
assistants run on CPUs.
We explore a simpler, yet effective method: fine-tuning LLMs
to generate complete proofs. This simplification avoids the finegrained interaction between neural model and the proof assistant,
allowing us to run the jobs of generating proofs and checking
completely separately. Besides reducing complexity, this can also
improve efficiency, because (1) it enables us to use large batch
sizes, which can significantly improve hardware utilization during
inference (cf. [66]), and (2) when providing additional context to
the model, the context now does not have to be reprocessed for
each proof step, but only once per proof.
We fine-tune LLMs on proof data to generate entire proofs and
explore the impact of giving the LLMs additional information. Our
approach and implementation include the following:
- We fine-tune an LLM to generate an entire proof given only
the theorem statement. We call this model the proof generation
_model (Section 2.1)._
- We provide a model a proof attempt that did not check along
with the corresponding error message from the proof assistant
so that the model may attempt to find a better proof. We call
this model the proof repair model (Section 2.2).
- We provide text from the same theory file that the problem was
taken from. We add only the lines from the theory file that
immediately precede the theorem we want to prove. We call this
added information the theory file context and we add it to the
proof generation model (Section 2.3).
- The LLM that we fine-tune at the core of all of this is Minerva [47], which is pretrained on a mathematics corpus. We
describe our Baldur-specific implementation details for how we
use this model (Section 2.4).
These fine-tuned LLMs and their interaction with the Isabelle
proof assistant make up our tool Baldur. This section details the
Baldur approach, which includes creating training datasets and
leveraging LLMs to generate and repair proofs.
**2.1** **Proof Generation**
Existing proof generation methods using neural models generate
the proof one step at a time. In contrast, our approach generates
the entire proof, as illustrated with a single example in Figure 1.
We use only the theorem statement as input to our proof generation
_model. We then sample a proof attempt from this model and perform_
proof checking using Isabelle. If Isabelle accepts the proof attempt
without an error, then we have proven the theorem. Otherwise, we
can try sampling another proof attempt from the proof generation
model. Explicitly, the input and output of our proof generation
model is as follows:
proving, either predict individual proof steps [26, 32, 33], or rely on
few-shot prompting and require the existence of natural language
proofs as hints [34].
We evaluate Baldur on the PISA dataset [33] of Isabelle/HOL theorems and their proofs used in recent state-of-the-art Isabelle/HOL
proof synthesis evaluations [32, 33]. The dataset consists of 183K
theorems, of which we use 6,336 for measuring effectiveness. Our
evaluation answers the following research questions:
RQ1: How effective are LLMs at generating whole proofs?
**LLMs outperform small-model-driven search-based meth-**
**ods. Baldur (without repair) is able to generate whole proofs**
for 47.9% of the theorems completely automatically, whereas
search-based approaches prove 39.0% [32].
RQ2: Can LLMs be used to repair proofs?
**LLMs can repair proofs, including their own erroneous**
**proof attempts. Baldur proves an additional 1.5% of the**
theorems when given access to a previous erroneous proof
attempt and the error messages produced by the proof assistant, even when controlling for the computational cost
of the additional inference. The error message is crucial for
this improvement.
RQ3: Can LLMs benefit from using the context of the theorem?
**In-context learning is remarkably effective for LLM-**
**based theorem proving. With context, Baldur proves 47.5%**
of the theorems, but only 40.7% without context for the same
model size.
RQ4: Does the size of the LLM affect proof synthesis effectiveness?
**Larger LLMs do perform better, suggesting that our ap-**
proach will continue to improve with further developments
in LLM research.
RQ5: How do LLMs compare to other state-of-the-art proof generation methods?
**Baldur complements state-of-the-art approaches by prov-**
**ing theorems they do not. Together with Thor [32], a tool**
that combines a learned model, search, and a hammer, Baldur can prove 65.7% of the theorems, whereas Thor alone
proves 57.0%. These findings suggest that LLM- and searchbased methods’ ideas complement each other and can work
together to further improve the automation of formal verification. An ensemble of 10 different fine-tuned Baldur models
proves 58.0%.
By leveraging LLMs, Baldur simplifies the proof synthesis pipeline,
greatly reducing the complexity and cost of the fine-grained interaction between the prediction model and the proof assistant that
search-based methods require. This reduction enables us to leverage
the power of LLMs, which would be prohibitively computationally
expensive if synthesis required as many LLM queries as searchbased methods. Further, those calls would require re-encoding with
each step the additional information the LLM might need, whereas
our approach allows us to make a single call and process the context only once, sampling multiple proofs of multiple proof steps,
at once.[1] Overall, our study strongly suggest that LLMs are a very
promising direction of research for automating formal verification,
and identifies several new avenues for future explorations.
1Alternatively path advanced caching strategies in the prediction servers of large
language models could address this problem. This is beyond the scope of our work.
-----
If we were to derive a training example from this example, the
input would be theorem statement and the target would be this
human-written proof.
Our tool Baldur, using the proof generation model, is able to
generate the following correct proof for this statement.
by (induct A rule: infinite_finite_induct)
(simp_all add: assms)
Baldur recognizes that induction is necessary and applies a special induction rule called infinite_finite_induct, following the
same overarching approach as the human-written proof, but much
more succinctly. It is interesting to note that Sledgehammer, the
hammer for Isabelle, cannot prove this theorem by default, as it
requires induction.
_Training Data Creation. To train the proof generation model,_
we construct a new proof generation dataset. Existing datasets
for training models in neural theorem provers contain examples of
individual proof steps. Each training example includes, at minimum,
the proof state (the input) and the next proof step to apply (the
target). Given a dataset that contains individual proof steps, we
want to create a new dataset so that we can train models to predict
entire proofs at once. So we extract the proof steps of each theorem
from the dataset and concatenate them to reconstruct the original
proofs. We use this data to generate training examples for the
proof generation model, where the input consists of the theorem
statement and the target consists of the proof.
In particular, this means that we drop the proof states from the
dataset, which make up most of the text in the dataset. We argue
that for Isabelle proofs this is not necessarily a problem, as Isabelle
uses a declarative proof language that is designed to be humanreadable. This is in contrast to other proof assistants, such as Coq,
where the proofs are typically written in a procedural style that is
not easy to interpret for humans without using the proof assistant
to generate the intermediate proof states.
_Inference. We fine-tune an LLM on our data to predict the entire_
proof given only a theorem statement. To synthesize a proof using
the fine-tuned LLM, we provide a potentially unseen theorem statement and sample a fixed number of sequences (typically 16 or 64)
from the language model. We tune the sampling temperature from
a small set (between 0.0 and 1.4 in increments of 0.2), which is a
multiplicative factor on the log probabilities of the distribution of
tokens sampled in each step.
_Proof checking. After sampling proofs from the model, we check_
all of them with the proof assistant. This means that we first load
the context in which the theorem was originally proven and then
replace the original proof of the theorem with the one we sampled
from the model. If Isabelle accepts any of the sampled proofs, we
report the theorem as proven.
**2.2** **Proof Repair**
If a proof is not accepted, Isabelle returns an error message that is
intended to help humans with debugging their proof script. Existing
proof generation methods, however, have no way to leverage error
messages.
|REM> Theorem|m Statement <|
|---|---|
|||
|Proof Generation Model||
|||
_Input:_
```
<THEOREM> Theorem Statement <PROOF>
```
Proof Generation Model
Candidate Proof
Isabelle
(Proof Assistant)
_No error_ _Error_
Success Failure
**Figure 1: An example of using the proof generation model**
**to generate a proof.**
- Input: theorem statement.
- Output: candidate proof.
_Example. To illustrate the power of the proof generation ap-_
proach in our tool Baldur, we first consider, as an example, the
theorem fun_sum_commute.
**lemma fun_sum_commute:**
**assumes "f 0 = 0" and "∧x y. f (x + y) = f x + f y"**
**shows "f (sum g A) = (Σa∈A. f (g a))"**
The theorem states that for an additive function 𝑓 where 𝑓 (0) =
0, and an arbitrary function 𝑔, applying 𝑓 on the sum of the set
resulting from applying 𝑔 on each element in a given set is equal
to the sum of applying 𝑔 followed by 𝑓 to each element in that
set. This theorem is from a project in the Archive of Formal Proofs
called Polynomials, specifically in the file Utils.thy.
The human-written proof distinguishes between two cases: when
the set is finite and when it is not. Induction is used for the finite
set case.
**proof (cases "finite A")**
case True
thus ?thesis
**proof (induct A)**
case empty
thus ?case by (simp add: assms(1))
next
case step: (insert a A)
show ?case by (simp add:
sum.insert[OF step(1) step(2)]
assms(2)
step(3))
qed
next
case False
thus ?thesis by (simp add: assms(1))
qed
-----
_Input:_
```
<THEOREM> Theorem Statement
<INCORRECT_PROOF> Incorrect Proof
|OR> Error M|Message <PR|
|---|---|
|||
|Proof Repair Model||
|||
```
Candidate Proof
Success
Failure
Isabelle
(Proof Assistant)
_No error_ _Error_
**Figure 2: An example of using the proof repair model to re-**
**pair an incorrect proof.**
Building off our proof generation approach, we explore the use
of error messages to improve neural theorem provers by developing
a proof repair approach. Starting with just the problem statement,
we apply the proof generation model from Section 2.1 to sample
a proof attempt. If Isabelle accepts the proof attempt, we can stop.
Otherwise, we use the error message returned by the proof checker
and the incorrect proof attempt to construct an example to serve as
input to the proof repair model. As depicted in Figure 2, we use the
theorem statement, the incorrect proof, and the error message as
input to our proof repair model. We then sample the proof attempt
from this model, and perform proof checking in the same way as
the proof generation approach. Explicitly, the input and output of
our proof repair approach pipeline are as follows:
- Input: theorem statement, incorrect proof, error message.
- Output: candidate proof.
_Example. Starting from the theorem fun_sum_commute, we il-_
lustrate an example of the proof repair approach in our tool Baldur.
We apply the proof generation model to obtain more proof attempts.
The following is a proof attempt generated by Baldur, which fails
in the proof checker.
**proof (induct A)**
case (insert x A)
thus ?case
by (simp add: assms(2))
qed simp
Baldur attempts to apply an induction, but fails to first break
down the proof into two cases (finite vs. infinite set). Isabelle returns
the following error message:
Step error: Unable to figure out induct rule
At command "proof" (line 1)
The error message details where the error occurs (line 1) and
that the issue is regarding the induct rule. With these strings as
input, using the proof repair model, Baldur can attempt to generate
_Proof Generation Model_
_Training Example_
_Input:_ _Output:_
`<THEOREM> Theorem Statement <PROOF>` Ground Truth Proof
Proof
Generation
Model
Candidate Proof
Isabelle No
(Proof Assistant) _No error_ example
_Error_
Error Message
_Input:_ _Output:_
`<THEOREM> Theorem Statement` Ground Truth Proof
```
<INCORRECT_PROOF> Candidate Proof
<ERROR> Error Message <PROOF>
```
_Proof Repair Model_
_Training Example_
**Figure 3: Training data creation for the proof repair model.**
a correct proof for this statement. If we want to instead derive a
proof repair training example from these strings, we concatenate the
theorem statement, the failed proof attempt, and the error message
to serve as the input, and we use the correct human-written proof
(recall from previous section) as the target.
_Training Data Creation. To train the proof repair model, we need_
to generate a proof repair training set. Figure 3 details the training
data creation process. Using the proof generation model, we sample
one proof with temperature 0 for each problem in the original
training set used to train the proof generation model. Using the
proof assistant, we record all failed proofs and their error messages.
We then proceed to construct the new proof repair training set.
For each original training example, we concatenate the theorem
statement, the (incorrect) candidate proof generated by the proof
generation model, and the corresponding error message to obtain
the input sequence of the new training example. For the target
sequence, we reuse the ground truth proof from the original training
example. We fine-tune the pretrained LLM on the proof repair
training set to obtain the proof repair model.
**2.3** **Adding Context**
LLMs possess impressive in-context learning abilities (cf. [6, 13])
that allow them to flexibly use information that is provided as part of
the input sequence (and, in fact, as part of their own output [60, 87]).
In order to explore to what extent in-context learning can help in the
theorem proving domain, we extend their inputs with potentially
helpful context. Adding to our proof generation approach, we use
the theory file contexts (the lines preceding the theorem statement)
-----
as input to our proof generation model with context. Explicitly, the
input and output of our proof generation model with context is as
follows:
- Input: theory file context and theorem statement.
- Output: candidate proof.
_Example. Continuing the example, the theory file context di-_
rectly preceding fun_sum_commute is the following theorem statement and its associated proof.
**lemma additive_implies_homogenous:**
**assumes "∧x y. f (x + y) = f x +**
((f (y::'a::monoid_add))::'b::cancel_comm_monoid_add)"
**shows "f 0 = 0"**
**proof -**
have "f (0 + 0) = f 0 + f 0" by (rule assms)
hence "f 0 = f 0 + f 0" by simp
thus "f 0 = 0" by simp
qed
The proof generation model with context in Baldur can leverage
this additional information. Strings that appear in the theorem
statement for fun_sum_commute, such as "f 0 = 0", appear again
in this context, and so the additional information surrounding them
could help the model make better predictions.
_Training Data Creation. We add the lines of the theory file that_
precede the theorem statement to serve as additional context. This
means that context can include statements, such as the previous
theorems, definitions, proofs, and even natural language comments.
To make use of the available input length of LLMs, we first add up to
50 preceding statements from the same theory file. During training,
we first tokenize all these statements, and then we truncate the left
of the sequence to fit the input length.
_Premise Selection. Many proofs make frequent use of definitions_
and previously proven statements, also known as premises. Some
neural theorem provers, such as HOList [4], focus entirely on the
problem of selecting the right set of premises, which has been
shown to be quite successful in theorem proving.
Premise selection is clearly similar to the addition of context in
some aspects, but we want to emphasize some key differences: (1)
Adding context is an extremely simple technique that only requires
rudimentary text processing, (2) by adding the preceding lines of
the theory file, the model can only observe a small fraction of the
available premises, (3) most of the added context consists of proofs.
**2.4** **Large Language Model**
We use Minerva [47], a large language model pretrained on a mathematics corpus based on the PaLM [13] large language model. Specifically, we use the 8 billion parameter model and the 62 billion
parameter model. The Minerva architecture follows the original
Transformer architecture [84], but has some noteworthy differences.
It is a decoder-only transformer with maximum sequence length of
2,048 tokens. The model uses
- rotary position encodings [78] instead of sinusoidal absolute
position embeddings,
- parallel layers, which compute the feed forward layer and
the attention layer in parallel and add up their results instead
of computing them in sequence, and
- multi-query attention, which uses a single key-value pair
per token per layer for faster decoding [76].
As this model is not a contribution of this paper, we refer the
reader to prior work for lower-level details on the Minerva architecture [13].
_Baldur-specific implementation details. The proof generation task_
naturally consists of an input, which is the theorem statement
(potentially augmented with additional information), and the output
(target), which is the proof for the theorem. To work with the
decoder-only model, we concatenate the inputs and targets, but the
loss is only computed over the target during fine-tuning. The inputs
use bidirectional attention while the targets use causal attention as
in PrefixLM [68].
As the transformer has a maximum context length of 2048, we
pad the sequences with zeros if they are too short, and we need to
truncate them if they are too long. Inputs to the model are truncated
to the maximum input length by dropping tokens on the left. The
rationale for dropping tokens on the left is that the additional
context is given before the theorem statement, and can be truncated
more safely than the theorem statement itself. Similarly, targets (i.e.
the proof to generate) are truncated on the right to the maximum
target length.
We used a maximum input length of 1536 and a maximum target
length of 512 all experiments but the repair study, which used 1024
and 1024 instead. We use a drop-out rate of 0.1 for both generation
and repair models to address overfitting.
During sampling from the language model we restrict the choice
of the next token to the 40 tokens with the highest score, also called
top-K sampling [18]. We sample sequences with a maximal length
of 256 tokens. The model was trained to generate up to 512 tokens,
but since most successful proofs are relatively short, this limitation
has little impact on the proof rate while saving some compute.
We use a batch size of 32, and fine-tune for up to 100,000 steps,
but we observed that the model begins to overfit to the training set
after 50,000 to 70,000 steps. For inference, we selected checkpoints
from just before the model started to overfit.
**3** **EVALUATION**
In this section we present several experiments and discuss the
following research questions:
RQ1: How effective are LLMs at generating whole proofs?
RQ2: Can LLMs be used to repair proofs?
RQ3: Can LLMs benefit from using the context of the theorem?
RQ4: Does the size of the LLM affect proof synthesis effectiveness?
RQ5: How do LLMs compare to other SOTA proof generation
methods?
To answer these questions, we trained several language models
using the approach from Section 2, and evaluated them on the
PISA benchmark (see Section 3.2). Our main results can be found
in Table 4 and in Figure 5.
-----
Model 16 samples 64 samples
Baldur 8b generate 34.8% 40.7%
Baldur 8b generate + repair 36.3%[∗] —
Baldur 8b w/ context 40.9% 47.5%
Baldur 62b w/ context 42.2% 47.9%
Baldur 8b w/ context ∪ Thor — 65.7%
**Table 4: Proof rate of different models.**
∗The repair approach uses half the number of samples, and
**then one repair attempt for each sample.**
**3.1** **Experimental Setup**
_Machine specification. For most of the training runs of the 8b_
model, we used 64 TPUv3 cores distributed across 8 hosts. For
training the 62b model, we used 256 TPUv3 cores distributed across
32 hosts. For most inference jobs, we used between 32 inference
servers using 8 TPUv3 cores each.
_Proof Checker. We use the PISA codebase [33] under a BSD 3-_
clause license, which allows us to interact with the Isabelle proof
assistant to check proofs. To run large jobs of the proof checker, we
package it in a Docker container and run it on GCP. We extended
the proof checker to discard any proofs that contain “sorry” or
“oops”, which are keywords that skip proofs, but otherwise pass the
proof checker. We apply a timeout of 10 seconds to each proof step
in the proof checker.
**3.2** **PISA Benchmark**
We derive our datasets from the PISA dataset [33], which includes
the Isabelle/HOL repository under a BSD-style license and the
Archive of Formal Proofs (AFP) from October 2021. The AFP is a
large collection of Isabelle/HOL proof developments. PISA includes
the core higher-order logic library of Isabelle, as well as a diverse
library of proofs formalised with Isabelle. This includes mathematics proofs and verification of software and hardware systems. The
PISA dataset comes with a 95%/1%/4% split of theorems for the
training/validation/test sets, which we follow in this work as well.
For the test set, prior work randomly chose 3,000 theorems from
the test set to report their results on. We report our results on
the complete test set. Some entries in the dataset are not proper
theorems (starting with the keyword “lemmas” instead of “lemma”),
which we filter out, as did prior work. This leaves us with a total of
6,336 theorems in our test set (originally 6,633 theorems).
**3.3** **RQ1: How effective are LLMs at generating**
**whole proofs?**
We aligned our methodology with the methodology described in
Thor [32] to enable a comparison between various methods. The
Thor paper includes informative baselines for the PISA benchmark,
including Sledgehammer, a method relying on heuristic search, and
a language model approach using search.
Sledgehammer and the search-based language model approach
achieve 25.6% and 39.0%, respectively. In comparison, our naive
proof generation approach with an 8b language model achieves a
proof rate of 34.8% with 16 samples and of 40.7% with 64 samples.
The comparison is even more favorable, if we consider the other
variants of Baldur, which achieve a proof rate of up to 47.9%.
We observe that the comparison depends on the computational
cost that we spend during inference. While comparing the cost
required for the two methods is involved, one measure we can use
is the amount of computational resources reserved during proof
generation. For a single proof, the language model approach using
search [32] requires a TPUv3 with 8 cores for 216 seconds,[2] while
our methodology also requires a TPUv3 with 8 cores for around 35
seconds to sample 64 proofs – a difference of factor 6. This argument
disregards the time spent on proof checking, which is intentional:
proof checking is done on CPUs, which is cheap compared to time
spent on TPUs. So, disentangling these two workloads can lead to
significant reductions in computational cost.
**RA1: These results demonstrate that LLMs can generate**
full proofs just as well as smaller language models augmented with a search strategy.
**3.4** **RQ2: Can LLMs be used to repair proofs?**
We trained models for proof generation and repair as detailed in
Section 2. If we sample from the proof generation model once with
temperature 0, collect the failed proofs, and then repair once with
temperature 0, we generate an additional 266 or 4.2% correct proofs.
However, in this comparison, the generate + repair approach uses
two samples, while the generate approach has only one sample. For
a fair comparison, we have to compare the repair approach to the
generate approach with additional inference attempts.
In Figure 5, we plot the proof success rate of the generate approach and the repair approach against the number of proof attempts. Note that the number of samples for the repair approach
does not perfectly align with the number of samples for the generate
approach. This is because the generate approach tends to produce
multiple copies of the same proofs, which we deduplicate before repair, and only generate one repair attempt per failed proof attempt.
For each of the number of samples of the generate approach, we
tune the temperature in the range of 0.0 to 1.4 in increments of 0.2,
and we always use temperature 0 for the repair approach.
We observe that the repair approach consistently outperforms
the plain proof generation model, which only uses the theorem
statement as input. However, this does not yet answer the question
of where those gains from. To shed some light on this question, we
trained another repair model that is given the same information,
except that it does not see the error message. Plotting the proof
success rate of this model in Figure 5 shows us that while it is able
to prove additional theorems, it does not surpass the performance
of the generate model when normalized for inference cost. This
suggests that the information in the error message is crucial for the
observed gains of the repair approach.
**RA2: LLMs can be used to repair proofs, including their**
own failed proof attempts, and this can boost overall proving power.
2Section 4.1 in [32] states that 1000 problems take around 60 TPU hours.
-----
0.4
0.35
0.3
0.25
0.2
10 15 20 25 30
|Generate Generate+Repair Generate+Repair (no err msg)|Col2|Col3|
|---|---|---|
||||
Generate
Generate+Repair
Generate+Repair (no err msg)
number of proof attempts
**Figure 5: Ratio of theorems proven vs inference cost.**
**3.5** **RQ3: Can LLMs benefit from using the**
**context of the theorem?**
In Table 4, we report the impact of adding theory file context to our
plain generation approach. At 64 samples, the proof rate increases
from 40.7% to 47.5% for the same model size. In Figure 6, we plot
the proof success rate of the generation model with and without
context against the number of proof attempts. We observe that the
proof generation models with context consistently outperform the
plain generation model.
To get a better understanding of where these gains are coming
from, we inspected 5 randomly sampled examples that the model
using context was able to solve, but the plain generation model
could not. Appendix A displays these examples and further details
the process we used to select them.
While the sample size is not large enough to make quantitative judgements, it appears that the model frequently makes use
of similar proofs in the context. We observe that for 3 of the 5
examples (see Appendices A.1, A.3, A.5) the model readily copies
**and adapts proofs that exist in its context. For another example**
(see Appendix A.2), the model made use of a premise that did not
occur in its context, which happened to also be used in the ground
truth proof, but with a different tactic. In the final example (see
Appendix A.4), the model found a simpler proof that did not occur
like this in the context. This suggests that the addition of context
does not play the same role as premise selection.
**RA3: LLMs can benefit from the context in which the**
theorem occurred in the theory file, both quantitatively
by increasing proving power, and qualitatively by copying
and adapting nearby proofs.
**3.6** **RQ4: Does the size of the LLM affect proof**
**synthesis effectiveness?**
We fine-tuned and evaluated the 62b version of Minerva on the
proof generation task with context. In Table 4, we report that for
16 samples, the large model can prove an additional 1.3% over the
8b model, resulting in a total proof rate of 42.2%. For 64 samples,
the large model can prove an additional 0.4% over the 8b model,
resulting in a total proof rate of 47.9%.
In Figure 6, we plot the proof success rate of the generation
model with context for the 8b model and the 62b model against the
number of proof attempts. We observe that the 62b proof generation
model with context outperforms the 8b proof generation model
with context. One caveat here is that we were not able to tune
hyperparameters as well due to the higher cost of these experiments,
so an optimally tuned 62b model may perform even better.
**RA4: Theorem proving performance improves with the**
scale of the language model.
**3.7** **RQ5: How do LLMs compare to other SOTA**
**proof generation methods?**
While comparisons across different neural theorem provers are
hard in general, we can compare to Thor [32], one of the strongest
approaches available. Thor also relies on language models, but uses
smaller models (700m parameters) and uses a different kind of proof
step as its prediction target. Instead of using the human ground
truth proofs, Thor generates a new training set and aims to solve
each proof step by generating a declarative statement, which is
then solved using Sledgehammer. That is, Thor disentangles the
planning stage of the next proof step, which is the specification of
the target state (using a “have” statement) and premise selection,
which is done by Sledgehammer. This enables Thor to solve a total
of 57% of the problems.
-----
0.5
0.45
0.4
0.35
0.3
0.25
10 20 30 40 50 60 70
Generate w/ context 62b t=0.8
Generate w/ context 8b t=0.8
Generate 8b t=0.8
number of proof attempts
**Figure 6: Ratio of theorems proven vs inference cost for models with different sizes and temperatures.**
AFP Topic Test set Baldur Thor
Computer Science 4,019 50.0% 57.5%
Logic 966 51.6% 53.6%
Mathematics 2,200 41.9% 50.5%
Tools 102 53.9% 51.8%
**Table 7: Proof rate by AFP topic classification, and the num-**
**ber of theorems in each category. While there are only 6336**
**theorems in total in the test set, the projects these theorems**
**appear in can fall into multiple topics.**
**RA5: Our findings suggest that LLM-based methods and**
search-based methods are complementary, and together
can lead to large gains in proving power.
**4** **DISCUSSION: WHAT’S NEXT?**
Our evaluation shows that LLMs can generate whole proofs at once,
and can repair their own mistakes, forming the basis for an effective
and simple approach to proof synthesis. Moving forward, we find
three directions particularly promising:
(1) integrating proof generation and proof repair models into a
new learnable proof search strategy,
(2) investigating alternative data splits corresponding to different goals, and
(3) evaluating these techniques across different proof assis**tants.**
_Learnable Proof Search. While our generate + repair approach_
to proof synthesis lets us avoid costly proof search procedures, it
also lends itself to a new proof search strategy. The search strategy
would work as follows:
(1) use the generation model to sample candidate proofs,
(2) use the repair model to attempt to repair those proofs, and
(3) continue to use the repair model to repair the repair-modelgenerated attempts from (2).
This paves the way for a learnable proof search strategy.
We demonstrate a proof-of-concept of this new proof search
strategy. We sample once using the generation model, repair the
generated sample using the repair model, and repair the repair
model’s attempt using the repair model. When using both models,
we sample with temperature 0. So the inference cost in this setup
is 3 (1 for the first generation, 1 for the first repair, and 1 for the
second repair).
The generate + repair approach with inference cost of 2 proves
24.9% of the test set theorems. With a second repair attempt, it
In contrast, our approach solves up to 47.9% of the problems.
While there is a significant gap, we argue that the means by which
the two techniques improve over plain language modeling are
largely orthogonal. In Table 4, we report a large gain from 57%
to 65.7% when we consider the union of Baldur and Thor, which
supports this hypothesis.
We compare the proof rate of Baldur and Thor on different types
of problems. The AFP is indexed by topic and there are four overarching topics: computer science, logic, mathematics, and tools.
The authors of individual proof developments self-identity which
topics their projects fall into. We use these provided topic labels
to determine the categories of problems from our test that Baldur
and Thor can most effectively solve. Table 7 shows the breakdown
of which theorems in the test set fall into which topics and Baldur’s and Thor’s proof success rates on these theorems. In terms of
relative performance, Baldur performs better than Thor on problems related to tools and similarly on problems related to logic. We
observe that Baldur’s performance on mathematics and computer
science is less than that of Thor’s performance. For mathematics
proofs, we hypothesize that premise selection may be particularly
useful, and Thor’s use of Sledgehammer is likely what gives it a leg
up on solving these mathematics problems.
-----
proves an additional 1.3%, for a total of 26.2%. The generation approach with inference cost of 3 proves 25.4%, which is 0.8% less
than the second repair attempt for the same inference cost.
To make this a more viable proof search strategy, future work
needs to focus on generating proof repair training data that better
mirrors the required changes for the subsequent repair attempts.
When proof checking, the resulting error message is for the first occurring error, typically from the first couple of lines of the predicted
proof. So the proof repair model will only learn to address these
types of errors. An alternative approach could be, for example, to
take the training examples from the proof generation model and
use the first few lines of the human-written ground truth proof as
a proof prefix. We could then concatenate this proof prefix to the
end of the input. Since it is a decoder-only model, we can simply
sample the model’s attempt at the rest of the proof. If the proof
prefix concatenated with the rest of the proof does not check, then
that can serve as a new training example for the proof repair model.
_Alternative Data Splits. The PISA benchmark that we use to_
evaluate our approach commits to a particular data split between
training data and testing data. It is interesting to note, however,
that different data splits may themselves correspond to different
goals, even fixing the same evaluation task and metric. Moving
forward, it may be useful to consider different kinds of data splits
corresponding to different goals, even fixing the same dataset and
benchmark suite. Here, we consider two different splits: theorem_wise and project-wise._
PISA uses a random theorem-wise split of the theorems appearing the AFP. This means that for any theorem in the test set, the
theorems and (the corresponding proofs) that appear before or after that theorem may be in the training set. This split is useful to
evaluate since a forward-looking goal of neural theorem prover
researchers is to integrate these tools directly into proof assistants,
where they could make use of the full project context. That project
context may include human-written proofs of nearby theorems
that look similar (or even identical) to one another — automatically
repurposing and adapting those proofs can be quite fruitful.
By contrast with PISA, CoqGym [95], the neural theorem prover
benchmark suite for the Coq proof assistant, uses a project-wise
split, where training and testing data come from entirely different
projects. This is useful when the goal is to help proof engineers
who start completely new projects and want an automated proof
synthesis tool to prove as much as it can. A tool that is trained and
evaluated in a setting where it expects that it has seen proofs in a
given proof development, as may happen with a theorem-wise split,
may not perform as well in this new setting. Explicit consideration
for the data split and the goals it achieves may help drive neural
theorem proving research even further.
_Different Proof Assistants. To make better sense of new strides_
in neural theorem proving, it makes sense to evaluate the same
techniques across many different proof assistants. But this remains
challenging. Consider once again the problem of data splits: since
prover developments that evaluate on CoqGym [20, 21] follow the
same project-wise split as CoqGym, it can be hard to make sense of
how those developments compare to those trained and evaluated
using theorem-wise data splits, like our own Baldur.
We used an established benchmark of Isabelle/HOL proofs to
fairly compare Baldur to prior work and to increase the chances
that our results generalize. However, we observed that search-based
proof-synthesis tools for other proof assistants tend to prove a
smaller fraction of theorems than we have found in our work. For
example, Diva [20], the current state of the art for the Coq proof
assistant, proves 33.8% of its benchmark automatically. This could
be a reflection of size and quality of the available training data or
the complexity of the available evaluation data (which, by necessity,
is different from what we use because it involves theorems and
proofs in different languages), or a more fundamental difference in
the complexity of synthesizing proofs in these respective languages.
Future work should allow for direct comparisons by porting the
developed techniques across proof assistants. Cross-proof-assistant
benchmark suites may help substantially with this, but still have
their limitations. For example, MiniF2F [100] implements the same
benchmark suite for Math Olympiad problems across many different proof assistants. But math problems are not evenly represented
across proof assistants, which draw different user communities
with different emphases. Fair comparisons between proof assistants
are hard, but we do believe they are necessary.
**5** **RELATED WORK**
Existing methods for automating formal theorem proving can be
classified into two categories, hammers and search-based methods.
Hammers, such as CoqHammer [16] and Sledgehammer [63], iteratively use a set of precomputed mathematical facts to attempt
to “hammer” out a proof. While hammers are powerful, they lack
the ability to employ certain tactics, such as induction, preventing
them from proving certain large classes of theorems. Search-based
methods use a prediction model that, given some information about
a partially written proof, the target theorem being proven, and the
current proof state, predicts a set of next likely proof steps. The
methods then use metaheuristic search [27] to attempt to synthesize
a proof. They iterate querying the prediction model for the likely
next steps and using the proof assistant to get feedback on those
steps and prune non-promising paths, generating a search tree of
possible proofs. The proof assistant also determines when the proof
is complete. The tools mostly differ in the prediction model they
use, which are typically learned automatically. For example, ASTactic uses only the proof state [95], TacTok uses the proof state and
the partially written proof script [21], Diva (which combines the
use of many models) also uses the proof term [19], and Passport
also uses identifier information [75]. Other search-based techniques
include Tactician [5], Proverbot9001 [74], and GamePad [30] for
Coq; TacticToe [22] for HOL4; and DeepHOL [4, 61] for HOL Light.
Prior work has found that hammers and search-based methods
are complementary, each often proving theorems the other cannot [19, 21, 95]. Thor [32] combines a search-based method with a
hammer, using both a prediction model and Sledgehammer in its
search. In contrast, our approach uses an LLM to generate an entire
proof at once, and then to one-shot repair it.
The most closely related work to ours is LISA [33], which finetunes a pretrained language model on a large Isabelle/HOL proof
corpus, and uses it inside of a search procedure to predict proof
steps. GPT-f [65] likewise combines a generative language model
-----
with proof search to target the Metamath proof language. A MonteCarlo tree search approach outperforms GPT-f in Lean [41].
TacticZero [90] learns not just tactics but also proof search strategies for end-to-end proof synthesis, rather than relying on a single
fixed proof search strategy like other neural theorem proving approaches. The approach works by way of deep reinforcement learning, and improves over the previous state of the art on a benchmark
for the HOL4 theorem prover.
A related problem to neural theorem proving is autoformal_ization: the automatic translation of natural language specifica-_
tions and proofs into formal, machine-checkable specifications and
proofs. LLMs have shown promise for autoformalization of specifications, and automatically generated proofs of the resulting autoformalized specifications have been used to improve a neural theorem
prover on a widely used benchmark suite in Isabelle/HOL [91].
ProofNet [3] introduces a dataset and benchmark suite for autoformalization in Lean, based on undergraduate mathematics, and
shows preliminary promising results autoformalizing proofs on
that benchmark using Codex [11] with few-short learning. Autoformalization of both theorems and proofs in Coq shows promise
on a small preliminary benchmark suite [15]. Autoformalization
for specification logics in verification is also promising [25].
The Draft, Sketch, and Prove method (DSP) [34] presents a hybrid
between theorem proving and autoformalization, which, similar to
our approach, makes use of LLMs for theorem proving. It provides
informal proofs as drafts for the LLM to translate into a formal
proof sketch, which is then proven via Sledgehammer. In contrast,
we use fine-tuning for LLMs, do not make use of Sledgehammer,
and do not rely on the availability of natural language proofs.
Pretrained language models can be used to answer naturallanguage mathematics questions [59]. Large language models, such
as Minerva [47] and PaLM [13], have been evaluated on natural language mathematics benchmarks, such as GSM8k [14] and
MATH [28]. The ProofNet [3] benchmark suite mentioned above
includes informal proofs alongside formal proofs as a benchmark.
We introduce the proof repair task, with error messages. This
is a new machine learning task for formal proofs. We show that
solving this task improves neural theorem proving performance.
Proof engineers perform proof repair constantly during formal
proof development [71]. Automating this task first arose with the
advent of symbolic tools for automatic proof repair in the Coq proof
assistant [69], and has since made its way into tools for other proof
systems [50]. Our work is among the first to explore proof repair
in a machine learning context, and the first we are aware of to use
error messages for a proof repair task, and to use repair to improve
performance of proof synthesis.
There are numerous other tasks that machine learning tools for
proofs consider that may either help users with proof development
directly, or improve neural theorem proving performance themselves. For example, PaMpeR [54] predicts proof methods alongside
explanations in Isabelle/HOL. ACL2(ml) [29] generates helper lemmas and suggests similar theorems in ACL2. Other popular tasks
leveraging machine learning include premise selection and datatype
alignment, and are described in more detail in QED at Large [70].
Our approach can help minimize human effort in formal verification by automatically synthesizing proofs for some theorems.
Other tools that assist humans writing formal verification proofs
can similarly save time, and can be complementary to our work
for theorems Baldur cannot prove fully automatically. iCoq [7, 8],
and its parallelized version PiCoq [62], find failing proof scripts in
evolving projects by prioritizing proof scripts affected by a revision.
iCoq tracks fine-grained dependencies between Coq definitions,
propositions, and proof scripts to narrow down the potentially affected proof scripts. QuickChick [42], a random testing tool for Coq,
searches for counterexamples to executable theorems, helping a
programmer to become more confident that a theorem is correct.
Roosterize [55, 57] can suggest names for lemmas, and language
models can also help automatically format proofs [56], both improving readability and maintainability. Mutation analysis can identify
weak specifications, when mutating definitions does not break their
proofs [9, 31]. The mutation operators could, hypothetically, be
applied in repair and in providing feedback for developers as to
why a proof has broken.
The automated program repair field studies the task of taking a
program with a bug, evidenced by one or more failing tests, and
automatically producing a modified version of the program that
passes all the tests [45]. Generate-and-validate repair techniques
use search-based techniques or predefined templates to generate
many syntactic candidate patches, validating them against the
tests (e.g., GenProg [44], Prophet [48], AE [88], HDRepair [43],
ErrDoc [82], JAID [10], Qlose [17], and Par [38], ssFix [92], CapGen [89], SimFix [36], Hercules [73], Recoder [101], among others).
Techniques such as DeepFix [24] and ELIXIR [72] use learned models to predict erroneous program locations, as well as the patches.
It is possible to learn how to repair errors together by learning
how to create errors, which can increase the amount of available
training data, but poses an additional challenge of learning to approximate making human-like errors [97]. Unfortunately, these
automated program repair techniques often overfit to the available tests and produce patches that, while passing all the tests, fail
to encode the developers’ intent [53, 58, 67, 77]. Improving the
quality of the resulting repairs can be done via improving fault
localization strategies [2, 35, 40, 49, 52, 79, 93], patch generation
algorithms (e.g., heuristic-based [36, 44, 48, 64, 82, 89], constraintbased [1, 23, 37, 51, 85], and learning-based [12, 24, 72]), and patch
validation methodologies [81, 86, 94, 98, 99]. By contrast, in Baldur’s
domain of theorem proving, it is impossible to produce a proof that
appears to prove the theorems, but actually fails to do so, because
the theorem prover acts as an absolute oracle for the correctness of
the proof. As a result, it may be more difficult to produce a proof in
the first place, but if techniques in this domain do produce proofs,
they are guaranteed to be correct.
**6** **CONTRIBUTIONS**
This paper is the first to fine-tune large language models to generate
entire proofs of theorems without the need for proof search or
hammers. We demonstrate that this approach is more effective
and more efficient than prior methods that use one-step-at-a-time
search-based generation, and that it is complementary to existing
search-based and hammer-based approaches: Together, our Baldur
and prior tools can fully automatically synthesize proofs for 65.7%
of the theorems in a large Isabelle/HOL benchmark, establishing
10
-----
a new state of the art. We further demonstrate that generate-andrepair improves proof synthesis when the language model is given
access to the error messages produced by erroneous proofs.
This work opens new avenues of research into (1) using LLMs
to automate theorem proving and simplify formal verification of
software properties, (2) repair approaches, both for proofs and, potentially, more traditional automated program repair tasks, and
(3) the use of context (e.g., failed synthesis attempts and error messages) in proof generation. Our very encouraging results suggest
a bright future for automated proof generation and repair using
LLMs.
**ACKNOWLEDGMENTS**
This work is supported by the Defense Advanced Research Projects
Agency under grant no. DARPA HR0011-22-9-0063, and by the
National Science Foundation under grant no. CCF-2210243.
**REFERENCES**
[1] Afsoon Afzal, Manish Motwani, Kathryn T. Stolee, Yuriy Brun, and Claire Le
Goues. 2021. SOSRepair: Expressive Semantic Search for Real-World Program
[Repair. TSE 47, 10 (October 2021), 2162–2181. https://doi.org/10.1109/TSE.2019.](https://doi.org/10.1109/TSE.2019.2944914)
[2944914](https://doi.org/10.1109/TSE.2019.2944914)
[2] Fatmah Yousef Assiri and James M Bieman. 2017. Fault Localization for Automated Program Repair: Effectiveness, Performance, Repair Correctness. Software
_[Quality Journal 25, 1 (2017), 171–199. https://doi.org/10.1007/s11219-016-9312-z](https://doi.org/10.1007/s11219-016-9312-z)_
[3] Zhangir Azerbayev, Bartosz Piotrowski, and Jeremy Avigad. 2022. ProofNet: A
benchmark for autoformalizing and formally proving undergraduate-level mathematics problems. In Workshop MATH-AI: Toward Human-Level Mathematical
_Reasoning. New Orleans, Louisiana, USA._
[4] Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart
Wilcox. 2019. HOList: An Environment for Machine Learning of Higher Order
Logic Theorem Proving. In Proceedings of the 36th International Conference
_on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA_
_(Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and_
[Ruslan Salakhutdinov (Eds.). PMLR, 454–463. http://proceedings.mlr.press/v97/](http://proceedings.mlr.press/v97/bansal19a.html)
[bansal19a.html](http://proceedings.mlr.press/v97/bansal19a.html)
[5] Lasse Blaauwbroek, Josef Urban, and Herman Geuvers. 2020. The Tactician.
In Intelligent Computer Mathematics, Christoph Benzmüller and Bruce Miller
(Eds.). Springer International Publishing, Cham, 271–277.
[6] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan,
Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan,
Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter,
Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya
Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners.
In NeurIPS.
[7] Ahmet Celik, Karl Palmskog, and Milos Gligoric. 2017. ICoq: Regression proof
selection for large-scale verification projects. In IEEE/ACM International Con_ference on Automated Software Engineering (ASE). Urbana-Champaign, IL, USA,_
[171–182. https://doi.org/10.1109/ASE.2017.8115630](https://doi.org/10.1109/ASE.2017.8115630)
[8] Ahmet Celik, Karl Palmskog, and Milos Gligoric. 2018. A Regression Proof
Selection Tool for Coq. In International Conference on Software Engineering
_Demonstrations Track (ICSE DEMO). Gothenburg, Sweden, 117–120._ [https:](https://doi.org/10.1145/3183440.3183493)
[//doi.org/10.1145/3183440.3183493](https://doi.org/10.1145/3183440.3183493)
[9] Ahmet Celik, Karl Palmskog, Marinela Parovic, Emilio Jesús Gallego Arias, and
Milos Gligoric. 2019. Mutation Analysis for Coq. In IEEE/ACM International
_Conference on Automated Software Engineering (ASE). San Diego, California,_
[539–551. https://doi.org/10.1109/ASE.2019.00057](https://doi.org/10.1109/ASE.2019.00057)
[10] Liushan Chen, Yu Pei, and Carlo A. Furia. 2017. Contract-based program repair
without the contracts. In IEEE/ACM International Conference on Automated
_Software Engineering (ASE). Urbana, IL, USA, 637–647._
[11] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de
Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy
Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder,
Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens
Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert,
Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss,
Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji,
Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan
Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew
Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew,
Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021.
Evaluating Large Language Models Trained on Code. CoRR abs/2107.03374
[(2021). arXiv:2107.03374 https://arxiv.org/abs/2107.03374](https://arxiv.org/abs/2107.03374)
[12] Zimin Chen, Steve James Kommrusch, Michele Tufano, Louis-Noël Pouchet,
Denys Poshyvanyk, and Martin Monperrus. 2019. Sequencer: Sequence-tosequence learning for end-to-end program repair. TSE 47, 9 (2019), 1943–1959.
[https://doi.org/10.1109/TSE.2019.2940179](https://doi.org/10.1109/TSE.2019.2940179)
[13] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav
Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua
Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar
Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke,
Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier
Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan
Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira,
Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang,
Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
PaLM: Scaling Language Modeling with Pathways. CoRR abs/2204.02311 (2022).
[https://doi.org/10.48550/arXiv.2204.02311 arXiv:2204.02311](https://doi.org/10.48550/arXiv.2204.02311)
[14] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman. 2021. Training Verifiers to
[Solve Math Word Problems. CoRR abs/2110.14168 (2021). arXiv:2110.14168](https://arxiv.org/abs/2110.14168)
[https://arxiv.org/abs/2110.14168](https://arxiv.org/abs/2110.14168)
[15] Garett Cunningham, Razvan C. Bunescu, and David Juedes. 2023. Towards
Autoformalization of Mathematics and Code Correctness: Experiments with
[Elementary Proofs. https://doi.org/10.48550/ARXIV.2301.02195](https://doi.org/10.48550/ARXIV.2301.02195)
[16] Łukasz Czajka and Cezary Kaliszyk. 2018. Hammer for Coq: Automation for
Dependent Type Theory. Journal of Automated Reasoning 61, 1-4 (2018), 423–453.
[https://doi.org/10.1007/s10817-018-9458-4](https://doi.org/10.1007/s10817-018-9458-4)
[17] Loris D’Antoni, Roopsha Samanta, and Rishabh Singh. 2016. Qlose: Program
Repair with Quantitative Objectives. In International Conference on Computer
_Aided Verification (CAV). Toronto, ON, Canada, 383–401._
[18] Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical Neural Story
Generation. In Proceedings of the 56th Annual Meeting of the Association for Com_putational Linguistics (Volume 1: Long Papers). Association for Computational_
[Linguistics, Melbourne, Australia, 889–898. https://doi.org/10.18653/v1/P18-](https://doi.org/10.18653/v1/P18-1082)
[1082](https://doi.org/10.18653/v1/P18-1082)
[19] Emily First and Yuriy Brun. 2022. Diversity-Driven Automated Formal Verification. In International Conference on Software Engineering (ICSE). Pittsburgh, PA,
[749–761. https://doi.org/10.1145/3510003.3510138](https://doi.org/10.1145/3510003.3510138)
[20] Emily First and Yuriy Brun. 2022. Diversity-Driven Automated Formal Verifica[tion. In ICSE (22–27). 749–761. https://doi.org/10.1145/3510003.3510138](https://doi.org/10.1145/3510003.3510138)
[21] Emily First, Yuriy Brun, and Arjun Guha. 2020. TacTok: Semantics-Aware
[Proof Synthesis. PACMPL OOPSLA 4 (November 2020), 231:1–231:31. https:](https://doi.org/10.1145/3428299)
[//doi.org/10.1145/3428299](https://doi.org/10.1145/3428299)
[22] Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael
Norrish. 2021. TacticToe: Learning to Prove with Tactics. J. Autom. Reason. 65,
[2 (feb 2021), 257–286. https://doi.org/10.1007/s10817-020-09580-x](https://doi.org/10.1007/s10817-020-09580-x)
[23] Sumit Gulwani, Ivan Radiček, and Florian Zuleger. 2018. Automated Clustering
and Program Repair for Introductory Programming Assignments. In PLDI. 465–
[480. https://doi.org/10.1145/3192366.3192387](https://doi.org/10.1145/3192366.3192387)
[24] Rahul Gupta, Soham Pal, Aditya Kanade, and Shirish K. Shevade. 2017. DeepFix:
Fixing Common C Language Errors by Deep Learning. In AAAI.
[25] Christopher Hahn, Frederik Schmitt, Julia J. Tillman, Niklas Metzger, Julian
Siber, and Bernd Finkbeiner. 2022. Formal Specifications from Natural Lan[guage. CoRR abs/2206.01962 (2022). https://doi.org/10.48550/arXiv.2206.01962](https://doi.org/10.48550/arXiv.2206.01962)
[arXiv:2206.01962](https://arxiv.org/abs/2206.01962)
[26] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas
Polu. 2022. Proof Artifact Co-Training for Theorem Proving with Language
Models. In The Tenth International Conference on Learning Representations, ICLR
_[2022, Virtual Event, April 25-29, 2022. OpenReview.net. https://openreview.net/](https://openreview.net/forum?id=rpxJc9j04U)_
[forum?id=rpxJc9j04U](https://openreview.net/forum?id=rpxJc9j04U)
[27] Mark Harman. 2007. The Current State and Future of Search Based Software
Engineering. In ACM/IEEE International Conference on Software Engineering
_[(ICSE). 342–357. https://doi.org/10.1109/FOSE.2007.29](https://doi.org/10.1109/FOSE.2007.29)_
[28] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric
Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring Mathematical Problem
[Solving With the MATH Dataset. CoRR abs/2103.03874 (2021). arXiv:2103.03874](https://arxiv.org/abs/2103.03874)
[https://arxiv.org/abs/2103.03874](https://arxiv.org/abs/2103.03874)
[29] Jónathan Heras and Ekaterina Komendantskaya. 2014. ACL2(ml): Machinelearning for ACL2. Electronic Proceedings in Theoretical Computer Science 152
[(04 2014). https://doi.org/10.4204/EPTCS.152.5](https://doi.org/10.4204/EPTCS.152.5)
11
-----
[30] Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. 2019.
GamePad: A Learning Environment for Theorem Proving. In 7th International
_Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May_
_[6-9, 2019. OpenReview.net. https://openreview.net/forum?id=r1xwKoR9Y7](https://openreview.net/forum?id=r1xwKoR9Y7)_
[31] Kush Jain, Karl Palmskog, Ahmet Celik, Emilio Jesús Gallego Arias, and Milos
Gligoric. 2020. MCoq: Mutation Analysis for Coq Verification Projects. In
_International Conference on Software Engineering Demonstrations Track (ICSE_
_[DEMO). Seoul, South Korea, 89–92. https://doi.org/10.1145/3377812.3382156](https://doi.org/10.1145/3377812.3382156)_
[32] Albert Jiang, Konrad Czechowski, Mateja Jamnik, Piotr Milos, Szymon
Tworkowski, Wenda Li, and Yuhuai Tony Wu. 2022. Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers. In Neural
_Information Processing Systems (NeurIPS)._
[33] Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. 2021. LISA:
Language models of ISAbelle proofs. In Conference on Artificial Intelligence and
_Theorem Proving (AITP. Aussois, France, 17.1–17.3._
[34] Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu,
Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. 2022.
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal
[Proofs. CoRR abs/2210.12283 (2022). https://doi.org/10.48550/arXiv.2210.12283](https://doi.org/10.48550/arXiv.2210.12283)
[arXiv:2210.12283](https://arxiv.org/abs/2210.12283)
[35] Jiajun Jiang, Yingfei Xiong, and Xin Xia. 2019. A manual inspection of Defects4J bugs and its implications for automatic program repair. Science China
_Information Sciences 62, 10 (2019), 200102._
[36] Jiajun Jiang, Yingfei Xiong, Hongyu Zhang, Qing Gao, and Xiangqun Chen.
2018. Shaping Program Repair Space with Existing Patches and Similar Code.
[In ISSTA. https://doi.org/10.1145/3213846.3213871](https://doi.org/10.1145/3213846.3213871)
[37] Yalin Ke, Kathryn T. Stolee, Claire Le Goues, and Yuriy Brun. 2015. Repairing
[Programs with Semantic Code Search. In ASE (9–13). 295–306. https://doi.org/](https://doi.org/10.1109/ASE.2015.60)
[10.1109/ASE.2015.60](https://doi.org/10.1109/ASE.2015.60)
[38] Dongsun Kim, Jaechang Nam, Jaewoo Song, and Sunghun Kim. 2013. Automatic patch generation learned from human-written patches. In ACM/IEEE
_International Conference on Software Engineering (ICSE). San Francisco, CA, USA,_
[802–811. http://dl.acm.org/citation.cfm?id=2486788.2486893](http://dl.acm.org/citation.cfm?id=2486788.2486893)
[39] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock,
Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael
Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. 2009. SeL4: Formal
Verification of an OS Kernel. In Proceedings of the ACM SIGOPS 22nd Symposium
_on Operating Systems Principles (Big Sky, Montana, USA) (SOSP ’09). Association_
[for Computing Machinery, New York, NY, USA, 207–220. https://doi.org/10.](https://doi.org/10.1145/1629575.1629596)
[1145/1629575.1629596](https://doi.org/10.1145/1629575.1629596)
[40] Anil Koyuncu, Kui Liu, Tegawendé F Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon. 2019. iFixR: Bug Report Driven Program
[Repair. In ESEC/FSE. 314–325. https://doi.org/10.1145/3338906.3338935](https://doi.org/10.1145/3338906.3338935)
[41] Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet,
Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. 2022.
HyperTree Proof Search for Neural Theorem Proving. CoRR abs/2205.11491
[(2022). https://doi.org/10.48550/arXiv.2205.11491 arXiv:2205.11491](https://doi.org/10.48550/arXiv.2205.11491)
[42] Leonidas Lampropoulos, Zoe Paraskevopoulou, and Benjamin C. Pierce. 2017.
Generating Good Generators for Inductive Relations. Proceedings of the ACM
_[on Programming Languages (PACMPL) 2, POPL (Dec. 2017), 45:1–45:30. https:](https://doi.org/10.1145/3158133)_
[//doi.org/10.1145/3158133](https://doi.org/10.1145/3158133)
[43] Xuan Bach D. Le, David Lo, and Claire Le Goues. 2016. History Driven Program
Repair. In Intl. Conf. on Software Analysis, Evolution, and Reengineering, Vol. 1.
[213–224. https://doi.org/10.1109/SANER.2016.76](https://doi.org/10.1109/SANER.2016.76)
[44] Claire Le Goues, ThanhVu Nguyen, Stephanie Forrest, and Westley Weimer. 2012.
GenProg: A Generic Method for Automatic Software Repair. IEEE Transactions
_on Software Engineering (TSE) 38 (2012), 54–72._ [https://doi.org/10.1109/TSE.](https://doi.org/10.1109/TSE.2011.104)
[2011.104](https://doi.org/10.1109/TSE.2011.104)
[45] Claire Le Goues, Michael Pradel, and Abhik Roychoudhury. 2019. Automated
Program Repair. CACM 62, 12 (Nov. 2019), 56–65. [https://doi.org/10.1145/](https://doi.org/10.1145/3318162)
[3318162](https://doi.org/10.1145/3318162)
[46] Xavier Leroy. 2009. Formal Verification of a Realistic Compiler. ACM 52, 7 (July
[2009), 107–115. https://doi.org/10.1145/1538788.1538814](https://doi.org/10.1145/1538788.1538814)
[47] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk
Michalewski, Vinay V. Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag,
Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant
Misra. 2022. Solving Quantitative Reasoning Problems with Language Models. CoRR abs/2206.14858 (2022). [https://doi.org/10.48550/arXiv.2206.14858](https://doi.org/10.48550/arXiv.2206.14858)
[arXiv:2206.14858](https://arxiv.org/abs/2206.14858)
[48] Fan Long and Martin Rinard. 2016. Automatic Patch Generation by Learning
Correct Code. In ACM SIGPLAN-SIGACT Symposium on Principles of Program_[ming Languages (POPL). St. Petersburg, FL, USA, 298–312. https://doi.org/10.](https://doi.org/10.1145/2837614.2837617)_
[1145/2837614.2837617](https://doi.org/10.1145/2837614.2837617)
[49] Yiling Lou, Ali Ghanbari, Xia Li, Lingming Zhang, Haotian Zhang, Dan Hao, and
Lu Zhang. 2020. Can Automated Program Repair Refine Fault Localization? A
[Unified Debugging Approach. In ISSTA. 75–87. https://doi.org/10.1145/3395363.](https://doi.org/10.1145/3395363.3397351)
[3397351](https://doi.org/10.1145/3395363.3397351)
[50] Paolo Masci and Aaron Dutle. 2022. Proof Mate: An Interactive Proof Helper
for PVS (Tool Paper). In NASA Formal Methods Symposium. Springer, 809–815.
[51] Sergey Mechtaev, Manh-Dung Nguyen, Yannic Noller, Lars Grunske, and Abhik
Roychoudhury. 2018. Semantic Program Repair Using a Reference Implementa[tion. In ICSE. 129–139. https://doi.org/10.1145/3180155.3180247](https://doi.org/10.1145/3180155.3180247)
[52] Manish Motwani and Yuriy Brun. 2023. Better Automatic Program Repair by
Using Bug Reports and Tests Together. In International Conference on Software
_Engineering (ICSE) (14–20). Melbourne, Australia._
[53] Manish Motwani, Mauricio Soto, Yuriy Brun, René Just, and Claire Le Goues.
2022. Quality of Automated Program Repair on Real-World Defects. TSE 48, 2
[(February 2022), 637–661. https://doi.org/10.1109/TSE.2020.2998785](https://doi.org/10.1109/TSE.2020.2998785)
[54] Yutaka Nagashima and Yilun He. 2018. PaMpeR: Proof Method Recommendation
System for Isabelle/HOL. In International Conference on Automated Software En_[gineering (ASE). Montpellier, France, 362–372. https://doi.org/10.1145/3238147.](https://doi.org/10.1145/3238147.3238210)_
[3238210](https://doi.org/10.1145/3238147.3238210)
[55] Pengyu Nie, Karl Palmskog, Junyi Jessy Li, and Milos Gligoric. 2020. Deep
Generation of Coq Lemma Names Using Elaborated Terms. In International
_Joint Conference on Automated Reasoning (IJCAR). Paris, France, 97–118._
[56] Pengyu Nie, Karl Palmskog, Junyi Jessy Li, and Milos Gligoric. 2020. Learning to
Format Coq Code Using Language Models. In The Coq Workshop. Aubervilliers,
France.
[57] Pengyu Nie, Karl Palmskog, Junyi Jessy Li, and Milos Gligoric. 2021. Roosterize:
Suggesting Lemma Names for Coq Verification Projects Using Deep Learning.
In International Conference on Software Engineering Demonstrations Track (ICSE
_[DEMO). Madrid, Spain, 21–24. https://doi.org/10.1109/ICSE-Companion52605.](https://doi.org/10.1109/ICSE-Companion52605.2021.00026)_
[2021.00026](https://doi.org/10.1109/ICSE-Companion52605.2021.00026)
[58] Kunihiro Noda, Yusuke Nemoto, Keisuke Hotta, Hideo Tanida, and Shinji
Kikuchi. 2020. Experience Report: How Effective is Automated Program Repair
for Industrial Software?. In SANER. 612–616.
[59] Kimia Noorbakhsh, Modar Sulaiman, Mahdi Sharifi, Kallol Roy, and Pooyan
Jamshidi. 2021. Pretrained Language Models are Symbolic Mathematics Solvers
[too! CoRR abs/2110.03501 (2021). https://arxiv.org/abs/2110.03501](https://arxiv.org/abs/2110.03501)
[60] Maxwell I. Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski,
Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, Charles Sutton, and Augustus Odena. 2021. Show Your Work:
Scratchpads for Intermediate Computation with Language Models. _CoRR_
[abs/2112.00114 (2021). arXiv:2112.00114 https://arxiv.org/abs/2112.00114](https://arxiv.org/abs/2112.00114)
[61] Aditya Paliwal, Sarah M. Loos, Markus N. Rabe, Kshitij Bansal, and Christian
Szegedy. 2020. Graph Representations for Higher-Order Logic and Theorem
Proving. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI
_2020, The Thirty-Second Innovative Applications of Artificial Intelligence Confer-_
_ence, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial_
_Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020. AAAI Press,_
2967–2974.
[62] Karl Palmskog, Ahmet Celik, and Milos Gligoric. 2018. PiCoq: Parallel Regression
Proving for Large-Scale Verification Projects. In ACM SIGSOFT International
_Symposium on Software Testing and Analysis (ISSTA). Amsterdam, Netherlands,_
[344–355. https://doi.org/10.1145/3213846.3213877](https://doi.org/10.1145/3213846.3213877)
[63] Larry Paulson and Tobias Nipkow. 2023. The Sledgehammer: Let Automatic
[Theorem Provers write your Isabelle scripts! https://isabelle.in.tum.de/website-](https://isabelle.in.tum.de/website-Isabelle2009-1/sledgehammer.html)
[Isabelle2009-1/sledgehammer.html.](https://isabelle.in.tum.de/website-Isabelle2009-1/sledgehammer.html)
[64] Justyna Petke and Aymeric Blot. 2018. Refining Fitness Functions in Test-Based
[Program Repair. In APR. 13–14. https://doi.org/10.1145/3387940.3392180](https://doi.org/10.1145/3387940.3392180)
[65] Stanislas Polu and Ilya Sutskever. 2020. Generative Language Modeling for
[Automated Theorem Proving. CoRR abs/2009.03393 (2020). arXiv:2009.03393](https://arxiv.org/abs/2009.03393)
[https://arxiv.org/abs/2009.03393](https://arxiv.org/abs/2009.03393)
[66] Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James
Bradbury, Anselm Levskaya, Jonathan Heek, Kefan Xiao, Shivani Agrawal,
and Jeff Dean. 2022. Efficiently Scaling Transformer Inference. [https:](https://doi.org/10.48550/ARXIV.2211.05102)
[//doi.org/10.48550/ARXIV.2211.05102](https://doi.org/10.48550/ARXIV.2211.05102)
[67] Zichao Qi, Fan Long, Sara Achour, and Martin Rinard. 2015. An Analysis of
Patch Plausibility and Correctness for Generate-and-validate Patch Generation
[Systems. In ISSTA. 24–36. https://doi.org/10.1145/2771783.2771791](https://doi.org/10.1145/2771783.2771791)
[68] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits
of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn.
_[Res. 21 (2020), 140:1–140:67. http://jmlr.org/papers/v21/20-074.html](http://jmlr.org/papers/v21/20-074.html)_
[69] Talia Ringer. 2021. Proof Repair. Ph. D. Dissertation. University of Washington.
[70] Talia Ringer, Karl Palmskog, Ilya Sergey, Milos Gligoric, Zachary Tatlock, et al.
2019. QED at large: A survey of engineering of formally verified software.
_Foundations and Trends® in Programming Languages 5, 2-3 (2019), 102–281._
[71] Talia Ringer, Alex Sanchez-Stern, Dan Grossman, and Sorin Lerner. 2020.
REPLica: REPL Instrumentation for Coq Analysis. In Proceedings of the 9th
_ACM SIGPLAN International Conference on Certified Programs and Proofs (New_
Orleans, LA, USA) (CPP 2020). Association for Computing Machinery, New York,
[NY, USA, 99–113. https://doi.org/10.1145/3372885.3373823](https://doi.org/10.1145/3372885.3373823)
[72] Ripon K. Saha, Yingjun Lyu, Hiroaki Yoshida, and Mukul R. Prasad. 2017. ELIXIR:
Effective object oriented program repair. In ASE. 648–659.
12
-----
[73] Seemanta Saha, Ripon K. Saha, and Mukul R. Prasad. 2019. Harnessing Evolution for Multi-Hunk Program Repair. In ACM/IEEE International Confer_ence on Software Engineering (ICSE) (29–31). Montreal, QC, Canada, 13–24._
[https://doi.org/10.1109/ICSE.2019.00020](https://doi.org/10.1109/ICSE.2019.00020)
[74] Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. 2020.
Generating Correctness Proofs with Neural Networks. In Proceedings of the 4th
_ACM SIGPLAN International Workshop on Machine Learning and Programming_
_Languages (London, UK) (MAPL 2020). Association for Computing Machinery,_
[New York, NY, USA, 1–10. https://doi.org/10.1145/3394450.3397466](https://doi.org/10.1145/3394450.3397466)
[75] Alex Sanchez-Stern, Emily First, Timothy Zhou, Zhanna Kaufman, Yuriy Brun,
and Talia Ringer. 2023. Passport: Improving Automated Formal Verification
Using Identifiers. ACM TOPLAS (2023).
[76] Noam Shazeer. 2019. Fast Transformer Decoding: One Write-Head is All You
[Need. CoRR abs/1911.02150 (2019). arXiv:1911.02150 http://arxiv.org/abs/1911.](https://arxiv.org/abs/1911.02150)
[02150](http://arxiv.org/abs/1911.02150)
[77] Edward K. Smith, Earl Barr, Claire Le Goues, and Yuriy Brun. 2015. Is the Cure
Worse than the Disease? Overfitting in Automated Program Repair. In ESEC/FSE.
[532–543. https://doi.org/10.1145/2786805.2786825](https://doi.org/10.1145/2786805.2786825)
[78] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. RoFormer:
Enhanced Transformer with Rotary Position Embedding. CoRR abs/2104.09864
[(2021). arXiv:2104.09864 https://arxiv.org/abs/2104.09864](https://arxiv.org/abs/2104.09864)
[79] Shuyao Sun, Junxia Guo, Ruilian Zhao, and Zheng Li. 2018. Search-Based
Efficient Automated Program Repair Using Mutation and Fault Localization. In
_[COMPSAC, Vol. 1. 174–183. https://doi.org/10.1109/COMPSAC.2018.00030](https://doi.org/10.1109/COMPSAC.2018.00030)_
[[80] The Coq Development Team. 2017. Coq, v.8.7. https://coq.inria.fr.](https://coq.inria.fr)
[81] Haoye Tian, Kui Liu, Abdoul Kader Kaboré, Anil Koyuncu, Li Li, Jacques Klein,
and Tegawendé F. Bissyandé. 2020. Evaluating Representation Learning of
Code Changes for Predicting Patch Correctness in Program Repair. In ASE.
[https://doi.org/10.1145/3324884.3416532](https://doi.org/10.1145/3324884.3416532)
[82] Yuchi Tian and Baishakhi Ray. 2017. Automatically diagnosing and repairing
error handling bugs in C. In European Software Engineering Conference and
_ACM SIGSOFT International Symposium on Foundations of Software Engineering_
_(ESEC/FSE). Paderborn, Germany, 752–762._
[83] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol
Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray
Kavukcuoglu. 2016. WaveNet: A Generative Model for Raw Audio. CoRR
[abs/1609.03499 (2016). arXiv:1609.03499 http://arxiv.org/abs/1609.03499](https://arxiv.org/abs/1609.03499)
[84] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you
need. In NeurIPS.
[85] Ke Wang, Rishabh Singh, and Zhendong Su. 2018. Search, align, and repair:
Data-driven feedback generation for introductory programming exercises. In
_[PLDI. 481–495. https://doi.org/10.1145/3296979.3192384](https://doi.org/10.1145/3296979.3192384)_
[86] Shangwen Wang, Ming Wen, Bo Lin, Hongjun Wu, Yihao Qin, Deqing Zou,
Xiaoguang Mao, and Hai Jin. 2020. Automated Patch Correctness Assessment:
How Far Are We?. In ASE. Association for Computing Machinery, 968–980.
[https://doi.org/10.1145/3324884.3416590](https://doi.org/10.1145/3324884.3416590)
[87] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc
Le, and Denny Zhou. 2022. Chain of Thought Prompting Elicits Reasoning
[in Large Language Models. CoRR abs/2201.11903 (2022). arXiv:2201.11903](https://arxiv.org/abs/2201.11903)
[https://arxiv.org/abs/2201.11903](https://arxiv.org/abs/2201.11903)
[88] Westley Weimer, Zachary P. Fry, and Stephanie Forrest. 2013. Leveraging
Program Equivalence for Adaptive Program Repair: Models and First Results.
In IEEE/ACM International Conference on Automated Software Engineering (ASE).
Palo Alto, CA, USA, 356–366.
[89] Ming Wen, Junjie Chen, Rongxin Wu, Dan Hao, and Shing-Chi Cheung. 2018.
Context-Aware Patch Generation for Better Automated Program Repair. In
_ACM/IEEE International Conference on Software Engineering (ICSE). Gothenburg,_
[Sweden, 1–11. https://doi.org/10.1145/3180155.3180233](https://doi.org/10.1145/3180155.3180233)
[90] Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. 2021.
TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement
[Learning. In Neural Information Processing Systems. https://arxiv.org/abs/2102.](https://arxiv.org/abs/2102.09756)
[09756](https://arxiv.org/abs/2102.09756)
[91] Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja
Jamnik, and Christian Szegedy. 2022. Autoformalization with Large Language
Models. CoRR abs/2205.12615 (2022). [https://doi.org/10.48550/ARXIV.2205.](https://doi.org/10.48550/ARXIV.2205.12615)
[12615](https://doi.org/10.48550/ARXIV.2205.12615)
[92] Qi Xin and Steven P. Reiss. 2017. Identifying Test-suite-overfitted Patches
through Test Case Generation. In ISSTA. 226–236. [https://doi.org/10.1145/](https://doi.org/10.1145/3092703.3092718)
[3092703.3092718](https://doi.org/10.1145/3092703.3092718)
[93] Deheng Yang, Yuhua Qi, and Xiaoguang Mao. 2018. Evaluating the Strategies
[of Statement Selection in Automated Program Repair. In SATE. Springer. https:](https://doi.org/10.1007/978-3-030-04272-1_3)
[//doi.org/10.1007/978-3-030-04272-1_3](https://doi.org/10.1007/978-3-030-04272-1_3)
[94] Jinqiu Yang, Alexey Zhikhartsev, Yuefei Liu, and Lin Tan. 2017. Better test cases
[for better automated program repair. In ESEC/FSE. 831–841. https://doi.org/10.](https://doi.org/10.1145/3106237.3106274)
[1145/3106237.3106274](https://doi.org/10.1145/3106237.3106274)
[95] Kaiyu Yang and Jia Deng. 2019. Learning to prove theorems via interacting
with proof assistants. In International Conference on Machine Learning. PMLR,
6984–6994.
[96] Xuejun Yang, Yang Chen, Eric Eide, and John Regehr. 2011. Finding and understanding bugs in C compilers. In ACM SIGPLAN Conference on Program_ming Language Design and Implementation (PLDI). San Jose, CA, USA, 283–294._
[https://doi.org/10.1145/1993498.1993532](https://doi.org/10.1145/1993498.1993532)
[97] Michihiro Yasunaga and Percy Liang. 2021. Break-it-fix-it: Unsupervised learning for program repair. In International Conference on Machine Learning (ICML).
PMLR, 11941–11952.
[98] He Ye, Matias Martinez, and Martin Monperrus. 2021. Automated patch assessment for program repair at scale. EMSE 26, 2 (2021).
[99] Zhongxing Yu, Matias Martinez, Benjamin Danglot, Thomas Durieux, and Martin Monperrus. 2019. Alleviating patch overfitting with automatic test generation: A study of feasibility and effectiveness for the Nopol repair system. EMSE
[24, 1 (2019), 33–67. https://doi.org/10.1007/s10664-018-9619-4](https://doi.org/10.1007/s10664-018-9619-4)
[100] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. 2022. miniF2F: A crosssystem benchmark for formal Olympiad-level mathematics. In ICLR.
[101] Qihao Zhu, Zeyu Sun, Yuan an Xiao, Wenjie Zhang, Kang Yuan, Yingfei Xiong,
and Lu Zhang. 2021. A syntax-guided edit decoder for neural program repair.
[In ESEC/FSE. 341–353. https://doi.org/10.1145/3468264.3468544](https://doi.org/10.1145/3468264.3468544)
**A** **EXAMPLES OF PROOF GENERATION**
**WITH CONTEXT**
We provide a number of examples that the model using context
could solve but the plain proof generation model could not. We
determined the lists of problems each model could solve, computed
their difference, and then sampled 5 examples uniformly at random.
For examples that had multiple correct proofs generated by the
model, we selected one at random. We modified whitespace in
the examples to make them more readable with the reduced line
length. Further, we truncated the examples on the left to help with
readability, but we inspected also the full context to ensure that our
conclusions below are not affected. Each example consists of the
“context and problem statement”, the “ground truth proof”, and the
“generated proof”.
We can observe in examples 1, 3, and 5 that the model readily
**copies and adapts proofs that exist in its context. In example 2,**
the model made use of a premise that did not occur in its context,
which happened to also be used by the ground truth proof, but with
a different tactic. In example 4, the model found a simpler proof
that did not occur like this in the context.
**A.1** **Example 1**
Context and problem statement:
**lemma (in Interpretation) InterpExprWellDefined:**
"L\<lbrakk>Vx : A \<turnstile> e :
B\<rbrakk> \<rightarrow> i \<Longrightarrow>
Sig iS \<triangleright> Vx :
A \<turnstile> e : B"
apply (rule Interp.cases)
by auto
**lemma (in Interpretation) WellDefined:**
"L\<lbrakk>\<phi>\<rbrakk> \<rightarrow> i
\<Longrightarrow> Sig iS \<triangleright> \<phi>"
apply(rule Interp.cases)
by (auto simp add: InterpExprWellDefined)
**lemma (in Interpretation) Bool:**
13
-----
**lemma H_assign:**
"rel_kat.H \<lceil>\<lambda>s.
P (s (v := e s))\<rceil> (v ::= e) \<lceil>P\<rceil>"
by (auto simp:
gets_def
rel_kat.H_def
rel_kat.t_op_def
rel_at_def)
**lemma H_assign_var:**
"(\<forall>s. P s \<longrightarrow> Q (s (v := e s)))
\<Longrightarrow>
rel_kat.H
\<lceil>P\<rceil> (v ::= e) \<lceil>Q\<rceil>"
Ground truth proof:
by (auto simp:
gets_def
rel_kat.H_def
rel_kat.t_op_def
rel_at_def)
Generated proof:
by (auto simp:
gets_def
rel_kat.H_def
rel_kat.t_op_def
rel_at_def)
**A.4** **Example 4**
Context and problem statement:
section \<open>Primitive words\<close>
text\<open>If a word 𝑤 is not a non-trivial power of
some other word, we say it is primitive.\<close>
**definition primitive :: "\'a list \<Rightarrow> bool"**
**where "primitive u = (\<forall> r k.**
r\<^sup>@k = u \<longrightarrow> k = 1)"
**lemma primI[intro]:**
"(\<And> r k. r\<^sup>@k = u \<Longrightarrow> k = 1)
\<Longrightarrow> primitive u"
by (simp add: primitive_def)
**lemma prim_nemp:**
"primitive u \<Longrightarrow> u \<noteq> \<epsilon>"
Ground truth proof:
"L\<lbrakk>\<phi>\<rbrakk> \<rightarrow>
(IBool i) \<Longrightarrow> \<exists> A B e d .
\<phi> = (Vx : A \<turnstile> e \<equiv> d : B)"
Ground truth proof:
apply(rule Interp.cases)
by auto
Generated proof:
apply (rule Interp.cases)
by (auto simp add: InterpExprWellDefined)
**A.2** **Example 2**
Context and problem statement:
**lemma Aeta_expand_0[simp]:**
"Aeta_expand 0 e = e"
by transfer simp
**lemma Aeta_expand_inc[simp]:**
"Aeta_expand (inc\<cdot>n) e =
(Lam [fresh_var e].
Aeta_expand n (App e (fresh_var e)))"
apply (simp add: inc_def)
by transfer simp
**lemma subst_Aeta_expand:**
"(Aeta_expand n e)[x::=y] =
Aeta_expand n e[x::=y]"
Ground truth proof:
by transfer (rule subst_eta_expand)
Generated proof:
by transfer (simp add: subst_eta_expand)
**A.3** **Example 3**
Context and problem statement:
subsubsection \<open>Store and Assignment\<close>
**type_synonym \'a store = "string** \<Rightarrow> \'a"
**definition gets ::**
"string \<Rightarrow> (\'a store \<Rightarrow> \'a)
\<Rightarrow> \'a store rel" ("_ ::= _" [70, 65] 61)
**where "v ::= e = {(s, s(v := e s)) |s. True}"**
14
-----
**proof-**
have "u = \<epsilon> \<Longrightarrow>
\<epsilon>\<^sup>@0 = u"
by simp
thus "primitive u \<Longrightarrow>
u \<noteq> \<epsilon>"
using primitive_def zero_neq_one
by blast
qed
Generated proof:
using primitive_def
by force
**A.5** **Example 5**
Context and problem statement:
**lemma drop_take_drop:**
"drop n (take m xs) @ drop m xs = drop (min n m) xs"
**proof cases**
assume A: "\<not>m < n"
then show ?thesis
using drop_append[of n "take m xs" "drop m xs"]
by (cases "length xs < n")
(auto simp: not_less min_def)
qed (auto simp: min_def)
**lemma not_distinct_decomp_min_prefix:**
**assumes "\<not> distinct ws"**
**shows "\<exists> xs ys zs y.**
ws = xs @ y # ys @ y # zs \<and>
distinct xs \<and>
y \<notin> set xs \<and>
y \<notin> set ys "
**proof -**
obtain xs y ys where
"y \<in> set xs" "distinct xs" "ws = xs @ y # ys"
using assms by (auto simp: not_distinct_conv_prefix)
moreover then obtain xs\' ys\'
**where "xs = xs\' @ y # ys\'"**
by (auto simp: in_set_conv_decomp)
ultimately show ?thesis by auto
qed
**lemma not_distinct_decomp_min_not_distinct:**
15
**assumes "\<not> distinct ws"**
**shows "\<exists>xs y ys zs.**
ws = xs @ y # ys @ y # zs \<and>
distinct (ys @ [y])"
Ground truth proof:
using assms
**proof (induct ws)**
case (Cons w ws)
show ?case
**proof (cases "distinct ws")**
case True
then obtain xs ys where
"ws = xs @ w # ys" "w \<notin> set xs"
using Cons.prems
by (fastforce dest: split_list_first)
then have "distinct
(xs @ [w])" "w # ws = [] @ w # xs @ w # ys"
using \<open>distinct ws\<close> by auto
then show ?thesis by blast
next
case False
then obtain xs y ys zs
**where "ws = xs @ y # ys @ y # zs \<and>**
distinct (ys @ [y])"
using Cons by auto
then have "w # ws = (w # xs) @ y # ys @ y # zs \<and>
distinct (ys @ [y])"
by simp
then show ?thesis by blast
qed
qed simp
Generated proof:
**proof -**
obtain xs y ys where
"y \<in> set xs" "distinct xs" "ws = xs @ y # ys"
using assms by (auto simp: not_distinct_conv_prefix)
moreover then obtain xs' ys'
**where "xs = xs' @ y # ys'"**
by (auto simp: in_set_conv_decomp)
ultimately
show ?thesis by auto
qed
-----
| [
"Emily, First",
"Yuriy, Brun",
"Markus N., Rabe",
"Talia, Ringer"
] | 2023-03-08T00:00:00 | ESEC/FSE 2023 | false | 48 | 5 | [
"Isabelle"
] | http://arxiv.org/abs/2303.04910 | https://arxiv.org/abs/2303.04910 | https://www.semanticscholar.org/paper/9f8ac6ee3760ab202e492c733362e5bfc6763934 |
Making Large Language Models Better Reasoners with Alignment | Reasoning is a cognitive process of using evidence to reach a sound conclusion. The reasoning capability is essential for large language models (LLMs) to serve as the brain of the artificial general intelligence agent. Recent studies reveal that fine-tuning LLMs on data with the chain of thought (COT) reasoning process can significantly enhance their reasoning capabilities. However, we find that the fine-tuned LLMs suffer from an \textit{Assessment Misalignment} problem, i.e., they frequently assign higher scores to subpar COTs, leading to potential limitations in their reasoning abilities. To address this problem, we introduce an \textit{Alignment Fine-Tuning (AFT)} paradigm, which involves three steps: 1) fine-tuning LLMs with COT training data; 2) generating multiple COT responses for each question, and categorizing them into positive and negative ones based on whether they achieve the correct answer; 3) calibrating the scores of positive and negative responses given by LLMs with a novel constraint alignment loss. Specifically, the constraint alignment loss has two objectives: a) Alignment, which guarantees that positive scores surpass negative scores to encourage answers with high-quality COTs; b) Constraint, which keeps the negative scores confined to a reasonable range to prevent the model degradation. Beyond just the binary positive and negative feedback, the constraint alignment loss can be seamlessly adapted to the ranking situations when ranking feedback is accessible. Furthermore, we also delve deeply into recent ranking-based alignment methods, such as DPO, RRHF, and PRO, and discover that the constraint, which has been overlooked by these approaches, is also crucial for their performance. Extensive experiments on four reasoning benchmarks with both binary and ranking feedback demonstrate the effectiveness of AFT. | The constraint alignment loss has two objectives: a) Alignment, which guarantees that positive scores surpass negative scores to encourage answers with high-quality COTs, and b) Constraint, which keeps the negative scores confined to a reasonable range to prevent the model degradation. | ## MAKING LARGE LANGUAGE MODELS BETTER REA### SONERS WITH ALIGNMENT
**Peiyi Wang[1]** **Lei Li[3]** **Liang Chen[1]** **Feifan Song[1]**
**Binghuai Lin[2]** **Yunbo Cao[2]** **Tianyu Liu[2]** **Zhifang Sui[1]**
1
National Key Laboratory for Multimedia Information Processing, Peking University
2
Tencent Cloud AI
3
The University of Hong Kong
_{wangpeiyi9979, nlp.lilei}@gmail.com_
[email protected]; [email protected]
_{binghuailin, yunbocao, rogertyliu}@tencent.com; [email protected]_
ABSTRACT
Reasoning is a cognitive process of using evidence to reach a sound conclusion.
The reasoning capability is essential for large language models (LLMs) to serve
as the brain of the artificial general intelligence agent. Recent studies reveal that
fine-tuning LLMs on data with the chain of thought (COT) reasoning process
can significantly enhance their reasoning capabilities. However, we find that the
fine-tuned LLMs suffer from an Assessment Misalignment problem, i.e., they
frequently assign higher scores to subpar COTs, leading to potential limitations
in their reasoning abilities. To address this problem, we introduce an Alignment
_Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with_
COT training data; 2) generating multiple COT responses for each question, and
categorizing them into positive and negative ones based on whether they achieve
the correct answer; 3) calibrating the scores of positive and negative responses
given by LLMs with a novel constraint alignment loss. Specifically, the constraint
alignment loss has two objectives: a) Alignment, which guarantees that positive
scores surpass negative scores to encourage answers with high-quality COTs;
b) Constraint, which keeps the negative scores confined to a reasonable range
to prevent the model degradation. Beyond just the binary positive and negative
feedback, the constraint alignment loss can be seamlessly adapted to the ranking
situations when ranking feedback is accessible. Furthermore, we also delve deeply
into recent ranking-based alignment methods, such as DPO, RRHF, and PRO, and
discover that the constraint, which has been overlooked by these approaches, is also
crucial for their performance. Extensive experiments on four reasoning benchmarks
with both binary and ranking feedback demonstrate the effectiveness of AFT. In
addition, AFT also performs well in multi-task and out-of-distribution situations.
1 INTRODUCTION
Reasoning is a cognitive process that involves utilizing evidence to reach a well-founded conclusion
(Qiao et al., 2023; Huang & Chang, 2023). Recently, there has been a growing focus on enhancing
the reasoning abilities of Large Language Models (LLMs) (Li et al., 2023b), particularly open-source
LLMs (Yuan et al., 2023a; Luo et al., 2023; Mukherjee et al., 2023), because LLMs still lack reasoning
skills (Wang et al., 2023b;d; Zheng et al., 2023) that are essential for them to serve as the brain of
artificial general intelligence agents (Wang et al., 2023a; Yao et al., 2023; Song et al., 2023b).
Recent works (Chung et al., 2022; Hsieh et al., 2023; Mukherjee et al., 2023) find that training LLMs
using data with a chain of thought (COT) reasoning process is a very effective method to improve the
reasoning ability of LLMs. These studies typically train LLMs using maximum likelihood estimation
(MLE), and employ a next-token prediction objective. However, MLE only assigns probability mass
to the reference COT, which contradicts reasoning tasks where various reasoning paths can lead to the
correct answer. In this paper, we find that previous vanilla fine-tuning (VFT) paradigm causes LLMs
to suffer from an Assessment Misalignment problem, i.e., LLMs struggle with accessing the quality
-----
**Question: Weng earns $12 an hour for babysitting. Yesterday,**
she just did 50 minutes of babysitting. How much did she earn?
**Reference Answer (PPL 1.05) : Weng earns 12/60 =**
$<<12/60=0.2>>0.2 per minute. Working 50 minutes, she
earned 0.2 x 50 = $<<0.2*50=10>>10.
**Candidate Answer 1 (PPL 1.90) : Weng earns 12/60 =**
<<12/60=0.2>>0.2$ per minute Yesterday she did 50 minutes
of babysitting, so she earned 50 * 0.2 = <<50*0.2=10>>10$.✓
**Candidate Answer 2 (PPL 1.35) : Weng earns $12/hour x 60**
minutes = $<<12*60=720>>720 per hour. Working 50
minutes, she earned $720 x 50/60 = $<<720*50/60=40>>40.[✗]
Figure 1: Perplexity of different answers given by the vanilla fine-tuning (VFT) LLM, where LLM
assigns a lower perplexity to the incorrect candidate answer compared to the correct candidate answer.
of different COTs, ultimately limiting their reasoning capabilities. Take Figure 1 as an example,
VFT-LLMs learn to generate the Reference Answer for the given Question by allocating probability
mass to this Reference Answer and treating all other answers as negative outcomes. As a result, they
struggle to assess the quality of other answers and tend to assign lower perplexity (higher score) to
_incorrect Candidate Answer 1 compared to the correct Candidate Answers 2._
This behavior of VFT-LLMs is not consistent with that of humans, as humans have the ability to
access the quality of different COTs after learning to reason. In addition, our pilot experiments
(Section 3) find that after the same VFT process, the LLMs with better reasoning performance can
give a more reasonable assessment to different COTs. Therefore, we hypothesize that we can improve
the reasoning ability of LLMs by alleviating the assessment misalignment problem caused by VFT.
To address the assessment misalignment problem, in this paper, we propose an alignment fine-tuning
(AFT) paradigm to improve LLM reasoning with three steps: 1) fine-tuning LLMs using COT
training data; 2) generating multiple COT responses for each question using the fine-tuned LLMs,
and categorizing them as positive and negative based on whether they deduce the correct answer;
**3) calibrating the scores of positive and negative responses given by LLMs with a novel constraint**
alignment (CA) loss. Specifically, the CA loss ensures that all positive scores (the scores of positive
COTs) are larger than negative scores. In addition, the negative scores are protected by a constraint
term, which is proven to be very important in preventing model degradation. Beyond just binary
positive and negative feedback, the CA loss can be seamlessly adapted to ranking situations when
ranking feedback is accessible. Furthermore, we also delve deeply into recent ranking-based methods
for alignment, such as DPO (Rafailov et al., 2023), PRO (Song et al., 2023a) and RRHF (Yuan et al.,
2023b), and find that the constraint, which has been overlooked by these approaches, is also crucial
for their effectiveness.
In summary, our contributions are:
**1) We discover that LLMs fine-tuned by the vanilla fine-tuning (VFT) paradigm suffer from an**
Assessment Misalignment problem: they frequently assign lower scores to high-quality COTs
compared to low-quality ones, which hinders their reasoning ability.
**2) We present an Alignment Fine-Tuning (AFT) paradigm, which comprises three straightforward**
steps with a novel constraint alignment loss to address the identified problem.
**3) We delve deeply into recent ranking-based methods for alignment and find that the constraint,**
which has been overlooked by these approaches, is also crucial for their performance.
**4) Experiments on four reasoning benchmarks with both binary and ranking feedback demonstrate**
the effectiveness of AFT. AFT also performs well in multi-task and out-of-distribution situations.
2 RELATED WORKS
2.1 IMPROVE REASONING OF LARGE LANGUAGE MODELS
Reasoning is a cognitive process that involves utilizing evidence to reach a well-founded conclusion,
which is a core ability of LLMs to serve as the brain of the artificial general intelligence agent.
Researchers have proposed a lot of methods to improve the reasoning ability of LLMs, which can
be broadly divided into three groups: 1) pre-training: The pre-training methods pre-train the LLMs
on a vast of unsupervised datasets, such as the pile (Gao et al., 2020), the stack (Kocetkov et al.,
2022), and so on, with a simple next token prediction objective. Researchers find that a larger
model pre-trained on more data tends to have better reasoning ability (OpenAI, 2023; Anil et al.,
-----
2023; Touvron et al., 2023); 2) fine-tuning: The fine-tuning methods can also enhance the reasoning
ability of LLMs. Researchers have found that fine-tuning LLMs on the data with the reasoning
chain-of-thought process can significantly improve the reasoning of LLMs (Mukherjee et al., 2023;
Chung et al., 2022; Li et al., 2023a); 3) prompting: The prompting methods aims to improve the
reasoning ability by carefully designed prompting strategy, such as chain-of-thought prompting (Wei
et al., 2022), self-consistency (Wang et al., 2023c) strategy, and so on. The prompting methods do not
change the model parameters, which is very convenient and practical. In this paper, we focus on the
fine-tuning methods and find that traditional vanilla chain-of-thought fine-tuned LLMs suffer from an
assessment misalignment problem, which hinders their reasoning ability. To this end, we propose an
alignment fine-tuning paradigm to address this problem to enhance the reasoning ability of LLMs.
2.2 ALIGNMENT OF LARGE LANGUAGE MODELS
AI alignment research focuses on directing AI systems toward human-intended goals, preferences,
or ethical principles. There are two primary categories of AI alignment methods: 1) Reinforcement
_Learning from Human Feedback (RLHF) (Ouyang et al., 2022), which trains a reward model by_
utilizing human feedback, which subsequently acts as a reward function for optimizing an agent’s
policy through reinforcement learning (RL) techniques, such as Proximal Policy Optimization
(Schulman et al., 2017). RLHF is employed to align powerful LLMs, like ChatGPT and GPT4. However, RL-based methods face limitations concerning training efficiency and complexity;
2) Supervised Fine-tuning with Ranking (Liu et al., 2022; Yuan et al., 2023b; Song et al., 2023a;
Rafailov et al., 2023), which involves training LLMs using a supervised fine-tuning paradigm and
incorporating a ranking loss to help LLMs align with human preferences. Previous alignment
research has mainly focused on improving the safety of LLMs, frequently neglecting the importance
of alignment for reasoning. Furthermore, widely used ranking methods often neglect the constraint
term when reducing scores of low-quality examples, which can potentially have a negative impact
on model performance. In this paper, we point out the effectiveness of alignment for reasoning and
introduce a novel constraint alignment loss to make LLMs better reasoners with alignment.
3 PILOT EXPERIMENTS
In this section, we first briefly introduce the vanilla fine-tuning (VFT) paradigm, and then we
demonstrate the assessment misalignment problem of VFT for reasoning.
3.1 VANILLA FINE-TUNING
VFT finetunes LLMs on a dataset {(qi, ci, ai)}i[N]=1 [with][ N][ examples. Each example consists of a]
question qi, a COT reasoning process ci, and an answer ai. The LLMs are finetuned to generate the
reference response ri = [ci; ai] based on qi with a MLE objective loss function:
_|ri|_
log P (ri,j _ri,<j, qi; θ)._ (1)
_j=1_ _|_
X
_LV F T = −_
where θ is the model parameter and ri,j is the j-th token of ri.
3.2 ASSESSMENT MISALIGNMENT OF VFT FOR REASONING
Intuitively, the MLE objective seeks to exclusively allocate probability mass to the reference COT
_ci for question qi, which does not correspond with the characteristics of reasoning tasks, where the_
correct COT is not limited to the reference one. This objective uniformly treats all other correct and
incorrect COTs as negative examples. As a result, it will impede LLMs from learning to assess the
quality of various COTs and degrade their reasoning ability.
To demonstrate this, we first fine-tune LLama-7B, LLama-13B, LLama2-7B, and LLama2-13B on the
training data of GSM8k and ECQA with Equation 1 (please refer to Section 5.1 for the detailed VFT
settings). Then, for each question qi in the training data, we use VFT-LLMs to generate three positive
COTs {c[p]i [1][, c]i[p][2][, c]i[p][3][}][ that induce to the correct answer and three negative COTs][ {][c]i[n][1][, c]i[n][2][, c]i[n][3][}][ that]
-----
**GSM8K (PEARSON = 0.93)** **ECQA (PEARSON = 0.98)**
**MODELS**
**TAccuracy(%)** **AAccuracy(%)** **TAccuracy(%)** **AAccuracy(%)**
LLama-7B 36.48±0.92 68.41±0.32 70.40±0.92 61.62±0.01
LLama2-7B 40.71±0.16 71.22±0.12 72.34±0.22 61.96±0.02
LLama-13B 42.07±0.15 72.25±0.23 72.74±0.43 61.89±0.01
LLama2-13B 47.29±1.24 73.06±0.78 74.76±0.56 62.29±0.01
Table 1: The final task accuracy (TAccuracy) and the assessment accuracy (AAccuracy) of different
vanilla fine-tuned models. TAccuracy and AAccuracy exhibit a strong positive correlation, with
Pearson Correlation Coefficients of 0.93 and 0.98 at GSM8K and ECQA, respectively.
induce to the incorrect answer, respectively. Upon manually examining 50 examples, we observe that
the quality of positive COTs is noticeably better than that of negative COTs.
We further compute the token-averaged log-likelihood score of each positive and negative COT c
using the fine-tuned LLMs as follows:
_|c|_
log P (cj _c<j, q; θ),_ (2)
_j=1_ _|_
X
_s[c]θ_ [= 1]
_|c|_
where q is the corresponding question. It is reasonable to expect that the fine-tuned LLMs will be able
to assess the quality of different candidate COTs of previously encountered questions, i.e., assigning
higher scores to the positive ones. Therefore, we use an assessment accuracy AAccuracy to assess
the capability of fine-tuned LLMs in assigning appropriate scores to various COTs:
I(scθ[pj]i _> s[c]θi[nk]_ ) (3)
_k=1_
X
**AAccuracy =**
9N
_i=1_
_j=1_
As shown in Table 1, the assessment accuracy of the VFT-LLMs falls short of expectations, with an
average AAccuracy of merely around 70% on GSM8K and 62% on ECQA, respectively. Note that
this is a two-class classification problem where a random baseline can achieve the 50.00% accuracy.
These results show that the assessment ability of VFT-LLMs is far from expected, as they cannot
accurately discern the quality of various COTs of previously learned questions. This behavior of
VFT-LLMs is not consistent with that of humans, as humans have the ability to access the quality
of different COTs after learning to reason. In addition, we also notice that LLMs with stronger
reasoning abilities have better assessment accuracy. Specifically, the task accuracy and the assessment
accuracy exhibit a strong positive correlation, with Pearson Correlation Coefficients of 0.93 and 0.98
at GSM8K and ECQA, respectively. This observation inspires us to improve the reasoning ability of
LLMs by aligning their scoring behaviors with the golden standard assessment.
4 METHODOLOGY
We have demonstrated that the scoring behaviors of vanilla fine-tuned LLMs exhibit misalignment
with the gold standard assessment. In this section, we propose an alignment fine-tuning (AFT)
paradigm to address this problem to enhance their reasoning ability. Specifically, on top the VFT
objective LV F T, AFT further introduce an alignment objective LA[∗] [:]
_LAF T = LV F T + LA[∗]_ _[.]_ (4)
In the following part of this section, we will introduce the design process of LA[∗] [.]
4.1 GENERATE COTS FOR TRAINING DATA
To implement AFT, we first need to generate multiple COTs for each question in the training set. For
each training example (q, c, a), we first sample k generation results {(ci, ai)}i[k]=1 [from the VFT-LLMs]
based on the input question q. Then, we divide these generation results into two groups, namely
positive group GP and negative group GN, based on the correctness of their answer. Formally, a
generation results (ci, ai) belongs to GP if ai = a, otherwise it is part of GN . Generally, the quality
of COTs in the positive group GP is better than that of GN .
-----
4.2 ALIGNMENT
As demonstrated by our pilot experiment, VFT-LLMs fail to give reasonable scores to COTs in GP
and GN. To align the scoring behaviors of LLMs with the golden standard assessment, we need to
design an objective to let the scores of all positive COTs in GP larger than that of negative COTs in
**GN. This objective bears resemblance to contrastive learning, which aims to ensure that the score of**
positive example is larger than those of all negative examples, utilizing an InfoNCE loss:
exp(s[c]θ[p] [)]
_LInfoNCE = −_ log " exp(s[c]θ[p] [) +][ P]cn∈GN [exp(][s]θ[c][n] [)] # = log "1 + _cnX∈GN_ exp(s[c]θ[n] _[−]_ _[s]θ[c][p]_ [)]# (5)
Intuitively, minimizing Equation 5 aims to make the positive score s[c]θ[p] [larger than all negative scores.]
However, since there is more than one positive example in GP, inspired by (Su et al., 2022; Wang
et al., 2022), we extend LInfoNCE to accommodate multiple positive examples:
_A = log_ 1 + exp(s[c]θ[n] _θ_ [)] (6)
_L_ _cpX∈GP_ _cnX∈GN_ alignment term[−] _[s][c][p]_
| {z }
where s[c]θ [is the average log-likelyhood score of the COT][ c][ calculated by Equation 2. Minimizing][ L][A]
encourages all positive scores to be larger than all negative scores.
4.3 CONSTRAINT
Nevertheless, although the quality of negative COTs may not be as high as that of positive COTs,
they still retain a respectable quality, as they are sampled from fine-tuned, powerful LLMs. We find
that reducing their scores by Equation 6 without setting any constraint will result in the degradation
of the LLMs. Therefore, we further design two constrained methods, Detached Constraint (DC), and
Boundary Constraint (BC) to avoid such catastrophe.
4.3.1 DETACHED CONSTRAINT
To prevent model degradation, DC adds constraint to negative scores by detaching their gradient:
_LA[DC]_ = log 1 + exp **D(s[c]θ[n]** [)][ −] _[s]θ[c][p]_ _,_ (7)
_cpX∈GP_ _cnX∈GN_ detached alignment term
where D(·) denotes the detach operation, which means the gradient would not back-prop through the| {z }
negative scores. As a results, _A_ achieves the alignment by only increasing positive scores without
_L[DC]_
explicitly decreasing negative ones.
4.3.2 BOUNDARY CONSTRAINT
Besides DC, we also want to explore whether better results can be obtained by marginally decreasing
negative scores. To this end, we propose BC that adds a constraint term to _A:_
_L_
_LA[BC]_ = log 1 + _cpX∈GP_ _cnX∈GN_ exp(alignment terms[c]θ[n] _[−]_ _[s]θ[c][p]_ [)] + exp(constraint termT − _s[c]θ[n]_ [)] (8)
Intuitively, the constraint term increases the score of the negative COT | {z } | {z s[c]θ[n][, with the extent of]}
improvement regulated by the value of T . We aim for T to achieve the effect of increasing s[c]θ[n] [when]
it is lower than a boundary B. In this paper, we chose B as the minimum positive COT score minus a
hyper-parameter β, i.e., B = s[c]θ[p][∗] _−_ _β, where s[c]θ[p][∗]_ = mincp∈GP sθ[c][p] [. To achieve this, we analyze the]
-----
gradient of Equation 8 with respect to the parameters θ:
exp(s[c]θ[n] _[−]_ _[s]θ[c][p]_ [)(][∇][θ][s]θ[c][p] _[−∇][θ][s]θ[c][n]_ [) + exp(][T][ −] _[s]θ[c][n]_ [)][∇][θ][s]θ[c][n]
_θ_ _A_
_∇_ _L[BC]_ _∝−_
_cp∈GP_
_cn∈GN_
(9)
= − _cpX∈GP_ _cnX∈GN_ exp(sincrease s[c]θ[n] _[−]_ _[s]θ[c][p]cpθ[)][∇][θ][s]θ[c][p]_ + exp(change sT − _s[c]θ[n][cn]θ[)][ −]based on the coefficient[exp(][s]θ[c][n]_ _[−]_ _[s]θ[c][p]_ [)] _∇θs[c]θ[n]_
Because the score s[c]θ[∗] [increases along the gradient]| {z } |[ ∇][θ][s]θ[c][∗][, based on][ ∇]{z[θ][L]A[BC][, for each pair]}[ (][c][p][, c][n][)][,]
_Lelevates the negative scoreA[BC]_ consistently increases s s[c]θ[n][c]θ[p][when:][due to the positive coefficient][ exp(][s]θ[c][n] _[−]_ _[s]θ[c][p]_ [)][ >][ 0][. Additionally, it]
exp(s[c]θ[n] _θ_ [)][ <][ exp(][T][ −] _[s]θ[c][n]_ [)][ ⇒] _[s]θ[c][n]_ _[< T][ +][ s]θ[c][p]_ = B (10)
2
_[−]_ _[s][c][p]_
Otherwise, it tends to decrease or keep the score of s[c]θ[n][. Combing][ B][ =][ s]θ[c][p][∗] _β and Equation 10,_
_−_
we can achieve the value of T = 2s[c]θ[p][∗] _−_ 2β − _s[c]θ[p]_ [.]
4.4 EXTENDING TO RANKING ALIGNMENT
The quality of the different COTs is not a simple binary relationship GP **GN, i.e., the quality**
of positive COTs is better than that of negative COTs. In a more general situation, COTs in each ≻
group can also have quality differences, which means the quality of all generated COTs can be
ranked as a sequence c1 _c2_ _ck. If we can obtain such a quality ranking sequence, we can_
easily extend our binary-feedback boundary constraint alignment loss ⪰ _⪰· · · ⪰_ _A_ to a ranking-feedback
_L[BC]_
boundary-constrained alignment loss as follows:
_L[RBC]A_ = log 1 + _cXi≻cj_ exp(alignment terms[c]θ[j] _[−]_ _[s]θ[c][i]_ [)] + exp(2s[c]θconstraint term[j] _[∗]_ _−_ 2β − _s[c]θ[i]_ _[−]_ _[s]θ[c][j]_ [)] (11)
Where s[c]θ[j] _[∗]_ = minck _cj sθ[c][k]_ is the minimal score of COTs that have the better quality than| {z } | {z } _cj._
_≻_
Compared with LA[BC][,][ L]A[RBC] can bring LLMs more detailed training signals of the COT assessment,
which can further enhance their performance. We also try to extend _A_ to the ranking situation,
_L[DC]_
and we find it slightly underperforms in comparison to L[RBC]A . Please refer to Appendix C for details.
5 EXPERIMENTS
5.1 EXPERIMENTAL SETUPS
**Datasets** We conduct our experiments on three widely used reasoning datasets with humanannotated chain-of-thoughts, including math reasoning tasks GSM8K (Cobbe et al., 2021), AQUA**RAT (Ling et al., 2017), commonsense reasoning task ECQA (Aggarwal et al., 2021). Furthermore,**
we create GSM8K-RANK to evaluate the effectiveness of our AFT in the ranking situation. Please
refer to Appendix A for more details of these datasets.
**Parameter Setting** We conduct experiments on four large language models, LLama(2)-7B and
LLama(2)-13B. We do not conduct experiments on larger models due to resource limitations. We
sample k = 6 COTs from VFT-LLMs with a sampling temperature of 1. Our detached constraint
alignment loss does not introduce any hyper-parameters, and we search the boundary constraint
hyper-parameter β based on the validation set. For more training details, please refer to Appendix B.
**Baselines** We compare our AFT with the following baselines: 1) VFT: the vanilla fine-tuning
(VFT) method that simply trains LLMs with the reference COT using the MLE loss, which is the most
widely used training strategy; 2) RFT: Rejective sampling fine-tuning (RFT) (Yuan et al., 2023a)
selects the COTs with the correct answer, adds these COTs to the origin training data, and uses the
-----
**MODELS** **METHODS** **GSM8K** **AQUA** **ECQA** **AVERAGE (∆)**
VFT 36.48±0.92 31.19±0.28 70.40±1.07 46.02 ( – )
RFT 39.75±1.03 32.81±1.48 **72.23±0.11** 48.28 (↑ 2.26)
AFT (LA[DC][)] **40.43±1.04** 33.01±0.95 **72.23±0.43** **48.55 (↑** **2.53)**
AFT (LA[BC][)] 40.26±0.36 **33.20±1.24** 72.15±0.57 48.53 (↑ 2.51)
VFT 40.71±0.16 31.49±1.96 72.34±0.22 48.18 ( – )
RFT 43.65±0.13 33.25±1.23 **73.86±0.38** 50.25 (↑ 2.07)
AFT (LA[DC][)] **44.25±0.43** **33.49±0.63** 73.71±0.65 **50.75 (↑** **2.57)**
AFT (LA[BC][)] 44.16±0.81 32.89±0.98 73.23±0.82 50.09 (↑ 1.91)
VFT 42.07±0.15 33.91±0.60 72.74±0.43 49.57 ( – )
RFT 46.13±1.41 34.29±1.28 **75.03±0.35** 51.80 (↑ 2.23)
AFT (LA[DC][)] 46.31±1.52 34.49±1.21 74.32±0.09 51.70 (↑ 2.13)
AFT (LA[BC][)] **46.46±0.28** **34.79±0.37** 74.53±0.68 **51.93 (↑** **2.36)**
VFT 47.29±1.24 34.68±1.36 74.76±0.56 52.24 ( – )
RFT 50.12±1.57 34.95±0.88 76.21±0.80 53.75 (↑ 1.51)
AFT (LA[DC][)] 50.67±1.16 **35.78±0.45** 76.42±0.82 54.29 (↑ 2.05)
AFT (LA[BC][)] **51.03±0.54** 35.49±1.19 **76.57±0.83** **54.36 (↑** **2.12)**
LLAMA-7B
LLAMA2-7B
LLAMA-13B
LLAMA2-13B
Table 2: The accuracy of different methods on three reasoning datasets. ∆ denotes the improvement
compared to VFT. AFT significantly outperforms VFT, and is slightly better than RFT (Yuan et al.,
2023a). Note that RFT is a concurrent work to ours.
new augmented training data to train LLMs, which is proven to be a very strong baseline; 3) RRHF:
Rank Responses to align Human Feedback (RRHF) (Yuan et al., 2023b), which takes candidate
ranking into account and distinguishes different candidates through a pair-wise ranking loss; 4) PRO:
Preference Ranking Optimization (PRO) (Song et al., 2023a), which takes candidate ranking into
account and distinguishes different candidates through a ranking loss with a dynamic temperature.
**Metrics** We use the accuracy to measure the model performance. Specifically, we conduct 3 runs
with 3 different seeds and report the average results with the standard deviation.
5.2 RESULTS WITH BINARY FEEDBACK
Table 2 displays the results of different fine-tuning methods on three reasoning datasets. As is shown:
**1): AFT significantly outperforms VFT on all three datasets, improving the average accuracy by**
1.91% ∼ 2.57% for all models, showing the effectiveness of AFT; 2): Our concurrent work RFT
also expresses notable improvement compared with VFT. However, the original RFT paper only
treats RFT as a simple data augmentation method without explaining the reasons behind its notable
improvement. Our alignment perspective can provide an explanation for the effectiveness of RFT,
i.e., RFT can alternatively be regarded as an alignment strategy that bolsters the scores of numerous
positive COTs and thus can alleviate the assessment misalignment problem of VFT; 3) Our proposed
two constraint alignment strategies slightly outperform RFT with the binary feedback. In addition,
our AFT can be also easily extended to utilize the ranking feedback that RFT can not well utilize.
These results demonstrate the importance of revealing the assessment misalignment problem of VFT
and the effectiveness of our AFT approach.
5.3 RESULTS WITH RANKING FEEDBACK
As described in Section 4.4, our AFT can also be easily adapted to the ranking situation where we
can obtain the quality ranking sequence of generated COTs. Table 3 illustrates the results of different
methods in the GSM8k-RANK. As is shown: 1) Our AFT surpasses all other methods, demonstrating
its effectiveness with ranking feedback. For instance, AFT exceeds the strongest baseline RFT by
0.88% in average accuracy. This superiority can be attributed to AFT’s ability to help LLMs recognize
quality differences among any given pair in a ranking context, while RFT only focuses exclusively on
optimizing the probability of the highest-quality examples; 2) Prior methods utilizing ranking loss
-----
**METHODS** **LLAMA-7B** **LLAMA-13B** **LLAMA2-7B** **LLAMA2-13B** **AVERAGE (∆)**
VFT 20.82±0.71 24.12±0.42 24.08±0.22 30.28±1.46 24.83 ( – )
RFT 25.09±1.18 28.21±0.86 28.25±0.78 34.53±0.51 29.02 (↑ 4.19)
RRHF 7.51±0.56 9.92±0.82 9.21±0.25 13.35±1.26 10.00 (↓ 14.8)
PRO 18.73±0.31 20.34±1.51 21.40±0.92 23.55±0.98 21.00 (↓ 3.82)
AFT (L[RBC]A ) **26.08** **1.05** **28.97** **0.35** **29.05** **0.75** **35.48** **1.35** **29.90 (** **5.07)**
_±_ _±_ _±_ _±_ _↑_
Table 3: Test accuracy of different methods on GSM8K trained with GSM8K-RANK.
**WITHOUT CONSTRAINT** **WITH CONSTRAINT (OURS)**
**METHODS**
**TAccuracy** **AAccuracy** **PPL (↓)** **TAccuracy** **AAccuracy** **PPL (↓)**
VFT **20.82±0.71** 68.72±1.48 **1.60±0.01** 20.82±0.71 68.72±1.48 1.60±0.01
RRHF 7.51±0.56 87.44±1.28 1.80±0.01 25.53±0.27 79.89±0.60 **1.35±0.01**
PRO 18.73±0.31 86.58±1.09 2.34±0.02 25.82±0.48 80.34±0.97 1.45±0.01
AFT (L[RBC]A ) 7.03 0.98 **88.89** **0.78** 7.81 0.03 **26.08** **1.05** **81.36** **0.78** 1.37 0.01
_±_ _±_ _±_ _±_ _±_ _±_
Table 4: Task accuracy (TAccuracy) and assessment accuracy (AAccuracy) on GSM8K for LLama7B, which is fine-tuned by different methods (with or without constraint) on GSM8K-Rank. PPL (↓,
lower is better) denotes the average perplexity of all positive COTs.
have a substantial negative impact on model performance. For example, integrating RRHF loss into
VFT leads to a 14.8% reduction in accuracy. In fact, the performance reduction is also observed in
their own paper (Song et al., 2023a), which demonstrates that ranking loss often enhances the reward
of LLMs, yet results in lower BLEU scores. However, they do not identify the cause, and in this
paper, we find that a potential reason for the performance decline is the absence of a constraint in
their loss, which we will discuss in Section 6.1.
6 ANALYSIS
6.1 DELVE DEEPLY INTO RECENT RANKING LOSSES FOR ALIGNMENT
Our experiments on GSM8K-RANK show that adding ranking loss will harm the model performance.
We think the reason is that previous alignment ranking losses will unreasonably decrease the score of
non-optimal COTs (Please refer to Appendix D for our detailed analysis). To empirically validate this
hypothesis, we add a detached constraint to these two ranking losses similar to _A_ (Equation
_L[RDC][1]_
12). Consequently, these ranking losses will only make the scores of higher-quality COTs larger than
those of lower-quality ones, without explicitly decreasing the scores of COTs with lower quality.
Table 4 illustrates the final accuracy TAccucary of different methods in the testing set, the assessment
accuracy AAccuracy and average perplexity of positive COTs PPL in the training set[1]. As is shown:
**1) Without the constraint strategy, all three ranking losses harm the model performance, leading**
to higher perplexity and lower final task accuracy compared to VFT; 2) We observe that the task
accuracy of PRO does not decline as significantly as RRHF and AFT. We think this is because PRO
employs a dynamic temperature that reduces the negative score in a more reasonable manner (Please
refer to Appendix D.3 for details); 3) By adding the constraint, all ranking losses can not only improve
two accuracies but also decrease the perplexity. These results show the importance of constraint for
other ranking losses for alignment. Furthermore, we also conduct a case study in Appendix E to
intuitively show the model degradation without constraint.
6.2 EFFECTIVENESS OF THE NUMBER OF CANDIDATE COTS
As described in Section 4.1, AFT samples k candidate generation results to align LLMs. In this section,
we explore the influence of k. We sampled 0, 8, 16, 32, and 64 results from the VFT-LLama-7B, and
then de-duplicated these sampling results. Then, we train LLama-7B on the de-duplicated datasets.
1For each question in the training set, we sample new COTs (three positive and three negative COTs,
respectively) that are different from training COTs for evaluation.
-----
44
42
40
38
36
42
40
38
36
34
32
30
50
45
40
35
|Col1|AFT AFT|:( B A C) :( D A C)|Col4|Col5|Col6|
|---|---|---|---|---|---|
||VFT|||||
|||||||
|||||||
|||||||
|||||||
|||||||
8 16 32 64
AFT:( BCA [)]
AFT:( DCA [)]
VFT
(a) The number of sampling path k
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|||||||BC|||||
||||||AFT AFT|( A ( DC|) )||||
||||||VFT|A|||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
|||||A A|FT ( B A C) FT ( D A C)||
|||||V|FT||
-0.8 -0.4 -0.2 0.0 0.05 0.1 0.15 0.2 0.4 0.8
AFT ( BCA [)]
AFT ( DCA [)]
VFT
(b) The boundary hyper-parameter
1 4 8 16 32 64
AFT ( BCA [)]
AFT ( DCA [)]
VFT
(c) The number of self-consitency path
Figure 2: Variation of accuracy with (a): different number of sampling COTs for training; (b) different
boundary constraint hyper-parameter; (c) different number of voting paths of self-consistency.
**METHODS** **GSM8K** **AQUA** **ECQA** **MMLU** **AVERAGE (∆)**
VFT 35.72±0.95 32.95±0.98 69.25±0.74 37.52±1.03 43.86 ( – )
AFT (LA[DC][)] 40.24±0.63 33.72±0.92 71.38±0.64 39.25±0.35 46.15 (↑ 2.29)
AFT (LA[BC][)] 40.00±0.69 33.45±0.56 71.48±0.89 38.89±0.70 45.96 (↑ 2.10)
Table 5: Comparison of VFT- and AFT-LLama-7B with training data “GSM8K+AQUA+ECQA” on
three in-domain benchmarks and an out-of-domain benchmark MMLU.
As shown in Figure 2(a), we can see that AFT can consistently improve the model performance with
_k improvement, which is a promising result. We think the reason is that with large k, the AFT will_
have more data to help the LLM perceive the quality of different COT paths, which enhances the
final performance. This growing accuracy shows the effectiveness and the potential of AFT.
6.3 ABLATION ON THE BOUNDARY VALUE
The boundary constraint term of AFT requires a hyper-parameter β to regulate the boundary. In this
section, we conduct an ablation study to demonstrate the impact of varying β values. As depicted in
Figure 2(b), the performance initially increases and subsequently decreases as β ranges from -0.8
to 0.8. These findings align with expectations, as a small β cannot effectively widen the score gap
between high-quality and low-quality COTs, while an overly large β may result in excessively low
scores for non-optimal COTs, thereby compromising the model’s generative abilities. In conclusion,
the results emphasize the importance of the boundary constraint term and indicate that the value of β
can significantly affect model performance. Therefore, it is essential to carefully adjust this value
when using our boundary constraint alignment loss.
6.4 EFFECTIVENESS OF AFT WITH SELF-CONSISTENCY
Self-consistency is a highly effective strategy for improving LLM’s reasoning performance. This
method involves sampling multiple COTs and utilizing a voting process to determine the final answer
during inference. AFT samples COTs for training to develop better LLMs. Both methods utilize
COTs to enhance the model’s reasoning ability. In this section, we explore the combination of AFT
and Self-Consistency. As illustrated in Figure 2(c), as the number of paths increases, the improvement
of AFT is more significant than VFT, demonstrating that AFT effectively enhances self-consistency.
We believe the reason is that AFT helps models learn to assess the quality of different COTs by
encouraging larger scores for high-quality COTs compared to low-quality ones. This means that
high-quality COTs are more likely to be sampled, and thus, AFT can enhance self-consistency.
6.5 EFFECTIVENESS OF AFT ON THE MULTI-TASK AND OUT-OF-DOMAIN SITUATIONS
To further demonstrate the effectiveness and versatility of AFT, we investigate its performance in
multi-task scenarios. We combine the training sets of three datasets and use both AFT and VFT
to train the LLama-7B model. As depicted in Table 5, AFT is able to simultaneously enhance the
performance of all corresponding test sets. Additionally, we evaluate both AFT and VFT on the
MMLU (zero-shot), an out-of-distribution benchmark, and AFT also outperforms VFT. These results
-----
indicate that AFT not only improves the performance of in-distribution tasks but also enhances the
model’s transfer ability, leading to significantly better out-of-distribution performance.
7 CONCLUSION
In this paper, we find that the vanilla fine-tuned (VFT) LLMs with chain-of-thought (COT) reasoning
process suffer from an assessment misalignment problem, i.e, they fail to access the quality of different
COTs of the learned questions, which hinders the reasoning ability of LLMs. To this end, we propose
an alignment fine-tuning (AFT) paradigm. Our AFT consists of a novel constraint alignment loss that
can align the model assessment behaviors without harming the model performance. Furthermore, we
also delve deeply into recent widely used ranking losses for alignment and find that the constraint,
which has been overlooked by these approaches, is also crucial for their performance. Extensive
experiments on four reasoning benchmarks demonstrate the effectiveness of AFT. In addition, AFT
also performs well in multi-task and out-of-distribution situations.
8 LIMITATIONS
Our paper has some limitations, which should be discussed in future works: 1) Due to the resource
limit, we do not scale the AFT to larger LLMs such as 65B and 70B LLama models. However, we
believe that larger models still suffer from the assessment misalignment problem of VFT, and thus
AFT can improve the performance of these larger models; 2) Our boundary constraint alignment loss
incorporates a hyper-parameter β that regulates the constraint strength, significantly impacting the
model’s performance. Finding the optimal hyper-parameter requires constructing a validation set
and a certain search overhead. Although our detached alignment loss can mitigate the assessment
misalignment problem without requiring any hyper-parameters, it sometimes falls short in comparison
to the boundary constraint alignment loss, especially in ranking situations. Therefore, how to design
a dynamic boundary constraint without introducing the hyper-parameter is a meaningful question,
which leaves for further work.
REFERENCES
Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla,
and Dinesh Garg. Explanations for CommonsenseQA: New Dataset and Models. In Proceed_ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th_
_International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp._
3050–3065, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/
[2021.acl-long.238. URL https://aclanthology.org/2021.acl-long.238.](https://aclanthology.org/2021.acl-long.238)
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403, 2023._
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models.
_arXiv preprint arXiv:2210.11416, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL
[https://arxiv.org/abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang,
Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for
language modeling. arXiv preprint arXiv:2101.00027, 2020.
Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner,
Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Distilling step-by-step! outperforming larger
language models with less training data and smaller model sizes. In Findings of the Associa_tion for Computational Linguistics: ACL 2023, pp. 8003–8017, Toronto, Canada, July 2023._
-----
Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.507. URL
[https://aclanthology.org/2023.findings-acl.507.](https://aclanthology.org/2023.findings-acl.507)
Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. In
_Findings of the Association for Computational Linguistics: ACL 2023, pp. 1049–1065, Toronto,_
Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.
[67. URL https://aclanthology.org/2023.findings-acl.67.](https://aclanthology.org/2023.findings-acl.67)
Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Munoz Ferrandis,˜
Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, et al. The stack: 3 tb of permissively
licensed source code. arXiv preprint arXiv:2211.15533, 2022.
Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang,
Jingjing Xu, Xu Sun, et al. M3it: A large-scale dataset towards multi-modal multilingual instruction
tuning. arXiv preprint arXiv:2306.04387, 2023a.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making
language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333,_
Toronto, Canada, July 2023b. Association for Computational Linguistics. doi: 10.18653/v1/2023.
[acl-long.291. URL https://aclanthology.org/2023.acl-long.291.](https://aclanthology.org/2023.acl-long.291)
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th
_Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
pp. 158–167, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi:
[10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015.](https://aclanthology.org/P17-1015)
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for
_Computational Linguistics (Volume 1: Long Papers), pp. 2890–2903, Dublin, Ireland, May_
2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.207. URL
[https://aclanthology.org/2022.acl-long.207.](https://aclanthology.org/2022.acl-long.207)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical
reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583,
2023.
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed
Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint
_arXiv:2306.02707, 2023._
OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/arXiv.2303.08774.
[URL https://doi.org/10.48550/arXiv.2303.08774.](https://doi.org/10.48550/arXiv.2303.08774)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser
Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan
Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In
_[NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)_
[b1efde53be364a73914f58805a001731-Abstract-Conference.html.](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)
Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei
Huang, and Huajun Chen. Reasoning with language model prompting: A survey. In Proceedings of
_the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
pp. 5368–5393, Toronto, Canada, July 2023. Association for Computational Linguistics. doi:
[10.18653/v1/2023.acl-long.294. URL https://aclanthology.org/2023.acl-long.](https://aclanthology.org/2023.acl-long.294)
[294.](https://aclanthology.org/2023.acl-long.294)
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward
model. _CoRR, abs/2305.18290, 2023._ doi: 10.48550/arXiv.2305.18290. [URL https:](https://doi.org/10.48550/arXiv.2305.18290)
[//doi.org/10.48550/arXiv.2305.18290.](https://doi.org/10.48550/arXiv.2305.18290)
-----
Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, and Sarath Chandar.
Complex sequential question answering: Towards learning to converse over linked question
answer pairs with a knowledge graph. In Sheila A. McIlraith and Kilian Q. Weinberger (eds.),
_Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the_
_30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium_
_on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA,_
_[February 2-7, 2018, pp. 705–713. AAAI Press, 2018. URL https://www.aaai.org/ocs/](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17181)_
[index.php/AAAI/AAAI18/paper/view/17181.](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17181)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy
[optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/](http://arxiv.org/abs/1707.06347)
[1707.06347.](http://arxiv.org/abs/1707.06347)
Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang.
Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023a.
Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt:
Connecting large language models with real-world applications via restful apis. arXiv preprint
_arXiv:2306.06624, 2023b._
Jianlin Su, Mingren Zhu, Ahmed Murtadha, Shengfeng Pan, Bo Wen, and Yunfeng Liu. Zlpr: A
novel loss for multi-label classification. arXiv preprint arXiv:2208.02955, 2022.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation
and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and
Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv
_preprint arXiv:2305.16291, 2023a._
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and
Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926,
2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, ICLR 2023,
_[Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023c. URL https://openreview.net/](https://openreview.net/pdf?id=1PL1NIMMrw)_
[pdf?id=1PL1NIMMrw.](https://openreview.net/pdf?id=1PL1NIMMrw)
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu,
David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go?
exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751,
2023d.
Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, and Houfeng Wang.
HPT: Hierarchy-aware prompt tuning for hierarchical text classification. In Proceedings of
_the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3740–3751,_
Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguis[tics. doi: 10.18653/v1/2022.emnlp-main.246. URL https://aclanthology.org/2022.](https://aclanthology.org/2022.emnlp-main.246)
[emnlp-main.246.](https://aclanthology.org/2022.emnlp-main.246)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V.
Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In
_[NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)_
[9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html.](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. In The Eleventh International Confer_ence on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net,_
[2023. URL https://openreview.net/pdf?id=WE_vluYUL-X.](https://openreview.net/pdf?id=WE_vluYUL-X)
-----
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling
relationship on learning mathematical reasoning with large language models. arXiv preprint
_arXiv:2308.01825, 2023a._
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf:
Rank responses to align language models with human feedback without tears. arXiv preprint
_arXiv:2304.05302, 2023b._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
-----
I want you to act as a grade school math teacher, and evaluate the quality of the answer
provided by an AI assistant to the math Question displayed below.
You will be given a reference answer and the assistant’s answer, and Your evaluation should
consider the correctness of the assistant’s answer.
Begin your evaluation by comparing the assistant’s answer with the reference answer step-bystep. Identify and correct any mistakes.
The answer is scored out of 10 points, with one point deducted for each wrong step. Be as
objective as possible.
Your need first provide your Evaluation Evidence and then rate the response on a scale of 1 to
10.
[Question]:
_{question}_
[The Start of Reference Answer]
_{reference}_
[The End of Reference Answer]
[The Start of Assistant’s Answer]
_{answer}_
[The End of Assistant’s Answer]
You MUST output with two lines:
Evaluation Evidence: <Explanation>
Rating: <ONLY a single digit>
Table 6: The evaluation template that prompts ChatGPT to score each candidate COT.
A DATASETS
We conduct our experiments on three widely used reasoning datasets with human-annotated chainof-thoughts, including math reasoning tasks GSM8K (Cobbe et al., 2021), AQUA-RAT (Ling et al.,
2017), commonsense reasoning task ECQA (Aggarwal et al., 2021):
**GSM8K GSM8K is a widely used mathematical reasoning dataset, which comprises 8.5K varied**
math word problems for grade school, developed by human authors. It is partitioned into 7.5K
training problems and 1K testing problems. We sample 400 problems from the testing set to form the
validation set, and thus we have 7, 473, 400, and 919 examples in training, validation, and testing
sets, respectively.
**AQUA-RAT AQUA-RAT comprises approximately 100, 000 algebra-based word problems, each**
accompanied by a natural language rationale. Each example in the dataset consists of four components:
1) question, which statement is written in natural language, 2) options, a set of five potential answers
with one being correct, 3) rationale, a natural language explanation of the problem’s solution, and 4)
correct, the right answer choice. For efficiency, we randomly sample 5, 000, 400, and 1, 254 examples
as the training, validation, and test set, respectively.
**ECQA ECQA is derived from CommonsenseQA (CQA) (Saha et al., 2018) by generating a free-flow**
explanation for each QA pair in CQA. CQA is a comprehensive dataset for commonsense reasoning,
containing QA pairs with five choices and a single correct answer. ECQA comprises 11K QA pairs in
total and has 7, 598, 1, 090, and 2, 194 examples in the training, validation, and test sets, respectively.
**GSM8K-RANK To evaluate the effectiveness of our AFT in the ranking situation, we randomly**
select 1,000 examples from GSM8K’s training set and generate 8 candidate COTs for each question.
We then prompt ChatGPT to rate these candidates by providing the question, reference answer, and
the COT to be assessed and thus we can achieve a quality ranking sequence for different generated
COTs. We randomly sampled 20 examples and found that ChatGPT’s scoring results align well
with human assessment. ChatGPT is instructed to assign a score between 1 and 10, indicating the
quality of each COT. To ensure the reliability of the ratings, following (Wang et al., 2023b), we
require ChatGPT to present evaluation evidence before assigning a score, and simple 3 scores for
each example. We take the average score as the final score for each COT.
-----
Models GSM8K AQUA ECQA GSM8K-RANK
LLama-7B 0.15 0.15 0.15 0.05
LLama2-7B 0.15 0.40 0.35 0.15
LLama-13B 0.15 0.15 0.15 0.15
LLama2-13B 0.15 0.15 0.20 0.15
Table 7: The value of hyper-parameter β for boundary constraint alignment.
Methods _V F T_ +L[RBC]A + _A_ + _A_ + _A_
_L_ _L[RDC][1]_ _L[RDC][2]_ _L[R]_
Accuracy 20.82±0.71 26.08±1.05 25.68±0.49 12.57±1.34 7.03±0.98
Table 8: Results of LLama-7B on GSM8K fin-tuned by different methods.
B PARAMETER SETTING
We conduct experiments on four large language models, LLama-7B, LLama-13B, LLama2-7B, and
LLama2-13B. We do not conduct experiments on larger models due to resource limitations. We
sample k = 6 COTs from VFT-LLMs with a sampling temperature of 1. Our detached constraint
alignment loss does not introduce any hyper-parameters, and we search the hyper-parameter of
boundary constraint loss within the range (0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5) on
the validation set. The value of β of different models and datasets is provided in Table 7. On GSM8K,
AQUA, and ECQA, the models are trained for 3, 3, and 1 epochs, respectively. The learning rate is
set to 2e-5, featuring linear decay and a linear warmup for 3% of the total training steps. 7B and
13B models are trained on 8 and 32 V100 GPUs with 32GB memory, respectively. We employ a
maximum sequence length of 512 and utilize the DeepSpeed library and ZeRO optimizer during
training.
C DETACHED CONSTRAINT RANKING LOSS
Given a ranking sequence c1 _c2_ _ck, besides extending_ _A_ (Equation 8) to the ranking
_⪰_ _⪰· · · ⪰_ _L[BC]_
loss L[RBC]A (Equation 14), we also try to extend _A_ to two types of detached constraint ranking
_R[DC]_
loss as follows:
exp(D(s[c]θ[j] [)][ −] _[s]θ[c][i]_ [)]
_cXi≻cj_
_L[RDC]A_ [1] = log
1 +
(12)
(13)
_ci≻cjX,cj /∈cmin_ exp(s[c]θ[j] _[−]_ _[s]θ[c][i]_ [) +]
exp(D(s[c]θ[j] [)][ −] _[s]θ[c][i]_ [)]
_ci≻cjX,cj_ _∈cmin_
_L[RDC]A_ [2] = log 1 + exp(s[c]θ[j] _θ_ [) +] exp(D(s[c]θ[j] [)][ −] _[s]θ[c][i]_ [)] (13)
_ci≻cjX,cj /∈cmin_ _[−]_ _[s][c][i]_ _ci≻cjX,cj_ _∈cmin_
where cmin is the set of all lowest-quality examples. Specifically, L[RDC]A [1] detachs the score of c
when it serves as a negative example, while L[RDC]A [2] only detach the score of lowest-quality examples.
We design L[RDC]A [2] as we consider that in a ranking scenario, higher-quality examples are inherently
constrained by lower-quality ones. Consequently, we hypothesize that constraining only the lowest
examples could potentially prevent model degradation.
_L[RDC]A_ [2] = log
We also consider a ranking baseline without any constraint:
_A_ [= log] 1 + exp(s[c]θ[j] _θ_ [)] (14)
_L[R]_ _cXi≻cj_ _[−]_ _[s][c][i]_
Table 8 illustrates the results of LLama7B fine-tuned by different methods on GSM8K-RANK. As
is shown: 1): The method without setting any constraint LA only achieves 7.03 accuracy, showing
-----
the importance of adding a constraint to the alignment loss. 2): L[RDC]A [2], which applies a detached
constraint solely to the lowest-quality examples, attains a marginally improved accuracy of 12.57.
However, it also considerably impairs the model’s overall performance compared with VFT, indicating
that constraining only the lowest-quality examples is insufficient. 3): L[RDC]A [1] is much better than VFT,
_LA[RDC][2]_ and _A, we think the reason is that after detaching all negative scores, L[RDC]A_ [1] prevents the
_L_
model degradation, however, it is worse than L[RBC]A, we hypnosis that L[RDC]A [1] only tries to improve
all scores, although with different extends, which is not good enough in the ranking situation.
D DELVE DEEPLY INTO PREVIOUS RANKING LOSSES FOR ALIGNMENT
In this section, we delve deeply into previous widely used ranking losses for alignment, DPO (Rafailov
et al., 2023), RRHF (Yuan et al., 2023b) and PRO (Song et al., 2023a), and point out that they all
suffer from lack of a constraint term.
Given a ranking sequence c1 _c2_ _ck, all ranking losses are proposed to ensure the scores of_
high-quality examples are larger than those of low-quality examples. Ranking losses usually use the ⪰ _⪰· · · ⪰_
token-averaged log-likelihood to represent the score of an example c given by an LLM parameterized
by θ:
_|c|_
log P (cj _c<j, q; θ),_ (15)
_j=1_ _|_
X
_s[c]θ_ [= 1]
_|c|_
D.1 DPO
Direct Preference Optimization (DPO) (the ranking version) optimizes LLMs with the following
ranking loss:
exp(βs[c]θ[i] _θref_ [)]
_LDP O = −_ _ci_ log exp(βs[c]θ[i] _θref_ [) +][ P][−]cj _[βs]ci_ _[c][exp(][i]_ _[βs]θ[c][j]_ _θref_ [)]
X _[−]_ _[βs][c][i]_ _≺_ _[−]_ _[βs][c][j]_
(16)
= log 1 + exp(βs[c]θ[j] _θref_ _θ_ [+][ βs]θ[c][i]ref [)]
Xci _cXj_ _≺ci_ _[−]_ _[βs][c][j]_ _[−]_ _[βs][c][i]_
where θ and θref are parameters of the training model and reference model, respectively. The training
model and reference model are usually initialized by the same LLM, and DPO freezes the reference
model during fine-tuning. β is a hyper-parameter of DPO.
To analyze the effectiveness of DPO, we compute the gradient with respect to the parameters θ:
_∇θLDP O = −_
_ci_
_cj_ _cj_ _cj_ _cj_ _cj_
Pcj _≺ci_ [[][β][ exp(][βs]θ _[−]_ _[βs]θref1 +[−]_ _[βs]cjθ[c]≺[i]c[+]i_ [exp(][ βs]θ[c][i]ref[βs]cθ[)]j[∇][−][θ][s][βs]θ[c][i] _cθ[−]jref[β][ exp(][−]_ _[βs][βs]θ[c][i]_ _θ[+][ βs][−]_ _[βs]θ[c][i]refθref[)]_ _[−]_ _[βs]θ[c][i]_ [+][ βs]θ[c][i]ref [)][∇][θ][s](17)θ []]
[P]
Based on _θ_ _DP O, for each pair (ci, cj),_ _DP O will decrease the s[c]θ[j]_ [with the gradient weight]
1+[P]βcj exp( ≺ci ∇βs[exp(]cjθ _L[−][βs][βs]cjθ_ _cjθref[−][βs][−]cjθref[βs]ciθ[−][+][βs][βs]ciθ_ _ciθref[+][βs][)]ciθref_ [)] [, which may lead the model degradation.] L
In the original DPO paper (Rafailov et al., 2023), they observe this catastrophe and alleviate it by
setting a very small β (e.g., 0.1) to achieve a small gradient weight. Please refer to the original
paper for more details. However, based on Equation 17, the small β also hamper the improvement of
positive examples, which may also hinder the model’s performance. Furthermore, solely relying on
reducing gradient weights might not be sufficient to prevent model deterioration, as demonstrated in
the subsequent analysis of RRHF and PRO. In this paper, we do not replicate DPO since there is no
official public code available for ranking.
-----
Scaling Factor β 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Accuracy 18.75 18.01 15.05 13.20 11.79 11.79 9.83 8.78 8.62 7.51
Table 9: The influence of gradient weight scaling factor β for RRHF.
D.2 RRHF
Rank Responses to align Human Feedback (RRHF), which takes candidate ranking into account and
distinguishes different candidates through a pair-wise ranking loss:
_cXi≻cj_ max(0, s[c]θ[j] _[−]_ _[s]θ[c][i]_ [)] (18)
_LRRHF =_
We compute the gradient of LRRHF with respect to θ:
_∇θLRRHF = −_ _θ_ _[> s]θ[c][i]_ [)][∇][θ][s]θ[c][i] _−_ I(s[c]θ[j] _[> s]θ[c][i]_ [)][∇][θ][s]θ[c][j] (19)
_cXi≻cj_ increase sciθ decrease scjθ
[I][(][s][c][j]
| {z } | {z }
Based on ∇θLRRHF, we can see that although RRHF implicitly introduces a constraint by setting the
loss to 0 when the positive score is larger than the negative score, it still has a drawback: Whenever
_s[c]θ[j]_ _[> s]θ[c][i][,][ L][RRHF][ will decrease the][ s]θ[c][j]_ [with the same gradient weight][ I][(][s]θ[c][j] _[> s]θ[c][i]_ [) = 1][. This]
weight might be too large, potentially harming the model’s performance.
To illustrate this, we explore the performance of RRHF with a scaling factor β on its gradient
weight. As shown in Table 9, it is evident that as the weight increases (larger β), the model’s
performance declines, showing that: 1) The constraint of RRHF is not effective enough to prevent
model degradation; 2) We can alleviate the model degradation by making the gradient weight smaller
suggested by DPO (Rafailov et al., 2023); 3) Although we have tried a very small β = 0.1, RRHF
still harms the performance, which shows solely relying on reducing gradient weights might not be
sufficient to prevent model deterioration.
In fact, in the original RRHF paper (Yuan et al., 2023b), the authors have observed that a large
ranking weight, such as 10 or 100, significantly impairs model performance, leading them to try a
smaller weight (i.e., 1). However, they do not analyze the potential reason. In this paper, we highlight
that a key factor causing this discrepancy is the unwarranted reduction of the negative example score,
which necessitates imposing a constraint on the ranking loss. In addition, we discovered that a weight
of 1 can also substantially harm the model’s performance in the reasoning task. We believe that the
optimal weight of RRHF varies across tasks.
D.3 PRO
Preference Ranking Optimization (PRO), which takes candidate ranking into account and distinguishes different candidates through a ranking loss with a dynamic temperature:
_LP RO = −_ Xci log exp(τi[max]sexp([c]θ[i] [) +]τ[ P]i[max]cjs≺[c]θc[i]i[)][exp(][τ][ j]i _[s]cθj_ [)] = Xci log[1+cXj _≺ci_ exp(τi[j][s]cθj _[−][τ][ max]i_ _s[c]θ[i]_ [)]][ (20)]
_τi[j]_ [=][ r][c][i][ −] _[r][c][j][ >][ 0][,]_ _τi[max]_ = maxcj _ci_ _[τ][ j]i_ (21)
_≺_
where r[c] is the score of c given by a reward model. τi[j] [is the dynamic temperature for score][ s]θ[c][j] [. We]
compute the gradient with respect to the parameters θ:
_cj_ _ci_ [[][τ][ max]i exp(τi[j][s]cθj _i_ _s[c]θ[i]_ [)][∇][θ][s]θ[c][i] _i_ [exp(][τ][ j]i _[s]cθj_ _i_ _s[c]θ[i]_ [)][∇][θ][s]cθj []]
_≺_ 1 +[−] _[τ]c[ max]j_ _≺ci_ [exp(][τ][ j]i _[s]cθj[−][−][τ][ j][τ][ max]i_ _s[c]θ[i]_ [)] _[−]_ _[τ][ max]_ (22)
[P]
_∇θLP RO = −_
_ci_
-----
|Methods PRO PRO (remove τ)|PRO + RDC1 PRO (remove τ) + RDC1|
|---|---|
|Accuracy 18.73 0.31 7.18 0.78 ± ±|25.84 0.48 25.43 0.98 ± ±|
|---|---|
Table 10: The importance of dynamic temperature of PRO. “remove τ ” denotes remove the dynamic
temperature term, i.e., τj[j] [and][ τ][ max]i from PRO. “+RDC1” denotes add our ranking detach technical
(Equation 12).
Based on ∇θLP RO, we can see that for each pair (ci, cj), LP RO will decrease s[c]θ[j] [with the dynamic]
gradient weight:
_τi[j]_ [exp(][τ][ j]i _[s]cθj_ _i_ _s[c]θ[i]_ [)]
DGW[j]i [=] _[−]_ _c[τ]j[ max]_ _,_ (23)
1 + _cj_ _≺ci_ [exp(][τ][ j]i _[s]θ_ _[−]_ _[τ][ max]i_ _s[c]θ[i]_ [)]
which may harm the model’s performance. However, the dynamic gradient weight that is computed
[P]
based on the reward is more reasonable than the constant value of 1 used in RRHF, and thus PRO
outperforms RRHF. Specifically, when there is a substantial reward gap between higher-quality and
lower-quality, indicated by a large value τi[j][. It is reasonable to increase the penalty for negative]
example scores (large DGWi[j][), and vice versa. To demonstrate this, we remove the dynamic]
temperature term, i.e., τj[j] [and][ τ][ max]i, from PRO. As shown in Table 10, we can see that PRO
significantly outperforms PRO (remove τ ) when there is no constraint. However, the performance
gap shrinks when adding our detached constraint. These results indicate: 1) To a certain extent, the
dynamic temperature’s effectiveness stems from its ability to make PRO reduce the negative score in
a more reasonable manner. 2) The dynamic temperature is useful to prevent model degradation but is
not good enough.
D.4 SUMMARY
Our analysis reveals that previous ranking-based alignment works have some limitations:
1) Although their methods consist of some strategies to prevent model degradation (i.e., using a
scaling factor to reduce gradient weight for DPO, setting the loss to 0 for RRHF, and incorporating a
dynamic temperature for PRO), they do not recognize the importance of constraints for ranking-based
alignment methods in their papers.
2) Their strategies essentially involve diminishing the gradient weight’s magnitude, which is proven
to be insufficient (at least in the reasoning tasks).
Different from previous works, in this paper:
1) We point out an assessment misalignment problem of VFT for reasoning and highlight the
importance of constraint for alignment to prevent model degradation.
2) We introduce a novel constraint alignment loss. The constraint alignment loss with a boundary
constraint term not only alters the magnitude but also adjusts the direction of the gradient weight
depending on the condition, which is proven to be very effective in preventing model degradation and
enhancing the reasoning ability of LLMs.
E CASE STUDY
We also conducted a case study to intuitively show the importance of our constraint alignment. As
shown in Table 11, given the question, our AFT successfully gives the correct COT and answer, while
VFT gives the wrong COT at the second step (colored red), demonstrating the superiority of AFT.
More importantly, after removing the boundary constraint, the generative ability of LLM seems to
degrade, resulting in outputting many repeat and meaningless output tokens.
-----
|Question|At the end of the school year, Kate asked her teacher if she could have the 3 boxes of 64 crayons since they were all worn down to small pieces. The teacher agreed. Kate took the crayons home and decided to make crayon muffins by melting 8 small pieces of crayons together in a muffin tin that she placed in an oven for a few minutes. If she can sell her muffin crayons for $1.50 each, how much money can Kate make?|
|---|---|
|VFT|She has 3 boxes of crayons that are all worn down to small pieces so she has 3*64 = <<3*64=192>>192 pieces of crayons She melts 8 pieces of crayons together in a muffin tin so she has 8+192 = <<8+192=200>>200 crayon muffnis She can sell her muffin crayons for $1.50 each so she can make 200*1.5 = $<<200*1.5=300>>300 [ANS] 300.|
|---|---|
|AFT|Kate has 3 boxes * 64 crayons = <<3*64=192>>192 crayons. She melts 192 crayons / 8 crayons per muffin = <<192/8=24>>24 muffnis. She sells the muffnis for $1.50 each and has 24 muffnis, so she makes 24 muffins * $1.50/muffin = $<<24*1.5=36>>36. [ANS] 36.|
|---|---|
**Answer** She was able to get 3 boxes of 64 crayons so she has 3*64 =
_<<3*64=192>>192 crayons Each crayon muffin needs 8 crayons and she_
has 192 crayons so she can make 192/8 = <<192/8=24>>24 crayon muffins
If she makes 24 crayon muffins and sells them for $1.50 apiece, she can make
24*1.5 = $<<24*1.5=36.00>>36.00 [ANS] 36.
|AFT (remove constraint)|She can make 8 pieces of crayons per muffin tin so 3 boxes * 8 pieces/tin = 8 * 8 = <<3*8=24>>24 pieces fit in 1 tin Each melted crayon is worth $1.50 because 24 pieces * 1.50/64 pieces = 1.50/1.50/24 pieces = 1.50/1.50 She can make 4 tins of melted crayons because 24 pieces / 1.50 / 1.5 / 24 pieces / 1.50 / 1.50 / 24 pieces / 1.50 / 1.50 / 24 pieces / 1.50 / 1.50 / 24 pieces / 1.50 / 1.50 / 24 pieces / 1.50 / 1.50 / 24 pieces / 1.50 / 1.50 / 24 pieces / 1.50 / 1.50|
|---|---|
Table 11: A case study to intuitively show the effectiveness of AFT with boundary constraint. the
right and wrong steps are colored by blue and red, respectively.
-----
| [
"Lei, Li",
"Peiyi, Wang",
"Liang, Chen",
"Feifan, Song",
"Binghuai, Lin",
"Zhifang, Sui",
"Yunbo, Cao",
"Tianyu, Liu"
] | 2023-09-05T00:00:00 | null | false | 48 | 5 | null | http://arxiv.org/abs/2309.02144 | https://arxiv.org/abs/2309.02144 | https://www.semanticscholar.org/paper/74b4b993babe99bc5f5c589c27fef0f1baba606b |
A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models | We have recently witnessed a number of impressive results on hard mathematical reasoning problems with language models. At the same time, the robustness of these models has also been called into question; recent works have shown that models can rely on shallow patterns in the problem description when generating a solution. Building on the idea of behavioral testing, we propose a novel framework, which pins down the causal effect of various factors in the input, e.g., the surface form of the problem text, the operands, and math operators on the output solution. By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space. We apply our framework on a test bed of math word problems. Our analysis shows that robustness does not appear to continuously improve as a function of size, but the GPT-3 Davinci models (175B) achieve a dramatic improvement in both robustness and sensitivity compared to all other GPT variants. | This work proposes a novel framework, which pins down the causal effect of various factors in the input, e.g., the surface form of the problem text, the operands, and math operators on the output solution, and applies this framework on a test bed of math word problems. | ## A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models
**Alessandro Stolfo[∗]**
ETH Zürich
[email protected]
**Zhijing Jin[∗]**
MPI & ETH Zürich
[email protected]
**Kumar Shridhar**
ETH Zürich
[email protected]
**Bernhard Schölkopf**
MPI & ETH Zürich
[email protected]
**Mrinmaya Sachan**
ETH Zürich
[email protected]
**Abstract**
We have recently witnessed a number of impressive results on hard mathematical reasoning problems with language models. At the
same time, the robustness of these models has
also been called into question; recent works
have shown that models can rely on shallow
patterns in the problem description when predicting a solution. Building on the idea of
behavioral testing, we propose a novel framework, which pins down the causal effect of
various factors in the input, e.g., the surface
form of the problem text, the operands and
math operators on the output solution. By
grounding the behavioral analysis in a causal
graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space. We apply our framework on a test bed of bivariate
math word problems. Our analysis shows that
robustness does not appear to continuously improve as a function of scale, but that the recent
LLM, GPT-3-Instruct (175B), achieves a dramatic improvement in both robustness and sensitivity, compared to all other GPT variants.[1]
Figure 1: Our framework conducts do-interventions on
the input, and obtains the change in the the distribution
_P_ (R) of the prediction (R) by GPT-x. The interventions are then used with causal mediation analysis to
understand the causal effect of each intervention on the
output.
(Brown et al., 2020; Chowdhery et al., 2022) and
nuanced ways to prompt them (Drori et al., 2021;
Wei et al., 2022b; Zhou et al., 2022). Yet, the robustness of these models on the math reasoning
tasks remains questionable (Patel et al., 2021)[2].
A well-known way to check robustness of models is behavioral testing using a CheckList (Ribeiro
et al., 2020). CheckLists are metamorphic tests (as
in software engineering), such as invariance tests
and directional expectation tests, used to identify
critical failures in our models. Inspired by the robustness study presented in Patel et al. (2021), we
investigate the robustness of the reasoning in LLMs,
building our approach on the idea of behavioral
testing that underlies the CheckList framework.
To achieve this goal, we propose a causal framework to quantify the robustness of NLP models’
math reasoning ability. Specifically, we first de
2We include an interesting sample interaction with GPT3 in the appendix (Figure 6) where GPT-3 cannot robustly
answer a simple calculation question.
**1** **Introduction**
Math reasoning has been a longstanding challenge
for AI (Bobrow, 1964), as it requires both the linguistic ability to map a problem into a set of mathematical operations, and the ability to execute the
math operations correctly. While there has been a
lot of work on building supervised domain-specific
solvers for these problems in the past decade (Hosseini et al., 2014; Kushman et al., 2014; Roy et al.,
2015; Seo et al., 2015; Sachan and Xing, 2017;
Sachan et al., 2017, 2018, inter alia), recently, we
have seen astounding progress in this area led by
the development of large language models (LLMs)
_∗_ Equal contributions.
[1Our code and data are at https://github.com/](https://github.com/alestolfo/causal-math)
[alestolfo/causal-math.](https://github.com/alestolfo/causal-math)
-----
Figure 2: Causal graph of model predictions on math questions. Explained in detail in text.
scribe a causal graph formulation of math reasoning, where the goal is to quantify the difference in
the structural causal models of human reasoning
and model judgment. We consider causal factors
such as the textual framing of the question, number
operands, and operations. Then, we identify the
set of interventions feasible in the context of math
word problems (MWPs), and provide a causal inference framework to obtain causal influences of each
factor via direct do-interventions (Pearl, 1995) and
causal mediation analysis (Pearl, 2001). Using our
framework, we disentangle the factors affecting the
model’s predictions and measure their influence.
This way, we are able to provide insights into the
model’s reasoning in terms of robustness and sen_sitivity._
Finally, we apply our framework and evaluate
a series of GPT models with increasing sizes. We
show that larger GPT models tend to be more sensitive to changes in the ground-truth result of a MWP,
but not significantly more robust. An exception to
thus phenomenon is the most recent and largest
variant of Instruct-GPT-3 (Ouyang et al., 2022),
which shows a remarkable improvement in both
sensitivity and robustness.
**2** **Causal Graph Formulation**
We present our framework for bivariate MWPs with
a single arithmetic operation (addition, subtraction,
multiplication or division). This framework can be
extended to more variables and other math problems in future work.
We consider a dataset D of MWPs, where each
MWP is denoted as a question Q. Q is an ordered
list (t, (n1, n2), g) consisting of a question template t with two operands n1, n2, and the groundtruth result g. Each question template t := (o, s)
further contains two types of information: the arithmetic operation type o ∈{+, −, ×, ÷} implicitly
expressed in the question, and the text surface form
_s irrelevant to the arithmetic operation. The ground-_
truth result g = fo(n1, n2) is calculated by applying the operation fo( _,_ ) on the two operands. An
_·_ _·_
example math question in this form from Patel et al.
(2021) is as follows:
**Template t: Mark has n1 trees in his**
backyard. If he plants n2 more, how
many trees will he have?
**Operands n: n1 = 12, n2 = 13**
**Operation o: “+”**
**Result: g = fo(n1, n2) = n1+n2 = 25**
Our goal is to quantify the robustness of a model
_M on the set of problems Q ∈D. Ideally, D_
should be a dataset not seen by the model M during
training. We assume that M takes Q as input and
predicts a probability distribution of the result R:
_P_ (R (t, (n1, n2))). Our formulation below will be
_|_
easier to understand using this finite discrete set,
and can be generalized to infinite or continuous
sets for other types of operands in future work.
**3** **The Framework**
In this section, we describe the formulation of our
framework in three steps. First, we define the factors that might influence the model’s predictions.
Then, we identify the possible do-interventions that
we can perform. Finally, we describe the causal
effects that we measure.
**3.1** **Step 1. Question Reformulation**
We address the research question: Is a model rea_soning robustly on MWPs?_ by comparing the
causal mechanisms of the model’s decisions to an
hypothesized human reasoning mechanism. Note
-----
that we do not claim to know how humans reason about these problems. We simply propose a
reasonable and intuitive mechanism based on the
independence of language and mathematical reasoning in humans (Brannon, 2005; Monti et al.,
2012).
**Human** **reasoning** **mechanism.** The causal
mechanisms of how humans might solve Q include
_o = fabstract(Q),_ (1)
_g = fo(n1, n2),_ (2)
where they first abstract the arithmetic operation
_o from the problem Q by some cognitive pro-_
cess fabstract, and then apply the operation to the
operands to obtain the result g. We show these
mechanisms in the green subgraph G1 of Figure 2.
**Model reasoning mechanism.** In contrast, the
causal mechanisms of how a model might solve Q
are as follows:
_r = fblackBox(t, n1, n2),_ (3)
where we are unsure about (1) what part(s) of t the
model takes into account, and (2) how it operates
over the relevant variables.
Thus, we draw all possible causal mechanisms
that might take place in the black-box model
_fblackBox in the model causal graph G2 in Figure 2._
Some possible fine-grained causal mechanisms are
1. The model might attend over the question
template t in two ways: paying attention to
the text surface form s via the causal path
**_T →_** _S →_ _R, or text relevant to the math_
operation o via the causal path T → _O →_ _R._
2. The model might also attend to the operands
**_n := (n1, n2) via a causal path N_** _R._
_→_
3. If the model learns the correct causal mechanisms as in the human cognitive process,
it should capture how the operator and the
operands matter to the ground-truth result g
(via O → _G and N →_ _G) and then the model_
prediction should be sensitive to any changes
in the ground truth, namely G → _R. No spuri-_
ous correlations can directly affect R without
going through the mediator G.
Hence, to answer the question “How robust is
the mathematical reasoning of a model on MWPs?”
we can answer the following subquestions:
1. How does R change in response to G? By
quantifying this, we assess the sensitivity (correct responsiveness) of the model to changes
in the problem. In other words, does the model
correctly adjust its prediction in response to a
change in the correct solution of the problem?
2. What is the (unwanted) direct causal effect
size of S → _R, and N →_ _R? We see the_
quantities as a measure of the brittleness (i.e.,
wrong responsiveness) of the model to resultpreserving changes in the input. The lower the
direct causal effect of S and N, more robust
the model is.
**3.2** **Step 2. Causal Intervention List**
After formulating the two causal graphs, we then
list all feasible limited actions that allow us to perform our causal analysis. In the context of MWPs
in this work, we use the following interventions:
1. Direct intervention on all possible n1, n2
2. Partially controllable interventions on T . We
can replace the template T in one of the two
ways:
(a) both S and O are affected, or
(b) S is affected but O is not affected.
**3.3** **Step 3. Turning Limited Actions into**
**Causal Effect Sizes**
Next, we explain how we can obtain causal effect sizes we want (listed in Step 1) from the limited set of interventions we can do (listed in Step
2). Specifically, we first start from all the feasible interventions, and for variables that we cannot
directly intervene on, we apply deductions from
do-calculus (Pearl, 1995) to obtain or approximate
the direct causal effect sizes. In the following, we
describe a list of causal effect sizes that we need.
**Causal Effects of the Operands.** When intervening on the operands N := (N1, N2), we can
obtain the size of the total causal effect (TCE, i.e.,
the joint effect through all the directed causal paths
from a variable to another) of N on R, namely
TCE(N on R) :=E[int+]N [R] E[int]N _[−][R]_ (4)
_−_
Here, E[int+]N [R] denotes the expected result after intervention on N and E[int]N _[−][R] denotes the ex-_
pected result prior to the intervention. Note that this
-----
TCE is not the exact quantity that we are looking
for, because we want to separate two different paths
of how N affects R: (1) the path N → _G →_ _R,_
which is the correct decision path that we want
the model to pick up (where the model reacts to
the change in the ground-truth answer), and (2)
the path N → _R, which is the spurious correla-_
tion that the model might have learned (where the
model relies on some spurious correlations with
certain number operands, which could be traced to
perhaps their frequencies in the training corpus).
We can quantify the direct causal effect (DCE,
i.e., the effect from the directed causal path from
a variable to another that does not go through any
intermediate variables) (Pearl, 2001) of N on R,
namely the strength of the direct causal path N →
_R, by controlling for G to be fixed every time we_
intervene on N :
DCE(N → _R) :=_ (5)
_P_ (G)(E[int+]N [R _G = g]_ E[int]N _[−][R_ _G = g]) ._
_|_ _−_ _|_
_g_
X
For example, if we observe a model doing
100+100=200 correctly, we want to separate the
math ability here into (1) the model’s sensitivity towards the ground-truth answer, and (2) the
model’s decisions based on its familiarity with just
the operands 100. Here, the overall effect is the
calculable TCE(N on R) by Eq. (4), and one of
the subeffects is the calculable DCE(N → _R) by_
Eq. (6).
**Causal Effects of the Text Surface Form.** As
for the operands, we can compute both the direct
and indirect effects of the surface form representing
the math problem. In particular, intervening on T
without controlling for O (intervention 2a in Sec.
3.2), we can compute the total effect, i.e.,
TCE(T on R) :=E[int+]T [R] E[int]T _[−][R]._ (6)
_−_
Controlling for the operation O (intervention 2b
in Sec. 3.2) will instead allow us to obtain the
direct causal effect of the surface text:
DCE(S → _R)_
:= E[int+]S [R] E[int]S _[−][R]_ (7)
_−_
= _P_ (O)(E[int+]T [R _O = o]_ E[int]T _[−][R_ _O = o])._
_|_ _−_ _|_
_o_
X
Note that since there is no mediator between S
and R, the DCE(S → _R) is also TCE of S on_
_R. The only adaptation that we need to make with_
regard to the MWPs is that it is not feasible to enumerate all possible perturbations of S. Therefore,
the practical results that researchers can achieve are
over a certain subset of S. In practice, we obtain
this by intervening on T without affecting O.
**Causal Effects of the Operator.** The ideal way
to obtain the TCE of O on R is through some careful human annotation that minimally change the
templates as Kaushik et al. (2020) do for sentiment classification. The challenge for MWPs in
our case is that with all our possible set of interventions, we cannot only intervene O without introducing changes to the irrelevant surface form.
However, we might get some information about
TCE(O on R) because, on the causal graph, the
total causal influence of T on R actually flows into
two directed paths, one through S to R (which
is the DCE(S → _R)), and the other from O to_
_R, which is our interested quantity TCE(O on R)._
Therefore, we compare the two quantities we know,
TCE(T → _R) and DCE(S →_ _R), to get a sense_
of the causal influence of O on R that we cannot
obtain in any other way.
**3.4** **Quantifying the Causal Influence**
Given a pair of problems Q : **_t, (n1, n2), g_** and
_{_ _}_
**_Q[′]_** : {t[′], (n[′]1[, n]2[′] [)][, g][′][}][ representing an interven-]
tion do(X : x → _x[′]), where X ∈{T, S, N_ _},_
denote the distribution before the intervention as
_P_ (R (t, (n1, n2))) as P and the distribution af_|_
ter intervention P (R | (t[′], (n[′]1[, n][′]2[)))][ as][ P][ ′][. The]
support of R is R, the set of possible results.
We quantify the causal effect of a factor X on
the model’s prediction R in two ways: by assessing the change in the the predicted result, and by
measuring the change in the probability assigned
by the model to the correct result g (or g[′]).
**Change in the Prediction. To account for the in-**
ability of LMs to capture the continuous property
of numbers (Jin et al., 2021a), we measure the
change in model’s prediction using an indicator of
the “change result” event:
_dcp(P, P_ ) := 1(r = r[′]), (8)
_[′]_ _̸_
where r = arg maxx _P_ (x), and r[′] =
_∈R_
arg maxx∈R P _[′](x)._
**Relative Change in Confidence. Inspired by Fin-**
layson et al. (2021), we also highlight the change
in terms of the relative difference in the probability
-----
assigned to g and g[′]. We formulate two types of
relative change, one quantifying the relative change
in the confidence of g, and the other quantifying
the relative change in the confidence of g[′]:
∆rel = _[P]_ [(][g][)][ −] _[P][ ′][(][g][)]_ (9)
_P_ _[′](g)_
∆[′]rel [=][ P][ ′][(][g][′][)][ −] _[P]_ [(][g][′][)] _._ (10)
_P_ (g[′])
question. In order to obtain suitable prompts for
the models, we convert the problems’ questions
into statements where the result of the problem is
expected to be the first token after the prompt. E.g.,
in the example in section 2, how many trees will
_he have? is converted into the number of trees that_
_he will have is. We consider templates describ-_
ing a two-variable expression from the union of
the three datasets, and we filter out instances for
which the conversion into statement is not possible.
More details about this process are provided in the
Appendix C. We obtain in this way a set of ∼400
template-expression pairs that we use to generate
pairs of prompts representing an intervention. For
the sake of consistency, we keep the notation t to
refer to the statement-converted template, and we
use (t, (n1, n2)) to refer to an instantiated template
that we use as prompt.
**4.2** **Intervention Data**
Given an MWP Q : ((t, (n1, n2)), g), we
generate a second problem instance Q[′] _∈_
_{((t[′], (n[′]1[, n][′]2[))][, g][′][)][ | C}][ using a set of constraints]_
_C depending on the type of causal effect CE we_
want to measure and on the considered variable.
**Intervening on N** **. When intervening on the num-**
bers in the problem, the sets of constraints C take
the following form:
CE =DCE(N → _R) =⇒C =_
_s = s[′], o = o[′], n[′]1_ 2
_{_ _[̸][=][ n][1][, n][′]_ _[̸][=][ n][2][, g][′][ =][ g][}]_
CE =TCE(N on R) =⇒C =
_s = s[′], o = o[′], n[′]1_ 2
_{_ _[̸][=][ n][1][, n][′]_ _[̸][=][ n][2][, g][′][ ̸][=][ g][}][.]_
That is, the text of the problem is kept unaltered and
a set of new numbers N = _n1, n2_ is sampled
_{_ _}_
in such a way that the result g is affected or not
depending on the effect what is being measured.
**Intervening on T . When changing the textual**
description of the problem, we have:
We quantify the overall relative change in confidence (RCC) as the average of the two relative
changes above:
12 ∆rel + ∆[′]rel if g = g[′]
_drcc(P, P_ ) = _̸_
_[′]_
max ∆rel, ∆[′]rel if g = g[′] _._
(11)
Å [ ] ã
**A Unified Form. We are interested in the average**
causal effect of the intervention across all problems
in D:
CEmetric(R do(X : x _x[′]))_ (12)
_|_ _→_
=CEmetric(X on R) (13)
_dmetric(Pi, Pi[′][)]_
=EQi **_D_**
_∈_
(14)
_∀_ metric ∈{rcc, cp}, where Pi and Pi[′] [are the pre-]
and post-intervention distribution for Qi **_D. We_**
_∈_
describe how we construct the dataset D in section
4.2. We additionally report results measuring the
JS divergence between P and P _[′]_ in Appendix H.
**4** **Experimental Setup**
In this section, we describe the data used to perform
the interventions and to measure the causal effects.
**4.1** **Dataset**
For our analyses, we use instances of math word
problems from three popular datasets: ASDiv-A
(Miao et al., 2020), MAWPS (Koncel-Kedziorski
et al., 2016), and SVAMP (Patel et al., 2021).
The examples contained in these collections are
pairs (t, o) consisting of a question template t
with its annotated operation o. Each of these pairs
can be instantiated multiple times into problems
**_Q : ((t, (n1, n2)), g) by filling the template with_**
numerical values n1, n2 and computing the groundtruth result g = fo(n1, n2).
The textual template t consists of a context (describing a real-world state and/or actions) and a
CE =DCE(S → _R) =⇒C =_
_{s ̸= s[′], o = o[′], n[′]1_ [=][ n][1][, n]2[′] [=][ n][2][, g][′][ =][ g][}]
CE =TCE(T on R) =⇒C =
_{s ̸= s[′], o ̸= o[′], n[′]1_ [=][ n][1][, n]2[′] [=][ n][2][, g][′][ ̸][=][ g][}][.]
In other words, we change t such that either o[′] = o,
or o[′] ≠ _o. In the former case we sample a differ-_
ent template t[′] = (s[′], o) from the set of templates
describing the same operation o, in the latter case
-----
we sample a new t[′] describing a different operation. In Appendix D we report some examples of
(Q, Q[′]) pairs representing the different types of
interventions.
Given a model P, we use the pair (Q, Q[′]) to
obtain a pair of distributions P (R (t, (n1, n2)))
_|_
and P (R|(t[′], (n[′]1[, n][′]2[)))][, which we use to measure]
the causal effect of the intervention. We consider
the result space R = {1, 2, . . ., C} consisting of
integer values, following the setup of several existing MWP datasets (Miao et al., 2020; KoncelKedziorski et al., 2016; Patel et al., 2021). To control our experimental costs and make sure the models keep the number as one token, we set C = 300.
And we additionally enforce Ni 1, 2, . . ., C,
_∈{_ _}_
_Ni_ **_N_** . From all the tokens in a model’s vo_∀_ _∈_
cabulary, we focus on the probability assigned to
the numbers in our result space R, and thus we use
_P_ (R = r) to denote the normalized probability
_Praw(R = r)/Z, where Z =_ _r=1_ _[P][raw][(][R][ =][ r][)][,]_
and Praw(x) is the raw probability score assigned
to the vocabulary token x. For each intervention[P][C]
type, we generate a dataset D consisting of (Q, Q[′])
pairs. Unless otherwise specified, for our experiments we generate 500 intervention pairs for each
template, and results are average over three seeds.
**4.3** **Models to Evaluate**
We use our framework to assess the robustness of
reasoning in eleven pre-trained language models.
We consider five sizes of the GPT-2 model (Radford
et al., 2019): distilled (Sanh et al., 2019), regular,
medium, large, and XL. We evaluate three models
from EleutherAI that were pre-trained on the Pile
(Gao et al., 2020): GPT-Neo 1.3B and 2.7B (Black
et al., 2021), and GPT-J-6B (Wang and Komatsuzaki, 2021). We use HuggingFace Transformers
(Wolf et al., 2019) to access the models. Additionally, we conduct a set of experiments with the
Instruct versions (Ouyang et al., 2022) of GPT-3
(Brown et al., 2020): Babbage, Curie and Davinci[3].
Experiments with GPT-3 are carried out under the
constraints set by the OpenAI APIs[4], which prevent
us from computing the causal effect using the same
procedure as for the other models. We report the
details about how the metrics were computed for
3The sizes of the three models are believed to be, respectively, 1.3B, 6.7B and 175B parameters. Evidence suggest[ing this is presented in https://blog.eleuther.ai/](https://blog.eleuther.ai/gpt3-model-sizes/)
[gpt3-model-sizes/.](https://blog.eleuther.ai/gpt3-model-sizes/)
[4https://openai.com/api/](https://openai.com/api/)
GPT-3 in Appendix E. In the reported results, we
indicate with an asterisk ([∗]) the metrics that were
influenced by this limitation.
**5** **Results**
We compare the direct causal effect DCE and the
total causal effect TCE of N and T on R. DCE
represents the undesired effect for a model to being
mistakenly responsive to a change in N or T not
leading to a change in the result g (low robustness),
whereas higher values of TCE indicate a higher
ability of the model to correctly adjust the probability weight assigned to the new solution g[′] after
the intervention (high sensitivity).
**5.1** **Effect of N on R**
From the results in Figure 3, we notice that larger
models exhibit a larger TCErcc/DCErcc ratio. In
particular, in GPT-3 Curie and GPT-J-6B, the TCE
is, respectively, 3.5x and 12x larger than the DCE.
In GPT-3 Davinci, the total causal effect grows as
much as 1000x larger than the DCE. The magnitude
of the two effects in terms of change of predictions
_dcp is comparable for all models except GPT-3_
Davinci. For small models (Distilled and Regular
GPT-2) DCEcp and TCEcp are considerably smaller
than for other models, indicating high robustness
but low sensitivity. Contrarily, for InstructGPT-3
we observe a remarkable 63% absolute difference
between direct and total effect.
For a different visualization of the direct causal
effect of N on the the model’s prediction. We report heatmaps showing the probability assigned
by the model to the result g of a math problem (t, (n1, n2), g) | g = _n1 + n2, ∀g_ _∈_
0, 1, . . ., 50 _,_ (n1, n2) 0, 1, . . ., 50
_{_ _}_ _∀_ _∈_ _{_ _} ×_
_{0, 1, . . ., 50}. For Distil-GPT-2 we observe low_
overall probability assigned to g and diagonal patterns indicating a consistency in assigning higher
probability to specific results (e.g., 10, 20, 30, 40,
50). For the two larger models we notice higher
probability mass assigned to the problem’s result,
but less consistency on the prediction of the same
result with different sets of operands (this is true
for GPT-J in particular). This result is consistent
with the observed higher DCE and TCE in larger
models: P (g) might vary more considerably when
intervening on the N without affecting g, but overall the model assigns higher probability weight to
the correct result, which correlates with higher sensitivity.
-----
result is that GPT-3 Davinci consistently predicts
the same answer r = r[′] when g = g[′], however,
the probabilities P (g) and P _[′](g) might vary signif-_
icantly.
The results observed for the two kinds of intervention do(T : t **_t[′]) and do(N : (n1, n2)_**
_→_ _→_
(n[′]1[, n][′]2[))][ show similar trends. Small models (Dis-]
tilled and Regular GPT-2) exhibit low sensitivity to interventions. Larger models (from GPT-2
Medium to GPT-Neo) appear to be more influenced
by changes in both N and T . However, they display similar sensitivity to both result-altering and
result-preserving interventions. An improvement
in sensitivity is noticeable in GPT-J and GPT-3
Curie, though not accompanied by an improvement
in robustness. A remarkably different behaviour is
instead showed by GPT-3 Davinci, which demonstrates substantially higher sensitivity to resultaltering interventions (high TCE), and higher robustness (in terms of prediction change).
These results seem to support the so-called emer_gent abilities hypothesis (Wei et al., 2022a), which_
postulates the existence of skills that are displayed
by large-scale models but are not present in smallerscale models, and thus cannot be predicted by simply extrapolating the performance improvements
on smaller-scale models. In our case, the ability of
reasoning robustly appears to develop in an emergent way. Stronger evidence supporting this theory
could be obtained evaluating models with size in
the range 6-175B parameters.
**5.4** **Quantitative Validation of the**
**Framework**
10[4]
10[2]
10[0]
|Col1|DCE of N TCE of N rcc rcc|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|Col22|Col23|Col24|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||||||||||||||
|||||||||||||||||||||||||
|||||||||||||||||||||||||
0.8
cp 0.6
_d_
0.4
0.2
DistilledRegularMediumLarge3-BabbageNeo-1.3BNeo-2.7BXL J-6B3-Curie3-Davinci
|Col1|DCE of N cp|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|TCE of N cp|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|Col22|Col23|Col24|Col25|Col26|Col27|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
||||||||||||||||||||||||||||
Figure 3: Comparison of DCE(N _→_ _R) and_
TCE(N on R). We use some approximation method
for GPT3-Instruct (denoted by _[∗]) which is explained in_
Appendix E.
**5.2** **Effect of T on R**
In Figure 5 we report the total causal effect of the
question T and the direct causal effect of the irrelevant text elements S on the model’s prediction.
The considerations made for the effects of N can
be drawn in this case as well: larger models show
a larger TCErcc/DCErcc ratio. For models smaller
than GPT-J, this ratio is ≤ 1, which indicates that
an intervention in the textual description of the
MWP leads to a comparable effect both when affecting the ground truth result (i.e. when g = g[′])
and when g ̸= g[′]. The large TCErcc/DCErcc ratio
of GPT-3 Davinci (∼280) suggests that the model
tends to adjust its prediction accordingly after a
result-altering intervention, more than varying the
probability score assigned to the correct solution
after an intervention that does not affect the result
of the problem. For dcp, GPT-3 Davinci shows a
substantial difference (57%) between direct and
total effect, as observed for N .
We examine the relationship between performance
and robustness, computing the Pearson correlation coefficient between accuracy (precision@10)
and the relative confidence change (RCC) metric. On a per-template basis (500 instances for
each template), we found accuracy to be positively
correlated with TCE(N on R) and TCE(T on R)
(0.24 and 0.49, respectively) and negatively correlated with DCE(N → _R)and DCE(S →_ _R)_
(-0.26 and -0.36, respectively). We see these results as a quantitative validation of the intuition
behind our framework: the better the model’s performance, the more the model tends to correctly
adjust its prediction after a result-altering intervention (higher sensitivity) and to correctly not change
its prediction after a result-preserving intervention
(higher robustness).
**5.3** **Considerations**
In comparison to other models, GPT-3 Davinci
shows the highest DCErcc, but low DCEcp. This
discrepancy is related to the quantities that the two
metrics consider. drcc takes into account the probability assigned to g, while dcp does not consider
the ground truth solution. One interpretation of this
-----
0.200
0.175
0.150
0.125
0.100
0.075
0.050
0.025
0.000
0.200
0.175
0.150
0.125
0.100
0.075
0.050
0.025
0.000
0.200
0.175
0.150
0.125
0.100
0.075
0.050
0.025
0.000
10 15 20 25 30 35 40 45 50
10 15 20 25 30 35 40 45 50
10 15 20 25 30 35 40 45 50
Figure 4: Heatmaps displaying P (g) for Distil-GPT-2 (left) and GPT-J-6B (center) and GPT-3 Davinci (right).
The probability values for each combination of ((n1, n2), g) are averaged over 20 different templates. Probability
values over 0.2 are displayed with the darkest color.
2017). Recently, causal inference has been introduced in NLP for different uses (Feder et al.,
2021a), such as formulating NLP tasks in terms of
causal and anticausal learning (Jin et al., 2021c),
text as a variable in causal inference (Roberts et al.,
2020; Veitch et al., 2020; Jin et al., 2021b, 2022),
effect of certain neurons on predictions (Vig et al.,
2020; Meng et al., 2022), and inspecting the effect
of the properties in data and learning on the NLP
model performance (Ni et al., 2022). The most similar line of research to our work is the application
of causal effect estimation on interpreting model’s
behavior, such as how models understand syntactic
agreement (Finlayson et al., 2021), and how interventions in the representations and weghts affect
the model prediction Feder et al. (2021b).
To the best of our knowledge, our work is the
first to formulate a causal framework for robustness
behavioral tests to improve the CHECKLIST, and
also we are the first to introduce the idea to quantify
the differences in the causal mechanisms of human
reasoning and model decisions.
**Math Reasoning in NLP. A growing body of**
work tries to improve the math reasoning capability in NLP models (Zhang et al., 2020; Geva
et al., 2020; Spokoyny et al., 2021), and prompting techniques for LLMs (Cobbe et al., 2021; Shen
et al., 2021; Kojima et al., 2022; Wei et al., 2022b;
Chowdhery et al., 2022). For analysis, significant
attention has been given to models’ ability to understand numerical quantities (Wallace et al., 2019;
Thawani et al., 2021) and numerical operations (Pal
and Baral, 2021; Berg-Kirkpatrick and Spokoyny,
2020; Pi˛ekos et al., 2021; Razeghi et al., 2022).
**7** **Conclusion**
Medium3-Babbage*LargeNeo-1.3BNeo-2.7BXL J-6B
10[4]
10[2]
10[0]
|DCE of S TCE of T rcc rcc|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
0.8
cp 0.6
_d_
0.4
0.2
DistilledRegularMediumLarge3-BabbageNeo-1.3BXLNeo-2.7BJ-6B3-Curie3-Davinci
|DCE of S TCE of T cp cp|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
Figure 5: Comparison of DCE(S _→_ _R) and_
TCE(T on R). _[∗]approx values, see Appendix E._
Moreover, we conduct an additional sanity check
as in Patel et al. (2021): removing the question
from the MWP templates, we observe a sensitivityrobustness degradation to random guessing. This
indicates that the measurement of the causal effects
within our framework is not affected by patterns
in the templates that might have been picked up or
memorized by large models.
We additionally report in Appendix G the precision of the models on the generated instances
of MWPs, which shows an improvement with the
model sizes that follows a similar trend as the robustness/sensitivity changes we observed.
**6** **Related Work**
**Causal Inference for NLP. Causal inference is**
traditionally applied to various phenomena in nature and human society (Pearl, 2009; Peters et al.,
In this paper, we proposed a framework to disentangle and separately measure the effect of different
factors influencing the predictions of LLMs. Our
-----
results indicate that the robustness and the sensitivity of LLMs on simple mathematical reasoning
do not improve linearly as a function of scale, but
that they seem to develop in an emergent fashion.
Our framework provided a set of robustness indicators, and also opened new future directions to
design behavioral tests of models in a more causal,
principled way.
**Ethical Considerations**
As for the ethical practice in this work, the data
involved are from existing MWP datasets with no
private user information. As for the ethical impact
of the use of this work, the study is about providing
a metric and analyzing existing models’ robustness, so there is less concern over harmful usage.
Rather, it is more about putting checks on existing
AI models and helping humans understand them
better before use. Potential stakeholders that could
benefit from this research include NLP researchers
working on math models, and people involved with
applications about math questions in text and elearning design.
**Limitations**
A key limitation in our work is that LLMs might
have seen these math problems. Our work theoretically assumes this is not the case. Another
limitation is that for sake of simplicity, our work
makes some assumptions. For example, we assume
all numbers in the range of integers 0 to C=300.
This would not cover every MWP out there. And
future work is needed to generalize our framework
to other forms of MWPs. In this work, we are also
constrained by the limitations of the OpenAI policy
on the GPT-3 API. This limits the number of perturbations we consider in this work as well as the
accuracy with which we can estimate our causal distributions. Finally, our work is restricted to English,
and extending it to other languages will require us
to create a MWP dataset in that language.
**Acknowledgments**
This material is based in part upon works supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center,
FKZ: 01IS18039B; by the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project
number 390727645; by the John Templeton Foundation (grant #61156); by a Responsible AI grant
by the Haslerstiftung; and an ETH Grant (ETH-19
21-1). Alessandro Stolfo is supported by armasuisse Science and Technology through a CYD Doctoral Fellowship. Zhijing Jin is supported by PhD
fellowships from the Future of Life Institute and
Open Philanthropy, as well as the travel support
from ELISE (GA no 951847) for the ELLIS program. We also thank OpenAI Researcher Access
Program for granting our team credits to their API.
**References**
Taylor Berg-Kirkpatrick and Daniel Spokoyny. 2020.
[An empirical investigation of contextualized number](https://doi.org/10.18653/v1/2020.emnlp-main.385)
[prediction. In Proceedings of the 2020 Conference](https://doi.org/10.18653/v1/2020.emnlp-main.385)
_on Empirical Methods in Natural Language Process-_
_ing (EMNLP), pages 4754–4764, Online. Associa-_
tion for Computational Linguistics. 8
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and
Stella Biderman. 2021. GPT-Neo: [Large Scale](https://doi.org/10.5281/zenodo.5297715)
[Autoregressive Language Modeling with Mesh-](https://doi.org/10.5281/zenodo.5297715)
[Tensorflow. If you use this software, please cite it](https://doi.org/10.5281/zenodo.5297715)
using these metadata. 6
Daniel G. Bobrow. 1964. Natural language input for a
computer problem solving system. Technical report,
USA. 1
Elizabeth M. Brannon. 2005. [The independence of](https://doi.org/10.1073/pnas.0500328102)
[language and mathematical reasoning. Proceedings](https://doi.org/10.1073/pnas.0500328102)
_of the National Academy of Sciences, 102(9):3177–_
3178. 3
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry,
Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon
Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu,
Clemens Winter, Chris Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners. In](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
_Advances in Neural Information Processing Systems,_
volume 33, pages 1877–1901. Curran Associates,
Inc. 1, 6
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311. 1, 8_
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint
_arXiv:2110.14168. 8_
-----
Iddo Drori, Sunny Tran, Roman Wang, Newman
Cheng, Kevin Liu, Leonard Tang, Elizabeth Ke,
Nikhil Singh, Taylor L Patti, Jayson Lynch, et al.
2021. A neural network solves and generates mathematics problems by program synthesis: Calculus, differential equations, linear algebra, and more.
_arXiv preprint arXiv:2112.15594. 1_
Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid
Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brando n M. Stewart, Victor Veitch,
[and Diyi Yang. 2021a. Causal inference in natural](http://arxiv.org/abs/2109.00725)
[language processing: Estimation, prediction, inter-](http://arxiv.org/abs/2109.00725)
[pretation and beyond. CoRR, abs/2109.00725. 8](http://arxiv.org/abs/2109.00725)
Amir Feder, Nadav Oved, Uri Shalit, and Roi Re[ichart. 2021b. CausaLM: Causal model explanation](https://doi.org/10.1162/coli_a_00404)
[through counterfactual language models. Computa-](https://doi.org/10.1162/coli_a_00404)
_tional Linguistics, 47(2):333–386. 8_
Matthew Finlayson, Aaron Mueller, Sebastian
Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan
Belinkov. 2021. [Causal analysis of syntactic](https://doi.org/10.18653/v1/2021.acl-long.144)
[agreement mechanisms in neural language models.](https://doi.org/10.18653/v1/2021.acl-long.144)
In Proceedings of the 59th Annual Meeting of the
_Association for Computational Linguistics and the_
_11th International Joint Conference on Natural Lan-_
_guage Processing (Volume 1: Long Papers), pages_
1828–1843, Online. Association for Computational
Linguistics. 4, 8
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020.
The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
6
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
[Injecting numerical reasoning skills into language](https://doi.org/10.18653/v1/2020.acl-main.89)
[models. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/v1/2020.acl-main.89)
_ing of the Association for Computational Linguis-_
_tics, pages 946–958, Online. Association for Com-_
putational Linguistics. 8
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the MATH dataset. In Ad_vances in Neural Information Processing Systems_
_(NeurIPS). 13_
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
Oren Etzioni, and Nate Kushman. 2014. Learning
to solve arithmetic word problems with verb categorization. In Empirical Methods in Natural Language
_Processing (EMNLP), pages 523–533. 1_
Zhihua Jin, Xin Jiang, Xingbo Wang, Qun Liu,
Yong Wang, Xiaozhe Ren, and Huamin Qu.
2021a. Numgpt: Improving numeracy ability
of generative pre-trained models. _arXiv preprint_
_arXiv:2109.03137. 4_
Zhijing Jin, Zhiheng Lyu, Yiwen Ding, Mrinmaya
Sachan, Kun Zhang, Rada Mihalcea, and Bernhard
[Schoelkopf. 2022. AI Scholars: A dataset for NLP-](https://zhijing-jin.com/files/papers/AIScholar_2022.pdf)
[involved causal inference. 8](https://zhijing-jin.com/files/papers/AIScholar_2022.pdf)
Zhijing Jin, Zeyu Peng, Tejas Vaidhya, Bernhard
[Schoelkopf, and Rada Mihalcea. 2021b. Mining the](https://doi.org/10.18653/v1/2021.findings-emnlp.27)
[cause of political decision-making from social me-](https://doi.org/10.18653/v1/2021.findings-emnlp.27)
[dia: A case study of COVID-19 policies across the](https://doi.org/10.18653/v1/2021.findings-emnlp.27)
[US states. In Findings of the Association for Compu-](https://doi.org/10.18653/v1/2021.findings-emnlp.27)
_tational Linguistics: EMNLP 2021, pages 288–301,_
Punta Cana, Dominican Republic. Association for
Computational Linguistics. 8
Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas
Vaidhya, Ayush Kaushal, Mrinmaya Sachan, and
Bernhard Schoelkopf. 2021c. [Causal direction of](https://doi.org/10.18653/v1/2021.emnlp-main.748)
[data collection matters: Implications of causal and](https://doi.org/10.18653/v1/2021.emnlp-main.748)
[anticausal learning for NLP. In Proceedings of the](https://doi.org/10.18653/v1/2021.emnlp-main.748)
_2021 Conference on Empirical Methods in Natural_
_Language Processing, pages 9499–9513, Online and_
Punta Cana, Dominican Republic. Association for
Computational Linguistics. 8
Immanuel Kant. 1781. Critique of pure reason.
_Modern Classical Philosophers, Cambridge, MA:_
_Houghton Mifflin. 1908, pages 370–456. 13_
Divyansh Kaushik, Eduard H. Hovy, and
Zachary Chase Lipton. 2020. [Learning the differ-](https://openreview.net/forum?id=Sklgs0NFvr)
[ence that makes A difference with counterfactually-](https://openreview.net/forum?id=Sklgs0NFvr)
[augmented data. In 8th International Conference on](https://openreview.net/forum?id=Sklgs0NFvr)
_Learning Representations, ICLR 2020, Addis Ababa,_
_Ethiopia, April 26-30, 2020. OpenReview.net. 4_
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. _arXiv_
_preprint arXiv:2205.11916. 8_
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
[MAWPS: A math word problem repository. In Pro-](https://doi.org/10.18653/v1/N16-1136)
_ceedings of the 2016 Conference of the North Amer-_
_ican Chapter of the Association for Computational_
_Linguistics: Human Language Technologies, pages_
1152–1157, San Diego, California. Association for
Computational Linguistics. 5, 6
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Association for
_Computational Linguistics (ACL). 1_
Kevin Meng, David Bau, Alex Andonian, and Yonatan
Belinkov. 2022. Locating and editing factual associations in gpt. arXiv preprint arXiv:2202.05262. 8
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and develop-](https://doi.org/10.18653/v1/2020.acl-main.92)
[ing English math word problem solvers. In Proceed-](https://doi.org/10.18653/v1/2020.acl-main.92)
_ings of the 58th Annual Meeting of the Association_
_for Computational Linguistics, pages 975–984, On-_
line. Association for Computational Linguistics. 5,
-----
Martin M Monti, Lawrence M Parsons, and Daniel N
Osherson. 2012. Thought beyond language: Neural
dissociation of algebra and natural language. Psy_chological science, 23(8):914–922. 3_
Jingwei Ni, Zhijing Jin, Markus Freitag, Mrinmaya
[Sachan, and Bernhard Schölkopf. 2022. Original or](https://doi.org/10.48550/arXiv.2205.02293)
[translated? A causal analysis of the impact of trans-](https://doi.org/10.48550/arXiv.2205.02293)
[lationese on machine translation performance.](https://doi.org/10.48550/arXiv.2205.02293) In
_NAACL. Association for Computational Linguistics._
8
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul F. Christiano, Jan Leike, and Ryan Lowe.
[2022. Training language models to follow instruc-](https://doi.org/10.48550/arXiv.2203.02155)
[tions with human feedback. CoRR, abs/2203.02155.](https://doi.org/10.48550/arXiv.2203.02155)
2, 6
[Kuntal Kumar Pal and Chitta Baral. 2021. Investigat-](https://doi.org/10.18653/v1/2021.findings-emnlp.265)
[ing numeracy learning ability of a text-to-text trans-](https://doi.org/10.18653/v1/2021.findings-emnlp.265)
[fer model. In Findings of the Association for Com-](https://doi.org/10.18653/v1/2021.findings-emnlp.265)
_putational Linguistics: EMNLP 2021, pages 3095–_
3101, Punta Cana, Dominican Republic. Association for Computational Linguistics. 8
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems?](https://doi.org/10.18653/v1/2021.naacl-main.168) In Proceedings of the 2021
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics. 1, 2, 5,
6, 8
Judea Pearl. 1995. Causal diagrams for empirical research. Biometrika, 82(4):669–688. 2, 3
[Judea Pearl. 2001. Direct and indirect effects. In UAI](https://dslpitt.org/uai/displayArticleDetails.jsp?mmnu=1\&smnu=2\&article\_id=126\&proceeding\_id=17)
_’01: Proceedings of the 17th Conference in Uncer-_
_tainty in Artificial Intelligence, University of Wash-_
_ington, Seattle, Washington, USA, August 2-5, 2001,_
pages 411–420. Morgan Kaufmann. 2, 4
Judea Pearl. 2009. Causality. Cambridge University
Press. 8
Jonas Peters, Dominik Janzing, and Bernhard
Schölkopf. 2017. _[Elements of causal inference:](https://mitpress.mit.edu/books/elements-causal-inference)_
_[Foundations and learning algorithms.](https://mitpress.mit.edu/books/elements-causal-inference)_ The MIT
Press. 8
Piotr Pi˛ekos, Mateusz Malinowski, and Henryk
Michalewski. 2021. [Measuring and improving](https://doi.org/10.18653/v1/2021.acl-short.49)
[BERT’s mathematical abilities by predicting the or-](https://doi.org/10.18653/v1/2021.acl-short.49)
[der of reasoning.](https://doi.org/10.18653/v1/2021.acl-short.49) In Proceedings of the 59th An_nual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Confer-_
_ence on Natural Language Processing (Volume 2:_
_Short Papers), pages 383–394, Online. Association_
for Computational Linguistics. 8
Alec Radford, Jeff Wu, Rewon Child, David Luan,
[Dario Amodei, and Ilya Sutskever. 2019. Language](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
[models are unsupervised multitask learners. 6](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
Yasaman Razeghi, Robert L Logan IV, Matt Gardner,
and Sameer Singh. 2022. Impact of pretraining term
frequencies on few-shot reasoning. arXiv preprint
_arXiv:2202.07206. 8_
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin,
and Sameer Singh. 2020. [Beyond accuracy: Be-](https://doi.org/10.18653/v1/2020.acl-main.442)
[havioral testing of NLP models with CheckList. In](https://doi.org/10.18653/v1/2020.acl-main.442)
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 4902–_
4912, Online. Association for Computational Linguistics. 1
Margaret E Roberts, Brandon M Stewart, and
[Richard A Nielsen. 2020. Adjusting for confound-](http://www.mit.edu/~rnielsen/textmatching.pdf)
[ing with text matching. American Journal of Politi-](http://www.mit.edu/~rnielsen/textmatching.pdf)
_cal Science, 64(4):887–903. 8_
Subhro Roy, Tom Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transac_tions of the Association for Computational Linguis-_
_tics (TACL), 1. 1_
Mrinmaya Sachan, Kumar Dubey, and Eric Xing. 2017.
From textbooks to knowledge: A case study in
harvesting axiomatic knowledge from textbooks to
solve geometry problems. In Proceedings of the
_2017 Conference on Empirical Methods in Natural_
_Language Processing, pages 773–784. 1_
Mrinmaya Sachan, Kumar Avinava Dubey, Tom M
Mitchell, Dan Roth, and Eric P Xing. 2018. Learning pipelines with limited data and domain knowledge: A study in parsing physics problems. _Ad-_
_vances in Neural Information Processing Systems,_
31. 1
Mrinmaya Sachan and Eric Xing. 2017. Learning
to solve geometry problems from natural language
demonstrations in textbooks. In Proceedings of the
_6th Joint Conference on Lexical and Computational_
_Semantics (* SEM 2017), pages 251–261. 1_
Victor Sanh, Lysandre Debut, Julien Chaumond, and
Thomas Wolf. 2019. Distilbert, a distilled version of
bert: smaller, faster, cheaper and lighter. In NeurIPS
_EMC[2]_ _Workshop. 6_
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
[Etzioni, and Clint Malcolm. 2015. Solving geome-](https://doi.org/10.18653/v1/D15-1171)
[try problems: Combining text and diagram interpre-](https://doi.org/10.18653/v1/D15-1171)
[tation. In Proceedings of the 2015 Conference on](https://doi.org/10.18653/v1/D15-1171)
_Empirical Methods in Natural Language Processing,_
pages 1466–1476, Lisbon, Portugal. Association for
Computational Linguistics. 1
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin
Jiang, Ming Zhang, and Qun Liu. 2021. Generate &
rank: A multi-task framework for math word problems. arXiv preprint arXiv:2109.03034. 8
-----
Daniel Spokoyny, Ivan Lee, Zhao Jin, and Taylor BergKirkpatrick. 2021. Masked measurement prediction:
Learning to jointly predict quantities and units from
textual context. arXiv preprint arXiv:2112.08616. 8
Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro
[Szekely. 2021. Representing numbers in NLP: a sur-](https://doi.org/10.18653/v1/2021.naacl-main.53)
[vey and a vision. In Proceedings of the 2021 Con-](https://doi.org/10.18653/v1/2021.naacl-main.53)
_ference of the North American Chapter of the Asso-_
_ciation for Computational Linguistics: Human Lan-_
_guage Technologies, pages 644–656, Online. Asso-_
ciation for Computational Linguistics. 8
Victor Veitch, Dhanya Sridhar, and David M. Blei.
2020. [Adapting text embeddings for causal infer-](http://proceedings.mlr.press/v124/veitch20a.html)
[ence. In Proceedings of the Thirty-Sixth Conference](http://proceedings.mlr.press/v124/veitch20a.html)
_on Uncertainty in Artificial Intelligence, UAI 2020,_
_virtual online, August 3-6, 2020, volume 124 of Pro-_
_ceedings of Machine Learning Research, pages 919–_
928. AUAI Press. 8
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov,
Sharon Qian, Daniel Nevo, Simas Sakenis, Jason
Huang, Yaron Singer, and Stuart Shieber. 2020.
Causal mediation analysis for interpreting neural
nlp: The case of gender bias. _arXiv preprint_
_arXiv:2004.12265. 8_
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh,
and Matt Gardner. 2019. [Do NLP models know](https://doi.org/10.18653/v1/D19-1534)
numbers? [probing numeracy in embeddings.](https://doi.org/10.18653/v1/D19-1534) In
_Proceedings of the 2019 Conference on Empirical_
_Methods in Natural Language Processing and the_
_9th International Joint Conference on Natural Lan-_
_guage Processing (EMNLP-IJCNLP), pages 5307–_
5315, Hong Kong, China. Association for Computational Linguistics. 8
Ben Wang and Aran Komatsuzaki. 2021. GPTJ-6B: A 6 Billion Parameter Autoregressive
Language Model. [https://github.com/](https://github.com/kingoflolz/mesh-transformer-jax)
[kingoflolz/mesh-transformer-jax. 6](https://github.com/kingoflolz/mesh-transformer-jax)
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022a. Emergent abilities of large language models.
_arXiv preprint arXiv:2206.07682. 7_
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed H. Chi, Quoc Le, and Denny Zhou.
[2022b. Chain of thought prompting elicits reasoning](http://arxiv.org/abs/2201.11903)
[in large language models. CoRR, abs/2201.11903.](http://arxiv.org/abs/2201.11903)
1, 8
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. 6
Xikun Zhang, Deepak Ramachandran, Ian Tenney,
[Yanai Elazar, and Dan Roth. 2020. Do language em-](https://doi.org/10.18653/v1/2020.findings-emnlp.439)
[beddings capture scales? In Findings of the Associ-](https://doi.org/10.18653/v1/2020.findings-emnlp.439)
_ation for Computational Linguistics: EMNLP 2020,_
pages 4889–4896, Online. Association for Computational Linguistics. 8
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
[Least-to-most prompting enables complex reasoning](https://arxiv.org/abs/2205.10625)
[in large language models. 1](https://arxiv.org/abs/2205.10625)
-----
on larger and larger dataset), LLMs will need enumerations of the combinations of all possible two
operands where addition is defined, which is an
infinite set.[5]
Due to the distinction of empirical and rational
natures, robustness testing is a more appropriate
way to check the rational ability. When we test the
rational ability of LLMs in math, the key is not to
empirically check whether LLMs are correct on a
given set of math questions (since an empirically
strong model can also memorize common maths),
but to check whether they can answer math questions consistently/robustly, which can be indicative
of their extrapolation ability in rational reasoning.
Note that an alternative is to test on really difficult
math questions, but this test is also empirical in
nature, because humans draw a finite subset from
the infinite set of possibile operands, which is not
the way to a thorough test for rational abilities.
**C** **Creation of the Prompts**
From the MWP templates of the SVAMP/ASDivA/MAWPS collection (we consider all splits),
we select the templates describing a simple twovariable expression. We then filter out the templates
whose questions do not start with How many..., and
we use spaCy[6] to identify the subject, the object
and the verbs in the sentence. This allows us to
convert the last sentence of the template from The
_number of... is. This way, we obtain 437 statement-_
based MWP templates. We manually checked a
subset of the templates to identify possible mistakes in the conversion procedure.
**D** **Examples**
In Table 1 we report examples of MWP pairs representing different types of intervention.
**E** **Computation of Causal Effects for**
**GPT-3**
We accessed GPT-3 through the OpenAI APIs,
which allow a user to prompt the model and obtain the probabilities assigned by the model to the
_k-th most likely vocabulary entries, for each token_
generated. To overcome this limitation, we approx
5The set of two operands that are valid for addition has
the infinity size ℵ0 (countably-infinite) if the two numbers are
integers, ℵ1 (uncountably-infinite) if the two numbers are real
or complex numbers, and even more if we want to generalize
to abstract objects such as subspaces and subgroups.
[6https://spacy.io](https://spacy.io)
Figure 6: A funny interaction with GPT-3. Credit and
Copyright: Peter Wildeford, Twitter
**A** **A Funny Interaction with GPT-3**
Figure 6 shows an amusing example of interaction
between an user and GPT-3.
**B** **A philosophical discussion on**
**Reasoning**
One promising angle to answer Q1 is the limit
of empiricism, which inspires this work to analyze the robustness of LLMs in math problems, because math is an important example where a pure
empirical but not rational approach can fail badly
(Hendrycks et al., 2021). We take insights from
the philosophy branch of epistemology, which considers the nature of knowledge and how it can be
acquired. Addressing math questions aligns with
the profound study of empiricism vs rationalism
in epistemology. For example, in his famous Cri_tique of Pure Reason (Kant, 1781), Kant states that_
knowledge is empirical (a posteriori), such as the
ratio of a circle’s circumference to its diameter being 3.1415. . . . In the case of LLMs, their empirical
knowledge comes from seeing lots of ways how
existing texts describe certain things, which may
be a reason for performing well on many tasks.
However, another type of knowledge is rational
(a priori), such as pure logic and math. This is
very difficult for LLMs, or takes infinite computation. For example, LLMs can memorize some
commonly occurring two-number addition (e.g.,
5+5=10), but, to consistently master this rational
ability, if we keep the empirical approach (to train
-----
imated the the relative probability change drcc as
follows, depending on the kind of effect measured.
The limit for k is set by OpenAI to 5. However,
for our main set of experiments (i.e., computing the
causal effects of N, S, and T ) we were granted
an increased limit of k to 100. This allowed us to
obtain reasonable estimates for the causal effects,
as the number of cases in which P (g) is not defined
are less than 10% of the number of examples that
we consider.
**E.1** TCE(N on R) and TCE(T on R)
In cases when P (g) is defined (i.e., when g appears
in the top k token predictions) and P _[′](g) is not_
defined, we compute a lower bound on the relative
change using the upper bound on P _[′](g) given by_
the probability of the k-th most likely token. This
gives us a conservative estimate of ∆. For cases in
which P (g) is not defined, we cannot say anything
about the relative change, and we set ∆= 0. The
same applies swapping P and P _[′]. This procedure_
is illustrated by Algorithm 1.
**Algorithm 1 Computation of drcc for GPT-3**
**_Q = (t, (n1, n2), g)_**
**_Q[′]_** = (t[′], (n[′]1[, n]2[′] [)][, g][′][)]
**if P** (g) is defined then
**if P** _[′](g) is defined then_
∆= _[P]_ [(][g][)][−][P][ ′][(][g][)]
_P_ _[′](g)_
**else**
∆=Pˆ[′] _←[P]P[(][g]P[′]ˆ[)]([−][′]kP[ˆ]-th most likely token[′]_ )
**end**
**else**
∆= 0
**end**
**if P** _[′](g[′]) is defined then_
**if P** (g[′]) is defined then
∆[′] = _[P][ ′][(][g][′][)][−][P]_ [(][g][′][)]
_P_ (g[′])
**else**
**E.2** DCE(N → _R) and DCE(S →_ _R)_
In this case we simply discard the examples for
which P (g) is not defined or P _[′](g) are not defined._
In that is not the case, then we compute drcc as in
Section 3.4.
**E.3** **Heatmap Illustration**
The heatmap for GPT-3 displayed in Figure 4 was
computed by taking the raw probability score produced by the model over the whole vocabulary,
as the limit on the available top predicted tokens
makes it impossible to normalize it over the set
_{0, . . ., 300}, as done for the other models. The_
probability was set to 0 when g did not appear in
the model’s top 5 predictions for the next token
after the prompt.
**F** **Computing Infrastructure & Inference**
**Details**
To run our experiments, we use a single NVIDIA
TITANRTX with a 24GB memory for all the versions of GPT-2 and GPT-Neo. We use a single
NVIDIA A100 with a 40GB memory for GPT-J6B. We access GPT-3 using the OpenAI APIs. Running the largest locally-stored model (GPT-J-6B)
on the four kinds of experiments related to the four
kinds of effects measured took ∼12 hours, using
500 MWP instances for each of the 437 templates.
Due to budget constraints, the experiments on GPT3 were carried out using 20 examples generated
for each template, and took ∼7 hours. Experiment
tracking was carried out using Weights & Biases[7].
**G** **Accuracy of the Evaluated Models**
We report the accuracy of the nine models considered for evaluation in terms of precision at 1 and
precision at 10. Results are displayed in Figure 7.
**H** **Results using the JS Divergence**
**Definition** Using he same notation as in Section
3.4, we consider the Jensen–Shannon (JS) divergence, which is formulated as follows:
∆Pˆ ←[′] = _P[P][ ′]([(]k[g]Pˆ[′]-th most likely token[)][−]P[ˆ]_ )
**end**
**else**
∆[′] = 0
_dJS(P, P_ ) := [1]
_[′]_ 2
_dKL(P, M_ ) + dKL(P _, M_ ) _,_
_[′]_
(15)
ã
**end**
_drcc =_ [1]2 [(∆+ ∆][′][)]
where M := [1]2 [(][P][ +][ P][ ′][)][, and][ d][KL][ is the Kullback-]
Leibler (KL) divergence between two distributions
[7http://wandb.ai/](http://wandb.ai/)
-----
0.8
0.6
0.4
0.2
Distilled0 Regular
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|Col22|Col23|Col24|Col25|Col26|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||Precision@1|||||||||Precision@10||||||||||||||||
|||||||||||||||||||||||||||
|||||||||||||||||||||||||||
|||||||||||||||||||||||||||
|||||||||||||||||||||||||||
2
1.5
0.5
Distilled0 RegularMediumLarge3-BabbageNeo-1.3BXLNeo-2.7BJ-6B3-Curie
|·10−|−2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||||||
||Precision@1||||||||||||||||||
||||||||||||||||||||
||||||||||||||||||||
||||||||||||||||||||
||||||||||||||||||||
Figure 7: Average precision of the models on the generated instances of MWPs. Results are averaged over two
sets consisting of 500 problem instances generated for
each template. The lower figures shows a zoomed-in
visualization of the precision at 1.
0.3
0.2
0.1
|DCE of N TCE of N JS JS|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|Col22|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||||||||||||
|||||||||||||||||||||||
|||||||||||||||||||||||
|||||||||||||||||||||||
|||||||||||||||||||||||
_P and S:_
_dKL(P, S) :=_
_P_ (r) log _[P]_ [(][r][)] (16)
_S(r)_ _[.]_
_r∈{1X,...,300}_
0.1
_JS_
_d_
DistilledRegularMediumLarge XLNeo-1.3BNeo-2.7B J-6B
|Col1|DCE of T TCE of T JS JS|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||||
||||||||||||||||||
||||||||||||||||||
||||||||||||||||||
Figure 8: Effects quantified as JS divergence between
_P and P_ _[′]._
**Results** The results in Figure 8 show an increasingly large difference between TCEJS of Q and
DCEJS of T as the model size increases. The gap
in dJS observed when intervening on s and t appears increasing with model size, phenomenon not
observed for interventions on N . This suggests a
higher sensitivity of larger models to word changes
than to changes in the numerical values.
-----
Ruby has 87 candies. If she shares the candies among 29
_g = 87/29 = 3_
friends, the number of candies that each friend gets is
Ruby has 35 candies. If she shares the candies among 5
_g = 35/5 = 7_
friends, the number of candies that each friend gets is
The school is composed of 13 buildings each having 10
_g = 10_ 13 = 130
classrooms. The number of classrooms that the school has is _×_
The school is composed of 65 buildings each having 2 class_g = 65_ 2 = 130
rooms. The number of classrooms that the school has is _×_
The razorback t-shirt shop ordered 6 cases of t-shirts. If
each case contains 17 t-shirts the number of t-shirts that they _g = 17 × 6 = 102_
ordered is
The roller coaster at the state fair costs 6 tickets per ride. If
17 friends were going to ride the roller coaster the number of _g = 17 × 6 = 102_
tickets that they would need is
Sean has 23 whistles. He has 6 more whistles that Charles.
_g = 23_ 6 = 17
The number of whistles that Charles has is _−_
Jovana filled her bucket with 23 pounds of shells. If she
adds 6 more pounds of shell to fill her bucket, the number of _g = 23 + 6 = 29_
pounds that she has is
TCE(N → _R)_
DCE(N → _R)_
DCE(S → _R)_
TCE(T → _R)_
Table 1: For each of the causal effects measured (left column), we report a pair of MWPs illustrating the intervention performed (center), along with their respective ground-truth result (left column).
-----
| [
"Alessandro, Stolfo",
"Zhijing, Jin",
"Kumar, Shridhar",
"Bernhard, Schölkopf",
"Mrinmaya, Sachan"
] | 2022-01-01T00:00:00 | ACL 2023 Long Papers | true | 47 | 3 | null | https://arxiv.org/abs/2210.12023 | https://arxiv.org/abs/2210.12023 | https://www.semanticscholar.org/paper/9b45af10429681249fafb07c3b6012ea4ce63ffe |
LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning | While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks. Here, we replace architecture engineering by encoding inductive bias in the form of datasets. Inspired by Peirce's view that deduction, induction, and abduction are the primitives of reasoning, we design three synthetic tasks that are intended to require the model to have these three abilities. We specifically design these tasks to be synthetic and devoid of mathematical knowledge to ensure that only the fundamental reasoning biases can be learned from these tasks. This defines a new pre-training methodology called "LIME" (Learning Inductive bias for Mathematical rEasoning). Models trained with LIME significantly outperform vanilla transformers on four very different large mathematical reasoning benchmarks. Unlike dominating the computation cost as traditional pre-training approaches, LIME requires only a small fraction of the computation cost of the typical downstream task. The code for generating LIME tasks is available at https://github.com/tonywu95/LIME. | A new pre-training methodology called LIME (Learning Inductive bias for Mathematical rEasoning). | # LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning
**Yuhuai Wu** [1 2] **Markus Rabe** [3] **Wenda Li** [4] **Jimmy Ba** [1 2] **Roger Grosse** [1 2] **Christian Szegedy** [3]
**Abstract**
While designing inductive bias in neural architectures has been widely studied, we hypothesize
that transformer networks are flexible enough to
_learn inductive bias from suitable generic tasks._
Here, we replace architecture engineering by encoding inductive bias in the form of datasets. Inspired by Peirce’s view that deduction, induction,
and abduction are the primitives of reasoning, we
design three synthetic tasks that are intended to
require the model to have these three abilities.
We specifically design these tasks to be synthetic
and devoid of mathematical knowledge to ensure
that only the fundamental reasoning biases can
be learned from these tasks. This defines a new
pre-training methodology called “LIME” (Learning Inductive bias for Mathematical rEasoning).
Models trained with LIME significantly outperform vanilla transformers on four very different
large mathematical reasoning benchmarks. Unlike dominating the computation cost as traditional pre-training approaches, LIME requires
only a small fraction of the computation cost
of the typical downstream task. The code for
generating LIME tasks is available at https:
//github.com/tonywu95/LIME.
**1. Introduction**
Inductive bias is essential for successful neural network
learning. Many of the breakthroughs in machine learning
are accompanied by new neural architectures with better
inductive biases, such as locality bias in convolutional neural networks (LeCun et al., 1999), recurrence and memory
in LSTMs (Hochreiter & Schmidhuber, 1997), and structural bias in graph neural networks (Scarselli et al., 2008).
However, explicitly encoding inductive biases as new neural
architectures can be difficult for abstract concepts such as
*Equal contribution 1University of Toronto, Toronto, Canada
2Vector Institute, Toronto, Canada 3Google Research 4University
of Cambridge, Cambridge, UK. Correspondence to: Yuhuai Wu
<[email protected]>.
_Proceedings of the 38_ _[th]_ _International Conference on Machine_
_Learning, PMLR 139, 2021. Copyright 2021 by the author(s)._
_mathematical reasoning. Attempts to design elaborate ar-_
chitectures for reasoning often fall short of the performance
of the more generic transformer architecture. In this work,
we aim to avoid the search for new architectures and investigate whether one can learn useful inductive bias for
_mathematical reasoning through pretraining._
Large-scale unsupervised pretraining of language models revolutionized the field of natural language processing
(NLP), improving the state-of-the-art in question answering,
name entity recognition, text classification, and other domains, e.g. (Radford et al., 2018; Devlin et al., 2019; Yang
et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown
et al., 2020). As a result, pretraining has become a common
practice for modern neural network based NLP. A popular
explanation for the benefit of pretraining is that the model
can learn world knowledge by memorizing the contents of
the natural language corpus, which can be useful in downstream tasks, such as question answering and text classification. However, there is another potential advantage of
pretraining—it may distill inductive biases into the model
that are helpful for training on downstream tasks (Brown
et al., 2020; Warstadt & Bowman, 2020). We focus on the
latter and design pretraining tasks that are intentionally devoid of world knowledge and only allow the model to learn
inductive bias for reasoning.
Inspired by the logician Charles Peirce (Peirce, 1992), we
consider the following three reasoning primitives:
1. Deduction: the ability to deduce new truths from given
facts and inference rules.
2. Induction: the ability to induce general inference rules
from a set of known facts.
3. Abduction: the ability to explain the relationship be
tween the evidences and inference rules.
To endow the models with an inductive bias for mathematical reasoning, we design a synthetic task for each of the three
reasoning primitives. We hypothesize that the transformer
networks are flexible enough to learn strong inductive bias
from the three synthetic reasoning tasks, which helps to
improve the performance on downstream tasks. Although
such inductive bias may be useful in general reasoning tasks
(e.g., NLP tasks), in this work, we focus on mathematical
reasoning benchmarks, for which we expect to observe the
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
largest gains. We call training on these tasks LIME – an
acronym for “Learning Inductive Bias for Mathematical
rEasoning”. Note that there is only a limited amount of pretraining data available for formal mathematical benchmarks,
therefore the study of generic pre-training techniques is particularly important for the success of machine learning in
mathematical reasoning.
We demonstrate that LIME pretrained models provide significant gains across four large mathematical reasoning benchmarks: IsarStep (Li et al., 2021), HOList Skip-tree (Rabe
et al., 2021), MetaMathStep (Polu & Sutskever, 2020), and
LeanStep (de Moura et al., 2015). Notably, LIME improved
the top-1 accuracy from 20.4% to 26.9% IsarStep, and from
15.5% to 29.8% on LeanStep. Compared to traditional pretraining tasks, LIME has two major differences. First, LIME
requires only a fraction of the computational cost of downstream tasks. With only about two hours of training on a
single modern GPU, one already obtains all the benefits,
in contrast to days of training on a large natural language
corpus with hundreds of GPUs/TPUs. Secondly, LIME does
not load the input embeddings or the weights in the output
layer for finetuning on downstream tasks. This allows one to
use the same pretrained model for a variety of downstream
tasks, which can have vastly different vocabularies due to
language or tokenization differences.
Our method can also be regarded as a form of curriculum
learning, in which the model is taught basic, extremely
generic but general skills before being trained on the specific
problem domain.
To summarize, the contributions of the paper are:
1. Providing the first method to design inductive biases in
the form of datasets for mathematical reasoning.
2. Demonstrating significant improvements in the reason
ing performance of transformer models on three large
mathematical reasoning benchmarks with negligible extra computation cost.
3. By showing how pretraining brings benefits other than
learning content knowledge, disentangling the study of
its working mechanism.
**2. Related Work**
**Learning Models Applied to Mathematics** There has
been increasing interest in applying deep learning methods
to Interactive Theorem Provers (ITP) (Bansal et al.; 2019;
Gauthier et al., 2020; Huang et al., 2019; Yang & Deng,
2019; Wu et al., 2021; Li et al., 2021; Polu & Sutskever,
2020). The work that is most related to ours is GPT-f (Polu
& Sutskever, 2020). The authors performed pretraining on
several natural language corpora and showed significant
improvements for an ITP system – MetaMath. Different
from ours, they used GPT-style large-scale language modeling pretraining, which dominates the computation cost
compared to the downstream task. We, on the other hand,
propose pretraining on a few lightweight synthetic tasks
costing only a minor fraction of the computation spent on
the downstream task.
Lample & Charton (2020) have demonstrated that transformer models can be used for symbolic mathematics by
successfully predicting the integrals of formulas from a randomly generated dataset. Similar observations are made for
logical problems relevant to verification: that transformer
networks can learn the semantics of logics (Hahn et al.,
2020). Rabe et al. (2021) have shown that mathematical
reasoning can emerge from self-supervised training alone.
Li et al. (2021) show that language models can learn to synthesize missing high-level intermediate propositions given
a local context. Piotrowski & Urban (2020) used RNNs in
automated theorem provers for first-order logic. Wang et al.
(2020) explored the use of machine translation to translate
between synthetically generated natural language descriptions of proofs and formally represented proofs. Urban &
Jakub˚uv (2020) present initial experiments on generating
mathematical conjectures with a Transformer model.
Saxton et al. (2019) suggest a dataset for the analysis of
mathematical reasoning skills. In contrast to the datasets
considered here, their dataset is synthetic, focuses on calculation with concrete numbers, and only contains relatively
few symbolic tasks.
**Language Model Pretraining** The advent of the trans
former architecture (Vaswani et al., 2017) and the BERT
style pretraining (Devlin et al., 2019) represented a huge
improvement in the quality of language modeling. Since
then, an explosion of research activity in the area pushed the
quality of language models through better pretraining tasks.
Where BERT (Devlin et al., 2019) masks out a fraction of
the input tokens, later works demonstrated the advantages
of masking out subsequences (Song et al., 2019; Dong et al.,
2019; Joshi et al., 2020; Raffel et al., 2020; Conneau &
Lample, 2019) and whole sentences (Zhang et al., 2020).
Besides the choice of pretraining tasks, the scale of language models is also an important factor. Language models
improve in quality and develop new abilities as they grow
larger while trained on the same data (Radford et al., 2018;
Raffel et al., 2020; Brown et al., 2020).
**Inductive Biases in General** There have been works
studying learning inductive biases in other contexts. In
particular, McCoy et al. (2020) studied whether one can
learn linguistic inductive biases on synthetic datasets via
meta-learning. Papadimitriou & Jurafsky (2020) shows
inductive biases learned in music data can be useful for
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
|Reasoning Primitives|Inference Map|
|---|---|
|Deduction|Rule, Case Result !|
|Abduction|Rule, Result Case !|
|Induction|Case, Result Rule !|
To give an example, we let Rule be “All the beans in this
bag are white”, Case be “These beans are from this bag”,
and Result be “These beans are white”. Deduction is to
derive the fact that these beans are white (Re) from knowing
all the beans from this bag are white (R) and these beans
are from this bag (C). Abduction explains why the beans
are white (Re) from knowing that all the beans in the bag
are white (R) – because these beans must be from the bag
(C). Lastly, induction aims to provide a general principle
to observing the fact that the beans are white (Re) and they
come from this bag (C), which is that all the beans in the bag
must be white (R). We refer to Peirce (1992) and Bellucci
& Pietarinen (2015) for more elaborate discussions on the
primitives of reasoning.
Mathematical reasoning exhibits nontrivial uses of these
reasoning primitives. Deduction happens when one needs to
derive new valid statements from the given premise (Case)
and theorems in the library (Rule). Abduction is used to
postulate conjectures from the known facts and theorems,
allowing one to decompose the challenging theorem into
subgoals for proof. Induction, the ability to extract general
principles from known facts and theorems is also one of the
major activities of mathematical reasoning. It is used when
one derives theorems from special cases and proposes new
definitions and general frameworks to encapsulate existing
knowledge.
**3.2. LIME Synthetic Tasks For Reasoning Primitives**
We design three synthetic tasks inspired by the three reasoning primitives. As discussed in the previous section, all of
the reasoning primitives consist of three essential elements:
Rule, Case, and Result. Inspired by this, we first design a
method to generate those elements. Once they are generated,
we can construct tasks that predict one element from the
other two. In the following, we describe one simple way to
generate those three elements, though we acknowledge that
there are many other possible approaches.
We require two types of symbols: 1. math symbols, 2. rule
_symbols. In general, these symbols can take any forms (e.g.,_
integer representations). But for the ease of discussion, we
will think of math symbols as the union of those operators
used in mathematics (e.g., “+ −⇤ = ()&”) and lower case
letters (e.g., a, b, c ...), and rule symbols as upper case
letters (e.g., A, B, C ...). We now construct Rule, Case,
and Result in order:
1. Rule is a randomly sampled string that consists of i)
natural language. They further designed several synthetic
tasks and showed similar kind of improvements for natural
language tasks. From a more theoretical point of view, Xu
et al. (2020) formalize an aspect of inductive (architectural)
bias under the context of GNNs, with a notation called ar_chitectural alignment. The architecture is aligned when_
the architecture can perfectly simulates the ground truth
solution. But their work is limited to showing alignment
in combinatorial problems, whose ground truth solutions
are known. In contrast, our work tries to learn architectural
bias by relying on the flexible Transformer architecture and
training on synthetic datasets.
**Inductive Biases for Mathematics** Previous work study
ing inductive biases for logical reasoning has focused on
encoding bias in the neural architecture. Initial works focused on encoding the tree structure of expressions using
TreeRNNs (Evans et al., 2018). Graph neural networks
are shown to provide a much stronger performance than
tree models in premise selection (Wang et al., 2017) and
theorem proving (Paliwal et al., 2020). GNNs also scale
to larger formulas in SAT (Selsam et al., 2019; Selsam &
Bjørner, 2019; Han, 2020), QBF (Lederman et al., 2020),
and #SAT (Vaezipoor et al., 2021). Crouse et al. (2019)
have shown that pooling mechanisms can have an impact
on the performance of GNNs on logical formulas as well.
Closely related, Hellendoorn et al. (2020) have shown that
it can be helpful to hard-code the tree structure of programs
in the attention mask of transformers. Schlag et al. (2019)
developed an architecture for encoding relational information using tensor product representation for mathematical
reasoning.
**3. Methods**
In this section, we first discuss the primitives of reasoning,
inspired by Peirce’s views, and design one synthetic task for
each reasoning primitive.
**3.1. Reasoning Primitives**
In Peirce’s view, there are exactly three kinds of reasoning:
deduction, abduction, and induction. Deduction is known as
the workhorse for mathematics. It is the process of deriving
new facts by applying logical inference rules to known facts
or premises. On the other hand, abduction and induction
can be thought of as the inverses of deduction. If we call the
premise used in deduction as Case, its logical rule as Rule,
and its conclusion as Result, then abduction is equivalently
the inference of a Case from a Rule and a Result, while
induction may be said to be the inference of a Rule from
a Case and a Result. We summarize the three reasoning
primitives in the following table:
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
rule symbols and ii) math symbols. The length of the
string is randomly sampled from a range. For instance,
a randomly sampled rule can be: A ⇤ _A + B = C with_
rule symbols A, B, and C.
2. Case is a dictionary that represents substitutions. For
each rule symbol used in the Rule string, we sample a
random string of random length that consists of math
symbols. This forms a dictionary, whose keys are all rule
symbols, and the values are the corresponding sampled
string. To illustrate, following the previous example, for
each A, B and C, we sample a random string to form a
dictionary as: {A : a, B : b, C : d + e}.
3. Result is the outcome of the substitution. For each rule
symbol in the Rule string, we replace it with the corresponding value stored in the Case dictionary. This
gives rise to the Result string. As per the previous example, we now substitute A with a, B with b, and C with
_d + e into the Rule string, generating the Result string:_
_a ⇤_ _a + b = d + e._
After Rule, Case, and Result are generated, we can construct three tasks for deduction, abduction, and induction
respectively. We define the three synthetic tasks as follows:
- Deduct: Source: Rule string and Case dictionary.
**Target: Result string.**
- Abduct: Source: Rule string and Result string.
**Target: Case dictionary.**
- Induct: Source: Case dictionary and Result string.
**Target: Rule string.**
We also consider a task called Mix, which is a uniform mix
of three tasks. Namely, during generation, we randomly
select a task and sample an example from that task. To
formulate them as sequence to sequence tasks, we represent
the Case dictionary also as a string, e.g., “{A : a, B :
_b, C : d_ + _e}”. An example of Abduct using the examples_
of Rule, Case, and Result above is to predict the target
_{A : a, B : b, C : d + e} from the source A ⇤_ _A + B = C_
<s> a ⇤ _a + b = d + e._
Pre-training on our synthetic tasks can be seen as a form of
skip-component learning. There are three essential components: Rule, Case and Result, and we skip one of them and
use the remaining two elements to reconstruct the missing
one. Past work has shown that learning to predict missing
words (Devlin et al., 2019), subsequences (Song et al., 2019;
Raffel et al., 2020), or subtrees (Rabe et al., 2021) are strong
pre-training tasks.
**3.3. Symbol-Agnostic Representation**
In order to solve the synthetic tasks, the model needs to
distinguish which set of symbols can be substituted (rule
symbols). As a result, the model may memorize information
about the symbols that is irrelevant to the inductive biases
encoded in the task. To prevent such memorization, we
propose a way to make the synthetic tasks agnostic to the
choice of symbols.
We first note that the choice of symbols is irrelevant to our
synthetic tasks. To avoid symbol-specific memorization, for
each training and evaluation example, we randomly sample
two sets of symbols to be used in Rules and in the rest of
the example. But for the Abduct task, the model needs
to know which symbols are replaced by the Rule part of
the example and which symbols are in the Result language.
We simply list the split of the symbols used in the example
at the beginning of the input string, marked by two special
symbols, <Rule> and <Math>. They are followed by the
original source string. The target string remains unchanged.
For example, the previous example in the Abduct task
becomes,
Source: <Rule> A B C <Math> ⇤ + = a b d e <s>
_A ⇤_ _A + B = C <s> a ⇤_ _a + b = d + e_
Target: {A : a, B : b, C : d + e}
In our implementation, we use integers to represent symbols. Specifically, for each example, we sample two disjoint
sets of integers from the set {1, . . ., S} to represent the
math symbols and the rule symbols, where S is the size
of the vocabulary. In our experiments, we sample 44 math
symbols and 24 rule symbols for each problem. The complete pseudo-code of generating the symbols, Rule, Case,
and Result for one task example is provided in Appendix
Algorithm 1.
**4. Experiments**
In this section, we present results on four large mathematical
reasoning tasks that are especially useful in the context of
automated theorem proving. Our results show significant
gains in learning inductive biases from synthetic tasks. We
have selected four tasks to cover various different styles
of interactive theorem provers: The HOL-Light (skip-tree)
corpus was created from very high-level tactic-based proofs,
but it is less interpretable than IsarStep’s declarative style
corpus. We also evaluate on model’s ability to conjecture
unseen lemma strings with Lean theorem prover, which is
host to some of the most sophisticated formalized mathematics. Lastly, we evaluate the next proof-step prediction
task on the set.mm library of MetaMath, which consists of
very granular, basic proof steps. Namely, the proof steps are
more predicable and average proof lengths have significantly
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
_Table 1. Test top-1, top-10 (%) accuracy on the IsarStep task._
Model Top-1 Acc. Top-10 Acc.
increased.
**4.1. Experiment Details**
No pretrain (Li et al., 2021) 20.4 33.1
HAT (Li et al., 2021) 22.8 35.2
LIME Deduct 24.7 37.7
LIME Abduct 26.7 **41.0**
LIME Induct 23.9 38.8
LIME Mix **26.9** 40.4
_Figure 1. Validation BLEU along with training on the IsarStep_
task.
attention and attention over the encoder output were merged.
**LIME Pretraining** We generate datasets of our synthetic
tasks for pretraining: Deduct, Abduct, Induct, Mix.
For pretraining of IsarStep, we used a vocabulary size S of
1000. For the other two downstream tasks, we used a vocabulary size of 100. The reason we used different vocabulary
sizes was that we found (cf. appendix) the discrepancy in
vocabulary size affects the performance of a downstream
task if it has a very large vocabulary size (IsarStep has 28K).
We use 44 math symbols and 24 rule symbols. The length
of the Rule string is sampled from 5 to 20, the length of the
string for each substitution (the values of Case dictionary)
is sampled from 2 to 8. We used word-level tokenization for
all the tasks. We pretrained the model for 20K updates. For
tasks with larger vocabulary size (i.e., 1000), we found the
learning became more difficult. Hence we used a curriculum
learning scheme: we first trained the model for 10K steps on
the same task with a vocabulary size of 100, then continue
training for another 10K step on vocabulary size of 1000.
The pretraining was done on a single Nvidia Tesla T4 GPU
with 4 CPU cores for 2 hours. We set the maximum number
of tokens in a batch to 4096, and accumulate four batches
of gradients for one parameter update. We used the Adam
optimizer (Kingma & Ba, 2015) with learning rate 3 · 10[−][4].
We used a dropout rate of 0.1 and label smoothing (Szegedy
et al., 2016) with a coefficient 0.1.
**Fine-tuning** For all the downstream tasks in this section,
when loading the pretrained models for fine-tuning, we do
not load in the vocabulary embeddings nor the output layer
weights. For the downstream task IsarStep and MetaMathStep, we used four Nvidia Tesla T4 GPU with 16 CPU cores
for training. We set the maximum number of tokens in a
batch to 4096, and accumulated four batches of gradients
for one parameter update. We trained the model for 200K
updates. We used the Adam optimizer, and we searched
over the learning rates {3 · 10[−][4], 7 · 10[−][4]}, and warmup
steps {4000, 8000}. We used a dropout rate of 0.1 and label
smoothing with a coefficient 0.1. For the HOList skip-tree
task, we used TPUs for running the experiments. We used
a batch size of 256 sequences and trained the model for 1
million updates.
**Evaluation** During training, we kept track of the best
validation tokenized BLEU score [1], and we used the model
with validation BLEU for evaluation on the test set. We
report top-1 and top-10 accuracies. We consider an output
sequence as correct if it matches the target sequence exactly.
We performed a beam search with width 10. The top-1
accuracy is then defined as the percentage of the best output
sequences that are correct. The top-n accuracy is defined as
the percentage of target sequences appearing in the top n
generated sequences.
**4.2. IsarStep**
The IsarStep task is taken from (Li et al., 2021). IsarStep is
a task of predicting the missing intermediate propositions
given surrounding propositions to bridge the gap between
the goal and the current state of the proof. The dataset was
mined from the public repository of formal proofs of the
Isabelle proof assistant (Paulson, 1994). Unlike HOList and
MetaMath, IsarStep contains mostly declarative proofs, a
proof style close to humans’ prose proofs. The dataset has a
broad coverage of undergraduate and research-level mathematics and computer science theorems. There are 820K,
1https://github.com/pytorch/fairseq/blob/
master/fairseq/tasks/translation.py#L396
**Architecture** All experiments used the transformer base
model from (Vaswani et al., 2017), i.e. 512 hidden size,
2048 filter size, 8 attention heads. For the IsarStep and
MetaMathStep task, we used 6 layers for both the encoder
and decoder, implemented using fairseq (Ott et al., 2019).
For the HOList skip-tree experiment, we used a somewhat
modified transformer architecture with 8 encoder and 4
decoder layers of the same size as above in which the self
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
_Table 2. Test top-8 Accuracy on Skip-Tree HOList (%)._
Model Equation completion Hard type inference Missing assumptions Easy type inference
No pretrain (Rabe et al., 2021) 46.3 95.0 41.8 95.9
LIME Deduct 50.3 94.8 **47.9** 97.0
LIME Abduct 48.4 94.8 46.1 96.3
LIME Induct 44.8 94.9 42.6 96.4
LIME Mix **51.7** **95.6** 46.1 **97.6**
_Table 3. Test top-1, top-10 (%) accuracy on the MetaMathStep_
task.
Model Top-1 Acc. Top-10 Acc.
No pretrain 67.7 76.5
LIME Deduct 68.8 77.4
LIME Abduct 68.8 76.1
LIME Induct **69.9** **78.0**
LIME Mix 69.1 77.9
5000, 5000 sequence pairs for the training, validation, and
test sets with a maximum of 800 tokens in source sequences
and 200 tokens in the target sequences. Following (Li et al.,
2021), during training, we use 512 as the maximum length
for both the source and target, and truncated those that exceed the length to 512. For reporting, we evaluate all 5000
test examples regardless of their lengths.
The results on the IsarStep task for four pretrained models
and the baseline transformer model without pretraining is
shown in Table 1. We also include another baseline, HAT
transformer introduced in (Li et al., 2021), which is a specially designed hierarchical transformer architecture tailored
to this task. We see the pretrained model achieved substantial improvement over the model trained from scratch as well
as HAT. Notably, the model that was pretrained on Abduct
improved the top-10 accuracy from 33.1% to 41.0%, for
almost 8% absolute improvement. The model pretrained on
Mix performed the best on top-1 accuracy, improving the
baseline by 6.5% accuracy. We also showed the validation
BLEU scores along training in Figure 1. We can see that
the pretrained models learned much faster than the model
trained from scratch. With around 50K steps of updates, the
pretrained model already obtained better BLEU scores than
the best score achieved by the un-pretrained model. Moreover, since the downstream task requires 200K steps of
training with 4 GPUs, the amount of computation spent on
pretraining is only 2.5% of the downstream task, strongly
demonstrating the efficiency of the proposed pretraining
method.
_Table 4. Test top-1, top-10 (%) accuracy on the LeanStep unseen_
lemma prediction task.
Model Top-1 Acc. Top-10 Acc.
No pretrain 15.8 27.4
LIME Deduct 25.8 38.0
LIME Abduct 26.0 38.6
LIME Induct 25.0 38.2
LIME Mix **29.8** **41.8**
**4.3. HOList Skip-Tree**
As the second mathematical reasoning benchmark, we consider the HOList skip-tree evaluation tasks by Rabe et al.
(2021). These tasks include two variants of type inference,
predicting under which assumptions theorems hold, and
completing equalities. All source expressions for these tasks
are taken from the validation set of the theorem database of
the HOList proof logs (Bansal et al.). The evaluations are
done on a random sample of 1000 instances from the full
evaluation sets. We initialized the model parameters with
the pretrained weights and then repeated the experiments
by Rabe et al. (2021). That is, we trained the models for up
to 1M parameter updates on the training set with batch size
256 and repeat the evaluation every 100K steps. In Table 2
we present the best result from these 10 evaluation runs.
We see a significant improvement in these reasoning tasks
when the models are initialized with the pretrained weights.
Notably, on equation completion and missing assumptions
task, we improved the beam search (with width 8) exact
match rate performance from 46.3% to 51.7% and 41.8%
to 47.9%. Note that this is despite the amount of pretraining
compute cost being negligible: it takes less than 1 percent
of the cost of the downstream task training. Pretraining
used 1/20 number of the update steps (50K vs 1M) with 8
(and 4) times smaller batches (pretraining has much shorter
sequence lengths, 128 vs. 1024 and 512, respectively).
**4.4. MetaMathStep**
Compared to other ITPs, MetaMath is a low-level proving
system: each proof step makes only a small step towards the
goal. As such, each proof contains many more proof steps
than in other ITPs: with 37, 000 theorems in the human
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
written theorem library, there are around 3 million proof
steps. We extract the proof steps and use them to construct
a sequence-to-sequence task following Polu & Sutskever
(2020) (their proof step training objective).
In this task, the model is asked to generate PROOFSTEPS
given a GOAL, namely, the GOAL string is the source input,
and PROOFSTEPS is the target output. We follow (Polu
& Sutskever, 2020) and use their string representation for
the GOAL and the PROOFSTEPS. Instead of using subword tokenization in Polu & Sutskever (2020), we use a
character-level representation for our task. Following Polu
& Sutskever (2020), we split theorems into train/valid/test
theorems of size 35K, 1K, 1K, and associate all proof steps
of a theorem with that split. For each dataset, we filter examples with lengths longer than 1024. This reduced the
total number of proof steps to 1.4 million. For validation
and test set, we randomly sample 3000 examples out of 40K
(after filtering) and perform validation and test evaluations
on them. In Table 3 we present the impact of LIME on
MetaMathStep. We also observe gains from LIME on this
dataset, with the model trained on Induct task achieving
2.2% top-1 and 1.5% top-10 test accuracy improvement.
Similarly, as for the IsarStep task, the computation spent on
pretraining is only 2.5% of the downstream task.
**4.5. LeanStep: Unseen Next Lemma Prediction Task**
Lastly, we look at a mathematical benchmark based on Lean
3 theorem prover. Lean has an extremely active community and is host to some of the most sophisticated formalized mathematics in the world, including scheme theory
(Buzzard et al., 2019), forcing (Han & van Doorn, 2020),
perfectoid spaces (Buzzard et al., 2020), and condensed
mathematics (Scholze, 2020). We extracted a similar style
of dataset as MetaMathStep from Lean, that is, we predict
the next lemma to apply given the current goal state (or
commonly known as the tactic state in Lean). Unlike MetaMathStep, we focus on predicting lemmas that have not been
seen during training time. Namely, in this task, we evaluate
the model’s capability of conjecturing a novel lemma string
given a goal. Specifically, we extracted 498, 624 number
of goal, next lemma pairs from Lean mathlib library (mathlib, 2020; Han et al., 2021). We found there are 34, 867
lemmas that appeared only once in the entire dataset. We
then randomly sampled 8k of lemmas from this set and
used the corresponding goal lemma pairs for the validation
and the tests (each 4k). As such, during validation and
testing, the model needs to predict lemmas that have not
been seen during training. We present the results on LIME
and the baseline in Table 4. We observed a huge gain with
LIME pretraining. Remarkably, LIME MIX doubled the top
1 accuracy compared to the baseline unpretrained model,
improving the accuracy from 15.8% to 29.8%.
_Table 5. Comparisons to other pretraining tasks on IsarStep task._
Model Top-1 Acc. Top-10 Acc
No pretrain (Li et al., 2021) 20.4 33.1
LIME Mix 26.9 40.4
Pretrain on MetaMathStep 23.1 35.7
Pretrain on WMT En-De 17.2 30.3
_Table 6. Pretraining on IsarStep for the MetaMathStep task._
Model Top-1 Acc. Top-10 Acc.
No pretrain 67.7 76.5
LIME Mix 69.1 77.9
Pretrain on IsarStep 67.0 76.1
**5. Ablation Studies**
In this section, we perform ablation studies. Additional
ablation studies can be found in Appendix C.
**5.1. Pretraining on Formal Reasoning and Natural**
**Language Tasks**
Here we investigate how LIME compares to pretraining
on natural language or existing formal reasoning datasets.
In this set of experiments, we pretrained three models on
Mix, MetaMathStep, and on the WMT 2016 English-toGermany (WMT En-De) translation task, and then we finetuned and evaluated these models on the IsarStep task. We
pretrained the model on MetaMathStep and WMT EN-DE
for 200K steps with 4 GPUs, which is 40 times more computation spent than on LIME. Due to the mismatch between
vocabularies of the pretraining task and the downstream
task, we do not load the vocabulary embeddings nor output
layer weights. The results in Table 5 show that pretraining
on MetaMathStep did provide gains, though significantly
smaller than gains provided by LIME Mix, despite their 40
times higher computational cost. Moreover, pre-training on
WMT translation had even a negative effect on the performance. We also conducted an analogous experiment with an
evaluation on the MetaMathStep. The result is shown in Table 6. In contrast to MetaMath helping IsarStep, we see that
pretraining on IsarStep task did not help the downstream
task MetaMathStep. We hypothesize that this could be due
to MetaMathStep task is closer to the LIME tasks than IsarStep, and hence providing more gains than the opposite
direction. We leave investigations to the future versions.
**5.2. Do we need vocabulary embeddings for**
**fine-tuning?**
As mentioned earlier, we did not load in the vocabulary
embeddings from the pretrained models when we switched
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
_Table 7. Whether one needs to load vocabulary embeddings and_
output layer weights on IsarStep tasks.
Model Top-1 Acc. Top-10 Acc
No pretrain (Li et al., 2021) 20.4 33.1
LIME Mix 26.9 40.4
LIME Mix + Loading All Weights 26.7 40.6
to fine-tuning on downstream tasks. Even without loading
the vocab embeddings, the pretrained models still improved
the performance. In this ablation study, we investigate how
much this decision has affected the results and whether vocabulary embeddings can help improve the performance
even further. We performed the comparisons on IsarStep.
The task contains a token vocabulary of size 28336. We
generated new synthetic tasks for the same vocabulary size,
such that we can load the vocabulary embeddings and output layers when initializing the model for IsarStep. Table 7
shows that this led to similar performance. This aligns with
our expectation that the model should not learn content
specific knowledge that is potentially stored in the vocabulary. These weights turn out to be non-essential for the final
performance, supporting the evidence that the transformer
learns inductive biases from the pretraining task.
**5.3. Does LIME help LSTMs?**
In this section, we investigate if LIME also helps other
architectures than transformers. In particular, we applied
LIME to two LSTM based architectures: 1. vanilla LSTM,
2. LSTM with attention mechanism. The vanilla LSTM is
a stacking LSTM with 4 layers, each with 1000 cells, and
1000-dimensional embeddings. The LSTM with attention
architecture is taken from (Luong et al., 2015), also with 4
layers, 1000 cells and 1000-dimensional embeddings. We
evaluate on the IsarStep task, and compared a model trained
from scratch and a model pre-trained on LIME abduct
task. We used the same training protocol as described in
4.1. The results are shown in Table 8, along with the results
on transformer. We observe that LIME improved LSTM as
well as LSTM with attention, but the improvements were
small compared to transformer. Specifically, if we compare
Top-1 accuracy, we can see that LIME improved LSTM
from 5.5% to 6.9%, LSTM with attention from 12.3% to
13.4%, and transformer from 20.4% to 26.7%. This observation is aligned with our hypothesis that the transformer
is a malleable architecture and hence it is capable of learning architectural inductive biases from datasets. This is
mainly attributed to the potential of learning dynamic attention graphs in self-attention layers. We note that this still
warrants further investigation as the performance of these
architectures are not at the same level, and that may also
lead to different improvements.
_Table 8. Comparing LIME’s benefits on LSTMs on the IsarStep_
Task
Model Top-1 Acc. Top-10 Acc.
LSTM 5.5 11.3
LSTM + LIME Abduct 6.9 14.3
LSTM + attention 12.3 22.7
LSTM + attention + LIME Abduct 13.4 26.3
Transformer 20.4 33.1
Transformer + LIME Abduct 26.7 41.0
**6. Does LIME encode Induction, deduction**
**and abduction?**
Although LIME has shown to achieve substantial improvements across various benchmarks, it is not entirely clear that
the specific synthetic tasks necessarily enforce the reasoning
ability of induction, deduction and abduction. We would
like to note that deduction, induction, and abduction are
high-level and philosophical concepts, and serve only as
an inspiration for us to design the synthetic tasks. We do
not expect the model will necessarily learn exactly these
three capabilities. After all, we have chosen a particular implementation of "Case", "Rule" and "Result". Furthermore,
we also design tasks mimic proof steps in formal theorem
proving (see the rewrite task in Appendix B.1), which also
achieved excellent results. Nevertheless, we believe LIME
is a first step towards building reasoning inductive biases,
and provides many inspirations and directions for future
work.
**7. Conclusion**
In this work, we encoded inductive biases for mathematical
reasoning in the form of datasets. We created three synthetic
tasks inspired by three reasoning primitives of deduction,
induction, and abduction. We demonstrated that pretraining
on these tasks (LIME) significantly improved the performances across four mathematical reasoning benchmarks.
Notably, LIME requires negligible computation compared
to the downstream task, unlike being the dominating factor in previous pretraining methods. Our work naturally
poses many future research questions. Could the primitive
tasks provide similar gains for NLP tasks? Are there similar
primitive tasks for natural language reasoning? We also
look forward to disentangling the effects of pretraining between learning content knowledge and inductive bias for all
downstream tasks to better understand pre-training.
**Acknowledgments**
YW is supported by a Vector Institute research grant. Li
is supported by the ERC Advanced Grant ALEXANDRIA
(Project 742178), funded by the European Research Council. YW and CS would like to thank Rif A. Saurous for
discussions and proofreading.
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
**References** _Automated Deduction - CADE-25 - 25th International_
_Conference on Automated Deduction, Berlin, Germany,_
Bansal, K., Loos, S. M., Rabe, M. N., Szegedy, C., and
_August 1-7, 2015, Proceedings, volume 9195 of Lecture_
Wilcox, S. HOList: An Environment for Machine
_Notes in Computer Science, pp. 378–388. Springer, 2015._
Learning of Higher Order Logic Theorem Proving. In
doi: 10.1007/978-3-319-21401-6\_26. URL https://
_36th International Conference on Machine Learning,_
doi.org/10.1007/978-3-319-21401-6_26.
_ICML 2019, Long Beach, California, USA, June 9-15,_
_2019. URL http://proceedings.mlr.press/_ Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT:
v97/bansal19a.html. pre-training of deep bidirectional transformers for lan
guage understanding. In Burstein, J., Doran, C., and
Bansal, K., Szegedy, C., Rabe, M. N., Loos, S. M., and
Solorio, T. (eds.), Proceedings of the 2019 Conference of
Toman, V. Learning to Reason in Large Theories without
_the North American Chapter of the Association for Com-_
Imitation. arXiv preprint arXiv:1905.10501, 2019.
_putational Linguistics: Human Language Technologies,_
_NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7,_
Bellucci, F. and Pietarinen, A.-V. Charles Sanders Peirce:
_2019, Volume 1 (Long and Short Papers), pp. 4171–4186._
Logic. In The Internet Encyclopedia of Philosophy, 2015.
Association for Computational Linguistics, 2019. doi:
URL https://iep.utm.edu/peir-log/.
10.18653/v1/n19-1423. URL https://doi.org/
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, 10.18653/v1/n19-1423.
J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Dong, L., Yang, N., Wang, W., Wei, F., Liu, X., Wang,
Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G.,
Y., Gao, J., Zhou, M., and Hon, H. Unified Language
Henighan, T., Child, R., Ramesh, A., Ziegler, D. M.,
Model Pre-training for Natural Language Understanding
Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E.,
and Generation. In Advances in Neural Information Pro
Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C.,
_cessing Systems, NeurIPS 2019, Vancouver, BC, Canada,_
McCandlish, S., Radford, A., Sutskever, I., and Amodei,
_December 8-14, 2019, pp. 13063–13075, 2019._
D. Language models are few-shot learners. _CoRR,_
abs/2005.14165, 2020. URL https://arxiv.org/ Evans, R., Saxton, D., Amos, D., Kohli, P., and Grefenstette,
abs/2005.14165. E. Can Neural Networks Understand Logical Entailment?
In International Conference on Learning Representations,
Buzzard, K., Hughes, C., Lau, K., Livingston, A., Mir, 2018. URL https://openreview.net/forum?
R. F., and Morrison, S. Schemes in lean. arXiv preprint id=SkZxCk-0Z.
_arXiv:2101.02602, 2019._
Gauthier, T., Kaliszyk, C., Urban, J., Kumar, R., and Norrish,
Buzzard, K., Commelin, J., and Massot, P. Formalis- M. TacticToe: Learning to Prove with Tactics. Journal of
ing perfectoid spaces. In Blanchette, J. and Hritcu, _Automated Reasoning, pp. 1–30, 2020._
C. (eds.), Proceedings of the 9th ACM SIGPLAN
Hahn, C., Schmitt, F., Kreber, J. U., Rabe, M. N., and
_International Conference on Certified Programs and_
Finkbeiner, B. Transformers Generalize to the Semantics
_Proofs, CPP 2020, New Orleans, LA, USA, January_
of Logics. arXiv preprint arXiv:2003.04218, 2020.
_20-21, 2020, pp. 299–312. ACM, 2020._ doi: 10.
1145/3372885.3373830. URL https://doi.org/ Han, J. M. Enhancing SAT solvers with glue variable pre10.1145/3372885.3373830. dictions. arXiv preprint arXiv:2007.02559, 2020.
Conneau, A. and Lample, G. Cross-lingual Language Han, J. M. and van Doorn, F. A formal proof of the inde
Model Pretraining. In Advances in Neural Informa- pendence of the continuum hypothesis. In Blanchette, J.
_tion Processing Systems, NeurIPS 2019, Vancouver,_ and Hritcu, C. (eds.), Proceedings of the 9th ACM SIG_BC, Canada, December 8-14, 2019, pp. 7057–7067,_ _PLAN International Conference on Certified Programs_
2019. URL http://papers.nips.cc/paper/ _and Proofs, CPP 2020, New Orleans, LA, USA, Jan-_
8928-cross-lingual-language-model-pretraininguary 20-21, 2020., pp. 353–366. ACM, 2020. doi: 10.
1145/3372885.3373826. URL https://doi.org/
Crouse, M., Abdelaziz, I., Cornelio, C., Thost, V., Wu, L., 10.1145/3372885.3373826.
Forbus, K., and Fokoue, A. Improving Graph Neural Net
Han, J. M., Rute, J., Wu, Y., Ayers, E. W., and Polu, S.
work Representations of Logical Formulae with Subgraph
Proof artifact co-training for theorem proving with lan
Pooling. arXiv preprint arXiv:1911.06904, 2019.
guage models. The First Mathematical Reasoning in
de Moura, L. M., Kong, S., Avigad, J., van Doorn, F., and _General Artificial Intelligence Workshop, ICLR 2021,_
von Raumer, J. The lean theorem prover (system de- 2021. URL https://mathai-iclr.github.
scription). In Felty, A. P. and Middeldorp, A. (eds.), io/papers/papers/MATHAI_23_paper.pdf.
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
Hellendoorn, V. J., Sutton, C., Singh, R., Maniatis, P., and
Bieber, D. Global relational models of source code. In 8th
_International Conference on Learning Representations,_
_ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020._
OpenReview.net, 2020. URL https://openreview.
net/forum?id=B1lnbRNtwr.
Hochreiter, S. and Schmidhuber, J. Long Short-Term Mem
ory. Neural computation, 9(8):1735–1780, 1997.
Huang, D., Dhariwal, P., Song, D., and Sutskever, I.
GamePad: A learning environment for theorem proving.
In 7th International Conference on Learning Representa_tions, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019._
OpenReview.net, 2019. URL https://openreview.
net/forum?id=r1xwKoR9Y7.
Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer,
L., and Levy, O. Spanbert: Improving pre-training
by representing and predicting spans. Transactions of
_the Association for Computational Linguistics, 8:64–_
77, 2020. doi: 10.1162/tacl\_a\_00300. URL https:
//doi.org/10.1162/tacl_a_00300.
Kingma, D. P. and Ba, J. Adam: A method for stochas
tic optimization. In Bengio, Y. and LeCun, Y. (eds.),
_3rd International Conference on Learning Represen-_
_tations, ICLR 2015, San Diego, CA, USA, May 7-9,_
_2015, Conference Track Proceedings, 2015. URL http:_
//arxiv.org/abs/1412.6980.
Lample, G. and Charton, F. Deep learning for symbolic
mathematics. In 8th International Conference on Learn_ing Representations, ICLR 2020, Addis Ababa, Ethiopia,_
_April 26-30, 2020. OpenReview.net, 2020. URL https:_
//openreview.net/forum?id=Ske31kBtPr.
LeCun, Y., Haffner, P., Bottou, L., and Bengio, Y. Object
recognition with gradient-based learning. In Shape, Con_tour and Grouping in Computer Vision, pp. 319, Berlin,_
Heidelberg, 1999. Springer-Verlag. ISBN 3540667229.
Lederman, G., Rabe, M., Seshia, S., and Lee, E. A. Learn
ing heuristics for quantified boolean formulas through
reinforcement learning. In International Conference on
_Learning Representations, 2020._
Li, W., Yu, L., Wu, Y., and Paulson, L. C. Isarstep: a
benchmark for high-level mathematical reasoning. In
_International Conference on Learning Representations,_
2021. URL https://openreview.net/forum?
id=Pzj6fzU6wkj.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy,
O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta:
A robustly optimized BERT pretraining approach. CoRR,
abs/1907.11692, 2019. URL http://arxiv.org/
abs/1907.11692.
Luong, T., Pham, H., and Manning, C. D. Effective ap
proaches to attention-based neural machine translation.
In Màrquez, L., Callison-Burch, C., Su, J., Pighin, D.,
and Marton, Y. (eds.), Proceedings of the 2015 Confer_ence on Empirical Methods in Natural Language Process-_
_ing, EMNLP 2015, Lisbon, Portugal, September 17-21,_
_2015, pp. 1412–1421. The Association for Computational_
Linguistics, 2015. doi: 10.18653/v1/d15-1166. URL
https://doi.org/10.18653/v1/d15-1166.
mathlib. The lean mathematical library. In Blanchette, J.
and Hritcu, C. (eds.), Proceedings of the 9th ACM SIG_PLAN International Conference on Certified Programs_
_and Proofs, CPP 2020, New Orleans, LA, USA, Jan-_
_uary 20-21, 2020, pp. 367–381. ACM, 2020. doi: 10._
1145/3372885.3373824. URL https://doi.org/
10.1145/3372885.3373824.
McCoy, R. T., Grant, E., Smolensky, P., Griffiths, T., and
Linzen, T. Universal linguistic inductive biases via metalearning. Proceedings of CogSci, abs/2006.16324, 2020.
Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S.,
Ng, N., Grangier, D., and Auli, M. fairseq: A fast,
extensible toolkit for sequence modeling. In Ammar,
W., Louis, A., and Mostafazadeh, N. (eds.), Proceed_ings of the 2019 Conference of the North American_
_Chapter of the Association for Computational Linguis-_
_tics: Human Language Technologies, NAACL-HLT 2019,_
_Minneapolis, MN, USA, June 2-7, 2019, Demonstra-_
_tions, pp. 48–53. Association for Computational Lin-_
guistics, 2019. doi: 10.18653/v1/n19-4009. URL
https://doi.org/10.18653/v1/n19-4009.
Paliwal, A., Loos, S. M., Rabe, M. N., Bansal, K., and
Szegedy, C. Graph representations for higher-order logic
and theorem proving. In The Thirty-Fourth AAAI Con_ference on Artificial Intelligence, AAAI 2020, The Thirty-_
_Second Innovative Applications of Artificial Intelligence_
_Conference, IAAI 2020, The Tenth AAAI Symposium on_
_Educational Advances in Artificial Intelligence, EAAI_
_2020, New York, NY, USA, February 7-12, 2020, pp. 2967–_
2974. AAAI Press, 2020. URL https://aaai.org/
ojs/index.php/AAAI/article/view/5689.
Papadimitriou, I. and Jurafsky, D. Learning Music
Helps You Read: Using transfer to study linguistic
structure in language models. In Proceedings of the
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pp. 6829–6839, On-_
line, November 2020. Association for Computational
Linguistics. URL https://www.aclweb.org/
anthology/2020.emnlp-main.554.
Peirce, C. S. Reasoning and the logic of things: The Cam
_bridge conferences lectures of 1898. Harvard University_
Press, 1992.
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
Piotrowski, B. and Urban, J. Guiding Inferences in Con
nection Tableau by Recurrent Neural Networks. In
Benzmüller, C. and Miller, B. (eds.), Intelligent Com_puter Mathematics, pp. 309–314, Cham, 2020. Springer_
International Publishing. ISBN 978-3-030-53518-6.
Polu, S. and Sutskever, I. Generative Language Modeling
for Automated Theorem Proving. CoRR, abs/2009.03393,
2020. URL https://arxiv.org/abs/2009.
03393.
Rabe, M. N., Lee, D., Bansal, K., and Szegedy, C. Mathe
matical Reasoning via Self-supervised Skip-tree Training.
In International Conference on Learning Representations,
2021. URL https://openreview.net/forum?
id=YmqAnY0CMEy.
Radford, A., Wu, J., Child, R., Luan, D., Amodei,
D., and Sutskever, I. Language models are unsu
pervised multitask learners. In OpenAI Blog, 2018.
URL https://d4mucfpksywv.cloudfront.
net/better-language-models/language_
models_are_unsupervised_multitask_
learners.pdf.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S.,
Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the Limits of Transfer Learning with a Unified Textto-Text Transformer. J. Mach. Learn. Res., 21:140:1–
140:67, 2020. URL http://jmlr.org/papers/
v21/20-074.html.
Saxton, D., Grefenstette, E., Hill, F., and Kohli, P. Analysing
mathematical reasoning abilities of neural models. In
_Proceedings of International Conference on Learning_
_Representations (ICLR), 2019._
Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and
Monfardini, G. The graph neural network model. IEEE
_Transactions on Neural Networks, 20(1):61–80, 2008._
Schlag, I., Smolensky, P., Fernandez, R., Jojic, N., Schmid
huber, J., and Gao, J. Enhancing the transformer with
explicit relational encoding for math problem solving.
_CoRR, abs/1910.06611, 2019. URL http://arxiv._
org/abs/1910.06611.
Scholze, P. Liquid tensor experiment. https:
//xenaproject.wordpress.com/2020/
12/05/liquid-tensor-experiment/, 2020.
Formalization available at https://github.com/
leanprover-community/lean-liquid.
Selsam, D. and Bjørner, N. Guiding High-Performance SAT
solvers with Unsat-Core Predictions. In International
_Conference on Theory and Applications of Satisfiability_
_Testing, pp. 336–353. Springer, 2019._
Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L.,
and Dill, D. L. Learning a SAT solver from single-bit
supervision. In 7th International Conference on Learning
_Representations, ICLR 2019, New Orleans, LA, USA,_
_May 6-9, 2019. OpenReview.net, 2019. URL https:_
//openreview.net/forum?id=HJMC_iA5tm.
Song, K., Tan, X., Qin, T., Lu, J., and Liu, T. MASS:
masked sequence to sequence pre-training for language
generation. In 36th International Conference on Machine
_Learning, ICML 2019, Long Beach, California, USA,_
_June 9-15, 2019, 2019. URL http://proceedings._
mlr.press/v97/song19d.html.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna,
Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer
_vision and pattern recognition, pp. 2818–2826, 2016._
Urban, J. and Jakub˚uv, J. First Neural Conjecturing Datasets
and Experiments. In Benzmüller, C. and Miller, B. (eds.),
_Intelligent Computer Mathematics, pp. 315–323, Cham,_
2020. Springer International Publishing. ISBN 978-3030-53518-6.
Vaezipoor, P., Lederman, G., Wu, Y., Maddison, C. J.,
Grosse, R. B., Lee, E. A., Seshia, S. A., and Bacchus, F.
Learning Branching Heuristics for Propositional Model
Counting. In AAAI 2021, 2021.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention
is All you Need. In Proceedings of Advances in Neural
_Information Processing Systems (NeurIPS), 2017._
Wang, M., Tang, Y., Wang, J., and Deng, J. Premise selec
tion for theorem proving by deep graph embedding. In
_Advances in Neural Information Processing Systems, pp._
2786–2796, 2017.
Wang, Q., Brown, C., Kaliszyk, C., and Urban, J. Explo
ration of neural machine translation in autoformalization of mathematics in mizar. Proceedings of ACM SIG_PLAN International Conference on Certified Programs_
_and Proofs, 2020._
Warstadt, A. and Bowman, S. R. Can neural networks
acquire a structural bias from raw linguistic data? Pro_ceedings of CogSci, 2020._
Wu, Y., Jiang, A., Ba, J., and Grosse, R. INT: An Inequal
ity Benchmark for Evaluating Generalization in Theorem Proving. In International Conference on Learning
_Representations, 2021. URL https://openreview._
net/forum?id=O6LPudowNQm.
Xu, K., Li, J., Zhang, M., Du, S. S., Kawarabayashi, K.-i.,
and Jegelka, S. What can neural networks reason about?
In ICLR 2020, 2020.
-----
**LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning**
Yang, K. and Deng, J. Learning to Prove Theorems via
Interacting with Proof Assistants. In Proceedings of In_ternational Conference on Machine Learning (ICML),_
2019.
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov,
R. R., and Le, Q. V. Xlnet: Generalized autoregressive
pretraining for language understanding. In Advances in
_Neural Information Processing Systems, NeurIPS 2019,_
_Vancouver, BC, Canada, December 8-14, 2019, 2019._
Zhang, J., Zhao, Y., Saleh, M., and Liu, P. J. PEGASUS:
pre-training with extracted gap-sentences for abstractive
summarization. In 37th International Conference on
_Machine Learning, ICML 2020, Vienna, Austria, 2020,_
volume 119. PMLR, 2020.
-----
# LIME: LEARNING INDUCTIVE BIAS FOR PRIMITIVES
## OF MATHEMATICAL REASONING
**Anonymous authors**
Paper under double-blind review
ABSTRACT
While designing inductive bias in neural architectures has been widely studied, we
hypothesize that transformer networks are flexible enough to learn inductive bias
from suitable generic tasks. Here, we replace architecture engineering by encoding
inductive bias in the form of datasets. Inspired by Peirce’s view that deduction,
induction, and abduction form an irreducible set of reasoning primitives, we design
three synthetic tasks that are intended to require the model to have these three
abilities. We specifically design these synthetic tasks in a way that they are devoid
of mathematical knowledge to ensure that only the fundamental reasoning biases
can be learned from these tasks. This defines a new pre-training methodology called
“LIME” (Learning Inductive bias for Mathematical rEasoning). Models trained
with LIME significantly outperform vanilla transformers on three very different
large mathematical reasoning benchmarks. Unlike dominating the computation
cost as traditional pre-training approaches, LIME requires only a small fraction of
the computation cost of the typical downstream task.
1 INTRODUCTION
Inductive bias is essential for successful neural network learning. Many of the breakthroughs in
machine learning are accompanied by new neural architectures with better inductive biases, such
as locality bias in convolutional neural networks (LeCun et al., 1999), recurrence and memory in
LSTMs (Hochreiter and Schmidhuber, 1997), and structural bias in graph neural networks (Scarselli
et al., 2008). However, existing designs of inductive biases need to be explicitly encoded in neural
architecture. This is sometimes difficult as one may not know the exact mechanism for an abstract
ability, in order to describe the architectural bias explicitly. In particular, designing proper inductive
bias for abstract concepts such as mathematical reasoning becomes an extremely challenging task.
Moreover, attempts to design elaborate architectures for reasoning often fall short of the performance
of more generic transformer architecture. In this work, we aim to avoid the search for new architectures and investigate whether one can learn useful inductive bias for mathematical reasoning through
_pretraining._
Large-scale unsupervised pretraining of language models revolutionized the field of natural language
processing (NLP), improving the state-of-the-art in question answering, name entity recognition,
text classification, and other domains, e.g. (Radford et al., 2018; Devlin et al., 2019; Yang et al.,
2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020). As a result, pretraining has become a
common practice for modern neural network based NLP. One plausible explanation for the benefit of
pretraining is that the model can learn world knowledge by memorizing the contents of the natural
language corpus. This can be useful in various natural language downstream tasks, such as question
answering and text classification. However, there is another potential advantage of pre-training—it
may distill inductive biases into the model that are helpful for training on downstream tasks (Brown
et al., 2020; Warstadt and Bowman, 2020). We focus on the latter and design pre-training tasks that
are intentionally devoid of knowledge and only allow the model to learn inductive bias for reasoning.
Inspired by the logician Charles Peirce (Peirce, 1992), we believe that the following three primitives
are the most crucial for reasoning:
1. Deduction: the ability to deduce new truths from given facts and inference rules.
2. Induction: the ability to induce general inference rules from a set of known facts.
3. Abduction: the ability to explain the relationship between the evidences and inference rules.
-----
To endow the models with an inductive bias for mathematical reasoning, we design a synthetic task
for each of the three inductive biases. We hypothesize that the transformer networks are flexible
enough to learn strong inductive bias from the three synthetic reasoning tasks and consequently
improving the downstream tasks. Although such inductive bias may be useful in general reasoning
tasks (e.g., NLP tasks), in this work, we focus on mathematical reasoning benchmarks, for which
we expect to observe the largest gains. We call training on these tasks LIME – an acronym for
“Learning Inductive Bias for Mathematical rEasoning”. Note that there is only a limited amount
of pretraining data available for formal mathematical benchmarks, therefore the study of generic
pre-training techniques is particularly important for the success of machine learning in mathematical
reasoning.
We demonstrate that LIME pretrained models provide significant gains across three large mathematical reasoning benchmarks: IsarStep (Li et al., 2020), HOList Skip-tree (Rabe et al., 2020)
and MetaMathStep (Polu and Sutskever, 2020). Notably, on the IsarStep benchmark, pre-training
improved the top-1 accuracy from 20.4% to 26.9% and top-10 accuracy from 33.1% to 41.0%.
Compared to the traditional pre-training tasks, there are two major differences. First, we do not
load the input embeddings or the weights in the output layer for finetuning on downstream tasks.
This allows us to use the same pre-trained model for a variety of downstream tasks, which can
have vastly different vocabularies due to language or tokenization differences. Also, it prevents the
transfer of content knowledge from the pretraining to downstream tasks, supporting the evidence of
learning inductive biases. Furthermore, pretraining on synthetic tasks require only a fraction of the
computational cost of downstream tasks. With only about two hours of training on a single modern
GPU, one already obtains all the benefits, in contrast to days of training on a large natural language
corpus with hundreds of GPUs/TPUs.
Our method can also be regarded as a form of curriculum learning, in which the model is taught basic,
extremely generic but general skills before being trained on the specific problem domain.
To summarize, the contributions of the paper are:
1. Providing the first method to design inductive biases in the form of datasets for mathematical
reasoning.
2. Demonstrating significant improvements in the reasoning performance of transformer models on
three large mathematical reasoning benchmarks with negligible extra computation cost.
3. By showing how pretraining brings benefits other than learning content knowledge, disentangling
the study of its working mechanism.
2 RELATED WORK
**Learning Models Applied to Mathematics** There has been increasing interest in applying deep
learning methods to Interactive Theorem Provers (ITP) (Bansal et al.; 2019; Gauthier et al., 2020;
Huang et al., 2019; Yang and Deng, 2019; Wu et al., 2020; Li et al., 2020; Polu and Sutskever,
2020). The work that is most related to ours is GPT-f (Polu and Sutskever, 2020). The authors
performed pretraining on several natural language corpora and showed significant improvements for
an ITP system – MetaMath. Different from ours, they used GPT-style large-scale language modeling
pretraining, which dominates the computation cost compared to the downstream task. We, on the
other hand, propose pretraining on a few lightweight synthetic tasks costing only a minor fraction of
the computation spent on the downstream task.
Lample and Charton (2020) have demonstrated that transformer models can be used for symbolic
mathematics by successfully predicting the integrals of formulas from a randomly generated dataset.
Similar observations are made for logical problems relevant to verification: that transformer networks
can learn the semantics of logics (Hahn et al., 2020). Rabe et al. (2020) have shown that mathematical
reasoning can emerge from self-supervised training alone. Li et al. (2020) show that language models
can learn to synthesize missing high-level intermediate propositions given a local context. Piotrowski
and Urban (2020) used RNNs in automated theorem provers for first-order logic. Wang et al. (2020)
explored the use of machine translation to translate between synthetically generated natural language
descriptions of proofs and formally represented proofs. Urban and Jakub˚uv (2020) present initial
experiments on generating mathematical conjectures with a Transformer model.
-----
Saxton et al. (2019) suggest a dataset for the analysis of mathematical reasoning skills. In contrast to
the datasets considered here, their dataset is synthetic, focuses on calculation with concrete numbers,
and only contains relatively few symbolic tasks.
**Language Model Pretraining** The advent of the transformer architecture (Vaswani et al., 2017)
and the BERT style pretraining (Devlin et al., 2019) represented a huge improvement in the quality
of language modeling. Since then, an explosion of research activity in the area pushed the quality of
language models through better pretraining tasks. Where BERT (Devlin et al., 2019) masks out a fraction of the input tokens, later works demonstrated the advantages of masking out subsequences (Song
et al., 2019; Dong et al., 2019; Joshi et al., 2020; Raffel et al., 2020; Conneau and Lample, 2019) and
whole sentences (Zhang et al., 2020).
Besides the choice of pretraining tasks, the scale of language models is also an important factor.
Language models improve in quality and develop new abilities as they grow larger while trained on
the same data (Radford et al., 2018; Raffel et al., 2020; Brown et al., 2020).
**Inductive Biases in General** There have been works studying learning inductive biases in other
contexts. In particular, McCoy et al. (2020) studied whether one can learn linguistic inductive biases
on synthetic datasets via meta-learning. Papadimitriou and Jurafsky (2020) shows inductive biases
learned in music data can be useful for natural language. They further designed several synthetic tasks
and showed similar kind of improvements for natural language tasks. From a more theoretical point of
view, Xu et al. (2020) formalize an aspect of inductive (architectural) bias under the context of GNNs,
with a notation called architectural alignment. The architecture is aligned when the architecture
can perfectly simulates the ground truth solution. But their work is limited to showing alignment
in combinatorial problems, whose ground truth solutions are known. In contrast, our work tries to
learn architectural bias by relying on the flexible Transformer architecture and training on synthetic
datasets.
**Inductive Biases for Mathematics** Previous work studying inductive biases for logical reasoning
has focused on encoding bias in the neural architecture. Initial works focused on encoding the tree
structure of expressions using TreeRNNs (Evans et al., 2018). Graph neural networks are shown to
provide a much stronger performance than tree models in premise selection (Wang et al., 2017) and
theorem proving (Paliwal et al., 2020). GNNs also scale to larger formulas in SAT (Selsam et al.,
2019; Selsam and Bjørner, 2019; Han, 2020), QBF (Lederman et al., 2020), and #SAT (Vaezipoor
et al., 2020). Crouse et al. (2019) have shown that pooling mechanisms can have an impact on the
performance of GNNs on logical formulas as well. Closely related, Hellendoorn et al. (2020) have
shown that it can be helpful to hard-code the tree structure of programs in the attention mask of
transformers. Schlag et al. (2019) developed an architecture for encoding relational information
using tensor product representation for mathematical reasoning.
3 METHODS
In this section, we first discuss the primitives of reasoning, inspired by Peirce’s views, and design one
synthetic task for each reasoning primitive.
3.1 REASONING PRIMITIVES
In Peirce’s view, there are exactly three kinds of reasoning: deduction, abduction, and induction.
Deduction is known as the workhorse for mathematics. It is the process of deriving new facts by
applying logical inference rules to known facts or premises. On the other hand, abduction and
induction can be thought of as the inverses of deduction. If we call the premise used in deduction as
_Case, its logical rule as Rule, and its conclusion as Result, then abduction is equivalently the inference_
of a Case from a Rule and a Result, while induction may be said to be the inference of a Rule from a
Case and a Result. We summarize the three reasoning primitives in the following table:
|Reasoning Primitives|Inference Map|
|---|---|
|Deduction|Rule, Case Result →|
|Abduction|Rule, Result Case →|
|Induction|Case, Result Rule →|
-----
To give an example, we let Rule be “All the beans in this bag are white”, Case be “These beans are
from this bag”, and Result be “These beans are white”. Deduction is to derive the fact that these
beans are white (Re) from knowing all the beans from this bag are white (R) and these beans are from
this bag (C). Abduction explains why the beans are white (Re) from knowing that all the beans in the
bag are white (R) – because these beans must be from the bag (C). Lastly, induction aims to provide
a general principle to observing the fact that the beans are white (Re) and they come from this bag
(C), which is that all the beans in the bag must be white (R). We refer to Peirce (1992) and Bellucci
and Pietarinen (2015) for more elaborate discussions on the primitives of reasoning.
Mathematical reasoning exhibits nontrivial uses of these reasoning primitives. Deduction happens
when one needs to derive new valid statements from the given premise (Case) and theorems in
the library (Rule). Abduction is used to postulate conjectures from the known facts and theorems,
allowing one to decompose the challenging theorem into subgoals for proof. Induction, the ability
to extract general principles from known facts and theorems is also one of the major activities of
mathematical reasoning. It is used when one derives theorems from special cases and proposes new
definitions and general frameworks to encapsulate existing knowledge.
3.2 LIME SYNTHETIC TASKS FOR REASONING PRIMITIVES
We design three synthetic tasks inspired by the three reasoning primitives. As discussed in the
previous section, all of the reasoning primitives consist of three essential elements: Rule, Case, and
Result. Inspired by this, we first design a method to generate those elements. Once they are generated,
we can construct tasks that predict one element from the other two. In the following, we describe
one simple way to generate those three elements, though we acknowledge that there are many other
possible approaches.
We require two types of symbols: 1. math symbols, 2. rule symbols. In general, these symbols can
take any forms (e.g., integer representations). But for the ease of discussion, we will think of math
symbols as the union of those operators used in mathematics (e.g., “+ −∗ = ()&”) and lower case
letters (e.g., a, b, c . . . ), and rule symbols as upper case letters (e.g., A, B, C . . . ). We now construct
Rule, Case, and Result in order:
1. Rule is a randomly sampled string that consists of i) rule symbols and ii) math symbols. The
length of the string is randomly sampled from a range. For instance, a randomly sampled rule can
be: A ∗ _A + B = C with rule symbols A, B, and C._
2. Case is a dictionary that represents substitutions. For each rule symbol used in the Rule string, we
sample a random string of random length that consists of math symbols. This forms a dictionary,
whose keys are all rule symbols, and the values are the corresponding sampled string. To illustrate,
following the previous example, for each A, B and C, we sample a random string to form a
dictionary as: {A : a, B : b, C : d + e}.
3. Result is the outcome of the substitution. For each rule symbol in the Rule string, we replace it
with the corresponding value stored in the Case dictionary. This gives rise to the Result string. As
per the previous example, we now substitute A with a, B with b, and C with d + e into the Rule
string, generating the Result string: a ∗ _a + b = d + e._
After Rule, Case, and Result are generated, we can construct three tasks for deduction, abduction,
and induction respectively. We define the three synthetic tasks as follows:
_• Deduct: Source: Rule string and Case dictionary. Target: Result string._
_• Abduct: Source: Rule string and Result string. Target: Case dictionary._
_• Induct: Source: Case dictionary and Result string. Target: Rule string._
We also consider a task called Mix, which is a uniform mix of three tasks. Namely, during generation,
we randomly select a task and sample an example from that task. To formulate them as sequence to
sequence tasks, we represent the Case dictionary also as a string, e.g., “{A : a, B : b, C : d + e}”.
An example of Abduct using the examples of Rule, Case, and Result above is to predict the target
_{A : a, B : b, C : d + e} from the source A ∗_ _A + B = C <s> a ∗_ _a + b = d + e._
Pre-training on our synthetic tasks can be seen as a form of skip-component learning. There are
three essential components: Rule, Case and Result, and we skip one of them and use the remaining
-----
two elements to reconstruct the missing one. Past work has shown that learning to predict missing
words (Devlin et al., 2019), subsequences (Song et al., 2019; Raffel et al., 2020), or subtrees (Rabe
et al., 2020) are strong pre-training tasks.
3.3 SYMBOL-AGNOSTIC REPRESENTATION
In order to solve the synthetic tasks, the model needs to distinguish which set of symbols can be
substituted (rule symbols). As a result, the model may memorize information about the symbols that
is irrelevant to the inductive biases encoded in the task. To prevent such memorization, we propose a
way to make the synthetic tasks agnostic to the choice of symbols.
We first note that the choice of symbols is irrelevant to our synthetic tasks. To avoid symbol-specific
memorization, for each training and evaluation example, we randomly sample two sets of symbols
to be used in Rules and in the rest of the example. But for the Abduct task, the model needs to
know which symbols are replaced by the Rule part of the example and which symbols are in the
Result language. We simply list the split of the symbols used in the example at the beginning of
the input string, marked by two special symbols, <Rule> and <Math>. They are followed by the
original source string. The target string remains unchanged. For example, the previous example in
the Abduct task becomes,
Source: <Rule> A B C <Math> ∗ + = a b d e <s> A ∗ _A + B = C <s> a ∗_ _a + b = d + e_
Target: {A : a, B : b, C : d + e}
In our implementation, we use integers to represent symbols. Specifically, for each example, we
sample two disjoint sets of integers from the set {1, . . ., S} to represent the math symbols and the
rule symbols, where S is the size of the vocabulary. In our experiments, we sample 44 math symbols
and 24 rule symbols for each problem. The complete pseudo-code of generating the symbols, Rule,
Case, and Result for one task example is provided in Appendix Algorithm 1.
4 EXPERIMENTS
In this section, we present results on three large mathematical reasoning tasks that are especially
useful in the context of automated theorem proving. Our results show significant gains in learning
inductive biases from synthetic tasks. We have selected three tasks to cover three different styles of
interactive theorem provers: The HOL-Light (skip-tree) corpus was created from very high-level
tactic-based proofs, but it is less interpretable than IsarStep’s declarative style corpus. We also
evaluate the next proof-step prediction task on the set.mm library of MetaMath, which consists
of very granular, basic proof steps. Namely, the proof steps are more predicable and average proof
lengths have significantly increased.
4.1 EXPERIMENT DETAILS
**LIME Pretraining** We generate datasets of our synthetic tasks for pretraining: Deduct, Abduct,
Induct, Mix. For pretraining of IsarStep, we used a vocabulary size S of 1000. For the other
two downstream tasks, we used a vocabulary size of 100. The reason we used different vocabulary
sizes was that we found (cf. appendix) the discrepancy in vocabulary size affects the performance
of a downstream task if it has a very large vocabulary size (IsarStep has 28K). We use 44 math
symbols and 24 rule symbols. The length of the Rule string is sampled from 5 to 20, the length
of the string for each substitution (the values of Case dictionary) is sampled from 2 to 8. We used
word-level tokenization for all the tasks. We pretrained the model for 20K updates. For tasks with
larger vocabulary size (i.e., 1000), we found the learning became more difficult. Hence we used
a curriculum learning scheme: we first trained the model for 10K steps on the same task with a
vocabulary size of 100, then continue training for another 10K step on vocabulary size of 1000. The
pretraining was done on a single Nvidia Tesla T4 GPU with 4 CPU cores for 2 hours. We set the
maximum number of tokens in a batch to 4096, and accumulate four batches of gradients for one
parameter update. We used the Adam optimizer (Kingma and Ba, 2015) with learning rate 3 · 10[−][4].
We used a dropout rate of 0.1 and label smoothing (Szegedy et al., 2016) with a coefficient 0.1.
-----
Table 1: Test top-1, top-10 (%) accuracy on the IsarStep task.
Model Top-1 Acc. Top-10 Acc.
No pretrain (Li et al., 2020) 20.4 33.1
HAT (Li et al., 2020) 22.8 35.2
LIME Deduct 24.7 37.7
LIME Abduct 26.7 **41.0**
LIME Induct 23.9 38.8
LIME Mix **26.9** 40.4
Table 2: Test top-8 Accuracy on Skip-Tree HOList (%).
Model Equation completion Hard type inference Missing assumptions Easy type inference
No pretrain (Rabe et al., 2020) 46.3 95.0 41.8 95.9
LIME Deduct 50.3 94.8 **47.9** 97.0
LIME Abduct 48.4 94.8 46.1 96.3
LIME Induct 44.8 94.9 42.6 96.4
LIME Mix **51.7** **95.6** 46.1 **97.6**
**Fine-tuning** For all the downstream tasks in this section, when loading the pretrained models for
fine-tuning, we do not load in the vocabulary embeddings nor the output layer weights. For the
downstream task IsarStep and MetaMathStep, we used four Nvidia Tesla T4 GPU with 16 CPU
cores for training. We set the maximum number of tokens in a batch to 4096, and accumulated four
batches of gradients for one parameter update. We trained the model for 200K updates. We used the
Adam optimizer, and we searched over the learning rates {3 · 10[−][4], 7 · 10[−][4]}, and warmup steps
_{4000, 8000}. We used a dropout rate of 0.1 and label smoothing with a coefficient 0.1. For the_
HOList skip-tree task, we used TPUs for running the experiments. We used a batch size of 256
sequences and trained the model for 1 million updates.
**Architecture** All experiments used the transformer base model from Vaswani et al. (2017), i.e. 512
hidden size, 2048 filter size, 8 attention heads. For the IsarStep and MetaMathStep task, we used 6
layers for both the encoder and decoder, implemented using fairseq (Ott et al., 2019). For the HOList
skip-tree experiment, we used a somewhat modified transformer architecture with 8 encoder and 4
decoder layers of the same size as above in which the self-attention and attention over the encoder
output were merged.
**Evaluation** During training, we kept track of the best validation tokenized BLEU score [1], and we
used the model with validation BLEU for evaluation on the test set. We report top-1 and top-10
accuracies. We consider an output sequence as correct if it matches the target sequence exactly. We
performed a beam search with width 10. The top-1 accuracy is then defined as the percentage of
the best output sequences that are correct. The top-n accuracy is defined as the percentage of target
sequences appearing in the top n generated sequences.
4.2 ISARSTEP
The IsarStep task is taken from Li et al. (2020). IsarStep is a task of predicting the missing intermediate
propositions given surrounding propositions to bridge the gap between the goal and the current state
of the proof. The dataset was mined from the public repository of formal proofs of the Isabelle proof
assistant (Paulson, 1994). Unlike HOList and MetaMath, IsarStep contains mostly declarative proofs,
a proof style close to humans’ prose proofs. The dataset has a broad coverage of undergraduate and
research-level mathematics and computer science theorems. There are 820K, 5000, 5000 sequence
pairs for the training, validation, and test sets with a maximum of 800 tokens in source sequences and
200 tokens in the target sequences. Following Li et al. (2020), during training, we use 512 as the
[1https://github.com/pytorch/fairseq/blob/master/fairseq/tasks/](https://github.com/pytorch/fairseq/blob/master/fairseq/tasks/translation.py#L396)
[translation.py#L396](https://github.com/pytorch/fairseq/blob/master/fairseq/tasks/translation.py#L396)
-----
Table 3: Test top-1, top-10 (%) accuracy on the MetaMathStep task.
Model Top-1 Acc. Top-10 Acc.
No pretrain 67.7 76.5
LIME Deduct 68.8 77.4
LIME Abduct 68.8 76.1
LIME Induct **69.9** **78.0**
LIME Mix 69.1 77.9
maximum length for both the source and target, and truncated those that exceed the length to 512.
For reporting, we evaluate all 5000 test examples regardless of their lengths.
The results on the IsarStep task for four pretrained Validation BLEU: IsarStep
models and the baseline transformer model without 65
pretraining is shown in Table 1. We also include another baseline, HAT transformer introduced in Li 60
et al. (2020), which is a specially designed hierarchical transformer architecture tailored to this task. 55 LIME Deduct
We see the pretrained model achieved substantial LIME Induct
improvement over the model trained from scratch LIME Abduct
as well as HAT. Notably, the model that was pre- LIME Mix
trained on Abduct improved the top-10 accuracy
from 33.1% to 41.0%, for almost 8% absolute im- 50K 100K 150K 200K
provement. The model pretrained on Mix performed
Validation BLEU: IsarStep
65
60
55 LIME Deduct
LIME Induct
LIME Abduct
50
LIME Mix
No pretrain
45
50K 100K 150K 200K
Training steps
the best on top-1 accuracy, improving the baseline Figure 1: Validation BLEU along training on
by 6.5% accuracy. We also showed the validation the IsarStep task.
BLEU scores along training in Figure 1. We can see
that the pretrained models learned much faster than
the model trained from scratch. With around 50K steps of updates, the pretrained model already
obtained better BLEU scores than the best score achieved by the un-pretrained model. Moreover,
since the downstream task requires 200K steps of training with 4 GPUs, the amount of computation
spent on pretraining is only 2.5% of the downstream task, strongly demonstrating the efficiency of
the proposed pretraining method.
4.3 HOLIST SKIP-TREE
As the second mathematical reasoning benchmark we consider the HOList skip-tree evaluation
tasks by Rabe et al. (2020). These tasks include two variants of type inference, predicting under
which assumptions theorems hold, and completing equalities. All source expressions for these tasks
are taken from the validation set of the theorem database of the HOList proof logs (Bansal et al.).
The evaluations are done on a random sample of 1000 instances from the full evaluation sets. We
initialized the model parameters with the pretrained weights and then repeated the experiments
by Rabe et al. (2020). That is, we trained the models for up to 1M parameter updates on the training
set with batch size 256 and repeat the evaluation every 100K steps. In Table 2 we present the best
result from these 10 evaluation runs. We see a significant improvement in these reasoning tasks when
the models are initialized with the pretrained weights. Notably, on equation completion and missing
assumptions task, we improved the beam search (with width 8) exact match rate performance from
46.3% to 51.7% and 41.8% to 47.9%. Note that this is despite the amount of pretraining compute cost
being negligible: it takes less than 1 percent of the cost of the downstream task training. Pretraining
used 1/20 number of the update steps (50K vs 1M) with 8 (and 4) times smaller batches (pretraining
has much shorter sequence lengths, 128 vs. 1024 and 512, respectively).
4.4 METAMATHSTEP
Compared to other ITPs, MetaMath is a low-level proving system: each proof step makes only a
small step towards the goal. As such, each proof contains many more proof steps than in other ITPs:
with 37, 000 theorems in the human-written theorem library, there are around 3 million proof steps.
-----
Table 4: Comparisons to other pretraining tasks on IsarStep task.
Model Top-1 Acc. Top-10 Acc
No pretrain (Li et al., 2020) 20.4 33.1
LIME Mix 26.9 40.4
Pretrain on MetaMathStep 23.1 35.7
Pretrain on WMT En-De 17.2 30.3
We extract the proof steps and use them to construct a sequence-to-sequence task following Polu and
Sutskever (2020) (their proof step training objective).
In this task, the model is asked to generate PROOFSTEPS given a GOAL, namely, the GOAL
string is the source input, and PROOFSTEPS is the target output. We follow Polu and Sutskever
(2020) and use their string representation for the GOAL and the PROOFSTEPS. Instead of using
subword tokenization in Polu and Sutskever (2020), we use a character-level representation for our
task. Following Polu and Sutskever (2020), we split theorems into train/valid/test theorems of size
35K, 1K, 1K, and associate all proof steps of a theorem with that split. For each dataset, we filter
examples with lengths longer than 1024. This reduced the total number of proof steps to 1.4 million.
For validation and test set, we randomly sample 3000 examples out of 40K (after filtering) and
perform validation and test evaluations on them. In Table 3 we present the impact of pretraining
on our synthetic reasoning tasks on MetaMathStep. We also observe gains from pretraining on this
dataset, with the model trained on Induct task achieving 2.2% top-1 and 1.5% top-10 test accuracy
improvement. Similarly, as for the IsarStep task, the computation spent on pretraining is only 2.5%
of the downstream task.
5 ABLATION STUDIES
In this section, we perform ablation studies. Additional ablation studies can be found in Appendix C.
5.1 PRETRAINING ON FORMAL REASONING AND NATURAL LANGUAGE TASKS
Here we investigate how LIME compares to pretraining on natural language or existing formal
reasoning datasets. In this set of experiments, we pretrained three models on Mix, MetaMathStep,
and on the WMT 2016 English-to-Germany (WMT En-De) translation task, and then we fine-tuned
and evaluated these models on the IsarStep task. We pretrained the model on MetaMathStep and
WMT EN-DE for 200K steps with 4 GPUs, which is 40 times more computation spent than on LIME.
Due to the mismatch between vocabularies of the pretraining task and the downstream task, we do not
load the vocabulary embeddings nor output layer weights. The results in Table 4 show that pretraining
on MetaMathStep did provide gains, though significantly smaller than gains provided by LIME Mix,
despite their 40 times higher computational cost. Moreover, pre-training on WMT translation had
even a negative effect on the performance. We also conducted an analogous experiment with an
evaluation on the MetaMathStep, which we present in Appendix C.
5.2 DO WE NEED VOCABULARY EMBEDDINGS FOR FINE-TUNING?
As mentioned earlier, we did not load in the vocabulary embeddings from the pretrained models
when we switched to fine-tuning on downstream tasks. Even without loading the vocab embeddings,
the pretrained models still improved the performance. In this ablation study, we investigate how
much this decision has affected the results and whether vocabulary embeddings can help improve the
performance even further. We performed the comparisons on IsarStep. The task contains a token
vocabulary of size 28336. We generated new synthetic tasks for the same vocabulary size, such that
we can load the vocabulary embeddings and output layers when initializing the model for IsarStep.
Table 5 shows that this led to similar performance. This aligns with our expectation that the model
should not learn content specific knowledge that is potentially stored in the vocabulary. These weights
turn out to be non-essential for the final performance, supporting the evidence that the transformer
learns inductive biases from the pretraining task.
-----
Table 5: Whether one needs to load vocabulary embeddings and output layer weights on IsarStep
tasks.
Model Top-1 Acc. Top-10 Acc
No pretrain (Li et al., 2020) 20.4 33.1
LIME Mix 26.9 40.4
LIME Mix + Loading All Weights 26.7 40.6
6 DOES LIME ENCODE INDUCTION, DEDUCTION AND ABDUCTION?
Although LIME has shown to achieve substantial improvements across various benchmarks, it is not
entirely clear that the specific synthetic tasks necessarily enforce the reasoning ability of induction,
deduction and abduction. We would like to note that deduction, induction, and abduction are highlevel and philosophical concepts, and serve only as an inspiration for us to design the synthetic tasks.
We do not expect the model will necessarily learn exactly these three capabilities. After all, we have
chosen a particular implementation of "Case", "Rule" and "Result". Furthermore, we also design
tasks mimic proof steps in formal theorem proving (see the rewrite task in Appendix B.1), which also
achieved excellent results. Nevertheless, we believe LIME is a first step towards building reasoning
inductive biases, and provides many inspirations and directions for future work.
7 CONCLUSION
In this work, we encoded inductive biases for mathematical reasoning in the form of datasets. We
created three synthetic tasks inspired by three reasoning primitives of deduction, induction, and
abduction. We demonstrated that pretraining on these tasks (LIME) significantly improved the
performances across three mathematical reasoning benchmarks. Notably, LIME requires negligible
computation compared to the downstream task, unlike being the dominating factor in previous
pretraining methods. Our work naturally poses many future research questions. Could the primitive
tasks provide similar gains for NLP tasks? Are there similar primitive tasks for natural language
reasoning? We also look forward to disentangling the effects of pretraining between learning content
knowledge and inductive bias for all downstream tasks to better understand pre-training.
-----
REFERENCES
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. HOList: An
Environment for Machine Learning of Higher Order Logic Theorem Proving. In 36th International
_Conference on Machine Learning, ICML 2019, Long Beach, California, USA, June 9-15, 2019._
[URL http://proceedings.mlr.press/v97/bansal19a.html.](http://proceedings.mlr.press/v97/bansal19a.html)
Kshitij Bansal, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, and Viktor Toman. Learning to
Reason in Large Theories without Imitation. arXiv preprint arXiv:1905.10501, 2019.
Francesco Bellucci and Ahti-Veikko Pietarinen. Charles Sanders Peirce: Logic. In The Internet
_[Encyclopedia of Philosophy, 2015. URL https://iep.utm.edu/peir-log/.](https://iep.utm.edu/peir-log/)_
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler,
Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott
Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya
Sutskever, and Dario Amodei. Language models are few-shot learners. CoRR, abs/2005.14165,
[2020. URL https://arxiv.org/abs/2005.14165.](https://arxiv.org/abs/2005.14165)
Alexis Conneau and Guillaume Lample. Cross-lingual Language Model Pretraining. In Ad_vances in Neural Information Processing Systems, NeurIPS 2019, Vancouver, BC, Canada, De-_
_[cember 8-14, 2019, pages 7057–7067, 2019. URL http://papers.nips.cc/paper/](http://papers.nips.cc/paper/8928-cross-lingual-language-model-pretraining)_
[8928-cross-lingual-language-model-pretraining.](http://papers.nips.cc/paper/8928-cross-lingual-language-model-pretraining)
Maxwell Crouse, Ibrahim Abdelaziz, Cristina Cornelio, Veronika Thost, Lingfei Wu, Kenneth Forbus,
and Achille Fokoue. Improving Graph Neural Network Representations of Logical Formulae with
Subgraph Pooling. arXiv preprint arXiv:1911.06904, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of
deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and
Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter
_of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT_
_2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–_
4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL
[https://doi.org/10.18653/v1/n19-1423.](https://doi.org/10.18653/v1/n19-1423)
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou,
and Hsiao-Wuen Hon. Unified Language Model Pre-training for Natural Language Understanding
and Generation. In Advances in Neural Information Processing Systems, NeurIPS 2019, Vancouver,
_BC, Canada, December 8-14, 2019, pages 13063–13075, 2019._
Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. Can Neural
Networks Understand Logical Entailment? In International Conference on Learning Representa_[tions, 2018. URL https://openreview.net/forum?id=SkZxCk-0Z.](https://openreview.net/forum?id=SkZxCk-0Z)_
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. TacticToe:
Learning to Prove with Tactics. Journal of Automated Reasoning, pages 1–30, 2020.
Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, and Bernd Finkbeiner.
Transformers Generalize to the Semantics of Logics. arXiv preprint arXiv:2003.04218, 2020.
Jesse Michael Han. Enhancing SAT solvers with glue variable predictions. _arXiv preprint_
_arXiv:2007.02559, 2020._
Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. Global
relational models of source code. In 8th International Conference on Learning Representations,
_[ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https:](https://openreview.net/forum?id=B1lnbRNtwr)_
[//openreview.net/forum?id=B1lnbRNtwr.](https://openreview.net/forum?id=B1lnbRNtwr)
Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural computation, 9(8):
1735–1780, 1997.
-----
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A learning environment
for theorem proving. In 7th International Conference on Learning Representations, ICLR 2019,
_[New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.](https://openreview.net/forum?id=r1xwKoR9Y7)_
[net/forum?id=r1xwKoR9Y7.](https://openreview.net/forum?id=r1xwKoR9Y7)
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. Spanbert:
Improving pre-training by representing and predicting spans. Transactions of the Association
_[for Computational Linguistics, 8:64–77, 2020. doi: 10.1162/tacl\_a\_00300. URL https:](https://doi.org/10.1162/tacl_a_00300)_
[//doi.org/10.1162/tacl_a_00300.](https://doi.org/10.1162/tacl_a_00300)
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua
Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations,
_ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL_
[http://arxiv.org/abs/1412.6980.](http://arxiv.org/abs/1412.6980)
Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In 8th
_International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia,_
_[April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=Ske31kBtPr)_
[Ske31kBtPr.](https://openreview.net/forum?id=Ske31kBtPr)
Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. Object recognition with gradientbased learning. In Shape, Contour and Grouping in Computer Vision, page 319, Berlin, Heidelberg,
1999. Springer-Verlag. ISBN 3540667229.
Gil Lederman, Markus Rabe, Sanjit Seshia, and Edward A Lee. Learning heuristics for quantified
boolean formulas through reinforcement learning. In International Conference on Learning
_Representations, 2020._
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Modelling High-Level Mathematical
Reasoning in Mechanised Declarative Proofs. arXiv preprint arXiv:2006.09265, 2020.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike
Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining
[approach. CoRR, abs/1907.11692, 2019. URL http://arxiv.org/abs/1907.11692.](http://arxiv.org/abs/1907.11692)
Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based
neural machine translation. In Lluís Màrquez, Chris Callison-Burch, Jian Su, Daniele Pighin,
and Yuval Marton, editors, Proceedings of the 2015 Conference on Empirical Methods in Natural
_Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–_
1421. The Association for Computational Linguistics, 2015. doi: 10.18653/v1/d15-1166. URL
[https://doi.org/10.18653/v1/d15-1166.](https://doi.org/10.18653/v1/d15-1166)
R. Thomas McCoy, E. Grant, P. Smolensky, T. Griffiths, and Tal Linzen. Universal linguistic inductive
biases via meta-learning. Proceedings of CogSci, abs/2006.16324, 2020.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier,
and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Waleed Ammar,
Annie Louis, and Nasrin Mostafazadeh, editors, Proceedings of the 2019 Conference of the
_North American Chapter of the Association for Computational Linguistics: Human Language_
_Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations, pages_
48–53. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-4009. URL
[https://doi.org/10.18653/v1/n19-4009.](https://doi.org/10.18653/v1/n19-4009)
Aditya Paliwal, Sarah M. Loos, Markus N. Rabe, Kshitij Bansal, and Christian Szegedy. Graph
representations for higher-order logic and theorem proving. In The Thirty-Fourth AAAI Conference
_on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intel-_
_ligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial_
_Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 2967–2974. AAAI Press,_
[2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/5689.](https://aaai.org/ojs/index.php/AAAI/article/view/5689)
Isabel Papadimitriou and Dan Jurafsky. Learning Music Helps You Read: Using transfer to study linguistic structure in language models. In Proceedings of the 2020 Conference on Empirical Methods
-----
_in Natural Language Processing (EMNLP), pages 6829–6839, Online, November 2020. Associa-_
[tion for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.](https://www.aclweb.org/anthology/2020.emnlp-main.554)
[emnlp-main.554.](https://www.aclweb.org/anthology/2020.emnlp-main.554)
Charles Sanders Peirce. Reasoning and the logic of things: The Cambridge conferences lectures of
_1898. Harvard University Press, 1992._
Bartosz Piotrowski and Josef Urban. Guiding Inferences in Connection Tableau by Recurrent Neural
Networks. In Christoph Benzmüller and Bruce Miller, editors, Intelligent Computer Mathematics,
pages 309–314, Cham, 2020. Springer International Publishing. ISBN 978-3-030-53518-6.
Stanislas Polu and Ilya Sutskever. Generative Language Modeling for Automated Theorem Proving.
_[CoRR, abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393.](https://arxiv.org/abs/2009.03393)_
Markus N. Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical Reasoning via
Self-supervised Skip-tree Training. arXiv preprint arXiv:2006.04757, 2020.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.
Language models are unsupervised multitask learners. In OpenAI Blog, 2018. URL
[https://d4mucfpksywv.cloudfront.net/better-language-models/](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
[language_models_are_unsupervised_multitask_learners.pdf.](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to[Text Transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. URL http://jmlr.org/](http://jmlr.org/papers/v21/20-074.html)
[papers/v21/20-074.html.](http://jmlr.org/papers/v21/20-074.html)
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In Proceedings of International Conference on Learning
_Representations (ICLR), 2019._
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The
graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
Imanol Schlag, Paul Smolensky, Roland Fernandez, Nebojsa Jojic, Jürgen Schmidhuber, and Jianfeng
Gao. Enhancing the transformer with explicit relational encoding for math problem solving. CoRR,
[abs/1910.06611, 2019. URL http://arxiv.org/abs/1910.06611.](http://arxiv.org/abs/1910.06611)
Daniel Selsam and Nikolaj Bjørner. Guiding High-Performance SAT solvers with Unsat-Core
Predictions. In International Conference on Theory and Applications of Satisfiability Testing,
pages 336–353. Springer, 2019.
Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill.
Learning a SAT solver from single-bit supervision. In 7th International Conference on Learning
_Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL_
[https://openreview.net/forum?id=HJMC_iA5tm.](https://openreview.net/forum?id=HJMC_iA5tm)
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: masked sequence to sequence
pre-training for language generation. In 36th International Conference on Machine Learning, ICML
_[2019, Long Beach, California, USA, June 9-15, 2019, 2019. URL http://proceedings.](http://proceedings.mlr.press/v97/song19d.html)_
[mlr.press/v97/song19d.html.](http://proceedings.mlr.press/v97/song19d.html)
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking
the inception architecture for computer vision. In Proceedings of the IEEE conference on computer
_vision and pattern recognition, pages 2818–2826, 2016._
Josef Urban and Jan Jakub˚uv. First Neural Conjecturing Datasets and Experiments. In Christoph
Benzmüller and Bruce Miller, editors, Intelligent Computer Mathematics, pages 315–323, Cham,
2020. Springer International Publishing. ISBN 978-3-030-53518-6.
Pashootan Vaezipoor, Gil Lederman, Yuhuai Wu, Chris J. Maddison, Roger B. Grosse, Edward A.
Lee, Sanjit A. Seshia, and Fahiem Bacchus. Learning Branching Heuristics for Propositional Model
[Counting. CoRR, abs/2007.03204, 2020. URL https://arxiv.org/abs/2007.03204.](https://arxiv.org/abs/2007.03204)
-----
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser, and Illia Polosukhin. Attention is All you Need. In Proceedings of Advances in Neural
_Information Processing Systems (NeurIPS), 2017._
Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding. In Advances in Neural Information Processing Systems, pages 2786–2796,
2017.
Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine translation in autoformalization of mathematics in mizar. Proceedings of ACM SIGPLAN
_International Conference on Certified Programs and Proofs, 2020._
Alex Warstadt and Samuel R. Bowman. Can neural networks acquire a structural bias from raw
linguistic data? Proceedings of CogSci, 2020.
Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Grosse. INT: An Inequality Benchmark for Evaluating
Generalization in Theorem Proving. arXiv preprint arXiv:2007.02924, 2020.
Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka.
What can neural networks reason about? In ICLR 2020, 2020.
Kaiyu Yang and Jia Deng. Learning to Prove Theorems via Interacting with Proof Assistants. In
_Proceedings of International Conference on Machine Learning (ICML), 2019._
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le.
Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural
_Information Processing Systems, NeurIPS 2019, Vancouver, BC, Canada, December 8-14, 2019,_
2019.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. PEGASUS: pre-training with
extracted gap-sentences for abstractive summarization. In 37th International Conference on
_Machine Learning, ICML 2020, Vienna, Austria, 2020, volume 119. PMLR, 2020._
-----
APPENDIX A SYNTHETIC TASK GENERATION PSEUDOCODE
**Algorithm 1**
1: function GENERATE_TUPLE( Vocabulary size S)
2: Vocabulary V ←{1, 2, . . ., S}. _▷_ Use an integer representation of symbols.
3: Math symbol set M ← SAMPLE(V, n=44, replacement=False). _▷_ Sample 44 distinct symbols.
4: Rule symbol set R ← SAMPLE(V\M, n=20, replacement=False). _▷_ Sample 20 distinct symbols.
5: Rule R ← SAMPLE(M _R, n=RANDOM(5,20), replacement=False)._ _▷_ Sample a sequence of
symbols of length between 5 and 20.
6: Case dictionary C [S].
_←{}_
7: **for s in R do**
8: Case dictionary C[s] ← SAMPLE(M, n=RANDOM(2,8), replacement=True). ▷ Sample a sequence
of symbols for each rule symbol, of length of length between 2 and 8.
9: **end for**
10: Result R[′] _←_ Rule R. _▷_ Set result string R[′] to be the same as rule string R.
11: **for s in R do**
12: SUBSTITUTE(R[′], s, C[s]). _▷_ Substitute every rule symbol s in result string R[′] with previously
randomly sampled string C[s].
13: **end for**
14: **return Math symbol set M, Rule symbol set R, Rule R, Case C, Result R[′].**
15: end function
APPENDIX B OTHER SYNTHETIC TASKS
In this section, we give descriptions of other variants of the synthetic tasks we considered than the
ones introduced in the main paper.
APPENDIX B.1 REWRITE AND RE W R I T E_M U L T I S T E P
We propose a rewrite task, inspired by the rewrite tactic used in interactive theorem provers. The
Rewrite task requires the model to rewrite a string according to a rule transformation. One example
of the task is:
Source: a + b − _c <s> A + B = B + A_
Target: b + a − _c_
“A + B = B + A“ is the rule transformation, which is applied to the LHS string “a + b − _c”. The_
model needs to predict the RHS string as the result of the rule application, i.e., b + a − _c. Besides rule_
symbols and math symbols, we also require the third set of symbols, named as "string symbols". For
the ease of our discussion, we we will think of math symbols as the union of those operators used in
mathematics (e.g., “+ −∗ = ()&”), rule symbols as upper case letters (e.g., A, B, C . . . ), and string
symbols as lower case letters (e.g., a, b, c ...). We first sample a random string as the LHS string,
consisting of math symbols and string symbols (e.g., a + b − _c). We sample a sub-string of the LHS_
string, and replace the string symbols in the sub-string with rule symbols. For example, we sample
and obtain the substring a + b from a + b − _c, and we replace a, b with rule symbols A, B. This then_
forms the LHS of the rule transformation, A + B, with the substitution dictionary {A : a, B : b}. We
then sample the RHS of the rule transformation from the union of rule symbols A and B, and all math
symbols, e.g., B + A. This gives the rule transformation A + B = B + A. We substitute the value
of the substitution dictionary for each rule symbol in the RHS rule, and then substitute back to the
original LHS string to obtain b + a − _c. The task example is constructed by using the LHS string and_
the rule transformation as the source input, and use the result of the rule transformation as the target.
We further introduce a multi-step version of the rewrite task: Rewrite_multistep. In this task,
the source may contain more than one rewrite rule, and the target is the result of applying all the
rewrite rules in a sequence. This task is motivated from the need to perform multi-step planning in
-----
Table 6: Test top-1, top-10 (%) accuracy on the IsarStep task.
Model Top-1 Acc. Top-10 Acc.
No pretrain (Li et al., 2020) 20.4 33.1
HAT (Li et al., 2020) 22.8 35.2
LIME Deduct 24.7 37.7
LIME Abduct 26.7 41.0
LIME Induct 23.9 38.8
LIME Mix 26.9 40.4
LIME Rewrite 26.0 38.6
LIME Rewrite_multistep **28.6** **43.9**
LIME Induct_v2 25.6 39.8
LIME Induct_v3 25.0 38.8
LIME Induct_rewrite 25.8 39.5
mathematical reasoning tasks. During pre-training, for each training example, we uniformly sample
the number of rewrite steps from 1 to 5.
APPENDIX B.2 OTHER VARIANTS OF IN D U C T TASK
We introduce three other variants of the Induct task.
1. Induct_v2: We move the Case dictionary from the source input to the target output. This
makes the task significantly harder, which requires the agent to synthesize a rule and a
possible explanation (Case) to explain the Result.
2. Induct_v3: Instead of providing the Case dictionary, we provide two Result strings,
coming from the same Rule. Namely, we sample two Case dictionaries, and applying each
to the Rule string to obtain two Result strings. Both Result strings are used as source, and
the target is the Rule string.
3. Induct_rewrite: We also create a “induction” version of the Rewrite task. In this
task, the source is the LHS string concatenated with the RHS string, that is the result of the
rewrite. The target is the rewrite rule that is used to do the rewrite.
APPENDIX B.3 A FULL COMPARISON OF ALL SYNTHETIC TASKS
In this section we present a full comparison for all synthetic tasks. We followed the training protocol
in 4.1 and evaluate the method on IsarStep. The results are reported in Table 6. We can see that
the Rewrite_multistep achieved the best performance across all synthetic tasks, surpassing the
baseline by 8.2% for Top-1 accuracy and 10.8% for Top-10 accuracy. This indicates the inductive
bias for long horizon reasoning encoded in Rewrite_multistep is very useful for the reasoning
task.
APPENDIX C MORE ABLATION STUDIES
APPENDIX C.1 DOES THE VOCABULARY SIZE MATTER?
In this section, we investigate whether the vocabulary size S in the synthetic task generation algorithm
has an effect on the performance. We used the REWRITE task for the experiment in this section.
We generated datasets of various vocabulary sizes, 100, 512, 1000, 5000, 25000. We used the same
curriculum learning for pre-training as described in 4.1 on larger vocabulary sizes: first training on
the Rewrite task of vocabulary size 100 for 10K steps, then training on each individual dataset
for another 10K steps. We compare the performance on the downstream task Isarstep. The results
are presented in Table 7. We see that when the vocabulary size is equal or larger than 512, the
performance were similar. The smallest vocabulary size 100 obtained the worst performance among
all, and all the other four models achieved similar BLEU scores. The model trained on the largest
vocabulary achieved best performance on top-1 accuracy and top-10 accuracy. The results show
there is a non-trivial effect of the vocabulary size of the synthetic task to the performance of the
-----
downstream task. Hence we use vocabulary size of 1000 for all the experiments in the main paper.
We leave investigations of the causes to future work.
Table 7: Vocabulary sizes’ effects on the IsarStep task.
Model Top-1 Acc. Top-10 Acc
No pretrain 20.4 33.1
LIME on Rewrite, S = 100 24.1 37.5
LIME on Rewrite, S = 512 25.4 38.8
LIME on Rewrite, S = 1000 26.0 38.6
LIME on Rewrite, S = 5000 25.8 38.5
LIME on Rewrite, S = 25000 27.4 40.9
APPENDIX C.2 PRE-TRAINING ON ISARSTEP FOR METAMATHSTEP
Following Section 5.1, we performed pre-training on IsarStep for MetaMathStep. The result is shown
in Table 8. In contrast to MetaMath helping IsarStep, we see that pretraining on IsarStep task did not
help the downstream task MetaMathStep. We hypothesize that this could be due to MetaMathStep
task is closer to the LIME tasks than IsarStep, and hence providing more gains than the opposite
direction. We leave investigations to the future versions.
Table 8: Pretraining on IsarStep for the MetaMathStep task.
Model Top-1 Acc. Top-10 Acc.
No pretrain 67.7 76.5
LIME Mix 69.1 77.9
Pretrain on IsarStep 67.0 76.1
APPENDIX C.3 DOES LIME HELP LSTMS?
In this section, we investigate if LIME also helps other architectures than transformers. In particular,
we applied LIME to two LSTM based architectures: 1. vanilla LSTM, 2. LSTM with attention
mechanism. The vanilla LSTM is a stacking LSTM with 4 layers, each with 1000 cells, and 1000dimensional embeddings. The LSTM with attention architecture is taken from Luong et al. (2015),
also with 4 layers, 1000 cells and 1000-dimensional embeddings. We evaluate on the IsarStep task,
and compared a model trained from scratch and a model pre-trained on LIME abduct task. We
used the same training protocol as described in 4.1. The results are shown in Table 9, along with the
results on transformer. We observe that LIME improved LSTM as well as LSTM with attention, but
the improvements were small compared to transformer. Specifically, if we compare Top-1 accuracy,
we can see that LIME improved LSTM from 5.5% to 6.9%, LSTM with attention from 12.3% to
13.4%, and transformer from 20.4% to 26.7%. This observation is aligned with our hypothesis that
the transformer is a malleable architecture and hence it is capable of learning architectural inductive
biases from datasets. This is mainly attributed to the potential of learning dynamic attention graphs
in self-attention layers. We note that this still warrants further investigation as the performance of
these architectures are not at the same level, and that may also lead to different improvements.
Table 9: Comparing LIME’s benefits on LSTMs on the IsarStep Task
Model Top-1 Acc. Top-10 Acc.
LSTM 5.5 11.3
LSTM + LIME Abduct 6.9 14.3
LSTM + attention 12.3 22.7
LSTM + attention + LIME Abduct 13.4 26.3
Transformer 20.4 33.1
Transformer + LIME Abduct 26.7 41.0
-----
| [
"Wenda, Li",
"Yuhuai, Wu",
"Jimmy, Ba",
"Markus N., Rabe",
"Christian, Szegedy",
"Roger B., Grosse"
] | 2021-01-01T00:00:00 | ICML 2021 | true | 47 | 8 | [
"Isabelle",
"MetaMath",
"HOL Light",
"Lean"
] | https://arxiv.org/abs/2101.06223 | https://arxiv.org/abs/2101.06223 | https://www.semanticscholar.org/paper/58c74cec28f3416b9a1d308bb2d6519d21d53ab0 |
Solving Math Word Problems via Cooperative Reasoning induced Language Models | Large-scale pre-trained language models (PLMs) bring new opportunities to challenging problems, especially those that need high-level intelligence, such as the math word problem (MWPs). However, directly applying existing PLMs to MWPs can fail as the generation process lacks sufficient supervision and thus lacks fast adaptivity as humans. We notice that human reasoning has a dual reasoning framework that consists of an immediate reaction system (system 1) and a delicate reasoning system (system 2), where the entire reasoning is determined by their interaction. This inspires us to develop a cooperative reasoning-induced PLM for solving MWPs, called Cooperative Reasoning (CoRe), resulting in a human-like reasoning architecture with system 1 as the generator and system 2 as the verifier. In our approach, the generator is responsible for generating reasoning paths, and the verifiers are used to supervise the evaluation in order to obtain reliable feedback for the generator. We evaluate our CoRe framework on several mathematical reasoning datasets and achieve decent improvement over state-of-the-art methods, up to 9.6% increase over best baselines. | This work develops a cooperative reasoning-induced PLM for solving MWPs, called Cooperative Reasoning (CoRe), resulting in a human-like reasoning architecture with system 1 as the generator and system 2 as the verifier, which achieves decent improvement over state-of-the-art methods. | ## Solving Math Word Problems via Cooperative Reasoning induced Language Models
**Xinyu Zhu[♢∗]** **Junjie Wang[♠∗]** **Lin Zhang[♡]** **Yuxiang Zhang[♠]**
**Ruyi Gan[♡]** **Jiaxing Zhang[♡]** **Yujiu Yang[♢†]**
_♢Tsinghua University_ _♠Waseda University_
_♡International Digital Economy Academy_
[email protected] [email protected]
[email protected] [email protected]
{zhanglin, ganruyi, zhangjiaxing}@idea.edu.cn
**Abstract**
Large-scale pre-trained language models
(PLMs) bring new opportunities to challenging problems, especially those that need highlevel intelligence, such as the math word problem (MWPs). However, directly applying existing PLMs to MWPs can fail as the generation
process lacks sufficient supervision and thus
lacks fast adaptivity as humans. We notice that
human reasoning has a dual reasoning framework that consists of an immediate reaction
system (system 1) and a delicate reasoning system (system 2), where the entire reasoning is
determined by their interaction. This inspires
us to develop a cooperative reasoning-induced
PLM for solving MWPs, called Cooperative
**Reasoning (CoRe), resulting in a human-like**
reasoning architecture with system 1 as the generator and system 2 as the verifier. In our approach, the generator is responsible for generating reasoning paths, and the verifiers are used
to supervise the evaluation in order to obtain
reliable feedback for the generator. We evaluate
our CoRe framework on several mathematical
reasoning datasets and achieve decent improvement over state-of-the-art methods, up to 9.6%
increase over best baselines.[1]
(a) Prompt-based Methods (zero-shot/few-shot)
Reasoning Path
Large-scale
Prompt … Answer
PLMs
Reasoning Path
Chain-of-Thought: 1 path;
Self-consistency: multiple paths.
(b) Dual Process Theory (System 1&2) (few-shot/finetuning)
If few-shot: Reasoning Path
Prompt Large-scale PLMs … VerifiersRules / Answer
Reasoning Path
(c) CoRe (ours) (zero-shot)
Generators
Cooperative Training
Data Cooperative Inference Unseen
Self-Thinking datasets
Verifiers
Figure 1: Comparing our CoRe with popular methods
in mathematical logic reasoning tasks.
2022c; Li et al., 2022; Brown et al., 2020). Although appealing in terms of efficiency, its success
relies on memorizing patterns with a sufficiently
large number of parameters (≥ 100 billion) (Wei
et al., 2022b), differentiating it from the fast adaptivity in the human reasoning process.
Active disciplines like neuroscience and cognitive science attempt to uncover the mechanism of
human reasoning, and agree that our learning process is governed by an interaction mechanism, often referred to as System 1 and System 2 (Evans
2003; Kahneman, 2011). In particular, System 1
offers fast responses like human instinct, and Sys
tem 2 performs deliberate reasoning. Interactions
between them are important for adapting to a continuously changing environment. PLMs behave
more like System 1, according to the above theory,
and thus lack the generalization ability in reasoning (Nye et al., 2021).
In this work, we explore a new line of zero-shot
math problem reasoning, using a human reasoning
**1** **Introduction**
Addressing math problems is a hallmark of human
intelligence, which allows reasoning and adapting
from limited data. We want neural models to be
able to do the same, however, quick and flexible
reasoning is challenging to current neural models
as they must possess a certain level of prior experience from a limited amount of new data while
avoiding overfitting. The rapid growth of largescale Pre-trained Language Models (PLMs) offers
unprecedented potential for this issue, often relying on well-designed trigger prompts (Wei et al.,
*Equal contribution.
†Corresponding Author.
[1Our codes are available at https://github.com/](https://github.com/TianHongZXY/CoRe)
[TianHongZXY/CoRe](https://github.com/TianHongZXY/CoRe)
-----
alike framework with feedback in the solution generation loop as opposed to pure PLM-based methods, called Cooperative Reasoning (CoRe). Intuitively, System 1 and System 2 are embodied as
generators and verifiers, respectively, and they are
defined as follows: generators for generating reasoning paths, and verifiers for supervising the paths’
evaluation. Specifically, we train a LM beyond the
question-answer paradigm by integrating in-theloop reasoning, i.e., we let the LM output both
the answer and the corresponding reasoning process for a given question. Meanwhile, we introduce
two types of verifiers, including token-level and
sentence-level, allowing us to provide feedback
in the whole solution generation lifecycle. Notice
that the solution path is generated by selecting candidate tokens with some probability so that it is
tree-alike and much coincides with the tree search
process of Monte Carlo Tree Search (MCTS) (Kocsis and Szepesvári, 2006). With this in mind, the
verifiers can score tokens along the solution generation process from start to end when using the
MCTS. Therefore, we can use the score to evaluate
the quality of the generation process during inferring before finalizing the solution, making timely
feedback available for supervising the generation
process. With this, the evaluation goes beyond the
quality of the final result at the granularity of each
reasoning step, extending the supervision from the
solution level to the path level. We combine the solution score and the perplexity of its corresponding
reasoning path to encourage the overall training
towards high-quality augmented solutions while
aligning with the reliable reasoning process, aiming to improve generalization ability.
Our experimentally evaluate CoRe on multiple mathematical reasoning datasets in both zeroshot and fine-tuning settings. CoRe consistently
achieves better performance than competing baselines. Notably, CoRe has up to 9.6% improvements
on MultiArith over SoTA baselines, which are
dozens of times larger than our model.
In summary, our contributions are as follows.
- We propose a novel reasoning method
for mathematical problem solving, called
**Cooperative Reasoning (CoRe), that intro-**
duces feedback in the loop during solution
generation as opposed to the sequential learning process in the previous ones, resulting in
the first method for this task that builds on
top of the learning mechanism in the human
brain.
- We develop a self-thinking strategy for further
boosting reasoning ability with generated data
from the cooperation between System 1 and
System 2.
- We demonstrate the superiority of CoRe comparing to other zero-shot and fine-tuning methods, which has 9.6% improvements on MultiArith over SoTA baselines.
**2** **Related Work**
**2.1** **Dual Process System**
Dual-process theory (Evans, 2003; Kahneman,
2011) argues there are two cognitive systems underpinning human reasoning: System 1 and System
2. The purpose of clarifying these systems is that
they have the potential to help us construct artificial intelligence systems that benefit from human
flexibility and methodical generalization.
Dual process system model guidance is not
new. Nye et al. (2021) simulated Systems 1 and
2 to improve consistency and coherence of neural
networks. Similar to several studies Cobbe et al.
(2021); Li et al. (2022); Scialom et al. (2021), in
addition to System 1 for the generation, we develop
a distinct model as System 2, called Verifier. The
Verifier checks the feasibility and correctness of
the generator’s content and collaboratively solves
the reasoning task together.
**2.2** **Multi-step Reasoning**
Many works exploit the multi-step reasoning ability of language models. Cobbe et al. (2021) showed
that training a verifier to score the solutions generated by a fine-tuned GPT-3 could improve the
performance compared to solely fine-tuning a GPT3. Nye et al. (2022) discovered that asking the
language model to write the intermediate process
could achieve better results on various NLP tasks.
Likewise, Chain-of-Thought (CoT) prompts (Wei
et al., 2022c) prepended exemplars with intermediate reasoning steps as prompts and achieved SoTA
on several reasoning benchmarks by using largescale PLMs. Wang et al. (2022) further boosted
CoT’s performance by sampling a bunch of possible solutions and then obtained the final answer by
majority voting. DIVERSE (Li et al., 2022) proved
diverse CoT prompts and an extra verifier were
both helpful for PLMs to solve reasoning problems.
Kojima et al. (2022) found that by simply adding
“Let’s think step by step” after the question. PLMs
-----
could successfully step by step solve the problems,
called Zero-shot-CoT.
These above methods rely on extremely large
language models, resulting in high computational cost and time-consuming. Moreover, several
works (Wei et al., 2022c; Kojima et al., 2022) point
out that neither CoT nor Zero-shot-CoT is helpful to smaller models. While our method does not
necessarily require extremely large PLMs and can
work with models with different size scales, thus reducing computational cost and inference time. Our
approach has competitive zero-shot performance
thanks to the efficient and collaborative application
of a dual-process system.
**3** **Cooperative Reasoning**
In this section, we will present the proposed cooperative reasoning framework, CoRe, that enforces System 1 and System 2 mutually cooperating, which includes 3 sequential steps: cooperative
training, cooperative inference, and self-thinking.
**3.1** **Preparation**
As discussed in Sec. 1, we expect a PLM (G) to
fast generate multiple reasoning paths like System
1. Then, considering that System 2 is responsible
for deliberate evaluations of the reasoning paths,
we employ two modules: a step verifier (Vstep) for
reasoning steps, and a path verifier (Vpath) for reasoning paths.
**3.2** **Cooperative Training**
Before applying System 1&2 to inference, a critical issue for them is learn how to generate rea_soning paths and evaluate reasoning steps/paths._
Inspired by a widely-used training strategy for reasoners (Cobbe et al., 2021), we present a cooperative training method as shown in Fig. 2 Step 1.
Moreover, we discuss hyper-parameter configurations and extra training details in Appendix B.1
and Appendix B.2.
**Step 1.1: We first fine-tune G on a dataset D =**
_{(qi, pi, gti)}i[N]=1_ [consisting of][ N][ samples. Each]
sample x is composed of a question q, a reasoning
path p and a ground truth answer gt. We fine-tuen
_G with standard language modeling objective_ _LM_
_L_
as Eq. (1).
As a result, we obtain a new dataset D[+] =
_i=1,...,N_
_qi, rpi,j, ai,j_ _j=1,...,M_ [with][ M][ generated rea-]
soning paths (rp) and answers (a) for each q.
**Step 1.3: Different from the popular methods, we**
train two verifiers to model human reasoning procedure with deliberate analysis for each step and
the whole path. To evaluate several reasoning steps
in a path, we desire a token-level scorer, which is
named step verifier Vstep. Therefore, we fine-tune a
PLM with two tasks jointly: 1) the language modeling task mentioned before; 2) the verification task
to predict a score for each token in the solution.
The verification loss LV S is calculated as the Mean
Squared Error (MSE) of the predicted score with
respect to the label as follows:
_|rp|+|a|_
_i=1_ (scorei − I(a == gt))[2], (2)
X
_LV S =_
where, (rp, a) from D[+] and gt with same q from
_D._
On the other hand, we need a path-level scorer
for reasoning paths. Different from step verifier, we
simply extract an overall presentation of the reasoning path for prediction. Specifically, we employ
a BERT-like model and take the [CLS] token to
calculate MSE loss LV P similar to LV S.
In summary, the overall training objective for
verifiers is given by:
_LV = LV S + LLM + LV P ._ (3)
**3.3** **Cooperative Inference**
After obtaining a generator and two verifiers, we
propose cooperative inference to generate solutions
for unseen questions. Instead of treating verifiers
as voters, we argue that verifiers should offer appropriate guidance and feedback during the reasoning process. Therefore, we integrate a cooperative
search algorithm. In particular, we adopt the popular Monte Carlo Tree Search (MCTS) (Kocsis and
Szepesvári, 2006) to enable controlled reasoning.
The cooperative inference starts from the root node,
which preserves question tokens. We detail the cooperative inference process as follows.
**Selection. If the current node has children, with**
50% probability, we select a node from its children
with the modified PUCT formula (Czech et al.,
2021) as Eq. (4),
_|p|+|gt|_
_i=1_ log P (xi | x<i) (1)
X
_LLM = −_
_b_ _[N]_ [(][s, b][)]
_∈C_
), (4)
qP1 + N (s, n)
**Step 1.2: Once G has learned how to generate**
solutions, we employ it on questions q from D.
_n[∗]_ = arg max(R(n)+cpuctπ(n _s)_
_n∈C_ _|_
-----
Step 1: Cooperative Training Step 2: Cooperative inference by Trained System 1&2
System 1&2 D: {Q} Reasoning Step RP with scores
D: {Q, P, GT} **Step 1.1 Fine-tuning**
({Q, P, GT}) 💡System2
🤔System 1 🤔System 1 (Vpath)
(Generator) Generating (Generator)
Reasoning Steps ({Q})
**Step 1.2 Generating** Scoring
Reasoning Paths ({Q}) Reasoning Paths
Generated 💡System2
D[+]: {Q, RP, A} {Q, RP, A} (Vstep) Scoring
Reasoning Steps
💡System2
**Step 1.3 Fine-tuning** (Vstep & Vpath) D: {Q, P, GT} D𝑛𝑒𝑤: {Q, RP, A, S} D𝑛𝑒𝑤: {Q, RP, A}
({Q, RP, A})
**Step 2**
**Step 1**
A: Generated Answers
GT: Ground Truth
S: Scores
P: GT Reasoning PathRP: Generated Reasoning Path Step 3: Self-Thinking Filtering Merge (𝐷𝑛𝑒𝑤, 𝐷)
Figure 2: Cooperative reasoning framework.
**Algorithm 1 Self-Thinking**
**Input: Generator G; Step verifier Vstep; Path veri-**
fier Vpath; Dataset D.
1: Combine generator and verifiers with a cooperative search algorithm.
2: repeat
3: Generate a new dataset Dnew from input
questions.
4: Filter Dnew.
5: Merge Dnew with D in Step 1.
6: Do Step 1.
7: Do Step 2.
8: until performance is saturated.
It is challenging to fine-tune models on the data
synthesized by themselves, which indicates they
have to be very confident in the content they generate. A proper self-training method can enhance
the robustness of the whole system and allow deep
data mining. Therefore, we introduce self-thinking
as described in Fig. 2 Step 3 and Algorithm 1. Considering the noise contained in generated data, we
build a filter by using scores from verifiers and perplexity (PPL) from the generator. In detail, we select high-quality reasoning paths by setting a score
threshold. Moreover, we only keep the reasoning
paths with no higher PPL than the ground truth
solutions. After filtering, we merge Dnew with D
and send it to Step 1. Once the several iterations
are completed, we obtain a powerful System 1&2.
More details can be found in Appendix B.3.
**3.5** **Zero-shot Inference**
We simply perform cooperative inference as Fig. 2
Step 2 with trained System 1&2 on unseen datasets.
where the state s represents the sequence consisting of all tokens in the current search path. And,
_N_ (s, n) means the times that node n has been selected in state s. Reward R(n) records all the scores
received from the backup. We perform selection
again with the selected node as the current node.
Otherwise, we perform expansion once and choose
the returned new node as current node.
**Expansion. During expansion, the generator is re-**
quired to generate a sequence of tokens based on
the current state. A new node is created to store the
generated tokens and added to the current node’s
children. Then, Vstep evaluates the current reasoning path and predict a score scorestep. Finally, the
new node is returned.
**Roll-Out. After selection and expansion, we start**
from the current node and let the generator complete the reasoning path until it meets [EOS] token or reaches the max token length limit. Next,
_Vpath evaluates the whole reasoning path and pro-_
duces a score scorepath. Remember that Vstep also
provides a score scorestep during the expansion.
Therefore to leverage both scores, we introduce a
hyper-parameter α to adjust their contributions to
the node’s reward,
_s = scorepath + α × scorestep_ (5)
where s is the final score that each node receives
by the backup.
**Backup. We update the rewards back from the**
current node to the root node. The scores produced
by verifiers are added to R(n) and the visited time
_N_ (s, n) is increased by 1.
**3.4** **Self-Thinking**
-----
After obtaining several reasoning paths with scores,
we arrive at the final answer by weighted voting
based on scores following (Li et al., 2022).
**4** **Experiments**
**4.1** **Experimental Setup**
**4.1.1** **Datasets**
We consider several widely-used math word problem datasets: GSM8K (Cobbe et al., 2021), ASDivA (Miao et al., 2020), SingleOp (Roy et al., 2015),
SinlgeEq (Koncel-Kedziorski et al., 2015) and
MultiArith (Roy and Roth, 2015). (Details in Appendix A). Following the general setting as in (Kojima et al., 2022; Wei et al., 2022c), we employ
accuracy as the evaluation metric for all datasets.
**4.1.2** **Baselines**
For comparison under the zero-shot setting, the results of Instruct GPT-3 (175B) and PaLM (540B)
with their various methods are from Kojima et al.
(2022). The zero-shot[∗] and zero-shot-CoT[∗] imply not the standard prompt (see details in Appendix B.4). We also provide our generator as a
baseline when compared to previous fine-tuning
methods. Regarding to sampling multiple solutions,
we search 40 paths with the same setting as SelfConsistency (Wang et al., 2022).
For GSM8K, we select various powerful PLMs
enhanced by the chain of thought prompt as baselines, including LaMDA (137B) (Thoppilan et al.,
2022), GPT-3 (175B) (Brown et al., 2020) and
PaLM (540B) (Chowdhery et al., 2022). Except
for the few-shot methods, we also include a finetuned baseline that applies two GPT-3 (175B), one
as the generator and the other as verifier (Cobbe
et al., 2021).
**4.1.3** **Implementation Details**
Since cooperative training requires a highquality dataset with reasoning paths, we treat
GSM8K (Cobbe et al., 2021) as the seed dataset
_D in Sec. 3.2. Unless otherwise, we employ GPT-_
J (Wang and Komatsuzaki, 2021) as the generator
and the step verifier, DeBERTa-large (He et al.,
2021) as the path verifier. Since the default setting
consists of two GPT-J (6B) and a DeBERTa-large
(0.4B), we note our backbone as “GPT-J 12B”,
which implies around 12.4 billion parameters in
total. During generation, we apply calculator as
assistant following Cobbe et al. (2021). We run all
the experiments for 3 times and report the best re
|Backbone Method|SingleEq MultiArith|
|---|---|
|Instruct GPT-3 175B zero-shot zero-shot∗ zero-shot-CoT zero-shot-CoT∗ PaLM 540B zero-shot zero-shot-CoT + Self-Consistency|74.6 17.7 78.7 22.7 78.0 78.7 78.7 79.3 - 25.5 - 66.1 - 89.0|
|GPT-J 12B CoRe (ours)|79.5 97.5|
Table 1: Zero-shot results on SingleEq and MultiArith.
sult, detailed hyper-parameters setting can be found
in Appendix B.1. Our zero-shot setting is similar
to the transferring setting in T0 (Sanh et al., 2022)
and FLAN (Wei et al., 2022a). All the training and
testing procedures are done on a DGX station with
8 A100 GPUs.
**4.2** **Main Results**
**4.2.1** **Zero-shot Results**
Table 1 presents main results on two mathematical reasoning datasets, demonstrating the zero-shot
generalization ability. CoRe achieves superior performance on both datasets, demonstrating its capability of mathematical reasoning on unseen datasets.
Note that the baselines are several dozen times
larger than ours and still underperform our model.
The improvement might be explained by two potential reasons. One is that applying the CoRe framework on PLMs can activate their reasoning ability,
even though their scales are small (≤ 100B). Another one is that self-thinking can provide valuable
self-produced data to teach Systems 1&2. Therefore, the results present the effectiveness of cooperative working with System 1&2 and self-thinking.
**4.2.2** **Zero-shot v.s. Fine-tuning**
We compare CoRe with previous fine-tuned SoTA
baselines on four datasets, and results are presented
in Table 2. To show the importance of cooperative
reasoning, we apply our generator as a baseline.
The results demonstrate that without any guidance
generator underperforms previous methods on most
datasets. Despite the gain from self-consistency, it
still lags behind other fine-tuned SoTAs. While
after applying our method CoRe, it surpasses previous fine-tuned SoTAs on all datasets in a zero-shot
setting. The results clearly demonstrate the capability of CoRe to greatly boost PLMs’ reasoning
ability.
-----
|Backbone Method|ASDiv-A SingleOp SingleEq MultiArith|
|---|---|
|Fine-tune Previous SoTA|75.3a 80.1b 72.3c 60.5d|
|---|---|
|Zero-shot GPT-J 6B Generator only + Self-Consistency GPT-J 12B CoRe (ours)|51.7 53.2 49.2 77.3 63.7 59.6 60.2 92.3 90.5 85.2 79.5 97.5|
|---|---|
Table 2: Zero-shot results v.s. previous fine-tuned SoTA results on math reasoning tasks. The previous SoTA
baselines are obtained from:a: (Lan et al., 2022), b: LogicForm (Liang et al., 2016), c: UNITDEP (Roy and Roth,
2017), d: Relevance and LCA operation classifier (Roy and Roth, 2015). The best scores are in bold.
|Backbone Method|GSM8K|
|---|---|
|few-shot LaMDA 137B CoT + Self-Consistency GPT-3 175B CoT + Self-Consistency PaLM 540B CoT + Self-Consistency|17.1 27.7 49.6 - 56.5 74.4|
|---|---|
|fine-tune GPT-3 350B - GPT-J 12B CoRe (ours)|57.0 63.2|
|---|---|
Table 3: Fine-tuning v.s. Few-shot results on GSM8K
with various PLMs. Results are reported from Cobbe
et al. (2021); Wei et al. (2022c); Wang et al. (2022). The
best score is in bold and the second is underlined.
**4.2.3** **GSM8K Results**
Beyond improvements on zero-shot results, we observe that the fine-tuning setting can benefit a lot
from our CoRe framework, as shown in Table 3.
Compared to previous fine-tuned SoTA (Cobbe
et al., 2021) (GPT-3 350B), CoRe outperforms it
with much fewer parameters, computation and inference time. Note that it samples 100 solutions for
each question while we only search 40 paths.
For a comprehensive comparison, we include
few-shot results with large-scale PLMs due to a
limited number of “fine-tune” competitors. With regard to few-shot methods applied on large-scale
PLMs (≥ 100B parameters), CoRe only underperforms PaLM-540B strengthened by chain of
thought prompt and self-consistency, further proving the effectiveness of our method.
**4.3** **Ablation Study**
**4.3.1** **Is guidance important during path**
**searching reasoning?**
We argued that it is important to introduce guidance in the loop during reasoning path searching.
To validate this argument, we adjust the weight of
|Guidance α|SingleOp MultiArith|
|---|---|
|w/o verifiers - V only 0 path V + V 0.1 path step 1|59.6 92.3 80.2 95.8 81.3 95.8 82.9 96.8|
|---|---|
Table 4: Zero-shot results with different levels of guidance from verifiers. α comes from Eq. (5).
reward provided by verifiers during reasoning. The
experiments are conducted using models without
self-thinking. Table 4 summarizes the performance
on zero-shot datasets with different settings of guidance. For “w/o verifiers”, the solutions are predicted by a generator only and applied with “SelfConsistency”. As demonstrated in Table 4, guidance from Vpath can provide performance gains on
SingleOp, with a 20.6% absolute improvement. We
further incorporate the guidance from the step-level
verifier Vstep. As described in Eq. (5), increasing
the weight of reward (α) from Vstep, CoRe achieves
a higher accuracy on both SingleOp and MultiArith.
Thanks to the feedback and guidance during the reasoning stage, the generator tends to explore more
often on a path with a higher reward score. As a
result, CoRe increases the accuracy on SingleOP
from 59.6% to 82.9% and MultiArith from 92.3%
to 96.8%.
**4.3.2** **How much does self-thinking boost the**
**reasoning ability of a language model?**
To examine the effect of self-thinking, we explore
it along with two axes: 1) the number of iterations
and 2) the type of search strategy. Since we apply
the self-thinking procedure on the GSM8K dataset,
we investigate the performance of models under
different settings on GSM8K, as shown in Table 5.
First, increasing the number of iterations can always improve the performance for both greedy decode and self-consistency. Our CoRe reaches sat
-----
AsDiv-A
10 15 20 25 30 35 40
SingleOp
10 15 20 25 30 35 40
SingleEq
10 15 20 25 30 35 40
MultiArith
10 15 20 25 30 35 40
100
80
60
40
20
100
80
60
40
20
100
80
60
40
20
100
80
60
40
20
Number of reasoning paths
Self-Consistency CoRe
Number of reasoning paths
Self-Consistency CoRe
Number of reasoning paths
Self-Consistency CoRe
Number of reasoning paths
Self-Consistency CoRe
Figure 3: Zero-shot results with different search strategies in cooperative inference.
|# of iterations|0 1 2|
|---|---|
|Generator only (Greedy) Generator + Self-Consistency CoRe|29.9 34.7 34.9 42.0 43.1 45.9 60.0 63.2 61.6|
|---|---|
Table 5: Results on GSM8K with models undergone a
different number of self-thinking iterations. Outcomes
of various search strategies are provided.
|# of iterations Generator Verifiers|SingleOp MultiArith|
|---|---|
|0 0 0 1 1 0 1 1|82.9 96.8 85.2 97.5 81.9 96.3 83.3 97.2|
|---|---|
Table 6: Zero-shot results with a different number of
self-thinking iterations for generator and verifiers respectively.
uration in one round, which might be attributed to
the fact that System 1&2 learns better and faster on
self-generated data by collaborative working. Second, regardless of the search strategy, self-thinking
consistently boost the model’s performance, which
verifies that self-thinking boost language model’s
reasoning ability.
**4.3.3** **Do self-thinking generalize to other**
**datasets?**
**4.3.4** **How performance varies as the number**
**of search iterations for different search**
**strategies changes?**
As shown in Fig. 3, accuracy on 4 datasets consistently increases along with the growth of search
iterations for both search strategies. However, the
scaling curves of self-consistency and CoRe are
quite different. The performance gain quickly saturates with self-consistency. Sampling 40 paths can
not further improve the accuracy, while the scaling
curve of CoRe is much sharper. Due to the heuristic algorithm that requires the model to continue
exploring on the previously generated paths, CoRe
starts from a relatively lower level in the beginning, whereas the accuracy quickly improves as the
number of search iterations increases. The result
demonstrates the effectiveness of CoRe in searching reasoning paths, with a fast growth curve and a
slow saturation rate.
We have performed self-thinking on GSM8K and
proved that it improves the model’s reasoning ability in 4.3.2. Furthermore, we explore whether the
improvement on GSM8K comes at the cost of performance degradation on other datasets, i.e. the
model overfits the dataset. As presented in Table 6,
we vary the number of self-thinking iterations for
the generator and verifiers respectively and provide
results on SingleOp and MultiArith. The results
show that the performance of the generator suffers
a little, but verifiers can eliminate this undesirable
effect and benefit a lot from self-thinking. The best
results are obtained when only the verifiers are
further fine-tuned, with the 2.3% and 0.7% absolute improvement on the two datasets respectively.
This observation implies that we can economize on
training costs and time where target datasets are
not included in self-thinking data.
**4.4** **Case studies**
**4.4.1** **Improvements from CoRe**
A typical exemplar from GSM8K is presented in Table 7. Greedy decode fails to find a reasonable path
due to the limited exploration in the output space.
In contrast, self-consistency samples multiple reasoning paths randomly, resulting in a richer candidate set. Although it finds some right solutions
occasionally, without any guidance, it fails to explore more frequently on the high-quality paths,
-----
|Question|Sophia and Rose went together to the market to buy onions and potatoes. Rose bought 4 times the number of onions and potatoes Sophia bought. If Rose bought 12 onions and 4 potatoes, how many onions and potatoes in total did Sophia buy at the market?|Score|
|---|---|---|
|Greedy Decode|Rose bought 12 onions * 4 = 12 * 4 = 48 onions. Rose bought 4 potatoes * 4 = 4 * 4 = 16 potatoes. Sophia and Rose bought 48 onions + 16 potatoes = 64 onions and potatoes in total. [ANS] 64.|−|
|---|---|---|
|Most ap- propriate SC Path|If Rose bought 12 onions at the market, then Sophia bought 12 / 4 = 3 onions. If Rose bought 4 potatoes at the market, then Sophia bought 4 / 4 = 1 potato. Since Rose bought 12 onions and 4 potatoes, Sophia bought 3 + 1 = 4 onions and potatoes in total. [ANS] 4.|−|
|---|---|---|
|Most con- sistent SC Path|Rose bought 4 x 48 = 192 potatoes. In total, Sophia bought 192 + 48 = 240 onions and potatoes at the market. [ANS] 240.|−|
|---|---|---|
|CoRe Path 1|Since Rose bought 12 onions, this means there were 12 / 4 = 3 onions that Sophia bought. The number of potatoes that Sophia bought is 4, meaning there were 4 / 4 = 1 potato that Sophia bought. The total number of onions and potatoes that Sophia bought is 3 + 1 = 4. [ANS] 4.|0.994|
|---|---|---|
|CoRe Path 2|The number of onions Rose bought is 12 onions / 4 = 3. Sophia bought 4 / 12 = 1 / 3 of the number of onions Rose bought. The number of potatoes Rose bought is 4 potatoes / 4 = 1.33 of the number of potatoes Rose bought is 4 potatoes / 12 onions = 0.33. The total number of onions and potatoes Rose bought at the market is 3 onions + 1 potatoes = 4. [ANS] 4.|0.363|
|---|---|---|
|CoRe Path 3|Let x be the number of onions Sophia bought. Rose bought 4 * x onions. The total number of onions and potatoes Rose bought is 4 * x + x = 12. 5 * x = 12. x = 4. Sophia bought 4 onions. [ANS] 4.|0.023|
|---|---|---|
Table 7: GSM8K samples generated through different searching methods. SC refers to Self-Consistency. The
erroneous parts of each path are highlighted in red and the corresponding correct parts within other paths are blue.
|Question|Gunter is trying to count the jelly beans in a jar. He asks his friends how many they think are in the jar. One says 80. Another says 20 more than half the first one. A third says 25% more than the first one. What is their average guess?|
|---|---|
|No Self-Thinking|Half the first guess is 80 / 2 = 40. The third guess is 40 * .25 = 10. Their total guesses are 80 + 20 + 40 + 10 = 150. Their average guess is 150 / 4 = 37.5. [ANS] 37.5.|
|---|---|
|With Self-Thinking|Half the first friend’s guess is 80 / 2 = 40. The second friend’s guess is 40 + 20 = 60. The third friend’s guess is 80 * 1.25 = 100. Their total guesses are 80 + 60 + 100 = 240. Their average guess is 240 / 3 = 80. [ANS] 80.|
|---|---|
Table 8: An example of GSM8K, model with self-thinking reasoned correctly, while the non-self-thinking model
generated a wrong reasoning path and therefore failed.
**5** **Discussion**
Although we only fine-tune the language model
on GSM8K due to the scarcity of QA datasets
annotated with intermediate rationales, zero-shot
results on several arithmetic datasets prove that
basic reasoning capability is transferable across
datasets within the same domain. This observation
implies that when it comes to a new domain, we
only need to collect a limited number of questionanswer pairs with reasoning paths, model’s reasoning ability can generalize to other unseen datasets
and can be further strengthened by our approach
CoRe according to the experimental results.
**6** **Conclusions**
In this work, we mimic the dual system of human
cognition to develop an effective reasoning framework for solving the math word problems. The
proposed approach is consisting of two ingredients:
the generator as System 1 and the verifiers as System 2, and overall reasoning is conducted based on
their mutual reinforcement. From the robustness
and generalization aspects, CoRe activates superior
reasoning ability of LMs, and thus outperforms
PLMs that are dozens of times larger.
thus ending up with a wrong answer obtained by
majority voting as shown in the fourth row.
As a comparison, results generated by CoRe are
listed with their scores. Similar to random sampling, the reasoning paths might be partially illogical, even though the final answers happen to be
correct. Despite this challenge, CoRe is capable of
distinguishing those poor-quality paths from the
superior ones thanks to the verifiers. Adhering to
the philosophy of cooperative reasoning we have
emphasized, the verifiers managed to harness the
generator throughout the reasoning procedure with
the help of MCTS. Therefore, CoRe enjoys not
only the advantage of having a diverse candidate
set, but also the merit of being wiser and efficient
during reasoning path searching.
**4.4.2** **Improvements from Self-Thinking**
Table 8 shows an example that the vanilla model
failed to solve the given question, whereas after the
self-thinking, the model rectified the faulty parts
and successfully addressed it. This displays that
self-thinking boosts language models’ inner reasoning ability regardless of the search strategy, which
is also proved in Sec. 4.3.2.
-----
**Limitations**
The outcome on multiple datasets verifies the powerful reasoning ability, which even works on models with only several billion parameters. However, our self-thinking procedure utilizes only one
dataset, GSM8K, and the available training set
size is only 7.5K. The main reason is the scarcity
of high-quality datasets with rich reasoning paths.
And, collecting such data incurs huge computation
costs and expensive human resources. Another limitation is that we have not conducted experiments on
bigger language models, such as GPT-3 and PaLM,
due to the expensive usage costs and the fact of no
open-source codes. In a nutshell, in the future, we
will focus on collecting more high-quality labeled
data and exploring our method on more powerful
language models.
**Ethics Statement**
In this work, our CoRe shows impressive reasoning capability, however, it also comes with social
risks. Here, we summarize three possible ethical
impacts: i) PLMs with bias, ii) generated data with
social stereotypes and iii) problematic data environments. Considering utilizing PLMs as backbones, several works present various potential risks
in PLMs (Lucy and Bamman, 2021; Amin and
Kabir, 2022). Fortunately, our method supports the
replacement of different PLMs. Therefore, we encourage deploying some risk-free PLMs, expecting
to reduce the potential ethical risks. Furthermore,
once deploying harmful PLMs, the self-thinking
process might generate several undesired data and
those data are fed into language models, which
deepens the bias and causes unintended social
impacts. For reducing the aforementioned cases,
we suggest recording generated sentences. In realworld applications, a good choice is to monitor
generated content and then hand them over for human review. In addition to the two risks posed by
PLMs, the data in downstream tasks is of great
concern. In particular, private data might cause unpredictable influence because of their nature as a
non-open source. Therefore, we believe that a data
cleaning workflow is necessary to mitigate potential risks, such as PrivateClean (Krishnan et al.,
2016). Finally, we encourage open debating about
its utilization for increasing transparency and reducing the potential for misuse.
**Acknowledgements**
This work was partly supported by the National
Key Research and Development Program of China
(No. 2020YFB1708200), the "Graph Neural Network Project" of Ping An Technology (Shenzhen)
Co., Ltd. and the Shenzhen Science and Technology Program (JCYJ20220818101001004).
**References**
Akhter Al Amin and Kazi Sinthia Kabir. 2022. A disability lens towards biases in GPT-3 generated openended languages. CoRR, abs/2206.11993.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In
_NeurIPS._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat,
Sunipa Dev, Henryk Michalewski, Xavier Garcia,
Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
Barret Zoph, Alexander Spiridonov, Ryan Sepassi,
David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira,
Rewon Child, Oleksandr Polozov, Katherine Lee,
Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark
Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov,
and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. CoRR, abs/2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Jacob Hilton, Reiichiro Nakano, Christopher Hesse,
and John Schulman. 2021. Training verifiers to solve
math word problems. CoRR, abs/2110.14168.
Johannes Czech, Patrick Korus, and Kristian Kersting.
2021. Improving alphazero using monte-carlo graph
search. In ICAPS, pages 103–111. AAAI Press.
-----
Jonathan St.B.T. Evans. 2003. [In two minds: dual-](https://doi.org/https://doi.org/10.1016/j.tics.2003.08.012)
[process accounts of reasoning. Trends in Cognitive](https://doi.org/https://doi.org/10.1016/j.tics.2003.08.012)
_Sciences, 7(10):454–459._
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and
Weizhu Chen. 2021. Deberta: decoding-enhanced
bert with disentangled attention. In ICLR. OpenReview.net.
Daniel Kahneman. 2011. Thinking, fast and slow. Farrar, Straus and Giroux.
Levente Kocsis and Csaba Szepesvári. 2006. Bandit
based monte-carlo planning. In ECML, volume 4212
of Lecture Notes in Computer Science, pages 282–
293. Springer.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. _CoRR,_
abs/2205.11916.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
[2015. Parsing algebraic word problems into equa-](https://doi.org/10.1162/tacl_a_00160)
[tions. Transactions of the Association for Computa-](https://doi.org/10.1162/tacl_a_00160)
_tional Linguistics, 3:585–597._
Sanjay Krishnan, Jiannan Wang, Michael J. Franklin,
Ken Goldberg, and Tim Kraska. 2016. Privateclean:
Data cleaning and differential privacy. In SIGMOD
_Conference, pages 937–951. ACM._
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan,
Bing Tian Dai, Yan Wang, Dongxiang Zhang, and
Ee-Peng Lim. 2022. Mwptoolkit: An open-source
framework for deep learning-based math word problem solvers. In AAAI, pages 13188–13190. AAAI
Press.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2022. On the](https://doi.org/10.48550/arXiv.2206.02336)
[advance of making language models better reasoners.](https://doi.org/10.48550/arXiv.2206.02336)
_CoRR, abs/2206.02336._
Chao-Chun Liang, Kuang-Yi Hsu, Chien-Tsung Huang,
Chung-Min Li, Shen-Yu Miao, and Keh-Yih Su. 2016.
A tag-based statistical english math word problem
solver with understanding, reasoning and explanation.
In IJCAI, pages 4254–4255. IJCAI/AAAI Press.
[Li Lucy and David Bamman. 2021. Gender and rep-](https://doi.org/10.18653/v1/2021.nuse-1.5)
[resentation bias in GPT-3 generated stories. In Pro-](https://doi.org/10.18653/v1/2021.nuse-1.5)
_ceedings of the Third Workshop on Narrative Un-_
_derstanding, pages 48–55, Virtual. Association for_
Computational Linguistics.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and developing](https://doi.org/10.18653/v1/2020.acl-main.92)
[English math word problem solvers. In Proceedings](https://doi.org/10.18653/v1/2020.acl-main.92)
_of the 58th Annual Meeting of the Association for_
_Computational Linguistics, pages 975–984, Online._
Association for Computational Linguistics.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, Charles Sutton, and Augustus Odena.
[2022. Show your work: Scratchpads for interme-](https://openreview.net/forum?id=HBlx2idbkbq)
[diate computation with language models. In Deep](https://openreview.net/forum?id=HBlx2idbkbq)
_Learning for Code Workshop._
Maxwell I. Nye, Michael Henry Tessler, Joshua B.
Tenenbaum, and Brenden M. Lake. 2021. Improving
coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. In
_NeurIPS, pages 25192–25204._
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/d15-1202)
[metic word problems. In Proceedings of the 2015](https://doi.org/10.18653/v1/d15-1202)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, EMNLP 2015, Lisbon, Portugal,_
_September 17-21, 2015, pages 1743–1752. The As-_
sociation for Computational Linguistics.
Subhro Roy and Dan Roth. 2017. Unit dependency
graph and its application to arithmetic word problem
solving. In AAAI, pages 3082–3088. AAAI Press.
[Subhro Roy, Tim Vieira, and Dan Roth. 2015. Rea-](https://doi.org/10.1162/tacl_a_00118)
[soning about quantities in natural language. Trans.](https://doi.org/10.1162/tacl_a_00118)
_Assoc. Comput. Linguistics, 3:1–13._
Victor Sanh, Albert Webson, Colin Raffel, Stephen
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey,
M Saiful Bari, Canwen Xu, Urmish Thakker,
Shanya Sharma Sharma, Eliza Szczechla, Taewoon
Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti
Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han
Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong,
Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan
Teehan, Teven Le Scao, Stella Biderman, Leo Gao,
Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In ICLR. OpenReview.net.
Thomas Scialom, Paul-Alexis Dray, Jacopo Staiano, Sylvain Lamprier, and Benjamin Piwowarski. 2021. To
beam or not to beam: That is a question of cooperation for language gans. In NeurIPS, pages 26585–
26597.
Romal Thoppilan, Daniel De Freitas, Jamie Hall,
Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du,
YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng,
Amin Ghafouri, Marcelo Menegali, Yanping Huang,
Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao
Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts,
Maarten Bosma, Yanqi Zhou, Chung-Ching Chang,
Igor Krivokon, Will Rusch, Marc Pickett, Kathleen S.
Meier-Hellstern, Meredith Ringel Morris, Tulsee
Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran,
Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora
-----
Aroyo, Ravi Rajakumar, Alena Butryna, Matthew
Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed H. Chi, and
Quoc Le. 2022. Lamda: Language models for dialog
applications. CoRR, abs/2201.08239.
Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Lan[guage Model. https://github.com/kingoflolz/](https://github.com/kingoflolz/mesh-transformer-jax)
[mesh-transformer-jax.](https://github.com/kingoflolz/mesh-transformer-jax)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le,
Ed H. Chi, and Denny Zhou. 2022. Self-consistency
improves chain of thought reasoning in language
models. CoRR, abs/2203.11171.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin
Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned
language models are zero-shot learners. In ICLR.
OpenReview.net.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy
Liang, Jeff Dean, and William Fedus. 2022b. Emergent abilities of large language models. _CoRR,_
abs/2206.07682.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022c.
Chain of thought prompting elicits reasoning in large
language models. CoRR, abs/2201.11903.
-----
**A** **Dataset Details**
The mathematical reasoning datasets with details
are as follows (Detailed description of the statistics
in Table 9). We follow the licenses for their papers.
The dataset in fine-tuning:
**GSM8K (Cobbe et al., 2021) is a high-quality**
dataset with reasoning paths. It consists of 8.8K
grade school math problems created by human writers, which are divided into a train set (7.5K) and a
test set (1.3K). The reasoning paths include 2 to 8
steps with considering basic arithmetic operations.
Furthermore, we conduct cooperative training and
self-thinking on its training set.
The datasets in zero-shot inference:
**ASDiv-A (Miao et al., 2020) includes diverse math**
word problems, which are required to answer a
number for each question.
**SingleOP (Roy et al., 2015) is proposed with ele-**
mentary math problems of a single operation.
**SingleEq (Koncel-Kedziorski et al., 2015) is con-**
strued with both single-step and multi-step math
problems from mixed sources.
**MultiArith (Roy and Roth, 2015) includes ele-**
mentary math problems with multiple steps.
**B** **Experimental Settings**
**B.1** **Hyper-parameters Setting**
For the generator and the step verifier, we train
them for two epochs. The batch size is set to 16.
The learning rate (LR) is set to 1e − 5 at the first
epoch and 1e − 6 at the second epoch for generator.
On the hand of step verifier we apply the warmup
method then linearly decaying scheduler, LR is set
to 1e − 6 and warmup ratio is 0.1.
For the path verifier, we train it for three epochs
with batch size set to 128 and LR set to 1e − 5.
Same LR scheduler as the step verifier has been
applied for the path verifier. We set the gradient clip
norm to 1.0 and the sampling temperature to 0.7.
The random seed is set to 19990303 throughout the
training process.
For MCTS, we set max search iterations to 40
during inference. In expansion, we search 20 tokens each time. In order to avoid expanding too
many homogeneous children for the same node, we
simply penalize the probability of first token if it
has appeared in other child nodes. We set the max
token number to 300 in roll out and limit the total
token number of reasoning path to 400.
|Dataset|# of samples Avg # of words in questions|
|---|---|
|GSM8K ASDiv-A SingleOp SingleEq MultiArith|1319 46.9 1218 29.2 562 20.9 508 27.2 600 31.8|
|---|---|
Table 9: Dataset statistics.
**B.2** **Details of Training Verifiers**
Before two verifiers are fine-tuned, we utilize the
generator to sample 100 solutions for each question
following Cobbe et al. (2021). Then we train the
two verifiers on the generated data as described in
Sec. 3.2 Step 1.3.
**B.3** **Details of Self-Thinking**
In each iteration of self-thinking, we initialize the
model with the weights obtained from the previous round so as to save the computational costs.
Since we use cooperative inference rather than random sampling to generate data for further training, solutions are expected more high-quality. Thus,
the number of generated solutions M mentioned
in Sec. 3.2 is set to 50 for saving computational
cost and time. Due to the flexibility of MCTS, we
have also tried to limit the time for searching rather
than the number of iterations, which makes the total
search time controllable and predictable. Moreover,
this allows the model to adaptively adjust the final
number of solutions searched for each question,
due to the different levels of difficulty in questions.
In our experiments, we observe that setting the
time limit to 320 seconds provides better results
than setting the iteration limit to 50, while maintaining approximately the same time consumption.
Therefore, we use time control to generate data
during self-thinking.
**B.4** **Baseline Settings**
As shown in Table 1, the Instruct GPT-3 is based on
text-davinci-002 version. Moreover, since Kojima
et al. (2022) provides difference prompt setting,
we list them in Table 10. For few-shot scenarios
with the chain of thought prompts, we follow the
original paper (Wei et al., 2022c).
**C** **Extended Experiments**
This section we replicate the work of Cobbe et al.
(2021) with GPT-J and report the results in Table 11 for comprehensive comparison. CoRe fully
surpasses Cobbe et al. (2021) when the number of
-----
|Backbone Method|Reasoning Extraction Prompts Answer Extraction Prompts|
|---|---|
|Instruct GPT-3 175B zero-shot zero-shot∗ zero-shot-CoT zero-shot-CoT∗|Let’s think step by step. The answer (arabic numerals) is Let’s think step by step. The answer is Let’s think step by step. The answer (arabic numerals) is Let’s think step by step. The answer is|
|---|---|
|PaLM 540B zero-shot zero-shot-CoT + Self-Consistency|Let’s think step by step. The answer (arabic numerals) is Let’s think step by step. The answer (arabic numerals) is Let’s think step by step. The answer (arabic numerals) is|
|---|---|
Table 10: Prompt setting for few-shot baselines.
|# of reasoning paths|ASDiv-A SingleOp SingleEq MultiArith|
|---|---|
|Cobbe et al. 5 10 20 30 40|71.9 70.5 68.5 92.3 76.9 73.1 74.6 95.0 79.6 74.6 76.0 95.5 81.4 76.2 76.2 95.2 81.4 76.9 78.1 94.8|
|---|---|
|CoRe 5 10 20 30 40|13.7 22.2 14.6 4.3 41.7 47.7 33.7 26.8 78.4 77.0 64.6 80.7 88.9 84.9 77.4 95.0 90.5 85.2 79.5 97.5|
|---|---|
Table 11: Comparison between Cobbe et al. (2021) and CoRe with GPT-J as backbone model. The best scores are in
**bold.**
reasoning paths reaches 30 and maintains a faster
increasing rate after that. As a result, CoRe has
a superior performance over Cobbe et al. (2021)
on all the datasets and achieves a 9.1% and 8.3%
improvement compared to it on ASDiv-A and SingleOp.
**D** **Future Work**
We focus on measuring our method in boosting the
language model’s arithmetic reasoning ability in
this work. Nevertheless, we believe that our framework can also be applied to other reasoning tasks
seamlessly, e.g., commonsense reasoning and symbolic reasoning. We choose arithmetic reasoning
because it is the fundamental type of reasoning
task. Additionally, we believe solving arithmetic
reasoning is the first step toward a general cognitive reasoning system. In the future, we will explore
other reasoning tasks and put more effort into lowresource scenarios.
-----
| [
"Xinyu, Zhu",
"Junjie, Wang",
"Lin, Zhang",
"Yuxiang, Zhang",
"Yujiu, Yang",
"Yongfeng, Huang",
"Ruyi, Gan",
"Jiaxing, Zhang"
] | 2022-01-01T00:00:00 | ACL 2023 Long | true | 47 | 7 | null | https://arxiv.org/abs/2210.16257 | https://arxiv.org/abs/2210.16257 | https://www.semanticscholar.org/paper/d33504263ff0b263a1562075711264228d83b181 |
V-STaR: Training Verifiers for Self-Taught Reasoners | Common self-improvement approaches for large language models (LLMs), such as STaR, iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large amounts of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this shortcoming, we propose V-STaR that utilizes both the correct and incorrect solutions generated during the self-improvement process to train a verifier using DPO that judges correctness of model-generated solutions. This verifier is used at inference time to select one solution among many candidate solutions. Running V-STaR for multiple iterations results in progressively better reasoners and verifiers, delivering a 4% to 17% test accuracy improvement over existing self-improvement and verification approaches on common code generation and math reasoning benchmarks with LLaMA2 models. | V-STaR is proposed that utilizes both the correct and incorrect solutions generated during the self-improvement process to train a verifier using DPO that judges correctness of model-generated solutions. | ## V-STaR: Training Verifiers for Self-Taught Reasoners
**Arian Hosseini** _∗_ **Xingdi Yuan** **Nikolay Malkin**
**Aaron Courville** **Alessandro Sordoni** **Rishabh Agarwal**
Mila, Universit´e de Montr´eal
Microsoft Research
University of Edinburgh
Google Deepmind
### Abstract
Common self-improvement approaches for large language models (LLMs),
such as STaR, iteratively fine-tune LLMs on self-generated solutions to
improve their problem-solving ability. However, these approaches discard
the large amounts of incorrect solutions generated during this process,
potentially neglecting valuable information in such solutions. To address
this shortcoming, we propose V-STaR that utilizes both the correct and
incorrect solutions generated during the self-improvement process to train
a verifier using DPO that judges correctness of model-generated solutions.
This verifier is used at inference time to select one solution among many
candidate solutions. Running V-STaR for multiple iterations results in
progressively better reasoners and verifiers, delivering a 4% to 17% test
accuracy improvement over existing self-improvement and verification
approaches on common code generation and math reasoning benchmarks
with LLaMA2 models.
### 1 Introduction
Learning to recognize and correct mistakes is a feature of human intelligence (Metcalfe,
2017). When dealing with complex tasks, such as coding or solving a math problem, we can
recognize errors in reasoning and explore alternative paths to a solution. To improve the
reasoning performance of LLMs, several approaches exploit the ability of LLMs to produce
solutions and check the correctness of these solutions during training, for example, using
test cases for code generation. These self-improvement approaches, such as STaR Zelikman
et al. (2022), RFT (Yuan et al., 2023), and ReST[EM] (Singh et al., 2023), improve LLMs by
fine-tuning them on their self-generated solutions and optionally iteratively running this
process. However, all these approaches are data-inefficient in that they use only correct
solutions, and discard incorrect solutions, which is often a large portion of model-generated
solutions, especially for challenging reasoning tasks.
Orthogonal to self-improvement, another promising direction to improve LLM reasoning is
to use learned LLM verifiers at test time (Cobbe et al., 2021; Wang et al., 2023b). Specifically,
the LLM generates multiple candidate solutions and the verifier ranks these solutions
and selects the best one. Such verifiers are trained by fine-tuning an LLM on a dataset of
solutions generated from a frozen LLM, labeled with either final correctness (ORM, Cobbe
et al., 2021) or step-by-step human annotations (Lightman et al., 2024). Using such verifiers
allows LLMs to trade-off additional test-time compute for better performance.
We propose Verification for Self-Taught Reasoners (V-STaR). The key idea in V-STaR is
to utilize both the correct and incorrect LLM-generated solutions during the iterative selfimprovement process to train a verifier using DPO[1] (Rafailov et al., 2023), in addition to
training a LLM as generator using correct solutions (Fig. 1). The iterative self-improvement
process yields progressively improved generators, trained on augmented data, which leads
_∗_ Correspondence to: <[email protected]> and <[email protected]>
1We also considered using this DPO setup to train a generator in §4.6.
-----
|(𝑥2, 𝑦ො 2, (𝑥3, 𝑦ො 3𝑡,|) )|
|---|---|
|𝒟GEN (𝑥, 𝑦ො<𝑡)|Union 𝐺𝑡 ( ( (𝑥 𝑥 𝑥1 2 3,, 𝑦 𝑦 𝑦ො ො ො1 2 3𝑡 𝑡 𝑡) ) ( ( (𝑥 𝑥 𝑥1 2 3,, 𝑦 𝑦 𝑦ො ො ො1 2 3𝑡 𝑡 𝑡,, )) ( (𝑥 𝑥1 3,, 𝑦 𝑦ො ො1 3𝑡 𝑡,, ) ) SFT Generate, ) Label,, ) Filter … … … {𝑥1, 𝑥2, 𝑥3, … } 𝑉𝑡 DPO Union Training iteration 𝒕|Col3|
|---|---|---|
|…|||
|(𝑥, 𝑦ො<𝑡)|||
|… 𝒟VER|||
|𝒟VER||Training iteration 𝒕|
Figure 1: Generator and verifier training in V-STaR. Left: In each training iteration,
the generator G[t] is fine-tuned (from a pretrained LLM) on the current buffer of problem
instances and correct solutions DGEN. Generated solutions that yielded a correct answer are
added to DGEN to be used in future iterations, and all the generated solutions (correct and
incorrect) are added to DVER. The verifier V[t] is trained using DPO with a preference dataset
constructed from pairs of correct and incorrect solutions from DVER. Right: At test time,
the verifier is used to rank solutions produced by the generator. Such iterative training and
inference-time ranking yields large improvements over generator-only self-improvement.
to higher quality completions and more challenging negative examples for the verifier
training. At test time, the verifier ranks multiple candidate solutions from the generator
and selects the best one.
We empirically evaluate V-STaR to improve reasoning capabilities of LLMs: (1) Math
problem-solving using GSM8K (Cobbe et al., 2021) and a subset of MATH (Hendrycks et al.,
2021), and (2) Code-generation using MBPP (Austin et al., 2021) and HumanEval (Chen et al.,
2021). Fine-tuning LLaMA2 (Touvron et al., 2023) and CodeLLaMA (Roziere et al.`, 2023),
we compare V-STaR to other self-improvement (RFT, STaR) and verification-based methods (ORM), self-consistency (Wang et al., 2023a), and a non-iterative V-STaR baseline (RFT
+ Verifier) that uses the same number of generation samples to bootstrap a generator and
verifier. V-STaR works remarkably well, leading to 6% to 17% absolute improvement in test
accuracy over prior self-improvement and verification-based methods for math reasoning,
and 4% to 12% in code generation. Notably, 7B V-STaR surpasses base LLaMA2 70B (8-shot)
on GSM8K, and nearly match CodeLLaMA 34B (zero-shot) on HumanEval.
Our contributions are:
- We propose V-STaR, a simple and effective approach, which uses iteratively generated
correct and incorrect solutions from an LLM to train a better generator and verifier.
V-STaR outperforms prior self-improvement approaches (RFT, STaR) as well as ORM
verification (Cobbe et al., 2021) for math reasoning and code generation (Fig. 2). VSTaR better utilizes adaptive test-time compute for improving performance than strong
baselines – ORM verification, RFT + verifier (Fig. 5), and self-consistency (Fig. 6, left).
- As a secondary contribution, we find DPO to be more effective for training verifiers
than the prevalent ORM approach by Cobbe et al. (2021). We also propose a formula for
Best-of-k (§4.2), akin to Pass@k, to reliably evaluate test performance with verification.
### 2 Preliminaries
Given a pretrained language model G and the original training data of a task DSFT =
_{(x1, y1), (x2, y2), · · · (xN, yN)}, where x is typically a description of a problem and y is the_
solution, such as chain-of-thought rationale or generated code. The de facto approach for
such tasks with causal language models is supervised fine-tuning (SFT) with the negative
log-likelihood objective on the training data:
_LSFT(G) = −_ **E(x,y)∼DSFT**
_T_
## ∑ log G(yt | y<t, x) (1)
_t=1_
where G is also referred to as generator in reasoning tasks. LLMs can be used to generate
high quality chain-of-thought rationales or solutions for a range of tasks. This observation
-----
Table 1: Comparison of self-improvement and verification methods, showing the data used
to train the generator and verifier (if applicable), and whether or not the method is iterative.
METHOD GENERATOR DATA VERIFIER DATA ITERATIVE
SFT _DSFT_ ✗ ✗
VERIFICATION _DSFT_ _DSFT ∪_ GENERATED ✗
STAR CORRECT GENERATEDt−1 ✗ ✓
RFT (STAR[†][1 ITER]) _DSFT ∪_ CORRECT GENERATED ✗ ✗
STAR[†] _DSFT ∪_ CORRECT GENERATED<t ✗ ✓
V-STAR [1 ITER] _DSFT ∪_ CORRECT GENERATED _DSFT ∪_ GENERATED ✗
**V-STAR** **_DSFT ∪_** **CORRECT GENERATED<t** **_DSFT ∪_** **GENERATED<t** ✓
has motivated using correct generations from the model itself to bootstrap problem-solving
and reasoning abilities (Zelikman et al., 2022; Singh et al., 2023; Yuan et al., 2023).
**2.1** **Self-improvement approaches**
**Self-Taught Reasoner (STaR; Zelikman et al., 2022) corresponds to an iterative approach**
where a language model improves itself using correctness feedback. In each iteration, one solution ˆy is generated using greedy decoding from the generator G for each problem x in training dataset D. Having access to test cases or ground truth answers, generated solutions can
be checked for their binary correctness label z by z = is correct(x, ˆy), _z_ 0, 1 . A com_∈{_ _}_
pletion ˆy is labeled correct if it has the same final answer as the ground truth answer for math
problems, or if it passes all the test cases for code generation problems. Only correct solutions
(z = 1) are included in the dataset at iteration j where Dj = {(x1, ˆy1), (x2, ˆy2), · · · (xN, ˆyN)}.
Then, the generator is fine-tuned on this new dataset using (Eq. 1) where DSFT = Dj. This
fine-tuned generator is used in subsequent iterations.
**Rejection Sampling Fine-tuning (RFT; Yuan et al., 2023) first fine-tunes a pretrained LM**
on the training dataset DSFT to obtain G. For each problem xi, k solutions are sampled
_{ ˆyi,j ∼_ _G(y|xi)}[k]j=1_ [and similar to STaR, only correct generated solutions (][z][ =][ 1) are kept.]
In RFT, the original dataset is then augmented with the correct completions to Dj, and G is
fine-tuned on the new Dj to obtain GRFT. Unlike STaR, RFT is not an iterative approach.
**STaR[†]. Each STaR iteration can be performed similarly to RFT, akin to ReST[EM]** (Singh et al.,
2023). Since there could be multiple correct solutions for a problem, one could sample k
solutions per problem at each STaR iteration, but this is not prescribed in the original STaR
paper, which we will denote as STaR[†]for the rest of the paper.
**2.2** **Test-time verification**
Cobbe et al. (2021) trained verifiers, which they refer as outcome-supervised reward model
(ORM), that assess the probability that a candidate solution is correct for a given problem.
At test time, the language model G generates many candidate solutions and the one ranked
highest by the verifier is selected, also known as Best-of-k. To train a verifier V, similar to
RFT, k candidate solutions are sampled from a generator G for each training problem and
labeled for their correctness z to make the verifier training data DVER = {(xi, ˆyi,j, zi,j)}i[N]=1[,]
where zi,j is a binary label indicating whether ˆyi,j is a correct or incorrect solution.
To train the verifier V, Cobbe et al. (2021) fine-tune a LLM on DVER using a combination
of language modeling (Eq. 1) and binary classification. The model is trained to predict ˆyi,j
given xi and zi,j given {xi; ˆyi,j} with the language modeling and the classification objective,
respectively. See §5 for more details.
**2.3** **Preference learning with DPO**
Fine-tuning pretrained LLMs with human feedback can result in large performance gains in
downstream tasks Ouyang et al. (2022); Bai et al. (2022). The typical framework to do so is to
-----
Figure 2: Test accuracy of 7B V-STaR compared to self-improvement and verification
**baselines. We report Best-of-64 for verification-based methods and Pass@1 for others. All**
methods, except SFT, have access to the SFT baseline model and K = 48 output generations
per problem. STaR[†] and V-STaR are run for 3 iterations, where each iteration uses K/3 = 16
samples. Verification corresponds to using a test-time verifier trained on K generated completions from the SFT generator. Numbers above each bar show the absolute improvement
over SFT. (Left) Test accuracy on tasks used for V-STaR training. (Right) Transfer evaluation
of GSM8K and MBPP trained models on MATH subset and HumanEval respectively.
collect paired human preferences for a set of input prompts Dpref = {(xi, yi[+][,][ y]i[−][)][}]i[N]=1[, train]
a reward model using Dpref, and fine-tune the LLM using this reward (Stiennon et al., 2020).
More recently, Rafailov et al. (2023) proposed Direct Preference Optimization (DPO), which
does not use a separately trained reward model during fine-tuning. DPO requires supervised
fine-tuning (SFT) a pretrained LLM on the downstream task to obtain GSFT, which is also
used as a reference policy. Given the preference dataset Dpref and GSFT, DPO’s objective
increases the log likelihood, relative to the reference policy, of preferred y[+] to dispreferred
_y[−]_ completions. We present the DPO loss later in Eq. 2, in the context of training verifiers.
### 3 V-STaR: Verifiers for self-taught reasoners
Existing self-improvement methods, such as RFT, STaR, and ReST[EM], throw away incorrect
model-generated solutions. However, incorrect solutions can also contain valuable information: a language model could learn from discrepancies between correct and incorrect
solutions for a given problem, and identify error patterns in generations, enhancing its ability to provide more accurate solutions. In this work, we propose V-STaR that utilizes both
incorrect and correct generated solutions in an iterative process to train a better generator
and verifier (see algorithm 1 in appendix).
- First, we fine-tune a pretrained LLM Gbase on the original training data DSFT to obtain
generator GSFT.
- Next, we sample k completions for each problem in the training data from the generator
_{ ˆyi,j ∼_ _G(y|xi)}[k]j=1[, where][ x][ ∈D][query][ (see][ §D][ for an example).]_
- Generated solutions are labeled for their correctness z using ground truth answers or
test cases. We use only correct generated solutions (z = 1) to augment the generator
training data DGEN as (xi, ˆyi,j). Both correct and incorrect generated solutions are added
to verifier data DVER with their correctness label as (xi, ˆyi,j, zi,j), so the verifier can learn
from generator’s mistakes.
- In the next iteration t, the generator G[t] is obtained by fine-tuning the pretrained model
_Gbase on the augmented DGEN. We can sample solutions again from this generator G[t]._
This process is repeated for up to T iterations to augment DGEN and DVER iteratively.
- The final generator G[T] is obtained by using DGEN to fine-tune a pretrained model Gbase.
The verifier V[T] is obtained by using DVER to further train a model GSFT which was
fine-tuned on the original DSFT.
-----
(a) GSM8K for 7B and 13B sizes (b) MBPP for 7B and 13B sizes
Figure 3: Pass@1 and Best-of-64 scores for generator-only and verifier-based methods.
Numbers above each bar represent the absolute improvement over RFT. V-STaR[1 Iter]
baseline is trained with data consisting of 3 16 completions per query for one iteration
_×_
only. STaR[†]and V-STaR are trained using iterative data collection (i.e. 16 completions
generated per query at each iteration). At test time, Best-of-64 is calculated using Eq. 3 on
128 candidate answers sampled per problem from the generators. V-STaR 7B performs on
par with CodeLLaMA 34B, which has a zero-shot Pass@1 of 55% on MBPP.
In our approach, the original training data is also included as correct solutions in both
generator data and verifier data. The difference between V-STaR training and Cobbe et al.
(2021) is that our verifier training data is collected iteratively, each iteration from a better
generator, while ORM only collects data from a fixed generator that is only fine-tuned on
the original SFT data. We compare to this ORM approach as a baseline, as discussed in §4.1.
**3.1** **Training verifiers with DPO**
Following Cobbe et al. (2021), current LLM verifiers are trained with a combination of
language modeling and binary classification loss (§2.2). These two objectives can be unified
via offline preference learning methods, such as DPO (Rafailov et al., 2023), where the
proximity to the reference policy is a proxy for the language modeling objective while the
classification loss is a proxy for reward modelling. Empirically, we found DPO verifiers to
be better than ORM-style verifiers (§4.4), when using LoRA adapters (Hu et al., 2022).
To use DPO for training verifiers, we construct a preference pair dataset using collected solutions in DVER. We treat correct solutions as preferred and incorrect solutions as not preferred
completions given the problem. Specifically, DVER = {(xi, yi[+],1[,][ y]i[−],1[)][,][ · · ·][,][ (][x][i][,][ y]i[+],m[,][ y]i[−],m[)][}]i[N]=1[,]
where m is the number of preference pairs which are from the Cartesian product of correct
and incorrect solutions. We train our verifier V using this constructed DVER and the SFT
policy GSFT using the DPO objective, LDPO(V; GSFT):
_−_ **E(x,y+,y−)∼DVER** �log σ �rˆ(x, y+) − _rˆ(x, y−)��_, with ˆr(x, y) = β log _GVSFT(y(|yx|)x)_ (2)
where σ is the logistic function, and β is a hyper-parameter controlling the proximity to
the reference policy GSFT. The DPO objective steers the verifier towards increasing the
likelihood of correct solutions y[+] and decreasing the likelihood of incorrect solutions y[−] for
a problem x. At inference, we use the likelihood of a generated solution given a problem
under the trained DPO verifier (i.e. V( ˆy _x)) as scores to rank candidate solutions._
_|_
### 4 Empirical results
To demonstrate the effectiveness of V-STaR, we conduct experiments on two widely used
datasets: GSM8K (Cobbe et al., 2021) for solving math problems, and MBPP (Austin et al.,
2021) for code-generation problems. We also evaluate the transfer generalization performance of V-STaR using Hendrycks’ MATH (Hendrycks et al., 2021) and HumanEval (Chen
et al., 2021). Specifically, for math reasoning, we only train our generators and verifiers
-----
(a) GSM8K → MATH Subset (b) MBPP → HumanEval
Figure 4: Out-of-domain transfer evaluation: Pass@1 and Best-of-64 for generators and
verifiers with absolute improvement over RFT shown above each bar. Models trained on
GSM8K are evaluated on a subset of MATH test set (§4), and models trained on MBPP are
evaluate on HumanEval test set. V-STaR 7B performs close to CodeLLaMA 34B which has a
zero-shot Pass@1 of 48% on HumanEval.
using GSM8K training data and evaluate them on the whole GSM8K test set and a subset of
MATH test set[2]. For code generation, we train our models using the MBPP training data
and evaluate them on the full test sets of MBPP and HumanEval.
**Models** For experiments, we fine-tune LLaMA2 (Touvron et al., 2023) and CodeLLaMA
(Roziere et al.`, 2023) 7B and 13B models using LoRA (Hu et al., 2022). Generators are trained
with a causal language modeling objective, and our baseline (V-STaR[1 Iter]) and V-STaR
verifiers are trained using DPO. The reference policy GSFT for DPO is trained on the original
training data for 2 and 3 epochs for GSM8K and MBPP, respectively. See §3.1 for details.
**Data generation** For each iteration, k = 16 completions are sampled per query from the
previous iteration’s generator. For GSM8K, the first iteration samples are from a generator
trained solely on the original GSM8K training data for 2 epochs. For MBPP, this data is
from a 3-shot pretrained CodeLLaMA (see §B). Completions are labeled for correctness by
checking the final answer for math problems and running test cases for coding problems.
**4.1** **Baselines and metrics**
We run V-STaR for 3 iterations and sample k = 16 solutions at each iteration to augment
_DGEN and DVER. To assess the gains from our iterative approach, we compare against a_
number of baselines (Table 1):
1. SFT: Standard fine-tuning (Eq. 1) on training data without any self-improvement or
test-time verification.
2. STaR[†]: Bootstrapping a generator by sampling k = 16 completions per query for 3
iterations, see §2.1.
3. RFT: Running STaR[†]by sampling 3 × 16 completions for only 1 iteration, see §2.1.
4. Verification (SFT + Verifier): Generating 3 × 16 completions using SFT generator to train
a verifier, as described in §2.3. This is similar to ORM verification approach by (Cobbe
et al., 2021) but empirically performs better (Fig. 5).
5. V-STaR [1 Iter]: Bootstrapping a generator and training a verifier for 1 iteration with
_k = 3 × 16 completions from GSFT, so that the total generation budget matches V-STaR._
This baseline also corresponds to RFT + Verifier.
6. Self-consistency / Majority Voting (Wang et al., 2023a): An alternative to use test-time
compute without verifiers: sample multiple solutions from STaR[†]generator and pick the
majority-vote answer.
2This subset includes a total 150 problems of Level 1 difficulty in MATH with question types of
_algebra, Counting & probability, prealgebra and number theory where the final answer is a number and no_
latex exists in the question.
-----
(a) GSM8K Best-of-k (b) MBPP Best-of-k
Figure 5: Best-of-k test accuracy of V-STaR, V-STaR [1 Iter], and outcome-supervised reward
model (ORM) style verifier 7B models, measured by Eq. 3. Best-of-1 is equivalent to not
having a verifier and is equal to Pass@1 of the generator.
At inference, we generate 128 candidate solutions for each test problem using the generator.
We report Pass@1 accuracy for the generators, which estimates the probability that an
answer randomly sampled from the generator is correct. Best-of-64 accuracy is used for all
verifier-based methods, using the formula in (Eq. 3), as well as the self-consistency baseline.
**4.2** **Reliable estimation of Best-of-k accuracy**
To estimate accuracy with test-time verifiers, Cobbe et al. (2021); Lightman et al. (2024)
repeat the following procedure several times and average the results: sample k solutions,
rank them using a verifier and take the top scoring one as the predicted answer. However,
computing this best-of-k accuracy can have high variance and is expensive. Instead, to
measure the best-of-k accuracy reliably, we propose two methods, akin to how Pass@k is
computed (Chen et al., 2021). To do so, we estimate the probability that out of k samples
drawn without replacement from a fixed set of N (for N > k) samples, the one with the
highest verifier score is correct. The best-of-k is calculated by repeating this process M times
and taking the average. Assuming there are no duplicate verifier scores this can be done
efficiently (see §E in appendix), using the following formula:
1
Best-of-k :=
([N]k [)]
_N−k_
## ∑
_i=0_
�N − _i −_ 1
_k −_ 1
�
_αi_ (3)
where [α0, . . ., αN−1] are the binary correctness values (0 or 1) for the N candidates yi sorted
in decreasing order by their verifier score. The numerator here can be derived by considering
subsets where the top-ranked candidate is yi for all possible values of i ∈{0, . . ., N − _k −_ 1}.
**4.3** **V-STaR on Math Reasoning and Code Generation**
As shown in Fig. 2, V-STaR shows consistent gains across GSM8K, MBPP, MATH subset
and HumanEval test sets for LLaMA2 7B and 13B models (Fig. 8) over baselines. In math,
we report absolute improvement of 6 to 17% in test accuracy over STaR[†] and Verification,
and 4 to 12% in code generation tasks. The gains over V-STaR [1 iter] in Fig. 3 show that
iteratively generating solutions to collect verifier training data results in a better distribution
and quality compared to a non-iterative approach with the same generation budget. We
show gains in each iteration of generator and verifier on MBPP in Fig. 7. We also trained a
4th iteration of generator and verifier on MBPP which led to a marginal gain of 0.3%.
**Out-of-domain performance of V-STaR. The generators and verifiers trained on MBPP**
are evaluated on HumanEval, while those trained on GSM8K are evaluated on a subset
of MATH test set (see Fig. 2 and Fig. 4). In general, we observe lower absolute Pass@1 and
Best-of-64 scores for all methods as these two tasks are considered to be more difficult than
GSM8K and MBPP. That said, Iterative V-STaR outperforms baselines, and V-STaR [1 iter]
on both tasks and across model sizes. Utilizing incorrect solutions to train verifiers results in
large improvements than just bootstrapping with correct model generated solutions using
-----
Figure 6: Left: Best-of-k test accuracy of 7B V-STaR compared to V-STaR[1 Iter] and selfconsistency (Wang et al., 2023c) across different numbers of candidate solutions generated
on GSM8K. We subsample solutions from N = 1000 generations. V-STaR can rank over a
reasonably large number of candidate solutions. We compute 95% CIs for self-consistency
using 256 trials. Right: Comparing DPO-based generator and verifier for V-STaR 7B,
measured by Pass@1 and Best-of-64 respectively on GSM8K. Best-of-64 accuracy rises
significantly while the generation ability of DPO verifier degrades with only 2k updates.
STaR[†]or RFT. While we use LoRA adapters due to compute constraints, we hypothesize
that gains from V-STaR could potentially be larger with full parameter fine-tuning.
**Best-of-k accuracy. Fig. 5 shows test accuracy for k = 1 to k = 64, calculated from 128**
candidate solutions per test problem, for 7B models on both tasks. Best-of-1 is equivalent
to Pass@1 and ignores verifier scores. Best-of-k saturates for k ≥ 16 and the gap between
V-STaR [1 Iter] and V-STaR stays consistent.
**4.4** **Comparing DPO vs. ORM verifiers**
We trained ORM style verifiers, as described in §2.2, with LoRA adapters. These verifiers
did seem to achieve relatively poor performance compared to DPO-based verifiers. Fig. 5a
shows the comparison between the V-STaR [1 Iter] trained with DPO and an ORM style
verifier on the same training data. ORM fails to effectively search through generated
candidate solutions for number of candidates above 4 in the GSM8K task. The ORM style
verifier is also performing worse than our DPO based verifier in MBPP for number of
candidate solutions above 16.
**4.5** **How many completions can V-STaR be extended to?**
Fig. 6 shows the Best-of-K accuracy of V-STaR 7B on GSM8K, measured by Eq. 3 as a function
of k. V-STaR outperforms majority voting (Wang et al., 2023c) at searching over a large
number of candidate solutions (see §F in appendix). While V-STaR is far more effective than
majority voting for k ≤ 64, the performance gap starts to slightly decrease for larger value
of k, similar to performance decay reported in Cobbe et al. (2021). Furthermore, V-STaR can
be used for any problem solving task where we can verify correctness while majority voting
is not applicable to tasks such as code generation. We also tried combining verifier scores
with reranking strategies, such as weighted reranking and weighted majority voting Liu
et al. (2023), but did not observe performance gains.
**4.6** **Evaluating DPO verifier as a generator**
Since DPO fine-tuned models can also be used as generators, we evaluate how good is the
generation ability of DPO verifiers. Fig. 6 shows Pass@1 and Best-of-64 for V-STaR verifier
over training updates, for three different β coefficients for proximity to SFT policy in DPO
objective (§3.1). The verifier’s solving ability starts degrading only after a small number of
training updates. In contrast, using the DPO objective for verification seems to be sample
efficient as the model’s Best-of-64 increases significantly with only 2k training updates.
-----
**4.7** **Should the verifier be in the training loop?**
Optionally, one could train intermediate verifiers at each iteration and filter correct solutions
to include in DGEN and DVER to provide feedback. This step seems more reasonable
with sufficient exploration, that is larger values of k, when sampling k solutions from the
generator in each iteration.
We tried putting the verifier in the training loop to filter correct solutions from the generator
for the next training iteration. To do so, we sampled k = 64 completions per query from the
generator, labeled their correctness, and took only the top 8 based on their verifier score. We
take as many samples from the incorrect set so that the total number of correct and incorrect
completions per query is 16 or less. After running three iterations with verifier in the loop
for MBPP, the final Best-of-64 accuracy and Pass@1 are 53.2 and 46.34 respectively.
Our results suggest that having the verifier
in the training loop does not provide a substantial gain for this task. V-STaR is simpler
without the verifier in the loop and there is
no need to train a verifier at each iteration;
however we did not experiment with other
tasks and different sampling strategies from
the generator at each iteration. We leave a
more detailed study of this question to future work.
**4.8** **Gains across V-STaR iteration**
Fig. 7 shows the improvements achieved
from each iteration of the generator and verifier on MBPP. The gains are larger across
iterations of verifiers (from Ver.1 to Ver.3)
than iterations of generators which highlights the importance of verifiers.
### 5 Related work
Figure 7: Best-of-64 for each V-STaR iteration
on MBPP. The performance gets better with
each iteration of generator and verifier without showing signs of collapsing.
Challenging multi-step reasoning tasks has driven innovative research on LLMs, such as
generating answers given questions via intermediate steps (Wei et al., 2022; Kojima et al.,
2022). A large volume of recent work studies ways to improve the correctness of these
intermediate steps and reducing the cost of arriving at a correct solution.
**Self-training and self-improvement.** One family of methods, beginning with STaR (Zelikman et al., 2022), reinforced self-training (Gulcehre et al., 2023), and rejection fine-tuning
(Yuan et al., 2023), relies on solutions generated by the LLM to update itself. These methods
fine-tune the model on generated solutions that yield a correct answer. ReST[EM] (Singh
et al., 2023) view this fine-tuning as expectation-maximization based RL fine-tuning of a
solution-generating agent. Wang et al. (2023a) propose a contrastive loss to make correct
solutions more likely than incorrect ones, while Ni et al. (2023a) propose to use intermediate
states of successful solutions as supervision to improve credit assignment. Discovery of
successful solutions is a difficult exploration problem, and Luong et al. (2024) has shown
that RL-based fine-tuning of a LLM is difficult unless it is initialized by some steps of supervised fine-tuning. In An et al. (2023), a more powerful LLM was used to edit the incorrect
rationales generated by a smaller model and provide positive data for its fine-tuning. However, Huang et al. (2023) argued that LLMs are limited in their ability to correct their own
reasoning. V-STaR is similar to self-improvement methods in that it uses its self-generated
solutions for fine-tuning, but also trains a verifier using these solutions, including both
correct and incorrect ones.
**Training verifiers.** Verifiers – models that score or rank reasoning chains with the aim
of favouring successful rationales – were introduced for mathematical reasoning tasks by
Cobbe et al. (2021), who proposed to collect correct and incorrect rationales from a tuned
-----
generator and train a verifier. They noted the importance of a large training set for the
success of the method. Uesato et al. (2022) found that process supervision – correctness of the
rationale – enhances the performance of fine-tuned LLMs relative to outcome supervision
– whether the answer is correct or not. Subsequent work correspondingly studied ways
of deriving reward signals for individual reasoning steps (Li et al., 2023; Lightman et al.,
2024; Yu et al., 2023), combining solution-level and step-level verifiers (Zhu et al., 2023), and
augmenting verifiers with auxiliary information, such as results of program execution (Ni
et al., 2023b). In Ma et al. (2023); Wang et al. (2023b), rationale generation is treated as a
graph search problem, either using a stepwise verifier to guide the search or estimating
the quality of steps by Monte Carlo rollouts. In V-STaR, the verifier is trained with DPO,
which enjoys a high sample efficiency (see Fig. 6), and is used for ranking LLM-generated
solutions at test-time.
The manner of training the verifier varies between works. The verifier can be viewed a
reward model trained on human annotations – making the training of a generator that
satisfies the verifier as an instance of RL with human feedback (Ziegler et al., 2019) – or on
synthetic data, leading to forms of RL with AI feedback (Bai et al., 2022; Yang et al., 2023).
The verifier can alternatively be viewed as a generative model, such as by conditioning on
control tokens indicating a positive or negative label of a solution (Korbak et al., 2023) or by
extracting the score as the likelihood of a special token following the candidate solution (Liu
et al., 2023). V-STaR takes the unique approach of using DPO (Rafailov et al., 2023) to
contrast the likelihoods of correct and incorrect solutions under the verifier (see §3.1).
### 6 Conclusion
We propose V-STaR, a data-efficient and simple to implement approach that utilizes correct
and incorrect generated solutions from an iteratively trained generator to train a strong
verifier. We find training verifiers with DPO to be more effective than the common method
by Cobbe et al. (2021). Our empirical results show the effectiveness of V-STaR over existing
self-improvement and verification-based methods. V-STaR has the potential to improve
existing self-improvement loops on a wide range of problems with access to correctness
feedback during training.
### References
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen.
Learning from mistakes makes llm better reasoner. arXiv preprint arXiv:2310.20689, 2023.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David
Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with
large language models. arXiv preprint arXiv:2108.07732, 2021.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy
Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen,
Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli,
Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua
Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage,
Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam
Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham,
Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R Bowman,
Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom
Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI feedback. arXiv
_preprint arXiv:2212.08073, 2022._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto,
Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray,
Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin,
Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings,
Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji,
-----
Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh
Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage,
Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish,
Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code.
_arXiv preprint arXiv:2107.03374, 2021._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz
Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint
_arXiv:2110.14168, 2021._
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts,
Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al.
Reinforced self-training (rest) for language modeling. arXiv preprint arXiv:2308.08998,
2023.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the
MATH dataset. Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks,
2021.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,
Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models.
_International Conference on Learning Representations (ICLR), 2022._
Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying
Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv
_preprint arXiv:2310.01798, 2023._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.
Large language models are zero-shot reasoners. Neural Information Processing Systems
_(NeurIPS), 2022._
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason
Phang, Samuel R. Bowman, and Ethan Perez. Pretraining language models with human
preferences. International Conference on Machine Learning (ICML), 2023.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen.
Making language models better reasoners with step-aware verifier. In Anna Rogers,
Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of
_the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333, Toronto,_
Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.
[acl-long.291. URL https://aclanthology.org/2023.acl-long.291.](https://aclanthology.org/2023.acl-long.291)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy
Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step.
_International Conference on Learning Representations (ICLR), 2024._
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, and Peter J. Liu. Improving large
language model fine-tuning for solving math problems. arXiv preprint arXiv:2310.10047,
2023.
Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. ReFT:
Reasoning with reinforced fine-tuning. arXiv preprint arXiv:2401.08967, 2024.
Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia
Yang. Let’s reward step by step: Step-level reward model as the navigators for reasoning.
_arXiv preprint arXiv:2310.10080, 2023._
Janet Metcalfe. Learning from errors. Annual Review of Psychology, 68(1):465–489, 2017.
-----
Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Oleksandr Polozov, Christopher Meek,
Dragomir Radev, and Jianfeng Gao. Learning math reasoning from self-sampled correct
and partially-correct solutions. International Conference on Learning Representations (ICLR),
2023a.
Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I Wang, and
Xi Victoria Lin. LEVER: Learning to verify language-to-code generation with execution.
_International Conference on Machine Learning (ICML), 2023b._
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton,
Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F.
Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions
with human feedback. Neural Information Processing Systems (NeurIPS), 2022.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward
model. Neural Information Processing Systems (NeurIPS), 2023.
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,`
Yossi Adi, Jingyu Liu, Tal Remez, Jer´ emy Rapin, Artyom Kozhevnikov, Ivan Evtimov,´
Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori, Wenhan Xiong,
Alexandre Defossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas´
Usunier, Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models
for code. arXiv preprint arXiv:2308.12950, 2023.
Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia,
Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex
Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Elsayed,
Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey
Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp,
Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris
Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha SohlDickstein, and Noah Fiedel. Beyond human data: Scaling self-training for problem-solving
with language models. arXiv preprint arXiv:2312.06585, 2023.
Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec
Radford, Dario Amodei, and Paul F. Christiano. Learning to summarize from human
feedback. Neural Information Processing Systems (NeurIPS), 2020.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas
Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude
Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman
Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas,
Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning
Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew
Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva,
Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor,
Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang,
Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,´
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat
models. arXiv preprint arXiv:2307.09288, 2023.
Jonathan Uesato, Nate Kushman, Ramana Kumar, H. Francis Song, Noah Y. Siegel, Lisa
Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems
with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, and
Zhifang Sui. Making large language models better reasoners with alignment. arXiv
_preprint arXiv:2309.02144, 2023a._
-----
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, and
Zhifang Sui. Math-shepherd: Verify and reinforce LLMs step-by-step without human
annotations. arXiv preprint arXiv:2312.08935, 2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang,
Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought
reasoning in language models. International Conference on Learning Representations (ICLR),
2023c.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H.
Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large
language models. Neural Information Processing Systems (NeurIPS), 2022.
Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. RLCD:
Reinforcement learning from contrast distillation for language model alignment. arXiv
_preprint arXiv:2307.12950, 2023._
Fei Yu, Anningzhe Gao, and Benyou Wang. Outcome-supervised verifiers for planning in
mathematical reasoning. arXiv preprint arXiv:2311.09724, 2023.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan,
Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning
with large language models. arXiv preprint arXiv:2308.01825, 2023.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. Star: Bootstrapping reasoning
with reasoning. Neural Information Processing Systems (NeurIPS), 2022.
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing
Zhang, and Yujiu Yang. Solving math word problems via cooperative reasoning induced
language models. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.),
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pp. 4471–4485, Toronto, Canada, July 2023. Association for Computational_
[Linguistics. doi: 10.18653/v1/2023.acl-long.245. URL https://aclanthology.org/2023.](https://aclanthology.org/2023.acl-long.245)
[acl-long.245.](https://aclanthology.org/2023.acl-long.245)
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei,
Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
-----
### A Algorithm
**Algorithm 1: V-STaR**
**Input: Original data DSFT, Training queries Dquery, base model Gbase, Num generations k,**
Num iterations T
_DGEN ←DSFT_
_GSFT ←_ SFT(Gbase, DSFT)
**for iter = 1 to T do**
_G ←_ SFT(Gbase, DGEN) _{fine-tune generator}_
_S ←_ sample(G, Dquery, k) _{generate candidates}_
label correctness( ) score candidates to get z
_D[′]_ _←_ _S_ _{_ _}_
_DGEN ←DGEN ∪D[[′]z=1]_ _{correct solutions →_ buffer}
_DVER ←DVER ∪D[′]_ _{all solutions →_ DPO buffer}
**end for**
_Dpref ←_ preference pairs(DVER)
_V ←_ DPO(GSFT, Dpref)
### B The prompt used for MBPP few-shot generation.
Following Ni et al. (2023b), we use the following prompt to sample completions per problem
for data generation during training.
-----
### C Test accuracy of 13B V-STaR and baselines
Figure 8: Test accuracy of 13B V-STaR compared to baselines. We report Best-of-64 for
verification-based methods and Pass@1 for others. (Left) Test accuracy for training tasks.
**(Right) Transfer evaluation of GSM8K and MBPP trained models on MATH subset and**
HumanEval respectively.
### D Candidate solutions ˆy1 and ˆy2 for a GSM8K problem x
**x = Andy walks 50 meters from his house to school. After school, he comes**
back to the house and goes to the market. If he walks 140 meters in total,
how many meters is the distance between the house and the market?
**yˆ** 1 = He walks to school and back, which is 2 × 50 meters = 100 meters. So
he walks 140 meters - 100 meters = 40 meters to the market. Answer=40
**yˆ** 2 = If he walks 50 meters from his house to school, and 140 meters in total,
he walks 140 - 50 = 90 meters from the school to the market. Answer=90
### E Example for Best-of-k using Eq. 3
Figure 9: An example showing how Eq. 3 estimates Best-of-k accuracy efficiently and
without variance for k = 4 from N = 10 candidate solutions.
-----
### F V-STaR vs. majority voting example
**x = Every day, Wendi feeds each of her chickens three cups of mixed**
chicken feed, containing seeds, mealworms and vegetables to help keep
them healthy. She gives the chickens their feed in three separate meals.
In the morning, she gives her flock of chickens 15 cups of feed. In the
afternoon, she gives her chickens another 25 cups of feed. How many cups
of feed does she need to give her chickens in the final meal of the day if the
size of Wendi’s flock is 20 chickens?
**V-STaR solution = Wendi feeds each of her 20 chickens 3 cups of feed per**
day for a total of 20x3=60 cups of feed. She gives the chickens 15 cups
in the morning and 25 cups in the afternoon for a total of 15+25=40 cups.
Therefore, she needs to give the chickens 60-40=20 cups of feed in the final
meal of the day. Answer=20
**Majority voting solution = The total amount of feed Wendi gives the**
chickens is 15 + 25 = 40 cups of feed. Wendi has 20 chickens so she needs to
give each chicken 40 / 20 = 2 cups of feed. Answer=2
### G Best-of-k using Eq. 3 vs. Best-of-k from Lightman et al. (2024)
Figure 10: Eq. 3 estimates Best-of-k accuracy efficiently and without variance. For Best-of-K
accuracy without the formula, the process is repeated 32 times.
-----
| [
"Rishabh, Agarwal",
"Alessandro, Sordoni",
"Xingdi, Yuan",
"Nikolay, Malkin",
"Aaron, Courville",
"Arian, Hosseini"
] | 2024-08-14T00:00:00 | COLM 2024 | false | 47 | 2 | null | http://arxiv.org/abs/2402.06457 | https://arxiv.org/abs/2402.06457 | https://www.semanticscholar.org/paper/4fe4f0f9d39d708a6c3d7b8dfbfa2616cd376e1e |
A Careful Examination of Large Language Model Performance on Grade School Arithmetic | Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning.However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability.To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark,the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more.When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 8%, with several families of models showing evidence of systematic overfitting across almost all model sizes.Further analysis suggests a positive relationship (Spearman's r^2=0.36) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that some models may have partially memorized GSM8k.Nevertheless, many models, especially those on the frontier, show minimal signs of overfitting, and all models broadly demonstrate generalization to novel math problems guaranteed to not be in their training data. | GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning, and ensures that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. | ## A Careful Examination of Large Language Model Performance on Grade School Arithmetic
**Hugh Zhang[∗]** **Jeff Da** **Dean Lee** **Vaughn Robinson** **Catherine Wu** **Will Song**
**Tiffany Zhao** **Pranav Raja** **Dylan Slack** **Qin Lyu** **Sean Hendryx** **Russell Kaplan**
**Michele (Mike) Lunati[†]** **Summer Yue[†]**
## Scale AI
**Abstract**
Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some
of this performance actually reflects dataset contamination, where data closely
resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School
_Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of_
the established GSM8k benchmark, the gold standard for measuring elementary
mathematical reasoning. We ensure that the two benchmarks are comparable across
important metrics such as human solve rates, number of steps in solution, answer
magnitude, and more. When evaluating leading open- and closed-source LLMs on
GSM1k, we observe accuracy drops of up to 13%, with several families of models
(e.g. Phi and Mistral) showing evidence of systematic overfitting across almost all
model sizes. At the same time, many models, especially those on the frontier, (e.g.
Gemini/GPT/Claude) show minimal signs of overfitting. Further analysis suggests
a positive relationship (Spearman’s r[2] = 0.32) between a model’s probability of
generating an example from GSM8k and its performance gap between GSM8k and
GSM1k, suggesting that many models may have partially memorized GSM8k.
**1** **Introduction**
Improving reasoning in large language models (LLMs) is one of the most important directions of
current research. As such, proper benchmarking of current LLM abilities is paramount for ensuring
progress continues in the correct direction. Currently, the field typically relies on public benchmarks
such as GSM8k (Cobbe et al. [2021]), MATH (Hendrycks et al. [2021b]), MBPP (Austin et al.
[2021]), HumanEval (Chen et al. [2021]), SWEBench (Jimenez et al. [2024])). However, because
LLMs are trained on large corpora of data scraped from the Internet, there are major concerns
that such benchmarks may inadvertently include examples that closely resemble the questions
found in such benchmarks. This contamination may result in models having weaker reasoning
capabilities than otherwise believed, due to simply being able to repeat the correct answer that it
previously encountered during pre- or post- training. To properly investigate the reasoning abilities
_∗Correspondence to [email protected]_ _†equal senior authorship_
Preprint. Under review.
-----
Figure 1: Notable models arranged by their drop in performance between GSM8k and GSM1k (lower
is worse). We notice that Mistral and Phi top the list of overfit models, with almost 10% drops on
GSM1k compared to GSM8k, while models such as Gemini, GPT, and Claude show little to no signs
of overfitting.
of models, we commission GSM1k, a newly constructed collection of 1250 grade school level
math problems designed to mirror that of GSM8k. We took extensive efforts to ensure that GSM1k
had a similar distribution of difficulty to GSM8k to ensure an apples-to-apples comparison. These
efforts are described in Section 3, alongside a detailed description of the data creation process. To
mitigate worries about data contamination, we created GSM1k solely with human annotators, without
assistance from any LLM or other synthetic data source.
**Dataset** **Example**
GSM8k James writes a 3-page letter to 2 different friends twice a week. How many
pages does he write a year?
GSM1k (ours) Lee bought 6 shares of Delta stock at $40 per share. If he wants to make $24
from this trade, how much should Delta stock be per share when he sells?
Figure 2: Example from both the GSM8k dataset and the new GSM1k dataset (ours). We also provide
an additional 50 examples from GSM1k in Appendix E.
We benchmark leading open-source and closed-source LLMs on GSM1k, including GPT-4 (OpenAI
et al. [2024]), Gemini (Team et al. [2024]), Claude, Mistral (Jiang et al. [2024, 2023]), Llama
(Touvron et al. [2023a,b]), Phi (Gunasekar et al. [2023], Abdin et al. [2024]) and many more. Our
analysis confirms that the widespread suspicion in the field that many models are contaminated by
benchmark data, with the worst model performing 13% worse on GSM1k compared to GSM8k.
Additionally, our results suggest that several families of models, most notably Mistral and Phi, show
-----
consistent evidence of overfitting for nearly all model versions and sizes. Further analysis finds a
positive relationship (Spearman’s r[2] = 0.32) between a model’s probability of generating examples
from GSM8k and its performance gap between GSM8k and GSM1k, strongly suggesting that one
important component of this overfitting is that models have partially memorized examples from
GSM8k. Nevertheless, our results find that all frontier models, as well as all sizes of the Llama2
family, show minimal signs of overfitting. Additionally, we also find that all models, including the
most overfit ones, are still capable of successfully generalizing to new mathematical grade school
problems, albeit occasionally at lower rates than their benchmark numbers would suggest.
We do not intend to release GSM1k publicly at this time to prevent a similar problem of data
contamination occurring in the future. However, we plan to run recurring evaluations of all major
open- and closed- source releases and to continually update our results. We will also open source our
entire evaluation code so that the public version of our results can be reproduced. Additionally, we
commit to open sourcing the entire benchmark when either 1) the top open source models score over
95% on GSM1k or 2) at the end of 2025, whichever comes earlier. See Section 3 for precise criteria
for release.
**2** **Related Work**
A major inspiration of this work was the celebrated study on overfitting done on ImageNet classifiers
in 2019 (Recht et al. [2019]). This work measured overfitting in ImageNet by creating new versions
of CIFAR10 and ImageNet and measuring the performance gap between the public test set and the
newly created sets they constructed. In this work, we do a similar analysis on GSM8k, one of the
leading benchmarks for elementary mathematical reasoning. GSM1k is modelled primarily after the
GSM8k dataset (Cobbe et al. [2021]), released by OpenAI in 2021, which consists of 8.5k grade
school math problems. Each problem is designed to be solvable using only basic arithmetic operations
(+, −, ×, ÷) with a difficulty level appropriate for grade school students. As of April 2024, top
models report benchmark accuracies of over 95% (Team et al. [2024]). Other popular benchmarks
for reasoning include MATH (Hendrycks et al. [2021b]), MMLU (Hendrycks et al. [2021a]), GPQA
(Rein et al. [2023]).
**2.1** **Data Contamination**
Because data contamination is a well known issue in the field (Balloccu et al. [2024], Magar and
Schwartz [2022], Sainz et al. [2023], Jacovi et al. [2023], Xu et al. [2024]), model builders will
frequently take great pains to minimize the likelihood of data contamination. For example, it is
common to remove all data with too high of an n-gram overlap with the benchmark data (Brown
et al. [2020]). Additionally, methods such as using embedding similarity attempt to remove all
contaminated data that is too similar in embedding space to the dataset (Shi et al. [2024]).
Xu et al. [2024] propose using similar variants of a benchmark questions to detect if models favor
the original wording as a proxy for data contamination. Srivastava et al. [2024] propose functional
evaluations, where benchmarks are written in the form of functions that can generate an infinite
number of specific evaluation datapoints, each with slightly different numbers. In this setup, whenever
a language model is evaluated, functional evaluations generate a specific problem instance to evaluate
the model on, which is then never used again. This reduces the worry of data contamination by
ensuring that no datapoint is ever used twice. Like ours, their results indicate the LLMs may be
severely overfit on benchmark data. The main advantage of our approach over a purely function
based evaluation is that functional evaluations can only generate a tiny portion of the full problem
space by producing variations of the same problem with slightly different numerical values. Their
results also suggest substantial amounts of data contamination, including for frontier models, in the
MATH dataset.
**3** **GSM1k**
GSM1k consists of 1250 problems requiring only elementary mathematical reasoning to solve. We
created GSM1k using human annotators sourced by Scale AI. Annotators were prompted with 3
example GSM8k problems and asked to produce novel problems of a similar difficulty level. The
precise instructions and UI given to the annotators is available in Appendix A. All problem anno
-----
Difficulty Histogram
0.35
0.30
0.25
0.20
0.15
0.10
0.05
0.00
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|Train Test|Col23|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||||||||Train Test||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
||||||||||||||||||||||||
Train
Test
Difficulty
Figure 3: Approximate difficulty distribution of GSM8k train and test sets, measured by number of
required steps to solve the problem. GSM1k annotators were instructed to create problems matching
the overall distribution of the combined train and test difficulty distribution. The process of estimating
problem difficulty is described in Section 3.2.
tators were instructed to create problems solvable with only basic arithmetic (addition, subtraction,
multiplication, and division) and which did not require any advanced math concepts. As is the case
with GSM8k, all problem solutions are positive integers[2]. No language models were used in the
process of constructing this dataset.
To prevent data contamination concerns with GSM1k, we will not be releasing the dataset publicly
at this time. However, we commit to releasing the full GSM1k dataset when at least one of the
two following conditions have passed, whichever comes earlier. 1) Three open-source models with
different pre-trained foundational model lineages reach 95% accuracy on GSM1k. 2) The end of
2025. At such a point, we believe that grade school mathematics will likely no longer be difficult
enough to materially benchmark model releases and commit to releasing all data into the public
domain under the MIT license. Additionally, to evaluate proprietary models, we were required to
send over the dataset via API. Our belief is that model providers typically do not use such datapoints
for model training. Nevertheless, in case GSM1k data is leaked through such means, we also hold
out a small number of data points that have passed all quality checks but do not appear in the final
GSM1k dataset. This data will also be released alongside GSM1k upon final release. We encourage
future benchmarks to follow a similar pattern, where they are not released publicly lest they be gamed,
but are precommitted to be released at a future date or upon a future condition. As part of this release,
we will also open source our evaluation framework, which is based off of a fork of the LM Evaluation
Harness by EleutherAI (Gao et al. [2023a]).
Finally, while we undertook extensive efforts to ensure maximum similarity between GSM8k and
GSM1k, these results are only an approximation of an ideal world in which the test set of GSM8k was
instead not publicly released and used for evaluations. We would recommend reading all results with
2GSM8k has a few problems, likely errors, for which this is not the case.
-----
the understanding that GSM8k and GSM1k are only highly similar, but not identically distributed
despite all our efforts below.
**3.1** **Quality Checks**
All questions passed through a total of 3 review layers. After initial creation, each task was manually
reviewed by a subset of trusted annotators selected for strong past performance. These reviewers
checked both for correctness as well as ensuring problems contained only grade school level math
and proper formatting. To ensure that questions were answered correctly, we also do a second review
layer by having an independent set of data annotators solve each question without seeing the intended
_solution. If this second solve produced a different answer to that of the initial solve, we discarded_
the problem. Finally, all problems were reviewed by a special team within Scale responsible for
conducting general quality audits for data production. Out of a total of 2108 initial problems, 1419
passed the second solve stage and 1375 passed the general quality audit.
**3.2** **Matching the Difficulty Distribution of GSM8k**
One important axis of recreating a benchmark is ensuring that new problems have a comparable
difficulty to the original benchmark. To construct problems of difficulty N, we requested annotators
to construct problems with N required resolution steps and prompted them with 3 examples from
GSM8k with estimated difficulty N . The distribution of problems requested from annotators matched
the estimated distribution in GSM8k. Difficulty is tricky to measure precisely, so we used an estimate
based on the number of operations needed to solve the problem. This was extracted programmatically
by counting the number of “calculator” tags in the problem solution. However, as not all problem
solutions were formatted consistently, this estimate is only a rough estimate of actual difficulty.
Additionally, the number of resolution steps in a problem does not necessarily directly correlate with
the true level of problem difficulty.
Past work has also found that LLMs struggle with problems with larger numbers (Gao et al. [2023b])
even if they can solve otherwise identical problems with smaller numbers. To remove this as a
potential confounding variable, our final processing step is to discard candidate problems from
GSM1k so that the answer magnitude distributions of GSM8k and GSM1k are as similar as possible.
This selection process is described in Figure 4. GSM1k consists of the 1250 problems that survive this
final winnowing. Additionally, we run several checks to ensure that our efforts to match benchmark
Figure 4: As the final step, we select 1250 problems to match the answer magnitude distribution of
GSM8k as much as possible. The remaining problems are discarded and not included in the final
dataset. Before discarding, we find that our generated problems tend to have slightly larger answers.
difficulty were successful.
-----
**3.2.1** **Human Differentiation Rates**
The first test we run is human distinguishability. We present human annotators with a set of five
questions, four of which were randomly selected from the original GSM8k dataset and one of which
was selected from the newly created GSM1k dataset, and rewarded annotators for finding the odd
one out. In an audit conducted using 19 annotators who were not involved in the problem creation
process, we found that annotators were able to correctly identify the lone GSM1k example 21.83% of
the time out of 1205 attempts (20% is pure chance). Separately, we also tested several paper authors
who had not yet seen the data and they were also unable to perform much better than random. This
suggests minimal differences between GSM8k and GSM1k, at least as measured by the human eye.
**3.2.2** **Human Solve Rates**
To ensure similar solve rates, we also asked annotators to solve questions under time pressure. 14
annotators who had not participated in the problem creation process attempted to solve as many
GSM8k problems as they could in 15 minutes and were rewarded based on the number of problems
they solved. We repeated this exact setup for GSM1k. Annotators were able to solve an average of
4.07 ± 0.93 problems on the GSM8k dataset. They were able to solve 4.36 ± 1.11 problems on the
GSM1k dataset, where the error rates are the standard deviations of the evaluation. This suggests
that GSM1k is comparable in difficulty (and perhaps even slightly easier) than GSM8k. As such,
substantial decreases in model accuracy on GSM1k compared to GSM8k are likely not explainable
due to differences in dataset difficulty.
**3.2.3** **LLM Solve Rates**
Finally, we sanity check our results by measuring solve rates of several models that are known to not
be contaminated by GSM8k due to being released before the publication of the GSM8k dataset. Due
to the relative scarcity of LLMs trained only on pre-2021 data, we evaluate only GPT-NeoX-20B
(Black et al. [2022]) and GPT-2 (Radford et al. [2019]). For these two language models, we find
minimal difference between their solve rates of GSM8k and GSM1k (Figure 7).
**4** **Results**
To evaluate models, we use a fork of EleutherAI’s LM Evaluation Harness using the default settings.
Both GSM8k and GSM1k questions are run with the same prompt of using 5 randomly drawn examples from the GSM8k train set, as is standard in the field. The full prompt is provided in Appendix B.
All open-source models are evaluated at temperature 0 for reproducibility. LM Evaluation Harness
extracts the last numeric answer in the response and compares this to the correct answer. As such,
model responses which produce the “correct” answer in a format that do not match the examples
are marked as incorrect. For open source models, we use vLLM to speed up model inference if a
model is compatible with the library. Otherwise, we default to inference using standard HuggingFace
libraries. Closed-source models were queried through the LiteLLM library which unifies the API call
format for all proprietary models evaluated. All API model results were from queries between April
16 - April 28, 2024 and use the default settings.
As model benchmark performance is highly dependent on choice of prompt and evaluation setting,
our reported GSM8k numbers may occasionally be below the reported model benchmark numbers,
as we use a standardized setting for all models instead of the prompt that maximizes each individual
model’s performance. For completeness, we also report results with an alternative prompting format
uses non-GSM8k examples as n-shot examples in Appendix C. Nevertheless, since we focus primarily
on the difference between a model’s performance on GSM1k and GSM8k when holding fixed an
evaluation strategy, we believe the above setup to be a fair comparison for all models. We will release
the full evaluation code for reproducibility.
We select models to evaluate based on popularity. Additionally, we evaluated several lesser known
models that sit near the top of the OpenLLMLeaderboard and discover evidence of Goodhart’s law:
many of these models perform substantially worse on GSM1k, suggesting that they are primarily
gaming the GSM8k benchmark rather than improving model reasoning capabilities. The full set
of results, including the performance table for all models, can be found in Appendix D. For fair
-----
Figure 5: Models with over 70% accuracy on GSM8k compared to the line of no overfit. This plot
is zoomed into the relevant sections (70-100% accuracy). Note that some models, especially the
Claude family, perform above the 45 degree line, which is consistent with our findings in Section 3
that GSM1k is slightly easier than GSM8k. In contrast, many models, especially the Mistral and Phi
families lie well below the line.
comparison, we partition the models by performance on GSM8k and compare them to other models
which perform similarly (Figures 5, 6, 7).
**5** **Analysis**
The interpretation of evaluation results, like the interpretations of dreams, is often a very subjective
endeavor. While we report our objective results in Section 4 and Appendix D, here we describe four
major takeaways from interpreting the results in a more subjective manner.
**5.1** **Lesson 1: Some Model Families are Systematically Overfit**
While it is often difficult to draw conclusions from singular data points or model releases, examining a
family of models and observing a pattern of overfitting enables us to make more definitive statements.
Several families of models, including the Phi and Mistral families of models, both show systematic
tendencies to perform stronger on GSM8k compared to GSM1k for almost every release and scale of
models. Other model families, such as Yi, Xwin, Gemma and CodeLlama also show this pattern to a
lesser extent.
**5.2** **Lesson 2: Other Models, Especially Frontier Models, Show No Signs of Overfitting**
Nevertheless, we find that many models, through all regions of performance, show minimal signs
of being overfit. In particular, we find that all frontier or close-to-frontier models (including the
-----
Figure 6: Models with between 40 and 70% accuracy on GSM8k compared to the line of no overfit.
This plot is zoomed into the relevant sections (40-70% accuracy). We observe that no models lie on
the line of no overfit in this regime.
proprietary Mistral Large) appear to perform similarly on both GSM8k and GSM1k. We posit two
potential hypotheses for this: 1) frontier models have sufficiently advanced reasoning capability so
that they can generalize to new problems even if they have already seen GSM8k problems in their
training set, 2) frontier model builders may be more careful about data contamination.
While it is impossible to know for certain without looking at the training set for each model, one
piece of evidence in favor of the former is that Mistral Large is the only model in the Mistral family
to show no signs of overfitting. Since the hypothesis that Mistral took unique care in ensuring only
that their largest model was free from data contamination seems unlikely, we lean instead towards the
hypothesis that sufficiently strong LLMs also learn elementary reasoning ability during training. If a
model learns strong enough reasoning capabilities to solve problems of a given difficulty, it will be
able to generalize to new problems even if GSM8k has appeared in their training set.
**5.3** **Lesson 3: Overfit Models Are Still Capable of Reasoning**
One worry about model overfitting is that models are incapable of reasoning and merely only
memorizing answers seen in the training data. Our results do not support this conjecture. The fact
that a model is overfit does not mean that it is poor at reasoning, merely that it is not as good as
the benchmarks might indicate it to be. In fact, we find that many of the most overfit models are
still capable of reasoning and solving novel problems. For example, while Phi-3 has an almost
10% drop in accuracy between GSM8k and GSM1k, we find that it is still able to correctly solve
over 68% of GSM1k problems – which are certain to not have appeared in its training distribution.
This performance is similar to that of much larger models such as dbrx-instruct, which contains
almost 35x as many parameters. Similarly, Mistral models remain some of the strongest open source
-----
Figure 7: Models with between 0 and 40% accuracy on GSM8k compared to the line of no overfit.
This plot is zoomed into the relevant sections (0-40% accuracy).
models, even accounting for their overfitting. This provides additional evidence for our lesson that
sufficiently strong models learn elementary reasoning, even if benchmark data accidentally leaked
into the training distribution, as is likely to be the case for the most overfit models.
**5.4** **Lesson 4: Data Contamination Is Likely Not The Full Explanation for Overfitting**
A priori, a natural hypothesis is that the primary cause for overfitting is data contamination, e.g.
that the test set was leaked in the pre-training or instruction fine-tuning part of the model creation.
Previous work has suggested that models put higher log-likelihoods on data that they have seen
during training (Carlini et al. [2023]). We test the hypothesis that data contamination is the cause
of overfitting by measuring a model’s probability of generating an example from the GSM8k test
set and compare it to how overfit it is on GSM8k compared to GSM1k, using the assumption that a
model’s probability of generating the GSM8k test set is a proxy for whether the sequence is likely
to have appeared in the training set. We normalize by c, the number of characters in the sequence,
to make the log-likelihood calculations comparable between sequences and models with different
tokenizers. Formally, we have:
1
log p(xi _x<i)_ (1)
_c_ _|_
_i_
X
with c being the number of characters in the sequence. Figure 8 shows the result of this plot against the
gap between GSM8k and GSM1k performance. We indeed find a positive relationship between the two
values. We observe a Spearman’s rank correlation of 0.32 between the per-character log-likelihood
of generating GSM8k and the performance gap between GSM8k and GSM1k (p = 0.03), and the
relationship suggests that every percentage point difference in GSM8k and GSM1k performance is
associated with an increase of 7.9 × 10[−][3] in the per-character log-likelihood. This result suggests
that some of the reason for overfitting is due to partial memorization of the test set. For completeness,
-----
Figure 8: Comparison between overfit on GSM8k (x-axis) and average sequence-level log-likelihood
on the GSM8k test set (y-axis). We find that there is a correlation between overfit on GSM8k and
sequence-level log-likelihood, suggesting that, in general, models that have a high overfit generally
have a higher probability of generating the test set. This suggests that some of the GSM8k test set
may have leaked into the model training data. The line of best fit is in blue. Additionally, we highlight
5 “outlier” models which we discuss further with Lesson 4.
we also report the standard Pearson r[2] = 0.15 and the Kendall’s τ of 0.28, but note that Pearson r[2] is
not the ideal metric due to the curve-of-best-fit not appearing linear.
Nevertheless, data contamination is likely not the full story. We observe this via the presence of
several outliers, which cause the r[2] = 0.32 value to be relatively low. Examining these outliers
carefully reveals that the model with the lowest per-character log-likelihood (Mixtral-8x22b) and the
model with the highest per-character log-likelihood (Mixtral-8x22b-Instruct) are not only variations of
the same model, but also have similar levels of overfit (Jiang et al. [2024]). Perhaps more intriguingly,
the most overfit model we discovered (Math-Shepherd-Mistral-7B-RL (Yu et al. [2023])) had a
relatively low per-character log-likelihood. Math Shepherd trains a reward model on process level
data using synthetic data. As such, we hypothesize that the reward modelling process may have
leaked information about the correct reasoning chains for GSM8k even if the problems themselves
did not ever appear in the dataset. Finally, we observe that the Llema models (Azerbayev et al. [2024])
have both high log-likelihoods and minimal overfit. These models are open-sourced alongside their
training data, and the authors report finding a very small number of GSM8k examples in the training
corpus. Nevertheless, they also find (and our study supports) that these few instances do not lead to
overfitting. The existence of these outliers suggests that overfitting on GSM8k is not purely due to
data contamination, but rather may be through other indirect means, such as model builders collecting
data similar in nature to benchmarks as training data or selecting final model checkpoints based on
performance on benchmarks, even if the model itself may have not seen the GSM8k dataset at any
point via training. Conversely, the reverse is also true: small amounts of data contamination do not
necessarily lead to overfitting.
**6** **Discussion**
We create GSM1k, a novel dataset designed to measure LLM overfitting on GSM8k. When benchmarking leading open- and closed-source models, we find substantial evidence that many models
-----
have been contaminated by benchmark data, with models showing performance drops of up to 13%
accuracy. Additionally, we find that several families of models, most notably the Mistral and Phi
families, show consistent overfitting across almost all model sizes and versions. An extended analysis
reveals a positive relationship between a model’s likelihood of generating data points in GSM8k and
its performance difference between GSM8k and GSM1k, suggesting evidence of data contamination
as one of the underlying causes. Nevertheless, we find that frontier models exhibit little to no evidence
of overfitting and that many models, even the most heavily overfit families, show strong signs of
generalizable mathematical reasoning.
**7** **Acknowledgements**
We would like to thank Dan Hendrycks, Adi Ganesh, Akilesh Praveen, Andrea Jaba, Charlotte
Zhuang, Will Zhou, Celia Chen and Kamile Lukoši˙ ut¯ e for their helpful comments and suggestions.˙
**References**
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha
Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu
Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon,
Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider,
Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos
Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee,
Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik
Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid
Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli
Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma,
Xia Song, Masahiro Tanaka, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael
Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong
Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, and
Xiren Zhou. Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone,
[April 2024. URL http://arxiv.org/abs/2404.14219. arXiv:2404.14219 [cs].](http://arxiv.org/abs/2404.14219)
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program Synthe[sis with Large Language Models, August 2021. URL http://arxiv.org/abs/2108.07732.](http://arxiv.org/abs/2108.07732)
arXiv:2108.07732 [cs].
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An Open Language Model For
[Mathematics, March 2024. URL http://arxiv.org/abs/2310.10631. arXiv:2310.10631](http://arxiv.org/abs/2310.10631)
[cs].
Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, and Ondˇrej Dušek. Leak, Cheat, Repeat:
Data Contamination and Evaluation Malpractices in Closed-Source LLMs, February 2024. URL
```
http://arxiv.org/abs/2402.03927. arXiv:2402.03927 [cs].
```
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He,
Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu
Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. GPT-NeoX-20B:
[An Open-Source Autoregressive Language Model, April 2022. URL http://arxiv.org/abs/](http://arxiv.org/abs/2204.06745)
```
2204.06745. arXiv:2204.06745 [cs].
```
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler,
Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray,
Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever,
and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato,
R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems,
-----
[volume 33, pages 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
```
neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
```
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan
[Zhang. Quantifying Memorization Across Neural Language Models, March 2023. URL http:](http://arxiv.org/abs/2202.07646)
```
//arxiv.org/abs/2202.07646. arXiv:2202.07646 [cs].
```
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri,
Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,
Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian,
Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios
Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino,
Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders,
Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa,
Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob
McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating
[Large Language Models Trained on Code, July 2021. URL http://arxiv.org/abs/2107.](http://arxiv.org/abs/2107.03374)
```
03374. arXiv:2107.03374 [cs].
```
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
[Schulman. Training Verifiers to Solve Math Word Problems, November 2021. URL http:](http://arxiv.org/abs/2110.14168)
```
//arxiv.org/abs/2110.14168. arXiv:2110.14168 [cs].
```
Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster,
Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff,
Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika,
Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot
[language model evaluation, December 2023a. URL https://zenodo.org/records/10256836.](https://zenodo.org/records/10256836)
tex.version: v0.4.0.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,
and Graham Neubig. PAL: Program-aided Language Models, January 2023b. [URL http:](http://arxiv.org/abs/2211.10435)
```
//arxiv.org/abs/2211.10435. arXiv:2211.10435 [cs].
```
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital
Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai,
[Yin Tat Lee, and Yuanzhi Li. Textbooks Are All You Need, October 2023. URL http://arxiv.](http://arxiv.org/abs/2306.11644)
```
org/abs/2306.11644. arXiv:2306.11644 [cs].
```
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob
[Steinhardt. Measuring Massive Multitask Language Understanding, January 2021a. URL http:](http://arxiv.org/abs/2009.03300)
```
//arxiv.org/abs/2009.03300. arXiv:2009.03300 [cs].
```
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring Mathematical Problem Solving with the MATH Dataset. NeurIPS,
2021b.
Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. Stop Uploading Test Data in
Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks.
In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on
_Empirical Methods in Natural Language Processing, pages 5075–5084, Singapore, December_
2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.308. URL
```
https://aclanthology.org/2023.emnlp-main.308.
```
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier,
Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas
[Wang, Timothée Lacroix, and William El Sayed. Mistral 7B, October 2023. URL http://arxiv.](http://arxiv.org/abs/2310.06825)
```
org/abs/2310.06825. arXiv:2310.06825 [cs].
```
-----
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna
Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne
Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,
Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mix[tral of Experts, January 2024. URL http://arxiv.org/abs/2401.04088. arXiv:2401.04088](http://arxiv.org/abs/2401.04088)
[cs].
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik
Narasimhan. SWE-bench: Can Language Models Resolve Real-World GitHub Issues?, April 2024.
[URL http://arxiv.org/abs/2310.06770. arXiv:2310.06770 [cs].](http://arxiv.org/abs/2310.06770)
Inbal Magar and Roy Schwartz. Data Contamination: From Memorization to Exploitation. In
Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th
_Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),_
pages 157–165, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi:
[10.18653/v1/2022.acl-short.18. URL https://aclanthology.org/2022.acl-short.18.](https://aclanthology.org/2022.acl-short.18)
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor
Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian,
Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny
Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks,
Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea
Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen,
Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung,
Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch,
Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty
Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte,
Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel
Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua
Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike
Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon
Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne
Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo
Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar,
Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik
Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich,
Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy
Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie
Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini,
Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne,
Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David
Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie
Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély,
Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo
Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano,
Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew
Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira
Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris
Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond,
Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario
Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John
Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav
Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl,
Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers,
Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian,
Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea
Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben
Wang, Jonathan Ward, Jason Wei, C. J. Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng,
-----
Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong,
Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu,
Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao,
Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. GPT-4 Technical Report, March
[2024. URL http://arxiv.org/abs/2303.08774. arXiv:2303.08774 [cs].](http://arxiv.org/abs/2303.08774)
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language
Models are Unsupervised Multitask Learners. page 24, 2019.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet Classifiers Generalize to ImageNet?, June 2019. [URL http://arxiv.org/abs/1902.10811.](http://arxiv.org/abs/1902.10811)
arXiv:1902.10811 [cs, stat].
David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien
Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A Graduate-Level Google-Proof Q&A
[Benchmark, November 2023. URL https://arxiv.org/abs/2311.12022v1.](https://arxiv.org/abs/2311.12022v1)
Oscar Sainz, Jon Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko
Agirre. NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each
Benchmark. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association
_for Computational Linguistics: EMNLP 2023, pages 10776–10787, Singapore, December 2023._
Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.722. URL
```
https://aclanthology.org/2023.findings-emnlp.722.
```
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen,
and Luke Zettlemoyer. Detecting Pretraining Data from Large Language Models, March 2024.
[URL http://arxiv.org/abs/2310.16789. arXiv:2310.16789 [cs].](http://arxiv.org/abs/2310.16789)
Saurabh Srivastava, Annarose M. B, Anto P V, Shashank Menon, Ajay Sukumar, Adwaith Samod T,
Alan Philipose, Stevin Prince, and Sooraj Thomas. Functional Benchmarks for Robust Evaluation
[of Reasoning Performance, and the Reasoning Gap, February 2024. URL http://arxiv.org/](http://arxiv.org/abs/2402.19450)
```
abs/2402.19450. arXiv:2402.19450 [cs].
```
Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut,
Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson,
Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy
Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, Paul R. Barham, Tom
Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, Ryan Doherty, Eli
Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Jack
Krawczyk, Cosmo Du, Ed Chi, Heng-Tze Cheng, Eric Ni, Purvi Shah, Patrick Kane, Betty Chan,
Manaal Faruqui, Aliaksei Severyn, Hanzhao Lin, YaGuang Li, Yong Cheng, Abe Ittycheriah,
Mahdis Mahdieh, Mia Chen, Pei Sun, Dustin Tran, Sumit Bagri, Balaji Lakshminarayanan,
Jeremiah Liu, Andras Orban, Fabian Güra, Hao Zhou, Xinying Song, Aurelien Boffy, Harish
Ganapathy, Steven Zheng, HyunJeong Choe, Ágoston Weisz, Tao Zhu, Yifeng Lu, Siddharth
Gopal, Jarrod Kahn, Maciej Kula, Jeff Pitman, Rushin Shah, Emanuel Taropa, Majd Al Merey,
Martin Baeuml, Zhifeng Chen, Laurent El Shafey, Yujing Zhang, Olcan Sercinoglu, George Tucker,
Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov, Ivo Danihelka, Becca Roelofs,
Anaïs White, Anders Andreassen, Tamara von Glehn, Lakshman Yagati, Mehran Kazemi, Lucas
Gonzalez, Misha Khalman, Jakub Sygnowski, Alexandre Frechette, Charlotte Smith, Laura Culp,
Lev Proleev, Yi Luan, Xi Chen, James Lottes, Nathan Schucher, Federico Lebron, Alban Rrustemi,
Natalie Clay, Phil Crone, Tomas Kocisky, Jeffrey Zhao, Bartek Perz, Dian Yu, Heidi Howard, Adam
Bloniarz, Jack W. Rae, Han Lu, Laurent Sifre, Marcello Maggioni, Fred Alcober, Dan Garrette,
Megan Barnes, Shantanu Thakoor, Jacob Austin, Gabriel Barth-Maron, William Wong, Rishabh
Joshi, Rahma Chaabouni, Deeni Fatiha, Arun Ahuja, Gaurav Singh Tomar, Evan Senter, Martin
Chadwick, Ilya Kornakov, Nithya Attaluri, Iñaki Iturrate, Ruibo Liu, Yunxuan Li, Sarah Cogan,
Jeremy Chen, Chao Jia, Chenjie Gu, Qiao Zhang, Jordan Grimstad, Ale Jakse Hartman, Xavier
Garcia, Thanumalayan Sankaranarayana Pillai, Jacob Devlin, Michael Laskin, Diego de Las Casas,
Dasha Valter, Connie Tao, Lorenzo Blanco, Adrià Puigdomènech Badia, David Reitter, Mianna
Chen, Jenny Brennan, Clara Rivera, Sergey Brin, Shariq Iqbal, Gabriela Surita, Jane Labanowski,
Abhi Rao, Stephanie Winkler, Emilio Parisotto, Yiming Gu, Kate Olszewska, Ravi Addanki,
Antoine Miech, Annie Louis, Denis Teplyashin, Geoff Brown, Elliot Catt, Jan Balaguer, Jackie
-----
Xiang, Pidong Wang, Zoe Ashwood, Anton Briukhov, Albert Webson, Sanjay Ganapathy, Smit
Sanghavi, Ajay Kannan, Ming-Wei Chang, Axel Stjerngren, Josip Djolonga, Yuting Sun, Ankur
Bapna, Matthew Aitchison, Pedram Pejman, Henryk Michalewski, Tianhe Yu, Cindy Wang, Juliette
Love, Junwhan Ahn, Dawn Bloxwich, Kehang Han, Peter Humphreys, Thibault Sellam, James
Bradbury, Varun Godbole, Sina Samangooei, Bogdan Damoc, Alex Kaskasoli, Sébastien M. R.
Arnold, Vijay Vasudevan, Shubham Agrawal, Jason Riesa, Dmitry Lepikhin, Richard Tanburn,
Srivatsan Srinivasan, Hyeontaek Lim, Sarah Hodkinson, Pranav Shyam, Johan Ferret, Steven Hand,
Ankush Garg, Tom Le Paine, Jian Li, Yujia Li, Minh Giang, Alexander Neitz, Zaheer Abbas, Sarah
York, Machel Reid, Elizabeth Cole, Aakanksha Chowdhery, Dipanjan Das, Dominika Rogozi´nska,
Vitaliy Nikolaev, Pablo Sprechmann, Zachary Nado, Lukas Zilka, Flavien Prost, Luheng He,
Marianne Monteiro, Gaurav Mishra, Chris Welty, Josh Newlan, Dawei Jia, Miltiadis Allamanis,
Clara Huiyi Hu, Raoul de Liedekerke, Justin Gilmer, Carl Saroufim, Shruti Rijhwani, Shaobo Hou,
Disha Shrivastava, Anirudh Baddepudi, Alex Goldin, Adnan Ozturel, Albin Cassirer, Yunhan Xu,
Daniel Sohn, Devendra Sachan, Reinald Kim Amplayo, Craig Swanson, Dessie Petrova, Shashi
Narayan, Arthur Guez, Siddhartha Brahma, Jessica Landon, Miteyan Patel, Ruizhe Zhao, Kevin
Villela, Luyu Wang, Wenhao Jia, Matthew Rahtz, Mai Giménez, Legg Yeung, James Keeling,
Petko Georgiev, Diana Mincu, Boxi Wu, Salem Haykal, Rachel Saputro, Kiran Vodrahalli, James
Qin, Zeynep Cankara, Abhanshu Sharma, Nick Fernando, Will Hawkins, Behnam Neyshabur,
Solomon Kim, Adrian Hutter, Priyanka Agrawal, Alex Castro-Ros, George van den Driessche,
Tao Wang, Fan Yang, Shuo-yiin Chang, Paul Komarek, Ross McIlroy, Mario Luˇci´c, Guodong
Zhang, Wael Farhan, Michael Sharman, Paul Natsev, Paul Michel, Yamini Bansal, Siyuan Qiao,
Kris Cao, Siamak Shakeri, Christina Butterfield, Justin Chung, Paul Kishan Rubenstein, Shivani
Agrawal, Arthur Mensch, Kedar Soparkar, Karel Lenc, Timothy Chung, Aedan Pope, Loren
Maggiore, Jackie Kay, Priya Jhakra, Shibo Wang, Joshua Maynez, Mary Phuong, Taylor Tobin,
Andrea Tacchetti, Maja Trebacz, Kevin Robinson, Yash Katariya, Sebastian Riedel, Paige Bailey,
Kefan Xiao, Nimesh Ghelani, Lora Aroyo, Ambrose Slone, Neil Houlsby, Xuehan Xiong, Zhen
Yang, Elena Gribovskaya, Jonas Adler, Mateo Wirth, Lisa Lee, Music Li, Thais Kagohara, Jay
Pavagadhi, Sophie Bridgers, Anna Bortsova, Sanjay Ghemawat, Zafarali Ahmed, Tianqi Liu,
Richard Powell, Vijay Bolina, Mariko Iinuma, Polina Zablotskaia, James Besley, Da-Woon Chung,
Timothy Dozat, Ramona Comanescu, Xiance Si, Jeremy Greer, Guolong Su, Martin Polacek,
Raphaël Lopez Kaufman, Simon Tokumine, Hexiang Hu, Elena Buchatskaya, Yingjie Miao,
Mohamed Elhawaty, Aditya Siddhant, Nenad Tomasev, Jinwei Xing, Christina Greer, Helen Miller,
Shereen Ashraf, Aurko Roy, Zizhao Zhang, Ada Ma, Angelos Filos, Milos Besta, Rory Blevins,
Ted Klimenko, Chih-Kuan Yeh, Soravit Changpinyo, Jiaqi Mu, Oscar Chang, Mantas Pajarskas,
Carrie Muir, Vered Cohen, Charline Le Lan, Krishna Haridasan, Amit Marathe, Steven Hansen,
Sholto Douglas, Rajkumar Samuel, Mingqiu Wang, Sophia Austin, Chang Lan, Jiepu Jiang, Justin
Chiu, Jaime Alonso Lorenzo, Lars Lowe Sjösund, Sébastien Cevey, Zach Gleicher, Thi Avrahami,
Anudhyan Boral, Hansa Srinivasan, Vittorio Selo, Rhys May, Konstantinos Aisopos, Léonard
Hussenot, Livio Baldini Soares, Kate Baumli, Michael B. Chang, Adrià Recasens, Ben Caine,
Alexander Pritzel, Filip Pavetic, Fabio Pardo, Anita Gergely, Justin Frye, Vinay Ramasesh, Dan
Horgan, Kartikeya Badola, Nora Kassner, Subhrajit Roy, Ethan Dyer, Víctor Campos Campos, Alex
Tomala, Yunhao Tang, Dalia El Badawy, Elspeth White, Basil Mustafa, Oran Lang, Abhishek Jindal,
Sharad Vikram, Zhitao Gong, Sergi Caelles, Ross Hemsley, Gregory Thornton, Fangxiaoyu Feng,
Wojciech Stokowiec, Ce Zheng, Phoebe Thacker, Ça˘glar Ünlü, Zhishuai Zhang, Mohammad Saleh,
James Svensson, Max Bileschi, Piyush Patil, Ankesh Anand, Roman Ring, Katerina Tsihlas, Arpi
Vezer, Marco Selvi, Toby Shevlane, Mikel Rodriguez, Tom Kwiatkowski, Samira Daruki, Keran
Rong, Allan Dafoe, Nicholas FitzGerald, Keren Gu-Lemberg, Mina Khan, Lisa Anne Hendricks,
Marie Pellat, Vladimir Feinberg, James Cobon-Kerr, Tara Sainath, Maribeth Rauh, Sayed Hadi
Hashemi, Richard Ives, Yana Hasson, Eric Noland, Yuan Cao, Nathan Byrd, Le Hou, Qingze
Wang, Thibault Sottiaux, Michela Paganini, Jean-Baptiste Lespiau, Alexandre Moufarek, Samer
Hassan, Kaushik Shivakumar, Joost van Amersfoort, Amol Mandhane, Pratik Joshi, Anirudh Goyal,
Matthew Tung, Andrew Brock, Hannah Sheahan, Vedant Misra, Cheng Li, Nemanja Raki´cevi´c,
Mostafa Dehghani, Fangyu Liu, Sid Mittal, Junhyuk Oh, Seb Noury, Eren Sezener, Fantine Huot,
Matthew Lamm, Nicola De Cao, Charlie Chen, Sidharth Mudgal, Romina Stella, Kevin Brooks,
Gautam Vasudevan, Chenxi Liu, Mainak Chain, Nivedita Melinkeri, Aaron Cohen, Venus Wang,
Kristie Seymore, Sergey Zubkov, Rahul Goel, Summer Yue, Sai Krishnakumaran, Brian Albert,
Nate Hurley, Motoki Sano, Anhad Mohananey, Jonah Joughin, Egor Filonov, Tomasz K˛epa, Yomna
Eldawy, Jiawern Lim, Rahul Rishi, Shirin Badiezadegan, Taylor Bos, Jerry Chang, Sanil Jain, Sri
Gayatri Sundara Padmanabhan, Subha Puttagunta, Kalpesh Krishna, Leslie Baker, Norbert Kalb,
-----
Vamsi Bedapudi, Adam Kurzrok, Shuntong Lei, Anthony Yu, Oren Litvin, Xiang Zhou, Zhichun
Wu, Sam Sobell, Andrea Siciliano, Alan Papir, Robby Neale, Jonas Bragagnolo, Tej Toor, Tina
Chen, Valentin Anklin, Feiran Wang, Richie Feng, Milad Gholami, Kevin Ling, Lijuan Liu, Jules
Walter, Hamid Moghaddam, Arun Kishore, Jakub Adamek, Tyler Mercado, Jonathan Mallinson,
Siddhinita Wandekar, Stephen Cagle, Eran Ofek, Guillermo Garrido, Clemens Lombriser, Maksim
Mukha, Botu Sun, Hafeezul Rahman Mohammad, Josip Matak, Yadi Qian, Vikas Peswani, Pawel
Janus, Quan Yuan, Leif Schelin, Oana David, Ankur Garg, Yifan He, Oleksii Duzhyi, Anton
Älgmyr, Timothée Lottaz, Qi Li, Vikas Yadav, Luyao Xu, Alex Chinien, Rakesh Shivanna,
Aleksandr Chuklin, Josie Li, Carrie Spadine, Travis Wolfe, Kareem Mohamed, Subhabrata Das,
Zihang Dai, Kyle He, Daniel von Dincklage, Shyam Upadhyay, Akanksha Maurya, Luyan Chi,
Sebastian Krause, Khalid Salama, Pam G. Rabinovitch, Pavan Kumar Reddy M, Aarush Selvan,
Mikhail Dektiarev, Golnaz Ghiasi, Erdem Guven, Himanshu Gupta, Boyi Liu, Deepak Sharma,
Idan Heimlich Shtacher, Shachi Paul, Oscar Akerlund, François-Xavier Aubet, Terry Huang, Chen
Zhu, Eric Zhu, Elico Teixeira, Matthew Fritze, Francesco Bertolini, Liana-Eleonora Marinescu,
Martin Bölle, Dominik Paulus, Khyatti Gupta, Tejasi Latkar, Max Chang, Jason Sanders, Roopa
Wilson, Xuewei Wu, Yi-Xuan Tan, Lam Nguyen Thiet, Tulsee Doshi, Sid Lall, Swaroop Mishra,
Wanming Chen, Thang Luong, Seth Benjamin, Jasmine Lee, Ewa Andrejczuk, Dominik Rabiej,
Vipul Ranjan, Krzysztof Styrc, Pengcheng Yin, Jon Simon, Malcolm Rose Harriott, Mudit Bansal,
Alexei Robsky, Geoff Bacon, David Greene, Daniil Mirylenka, Chen Zhou, Obaid Sarvana,
Abhimanyu Goyal, Samuel Andermatt, Patrick Siegler, Ben Horn, Assaf Israel, Francesco Pongetti,
Chih-Wei "Louis" Chen, Marco Selvatici, Pedro Silva, Kathie Wang, Jackson Tolins, Kelvin Guu,
Roey Yogev, Xiaochen Cai, Alessandro Agostini, Maulik Shah, Hung Nguyen, Noah Ó Donnaile,
Sébastien Pereira, Linda Friso, Adam Stambler, Adam Kurzrok, Chenkai Kuang, Yan Romanikhin,
Mark Geller, Z. J. Yan, Kane Jang, Cheng-Chun Lee, Wojciech Fica, Eric Malmi, Qijun Tan, Dan
Banica, Daniel Balle, Ryan Pham, Yanping Huang, Diana Avram, Hongzhi Shi, Jasjot Singh, Chris
Hidey, Niharika Ahuja, Pranab Saxena, Dan Dooley, Srividya Pranavi Potharaju, Eileen O’Neill,
Anand Gokulchandran, Ryan Foley, Kai Zhao, Mike Dusenberry, Yuan Liu, Pulkit Mehta, Ragha
Kotikalapudi, Chalence Safranek-Shrader, Andrew Goodman, Joshua Kessinger, Eran Globen,
Prateek Kolhar, Chris Gorgolewski, Ali Ibrahim, Yang Song, Ali Eichenbaum, Thomas Brovelli,
Sahitya Potluri, Preethi Lahoti, Cip Baetu, Ali Ghorbani, Charles Chen, Andy Crawford, Shalini
Pal, Mukund Sridhar, Petru Gurita, Asier Mujika, Igor Petrovski, Pierre-Louis Cedoz, Chenmei Li,
Shiyuan Chen, Niccolò Dal Santo, Siddharth Goyal, Jitesh Punjabi, Karthik Kappaganthu, Chester
Kwak, Pallavi LV, Sarmishta Velury, Himadri Choudhury, Jamie Hall, Premal Shah, Ricardo
Figueira, Matt Thomas, Minjie Lu, Ting Zhou, Chintu Kumar, Thomas Jurdi, Sharat Chikkerur,
Yenai Ma, Adams Yu, Soo Kwak, Victor Ähdel, Sujeevan Rajayogam, Travis Choma, Fei Liu,
Aditya Barua, Colin Ji, Ji Ho Park, Vincent Hellendoorn, Alex Bailey, Taylan Bilal, Huanjie Zhou,
Mehrdad Khatir, Charles Sutton, Wojciech Rzadkowski, Fiona Macintosh, Konstantin Shagin, Paul
Medina, Chen Liang, Jinjing Zhou, Pararth Shah, Yingying Bi, Attila Dankovics, Shipra Banga,
Sabine Lehmann, Marissa Bredesen, Zifan Lin, John Eric Hoffmann, Jonathan Lai, Raynald Chung,
Kai Yang, Nihal Balani, Arthur Bražinskas, Andrei Sozanschi, Matthew Hayes, Héctor Fernández
Alcalde, Peter Makarov, Will Chen, Antonio Stella, Liselotte Snijders, Michael Mandl, Ante
Kärrman, Paweł Nowak, Xinyi Wu, Alex Dyck, Krishnan Vaidyanathan, Raghavender R, Jessica
Mallet, Mitch Rudominer, Eric Johnston, Sushil Mittal, Akhil Udathu, Janara Christensen, Vishal
Verma, Zach Irving, Andreas Santucci, Gamaleldin Elsayed, Elnaz Davoodi, Marin Georgiev, Ian
Tenney, Nan Hua, Geoffrey Cideron, Edouard Leurent, Mahmoud Alnahlawi, Ionut Georgescu,
Nan Wei, Ivy Zheng, Dylan Scandinaro, Heinrich Jiang, Jasper Snoek, Mukund Sundararajan,
Xuezhi Wang, Zack Ontiveros, Itay Karo, Jeremy Cole, Vinu Rajashekhar, Lara Tumeh, Eyal BenDavid, Rishub Jain, Jonathan Uesato, Romina Datta, Oskar Bunyan, Shimu Wu, John Zhang, Piotr
Stanczyk, Ye Zhang, David Steiner, Subhajit Naskar, Michael Azzam, Matthew Johnson, Adam
Paszke, Chung-Cheng Chiu, Jaume Sanchez Elias, Afroz Mohiuddin, Faizan Muhammad, Jin
Miao, Andrew Lee, Nino Vieillard, Jane Park, Jiageng Zhang, Jeff Stanway, Drew Garmon, Abhijit
Karmarkar, Zhe Dong, Jong Lee, Aviral Kumar, Luowei Zhou, Jonathan Evens, William Isaac,
Geoffrey Irving, Edward Loper, Michael Fink, Isha Arkatkar, Nanxin Chen, Izhak Shafran, Ivan
Petrychenko, Zhe Chen, Johnson Jia, Anselm Levskaya, Zhenkai Zhu, Peter Grabowski, Yu Mao,
Alberto Magni, Kaisheng Yao, Javier Snaider, Norman Casagrande, Evan Palmer, Paul Suganthan,
Alfonso Castaño, Irene Giannoumis, Wooyeol Kim, Mikołaj Rybi´nski, Ashwin Sreevatsa, Jennifer
Prendki, David Soergel, Adrian Goedeckemeyer, Willi Gierke, Mohsen Jafari, Meenu Gaba, Jeremy
Wiesner, Diana Gage Wright, Yawen Wei, Harsha Vashisht, Yana Kulizhskaya, Jay Hoover, Maigo
Le, Lu Li, Chimezie Iwuanyanwu, Lu Liu, Kevin Ramirez, Andrey Khorlin, Albert Cui, Tian
-----
LIN, Marcus Wu, Ricardo Aguilar, Keith Pallo, Abhishek Chakladar, Ginger Perng, Elena Allica
Abellan, Mingyang Zhang, Ishita Dasgupta, Nate Kushman, Ivo Penchev, Alena Repina, Xihui Wu,
Tom van der Weide, Priya Ponnapalli, Caroline Kaplan, Jiri Simsa, Shuangfeng Li, Olivier Dousse,
Fan Yang, Jeff Piper, Nathan Ie, Rama Pasumarthi, Nathan Lintz, Anitha Vijayakumar, Daniel
Andor, Pedro Valenzuela, Minnie Lui, Cosmin Paduraru, Daiyi Peng, Katherine Lee, Shuyuan
Zhang, Somer Greene, Duc Dung Nguyen, Paula Kurylowicz, Cassidy Hardin, Lucas Dixon, Lili
Janzer, Kiam Choo, Ziqiang Feng, Biao Zhang, Achintya Singhal, Dayou Du, Dan McKinnon,
Natasha Antropova, Tolga Bolukbasi, Orgad Keller, David Reid, Daniel Finchelstein, Maria Abi
Raad, Remi Crocker, Peter Hawkins, Robert Dadashi, Colin Gaffney, Ken Franko, Anna Bulanova,
Rémi Leblond, Shirley Chung, Harry Askham, Luis C. Cobo, Kelvin Xu, Felix Fischer, Jun Xu,
Christina Sorokin, Chris Alberti, Chu-Cheng Lin, Colin Evans, Alek Dimitriev, Hannah Forbes,
Dylan Banarse, Zora Tung, Mark Omernick, Colton Bishop, Rachel Sterneck, Rohan Jain, Jiawei
Xia, Ehsan Amid, Francesco Piccinno, Xingyu Wang, Praseem Banzal, Daniel J. Mankowitz, Alex
Polozov, Victoria Krakovna, Sasha Brown, MohammadHossein Bateni, Dennis Duan, Vlad Firoiu,
Meghana Thotakuri, Tom Natan, Matthieu Geist, Ser tan Girgin, Hui Li, Jiayu Ye, Ofir Roval,
Reiko Tojo, Michael Kwong, James Lee-Thorp, Christopher Yew, Danila Sinopalnikov, Sabela
Ramos, John Mellor, Abhishek Sharma, Kathy Wu, David Miller, Nicolas Sonnerat, Denis Vnukov,
Rory Greig, Jennifer Beattie, Emily Caveness, Libin Bai, Julian Eisenschlos, Alex Korchemniy,
Tomy Tsai, Mimi Jasarevic, Weize Kong, Phuong Dao, Zeyu Zheng, Frederick Liu, Fan Yang,
Rui Zhu, Tian Huey Teh, Jason Sanmiya, Evgeny Gladchenko, Nejc Trdin, Daniel Toyama, Evan
Rosen, Sasan Tavakkol, Linting Xue, Chen Elkind, Oliver Woodman, John Carpenter, George
Papamakarios, Rupert Kemp, Sushant Kafle, Tanya Grunina, Rishika Sinha, Alice Talbert, Diane
Wu, Denese Owusu-Afriyie, Cosmo Du, Chloe Thornton, Jordi Pont-Tuset, Pradyumna Narayana,
Jing Li, Saaber Fatehi, John Wieting, Omar Ajmeri, Benigno Uria, Yeongil Ko, Laura Knight,
Amélie Héliou, Ning Niu, Shane Gu, Chenxi Pang, Yeqing Li, Nir Levine, Ariel Stolovich, Rebeca
Santamaria-Fernandez, Sonam Goenka, Wenny Yustalim, Robin Strudel, Ali Elqursh, Charlie
Deck, Hyo Lee, Zonglin Li, Kyle Levin, Raphael Hoffmann, Dan Holtmann-Rice, Olivier Bachem,
Sho Arora, Christy Koh, Soheil Hassas Yeganeh, Siim Põder, Mukarram Tariq, Yanhua Sun,
Lucian Ionita, Mojtaba Seyedhosseini, Pouya Tafti, Zhiyu Liu, Anmol Gulati, Jasmine Liu, Xinyu
Ye, Bart Chrzaszcz, Lily Wang, Nikhil Sethi, Tianrun Li, Ben Brown, Shreya Singh, Wei Fan,
Aaron Parisi, Joe Stanton, Vinod Koverkathu, Christopher A. Choquette-Choo, Yunjie Li, T. J.
Lu, Abe Ittycheriah, Prakash Shroff, Mani Varadarajan, Sanaz Bahargam, Rob Willoughby, David
Gaddy, Guillaume Desjardins, Marco Cornero, Brona Robenek, Bhavishya Mittal, Ben Albrecht,
Ashish Shenoy, Fedor Moiseev, Henrik Jacobsson, Alireza Ghaffarkhah, Morgane Rivière, Alanna
Walton, Clément Crepy, Alicia Parrish, Zongwei Zhou, Clement Farabet, Carey Radebaugh,
Praveen Srinivasan, Claudia van der Salm, Andreas Fidjeland, Salvatore Scellato, Eri LatorreChimoto, Hanna Klimczak-Pluci´nska, David Bridson, Dario de Cesare, Tom Hudson, Piermaria
Mendolicchio, Lexi Walker, Alex Morris, Matthew Mauger, Alexey Guseynov, Alison Reid, Seth
Odoom, Lucia Loher, Victor Cotruta, Madhavi Yenugula, Dominik Grewe, Anastasia Petrushkina,
Tom Duerig, Antonio Sanchez, Steve Yadlowsky, Amy Shen, Amir Globerson, Lynette Webb,
Sahil Dua, Dong Li, Surya Bhupatiraju, Dan Hurt, Haroon Qureshi, Ananth Agarwal, Tomer Shani,
Matan Eyal, Anuj Khare, Shreyas Rammohan Belle, Lei Wang, Chetan Tekur, Mihir Sanjay Kale,
Jinliang Wei, Ruoxin Sang, Brennan Saeta, Tyler Liechty, Yi Sun, Yao Zhao, Stephan Lee, Pandu
Nayak, Doug Fritz, Manish Reddy Vuyyuru, John Aslanides, Nidhi Vyas, Martin Wicke, Xiao Ma,
Evgenii Eltyshev, Nina Martin, Hardie Cate, James Manyika, Keyvan Amiri, Yelin Kim, Xi Xiong,
Kai Kang, Florian Luisier, Nilesh Tripuraneni, David Madras, Mandy Guo, Austin Waters, Oliver
Wang, Joshua Ainslie, Jason Baldridge, Han Zhang, Garima Pruthi, Jakob Bauer, Feng Yang, Riham
Mansour, Jason Gelman, Yang Xu, George Polovets, Ji Liu, Honglong Cai, Warren Chen, XiangHai
Sheng, Emily Xue, Sherjil Ozair, Christof Angermueller, Xiaowei Li, Anoop Sinha, Weiren Wang,
Julia Wiesinger, Emmanouil Koukoumidis, Yuan Tian, Anand Iyer, Madhu Gurumurthy, Mark
Goldenson, Parashar Shah, M. K. Blake, Hongkun Yu, Anthony Urbanowicz, Jennimaria Palomaki,
Chrisantha Fernando, Ken Durden, Harsh Mehta, Nikola Momchev, Elahe Rahimtoroghi, Maria
Georgaki, Amit Raul, Sebastian Ruder, Morgan Redshaw, Jinhyuk Lee, Denny Zhou, Komal Jalan,
Dinghua Li, Blake Hechtman, Parker Schuh, Milad Nasr, Kieran Milan, Vladimir Mikulik, Juliana
Franco, Tim Green, Nam Nguyen, Joe Kelley, Aroma Mahendru, Andrea Hu, Joshua Howland, Ben
Vargas, Jeffrey Hui, Kshitij Bansal, Vikram Rao, Rakesh Ghiya, Emma Wang, Ke Ye, Jean Michel
Sarr, Melanie Moranski Preston, Madeleine Elish, Steve Li, Aakash Kaku, Jigar Gupta, Ice Pasupat,
Da-Cheng Juan, Milan Someswar, Tejvi M., Xinyun Chen, Aida Amini, Alex Fabrikant, Eric Chu,
Xuanyi Dong, Amruta Muthal, Senaka Buthpitiya, Sarthak Jauhari, Nan Hua, Urvashi Khandelwal,
-----
Ayal Hitron, Jie Ren, Larissa Rinaldi, Shahar Drath, Avigail Dabush, Nan-Jiang Jiang, Harshal
Godhia, Uli Sachs, Anthony Chen, Yicheng Fan, Hagai Taitelbaum, Hila Noga, Zhuyun Dai, James
Wang, Chen Liang, Jenny Hamer, Chun-Sung Ferng, Chenel Elkind, Aviel Atias, Paulina Lee, Vít
Listík, Mathias Carlen, Jan van de Kerkhof, Marcin Pikus, Krunoslav Zaher, Paul Müller, Sasha
Zykova, Richard Stefanec, Vitaly Gatsko, Christoph Hirnschall, Ashwin Sethi, Xingyu Federico
Xu, Chetan Ahuja, Beth Tsai, Anca Stefanoiu, Bo Feng, Keshav Dhandhania, Manish Katyal,
Akshay Gupta, Atharva Parulekar, Divya Pitta, Jing Zhao, Vivaan Bhatia, Yashodha Bhavnani,
Omar Alhadlaq, Xiaolin Li, Peter Danenberg, Dennis Tu, Alex Pine, Vera Filippova, Abhipso
Ghosh, Ben Limonchik, Bhargava Urala, Chaitanya Krishna Lanka, Derik Clive, Yi Sun, Edward
Li, Hao Wu, Kevin Hongtongsak, Ianna Li, Kalind Thakkar, Kuanysh Omarov, Kushal Majmundar,
Michael Alverson, Michael Kucharski, Mohak Patel, Mudit Jain, Maksim Zabelin, Paolo Pelagatti,
Rohan Kohli, Saurabh Kumar, Joseph Kim, Swetha Sankar, Vineet Shah, Lakshmi Ramachandruni,
Xiangkai Zeng, Ben Bariach, Laura Weidinger, Amar Subramanya, Sissie Hsiao, Demis Hassabis,
Koray Kavukcuoglu, Adam Sadovsky, Quoc Le, Trevor Strohman, Yonghui Wu, Slav Petrov,
Jeffrey Dean, and Oriol Vinyals. Gemini: A Family of Highly Capable Multimodal Models, April
[2024. URL http://arxiv.org/abs/2312.11805. arXiv:2312.11805 [cs].](http://arxiv.org/abs/2312.11805)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand
Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and Efficient Foundation Language
[Models, February 2023a. URL http://arxiv.org/abs/2302.13971. arXiv:2302.13971 [cs].](http://arxiv.org/abs/2302.13971)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,
Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open Foundation and Fine-Tuned Chat Models,
[July 2023b. URL http://arxiv.org/abs/2307.09288. arXiv:2307.09288 [cs].](http://arxiv.org/abs/2307.09288)
Ruijie Xu, Zengzhi Wang, Run-Ze Fan, and Pengfei Liu. Benchmarking Benchmark Leakage in Large
[Language Models, April 2024. URL http://arxiv.org/abs/2404.18824. arXiv:2404.18824](http://arxiv.org/abs/2404.18824)
[cs].
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok,
Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap Your Own Mathematical
[Questions for Large Language Models, October 2023. URL http://arxiv.org/abs/2309.](http://arxiv.org/abs/2309.12284)
```
12284. arXiv:2309.12284 [cs].
```
-----
**A** **Annotator Instructions**
We provide the annotator instructions given below.
```
Welcome to the Grade School Math Question Development project. The goal
of this project is to create questions and answers similar to what is
found in an 8th−grade math quiz. Our goal is to develop high−quality
questions that are almost the same as what is found in the dataset
but are entirely unique. You will see three example questions and
their corresponding answers in each task. These examples will guide
you to create completely new questions and answers. It’s important to
note that you cannot use chatbots or language models to help you
develop these Q&A pairs. You may be removed from the project if we
detect any use of chatbots. Crucially, your Q&A pairs must be
original creations and cannot be paraphrased versions of the examples
.
Your workflow for this project will be as follows:
Review the examples: In each task you will be shown examples from an 8th−
grade question−and−answer dataset. Review the examples to inform how
you can create your question and answer pair.
Problem Creation: Problems should follow step guidance in the task. Don’t
reuse a problem setting. If you wrote a problem about Rogers trip to
the grocery store, don’t write another problem using the same
premise. All questions should have a resolution of 1 or higher. We do
not want any questions with a negative integer or zero as the answer
.
Craft the resolution steps: Calculations should be simple enough an 8th
grader can complete with a pen and paper. Only use elementary
arithmetic operations (addition, subtraction, multiplication,
division)
Provide the final Answer: Answers should be a single integer value. Any
units should be specified as part of the question (e.g. "How much
money, in dollars, does Robert have?"). Simple decimal numbers (e.g.
3.25) can be part of the intermediate steps in the problem, but final
answers should always be integers.
Check your work: We will utilize quality control process to ensure
accuracy but it is crucial to check your work!
```
-----
Figure 9: What annotators saw before seeing three example prompts drawn from GSM8k.
-----
**B** **N-shot Prompt (examples selected randomly from GSM8k train)**
Below is an example prompt. For each question, we select five random examples from GSM8k to
use as n-shot examples, which vary for each new question from the GSM1k/GSM8k test set. While
evaluation methods vary between models, this is the most common approach to evaluating GSM8k.
```
Question: Jen and Tyler are gymnasts practicing flips. Jen is practicing the triple-flip
while Tyler is practicing the double-flip. Jen did sixteen triple-flips during practice.
Tyler flipped in the air half the number of times Jen did. How many double-flips did Tyler do?
Answer: Jen did 16 triple-flips, so she did 16 * 3 = <<16*3=48>>48 flips.
Tyler did half the number of flips, so he did 48 / 2 = <<48/2=24>>24 flips.
A double flip has two flips, so Tyler did 24 / 2 = <<24/2=12>>12 double-flips.
#### 12
Question: Four people in a law firm are planning a party. Mary will buy a platter of pasta
for $20 and a loaf of bread for $2. Elle and Andrea will split the cost for buying 4 cans
of soda which cost $1.50 each, and chicken wings for $10. Joe will buy a cake that costs
$5. How much more will Mary spend than the rest of the firm put together?
Answer: Mary will spend $20 + $2 = $<<20+2=22>>22.
Elle and Andrea will spend $1.5 x 4 = $<<1.5*4=6>>6 for the soda.
Elle and Andrea will spend $6 + $10 = $<<6+10=16>>16 for the soda and chicken wings.
Elle, Andrea, and Joe together will spend $16 + $5 = $<<16+5=21>>21.
So, Mary will spend $22 - $21 = $<<22-21=1>>1 more than all of them combined.
#### 1
Question: A charcoal grill burns fifteen coals to ash every twenty minutes of grilling.
The grill ran for long enough to burn three bags of coals. Each bag of coal contains 60
coals. How long did the grill run?
Answer: The grill burned 3 * 60 = <<3*60=180>>180 coals.
It takes 20 minutes to burn 15 coals, so the grill ran for 180 / 15 * 20 =
<<180/15*20=240>>240 minutes.
#### 240
Question: A bear is preparing to hibernate for the winter and needs to gain 1000 pounds.
At the end of summer, the bear feasts on berries and small woodland animals. During autumn,
it devours acorns and salmon. It gained a fifth of the weight it needed from berries during
summer, and during autumn, it gained twice that amount from acorns. Salmon made up half of
the remaining weight it had needed to gain. How many pounds did it gain eating small animals?
Answer: The bear gained 1 / 5 * 1000 = <<1/5*1000=200>>200 pounds from berries.
It gained 2 * 200 = <<2*200=400>>400 pounds from acorns.
It still needed 1000 - 200 - 400 = <<1000-200-400=400>>400 pounds.
Thus, it gained 400 / 2 = <<400/2=200>>200 pounds from salmon.
Therefore, the bear gained 400 - 200 = <<400-200=200>>200 pounds from small animals.
#### 200
Question: Brendan can cut 8 yards of grass per day, he bought a lawnmower and it helped
him to cut more yards by Fifty percent per day. How many yards will Brendan be able to cut
after a week?
Answer: The additional yard Brendan can cut after buying the lawnmower is 8 x 0.50 =
<<8*0.50=4>>4 yards.
So, the total yards he can cut with the lawnmower is 8 + 4 = <<8+4=12>>12.
Therefore, the total number of yards he can cut in a week is 12 x 7 = <<12*7=84>>84 yards.
#### 84
```
-----
**C** **Results with An Alternative Prompt**
As an ablation, we evaluate all models with an alternative prompt scheme and compare results with
our primary findings. This prompt is available under the LM Evaluation Harness as a “chain-ofthought” prompt. However, manually examining the prompt (provided in full below) reveals that the
primary difference with the standard n-shot prompt lies not in chain-of-thought reasoning but rather
using a set of non-GSM8k problems as guiding examples as well as providing an alternative answer
format. We choose to use the standard prompt to match typical evaluation methods widespread in the
field but also report these results for completeness.
```
Q: There are 15 trees in the grove. Grove workers will plant trees in the
grove today. After they are done, there will be 21 trees. How many
trees did the grove workers plant today?
A: There are 15 trees originally. Then there were 21 trees after some
more were planted. So there must have been 21 − 15 = 6. The answer is
6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how
many cars are in the parking lot?
A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer
is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many
pieces do they have left in total?
A: Originally, Leah had 32 chocolates. Her sister had 42. So in total
they had 32 + 42 = 74. After eating 35, they had 74 − 35 = 39. The
answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12
lollipops. How many lollipops did Jason give to Denny?
A: Jason started with 20 lollipops. Then he had 12 after giving some to
Denny. So he gave Denny 20 − 12 = 8. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom
and dad. How many toys does he have now?
A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad,
then that is 4 more toys. 5 + 4 = 9. The answer is 9.
Q: There were nine computers in the server room. Five more computers were
installed each day, from monday to thursday. How many computers are
now in the server room?
A: There were originally 9 computers. For each of 4 days, 5 more
computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is
29. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On
wednesday, he lost 2 more. How many golf balls did he have at the end
of wednesday?
A: Michael started with 58 golf balls. After losing 23 on tuesday, he had
58 − 23 = 35. After losing 2 more, he had 35 − 2 = 33 golf balls.
The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money
does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15
dollars. So she has 23 − 15 dollars left. 23 − 15 is 8. The answer is
8.
```
We report our results in Table D. On average, we find that this prompt causes the gap between
GSM8k and GSM1k to decrease by about 1% across all models. However, for some models (e.g.
Mixtral-8x22B-v0.1), this reduces the amount of observed overfitting dramatically. While the exact
cause of this difference is impossible to know, especially without access to model details such as
their training set, our hypothesis is that prompting a model with GSM8k is more likely to activate the
“memorization” portion of a model than if it is prompted by non-GSM8k grade school math problems.
-----
**D** **Results Table**
We report our full results in Table D. Models are sorted by the difference in performance between
GSM8k and GSM1k. Because all models are evaluated using the standard LM Evaluation Harness
prompt and evaluation format, model performance on GSM8k may not match reported benchmark
numbers. In particular, answers that do not match the 5-shot example format are marked incorrect
even if they are otherwise “correct.” Our focus is primarily on the difference between GSM8k and
GSM1k performance, holding evaluation setting constant. The Z-score and p-value are calculated
for a two-tailed two proportion Z-test. Alternative prompt results are also included. For details, see
Appendix C.
**Standard Prompt**
**Model** **Diff** **GSM8k** **GSM1k** **Z-score** **p-value**
`math-shepherd-mistral-7b-rl` 0.134 0.745 0.611 7.322 0.000
`Xwin-Math-13B-V1.0` 0.101 0.631 0.529 5.196 0.000
`Xwin-Math-7B-V1.0` 0.101 0.529 0.428 5.131 0.000
`Mixtral-8x22B-Instruct-v0.1` 0.099 0.859 0.760 6.403 0.000
`Phi-3-mini-4k-instruct` 0.098 0.782 0.684 5.600 0.000
`deepseek-llm-67b-base` 0.093 0.615 0.522 4.771 0.000
`Mixtral-8x22B-v0.1` 0.092 0.770 0.677 5.257 0.000
`gemma-7b` 0.077 0.541 0.464 3.879 0.000
`phi-2` 0.074 0.569 0.495 3.766 0.000
`Yi-6B-Chat` 0.073 0.408 0.334 3.851 0.000
`mistral-small-latest` 0.072 0.790 0.718 4.218 0.000
`Yi-34B-Chat` 0.071 0.641 0.569 3.683 0.000
`Smaug-34B-v0.1` 0.069 0.757 0.688 3.931 0.000
`gemma-1.1-7b-it` 0.069 0.522 0.453 3.485 0.000
`falcon-180B-chat` 0.065 0.610 0.545 3.320 0.001
`codegemma-7b` 0.063 0.479 0.416 3.217 0.001
`CodeLlama-70b-Python-hf` 0.063 0.462 0.399 3.198 0.001
`command` 0.062 0.445 0.383 3.221 0.001
`Meta-Llama-3-8B-Instruct` 0.062 0.752 0.690 3.532 0.000
`Smaug-2-72B` 0.061 0.799 0.738 3.697 0.000
`CodeLlama-70b-hf` 0.059 0.486 0.427 2.989 0.003
`Phi-3-mini-128k-instruct` 0.058 0.741 0.683 3.264 0.001
`CodeLlama-34b-Instruct-hf` 0.058 0.415 0.357 3.013 0.003
`Meta-Llama-3-8B` 0.057 0.499 0.442 2.865 0.004
`codegemma-7b-it` 0.055 0.526 0.471 2.784 0.005
`phi-1` 0.048 0.324 0.276 2.683 0.007
`Mixtral-8x7B-v0.1` 0.048 0.578 0.530 2.450 0.014
`Phind-CodeLlama-34B-v2` 0.048 0.416 0.368 2.460 0.014
`Mixtral-8x7B-Instruct-v0.1` 0.046 0.641 0.594 2.411 0.016
`gemma-7b-it` 0.044 0.335 0.291 2.397 0.017
-----
**Standard Prompt**
**Model** **Diff** **GSM8k** **GSM1k** **Z-score** **p-value**
`mistral-medium-latest` 0.038 0.790 0.752 2.292 0.022
`vicuna-33b-v1.3` 0.038 0.379 0.341 2.020 0.043
`CodeLlama-13b-Python-hf` 0.030 0.214 0.183 1.942 0.052
`OpenMath-Llama-2-70b-hf` 0.029 0.171 0.142 2.072 0.038
`Mistral-7B-v0.1` 0.029 0.381 0.353 1.500 0.134
`CodeLlama-34b-hf` 0.028 0.350 0.322 1.497 0.134
`CodeLlama-13b-Instruct-hf` 0.027 0.277 0.250 1.560 0.119
`Mistral-7B-Instruct-v0.2` 0.027 0.426 0.399 1.383 0.167
`CodeLlama-70b-Instruct-hf` 0.025 0.497 0.471 1.287 0.198
`Llama-2-7b-hf` 0.023 0.141 0.118 1.766 0.077
`Meta-Llama-3-70B` 0.022 0.811 0.789 1.371 0.170
`Llama-2-70b-hf` 0.021 0.538 0.516 1.092 0.275
`Meta-Llama-3-70B-Instruct` 0.020 0.896 0.876 1.607 0.108
`Mistral-7B-Instruct-v0.1` 0.019 0.335 0.316 1.033 0.302
`gemini-1.5-pro-preview-0409` 0.018 0.897 0.879 1.423 0.155
`pythia-12b` 0.016 0.036 0.021 2.362 0.018
`CodeLlama-13b-hf` 0.015 0.227 0.212 0.899 0.369
`CodeLlama-34b-Python-hf` 0.013 0.306 0.293 0.746 0.456
`gemma-2b` 0.013 0.176 0.163 0.911 0.362
`dbrx-base` 0.013 0.727 0.715 0.715 0.474
`deepseek-coder-33b-instruct` 0.010 0.419 0.409 0.538 0.591
`CodeLlama-7b-Python-hf` 0.008 0.127 0.120 0.629 0.529
`gpt-3.5-turbo` 0.007 0.760 0.753 0.405 0.685
`gpt2-xl` 0.004 0.021 0.017 0.665 0.506
`gpt-neox-20b` 0.003 0.059 0.056 0.253 0.800
`gemini-pro` 0.002 0.792 0.789 0.119 0.905
`CodeLlama-7b-hf` 0.002 0.127 0.125 0.135 0.893
`CodeLlama-7b-Instruct-hf` -0.000 0.186 0.186 0.010 0.992
`mistral-large-latest` -0.000 0.853 0.853 0.009 0.993
`gpt-4-turbo` -0.000 0.898 0.898 0.004 0.997
`claude-3-haiku-20240307` -0.000 0.785 0.785 -0.007 0.994
`gemma-2b-it` -0.001 0.111 0.112 -0.044 0.965
`Llama-2-13b-hf` -0.004 0.230 0.235 -0.283 0.777
`claude-2.1` -0.007 0.887 0.894 -0.532 0.595
`gpt-4` -0.007 0.911 0.918 -0.638 0.523
`gemma-1.1-2b-it` -0.008 0.104 0.112 -0.664 0.506
`deepseek-math-7b-rl` -0.022 0.185 0.207 -1.418 0.156
`claude-3-opus-20240229` -0.023 0.802 0.825 -1.474 0.141
-----
**Standard Prompt**
**Model** **Diff** **GSM8k** **GSM1k** **Z-score** **p-value**
`claude-3-sonnet-20240229` -0.024 0.719 0.744 -1.355 0.175
-----
**Alternative Prompt**
**Model** **Diff** **GSM8k** **GSM1k** **Z-score** **p-value**
`math-shepherd-mistral-7b-rl` 0.138 0.782 0.645 7.726 0.000
`deepseek-math-7b-rl` 0.108 0.754 0.646 5.979 0.000
`Xwin-Math-13B-V1.0` 0.076 0.643 0.567 3.966 0.000
`Phi-3-mini-128k-instruct` 0.065 0.817 0.752 4.029 0.000
`gemma-1.1-7b-it` 0.065 0.493 0.427 3.334 0.001
`Yi-34B-Chat` 0.062 0.647 0.584 3.266 0.001
`deepseek-llm-67b-base` 0.061 0.656 0.594 3.215 0.001
`Yi-6B-Chat` 0.061 0.423 0.362 3.187 0.001
`CodeLlama-34b-Python-hf` 0.059 0.340 0.281 3.216 0.001
`Xwin-Math-7B-V1.0` 0.058 0.513 0.456 2.903 0.004
`codegemma-7b` 0.057 0.516 0.459 2.896 0.004
`Smaug-2-72B` 0.057 0.812 0.755 3.497 0.000
`codegemma-7b-it` 0.056 0.516 0.459 2.855 0.004
`CodeLlama-70b-Python-hf` 0.054 0.493 0.439 2.722 0.006
`command` 0.049 0.446 0.396 2.513 0.012
`phi-1` 0.048 0.324 0.276 2.637 0.008
`Smaug-34B-v0.1` 0.046 0.745 0.700 2.607 0.009
`Mixtral-8x22B-Instruct-v0.1` 0.042 0.885 0.843 3.077 0.002
`CodeLlama-70b-Instruct-hf` 0.041 0.519 0.477 2.117 0.034
`phi-2` 0.041 0.554 0.513 2.103 0.035
`CodeLlama-70b-hf` 0.039 0.500 0.460 2.009 0.045
`OpenMath-Llama-2-70b-hf` 0.039 0.165 0.126 2.796 0.005
`Phi-3-mini-4k-instruct` 0.039 0.801 0.763 2.392 0.017
`dbrx-base` 0.033 0.718 0.685 1.837 0.066
`Mistral-7B-Instruct-v0.2` 0.033 0.436 0.403 1.680 0.093
`mistral-small-latest` 0.031 0.782 0.751 1.871 0.061
`Mixtral-8x7B-Instruct-v0.1` 0.029 0.686 0.657 1.540 0.124
`deepseek-coder-33b-instruct` 0.028 0.421 0.392 1.484 0.138
`CodeLlama-13b-Python-hf` 0.027 0.221 0.194 1.683 0.092
`gemma-7b` 0.026 0.544 0.518 1.318 0.188
`Phind-CodeLlama-34B-v2` 0.026 0.403 0.376 1.381 0.167
`Meta-Llama-3-8B-Instruct` 0.026 0.772 0.746 1.505 0.132
`falcon-180B-chat` 0.025 0.622 0.597 1.292 0.196
`mistral-medium-latest` 0.025 0.789 0.764 1.488 0.137
`pythia-12b` 0.023 0.050 0.027 3.111 0.002
`vicuna-33b-v1.3` 0.023 0.440 0.417 1.171 0.241
`CodeLlama-7b-Instruct-hf` 0.023 0.187 0.164 1.548 0.122
`CodeLlama-7b-hf` 0.021 0.116 0.095 1.781 0.075
-----
**Alternative Prompt**
**Model** **Diff** **GSM8k** **GSM1k** **Z-score** **p-value**
`Mixtral-8x22B-v0.1` 0.019 0.810 0.791 1.173 0.241
`CodeLlama-34b-hf` 0.019 0.324 0.305 1.033 0.302
`Meta-Llama-3-8B` 0.018 0.547 0.529 0.904 0.366
`gemini-1.5-pro-preview-0409` 0.016 0.908 0.892 1.376 0.169
`Llama-2-7b-hf` 0.015 0.143 0.127 1.136 0.256
`Meta-Llama-3-70B-Instruct` 0.015 0.899 0.885 1.173 0.241
`gpt-4` 0.014 0.919 0.904 1.259 0.208
`gemma-2b` 0.013 0.191 0.178 0.879 0.380
`CodeLlama-13b-Instruct-hf` 0.012 0.287 0.276 0.684 0.494
`gemma-7b-it` 0.008 0.248 0.240 0.467 0.641
`CodeLlama-34b-Instruct-hf` 0.007 0.400 0.392 0.391 0.696
`Mistral-7B-v0.1` 0.007 0.422 0.415 0.325 0.745
`Mixtral-8x7B-v0.1` 0.006 0.607 0.601 0.336 0.737
`Llama-2-70b-hf` 0.006 0.572 0.566 0.309 0.757
`gpt-neox-20b` 0.005 0.076 0.070 0.527 0.598
`gpt2-xl` 0.004 0.022 0.018 0.645 0.519
`CodeLlama-7b-Python-hf` 0.003 0.123 0.120 0.281 0.779
`CodeLlama-13b-hf` 0.003 0.218 0.215 0.147 0.883
`Llama-2-13b-hf` 0.002 0.281 0.279 0.117 0.907
`claude-2.1` 0.001 0.836 0.836 0.016 0.987
`gemini-pro` 0.000 0.688 0.687 0.024 0.981
`claude-3-haiku-20240307` 0.000 0.792 0.791 0.019 0.985
`gpt-4-turbo` -0.002 0.847 0.849 -0.137 0.891
`mistral-large-latest` -0.003 0.854 0.857 -0.225 0.822
`gemma-1.1-2b-it` -0.006 0.084 0.090 -0.490 0.624
`gemma-2b-it` -0.007 0.099 0.106 -0.589 0.556
`Mistral-7B-Instruct-v0.1` -0.007 0.347 0.354 -0.380 0.704
`claude-3-sonnet-20240229` -0.011 0.713 0.724 -0.596 0.551
`gpt-3.5-turbo` -0.016 0.742 0.759 -0.946 0.344
`claude-3-opus-20240229` -0.026 0.830 0.857 -1.855 0.064
`Meta-Llama-3-70B` -0.035 0.807 0.841 -2.322 0.020
-----
**50 Examples from GSM1k**
|No.|Question|Answer|
|---|---|---|
|1|Gabriela has $65.00 and is shopping for groceries so that her grandmother can make her favorite kale soup. She needs heavy cream, kale, cauliflower, and meat (bacon and sausage). Gabriella spends 40% of her money on the meat. She spends $5.00 less than one-third of the remaining money on heavy cream. Cauliflower costs three-fourth of the price of the heavy cream and the kale costs $2.00 less than the cauliflower. As Gabriela leaves the store, she spends one-third of her remaining money on her grandmother’s favorite Girl Scout Cookies. How much money, in dollars, does Gabriela spend on Girl Scout cookies?|7|
|2|Bernie is a street performer who plays guitar. On average, he breaks three guitar strings a week, and each guitar string costs $3 to replace. How much does he spend on guitar strings over the course of an entire year?|468|
|3|John Henry is competing against a machine to see who can dig a tunnel more quickly. John works without rest, and excavates at a rate of 6 cubic feet of rock per hour. The machine excavates more quickly but needs to be refueled and maintained by its operator for 30 minutes out of every hour. When it’s not under maintenance, the machine excavates at a rate of 10 cubic feet of stone per hour. Provided that the competition lasts for 8 hours, how much more rock will John have excavated compared to the machine?|8|
|4|Colin is playing dice with his friend Eoin and needs some help keeping track of his score. He begins with 5 points and wins 6 points in the first round. In the second round, he won twice as many points as he won in the first round. In the third round, he had a fantastic roll and was able to triple his current total point count! How many points did Colin end the game with?|69|
|5|Bradley and his friends enjoy playing marbles. They possess a box of marbles containing 12 red balls, 15 yellow balls, and 18 green balls. How many additional red balls do they require to double the number of red balls compared to the combined number of yellow and green balls?|54|
|6|Marge got a job so she can buy her first car. Her job pays $15/hr and she works there 30 hours a week. The car Marge wants is $3600. How many weeks does Marge need to work to buy the car?|8|
|7|Andy’s soccer team needs 80 points to finish in first place. His team plays 38 games, and he gets 3 points for each win, 1 point for each tie, and 0 points for each loss. After 26 games, the team has 15 wins, 5 ties, and 6 losses. How many more points does Andy’s team need to reach 80 points?|30|
|8|Molly wants to win the contest at school for reading 25 books before the end of May. So far, she has read 5 books by the end of January. How many more books will she need to read on average each month until the end of May to win the contest?|5|
|9|Ms. Crabapple has a bag of jelly beans that she is going to divide equally among all of her 32 students who complete their homework every day over the course of a week. The bag has 384 jellybeans in it. Unfortunately, many of Ms. Crabapple’s students have a poorly developed work ethic, and only half of them complete all of the required homework. How many jelly beans will each of the eligible students receive?|24|
-----
|No.|Question|Answer|
|---|---|---|
|10|Emily is applying to 6 different colleges. ½ of the colleges have an applica- tion fee of $60, and the other half have an application fee of $90. She must also pay $15 per transcript to send them to each college. Her parents offer to help pay for half of the total costs. How many dollars does she have to pay?|270|
|11|Bob has to read 2 books and 3 articles, while Emily has to read 4 books and 2 articles. Each book has 3 chapters and each chapter has 4 paragraphs. Each article has 4 sections and each section has 2 paragraphs. How many paragraphs in total will Bob and Emily read?|112|
|12|Leah and 2 of her friends go to an all-you-can-eat dumpling buffet. Leah’s 1st friend ate 30 dumplings, her 2nd friend ate twice as many dumplings as her 1st friend, and Leah ate 1.5 times as many dumplings as her 2nd friend. How many dumplings in total did Leah and her friends eat?|180|
|13|Francis has a bowl of candy in front of him. There are three different flavors of candies that he’s eaten over the course of 3 hours. He’s eaten ten lemon, four orange, and sixteen cherry-flavored candies. If there were twenty of each when he started, how much of an average percentage is still left?|50|
|14|Maryann is saving up for a new bike that costs $450. She already has $120 saved up. She earns $15 per hour at her part-time job. How many hours does she need to work to afford the bike?|22|
|15|Henry is renovating his kitchen and adding a new tile floor. He needs to cover an area of 200 square feet. He has a stack of tiles that measure 0.5 feet in length and width. He can get 40 tiles done per hour. Henry works for 6 hours at that rate, then has some coffee and works at a faster rate for the next 2 hours (60 tiles per hour). Henry runs out of tiles, so he goes to a store to purchase the remaining tiles needed to finish the floor. Given that the price per tile is $2.50, how much will he need to spend at the store to get exactly enough tiles to finish the floor?|1100|
|16|A painter needs to paint 3 houses. The first house requires 14 gallons of paint, the second house requires twice as much paint as the first, and the third house needs half as much paint as the second house. If one gallon of paint costs $35 and the painter gets a bulk discount of 10% for purchases over 30 gallons, how much will the paint cost in total?|1764|
|17|A coal miner is loading up coal into mine carts. During the first hour of the day, he is able to load 15 carts. His boss yells at him after that, so for each of the next three hours, he loads twice as many carts. Each cart weighs 78 pounds. What was the total weight of the coal he loaded on this day?|8190|
|18|A plane owned by Sunny Skies Airlines is flying from Indianapolis to Phoenix. The plane holds 180 passengers and is 2/3 full. Each passenger brings 2 carry-on bags and is charged a carry-on bag fee of $35 per bag. How much money does Sunny Skies Airlines collect for the carry-on bag fees for this flight?|8400|
|19|Sally went to the mall to buy clothes for the summer. She went to Forever 21 and bought 4 tops, each had different prices, $12.99, $6.99, $17.99, $21.99, and 3 pants each priced at $15.99. If her subtotal is over $75, she gets a discount of 15% on her purchase at that store. Then she goes to Shoe Palace and buys 2 shoes for a total of $123.26. How much money did Sally spend at the mall?|215|
-----
|No.|Question|Answer|
|---|---|---|
|20|Dean wants to buy flowers to make arrangements for a party. He is going to make 12 arrangements. He wants to include 4 roses and 3 daisies in each arrangement. Roses come by the dozens and are $15 for each dozen. Daisies come in groups of 4 and are $8 for the set. How much will it cost for Dean to make all 12 arrangements?|132|
|21|Alex plans to adopt a new cat and needs help planning a budget for this event. The adoption fee is $200, and it includes all the essential veterinary care needed for a kitten, but she also needs to buy other supplies for the cat when she brings it home. The litter boxes cost $30, one package of litter costs $17, a bag of dry food costs $55, and the wet food costs $1.50 per can. Alex will buy 2 litter boxes, 3 packages of litter, one bag of dry food, and 12 cans of wet food. How much money should Alex make sure she has before beginning the process of adopting her new cat?|384|
|22|Carolina is trying to qualify for a car loan. The lender tells her she must meet a debt-to-income ratio of 1:4. Her current debts are $900 in rent, $200 in utilities, and another $300 in miscellaneous expenses per month. Her current monthly salary is $4000. How much more money, in dollars, will she need to cut out from her current debts per month to meet the DTI requirements?|400|
|23|Samantha is saving money for a new bike by doing chores. She earns $5 for every chore she completes. If she does 3 chores each day for a week, and then uses $25 to buy a helmet, how much money does she have left at the end of the week?|80|
|24|Frank sneaks out before his break at 3:20 pm and gets back at 4:05. If his break was only supposed to be half an hour, for how much longer did Frank sneak out (in minutes)?|15|
|25|Janet wants to listen to 20 music albums by the end of the week. If its Thursday and she just finished album number twelve and she has to finish them by Saturday, how many albums would she have to listen to per day?|4|
|26|Hana wants to donate her clothes to a local charity. After going through her closet she ended up with 2 boxes of pants, 3 boxes of dresses, 1 box of shoes, and boxes of shirts. The number of boxes with shirts was 3 more than the other three boxes combined. How many boxes of shirts does she have to donate?|9|
|27|Gray has $126 to spend on lunches for the week. On Monday, he spent $16 on a carne asada burrito and a soda. On Saturday, he will spend $30 eating out with friends. If he spends the same amount of money on food for the other 5 days of the week, what will be his average daily spending on food over these 5 days?|16|
|28|Gayle has a lawnmowing business. Lawn 1 takes 15 minutes to mow. Lawn 2 takes 18 more minutes than Lawn 1. Lawn 3 takes 20% more time to mow than Lawn 1. She is paid $2.50 per minute for the time she spends. However, she gives her customers a 20% discount. How much money does she make from mowing all three lawns?|132|
|29|Frank ordered a whole chicken, 6 cans of chopped chicken breast, 1 lb. of macadamia nuts, and 4 bags of frozen broccoli. Each item has the following respective prices: $12 per chicken, $2 per can, $24/lb., $3 per bag. The sales tax was 10% of the total cost and the tip was half the price of the whole chicken. How much did Frank pay for his order?|72|
-----
|No.|Question|Answer|
|---|---|---|
|30|Milo can bench press half as much weight as Doug can squat, and Doug can squat twice as much weight as Diane can squat. If Diana squats 125 pounds, how much weight can Milo bench press?|125|
|31|Pablo is trying to make breakfast for his family. His wife eats 4 pancakes. His son eats 2 pancakes. Pablo wants to eat 4 pancakes. One box of pancake mix will make 5 pancakes. How many boxes of pancake mix will he need?|2|
|32|Jim wants to spend 15% of his monthly earnings on groceries. He makes $2500/month. How much money will he have left over?|2125|
|33|A school is ordering tablets and laptops for three classrooms. Each class- room will receive 4 tablets and 3 laptops. If each tablet costs $250 and each laptop costs $600, how much will the school spend in total for all three classrooms?|8400|
|34|Grant takes 3 minutes to put on his pajamas. He brushes his teeth for 2 minutes. Then, he washes his face and brushes his hair for another 2 minutes. Finally, he reads a book for a while and turns off the light for bed. If Grant begins his routine at 8:15 pm and turns off the lights at 8:47 pm, for how long does Grant read a book?|25|
|35|Bellemere owns a tangerine orchard with 50 trees. Each tree produces 80 tangerines. She wants to sell 600 tangerines at her local farmer’s market. If she picks the same amount of tangerines from every tree, how many tangerines will be left on each tree?|68|
|36|A charity puts out a telethon for a cause. Within 15 minutes, seventy-seven people donated $3 each, and 231 people donated four dollars each. How much does the charity receive within this time?|1155|
|37|A school is selling baskets for a fundraiser. There are three baskets contain- ing the following items: * Blue basket: a ball, cup, and notebook. * Red basket: a cup, bell, and hat. * Green basket: a hat, pen, and notebook. The costs of the items in the baskets are as follows: * $1: ball, notebook, and pen * $2: cup, bell, and hat Jane buys 6 red baskets and 5 blue baskets. Jim buys 3 red baskets and 2 green baskets. Since they purchase so many, they receive a discount. Jane gets an $8 discount and Jim also gets a $2 discount. How many times more does Jane spend than Jim?|2|
|38|Mr. Gordon has 14 boys in his first period class which is twice the number of girls in class. Two of the girls in class have blonde hair and the rest have brown hair. How many girls with brown hair are in his class?|5|
|39|Albert gets paid $15 an hour. He gets time and a half if he works over forty hours a week. Last week, he worked 48 hours. He plans to do this two weeks in a row. How much money will he be paid in overtime for those two weeks?|360|
|40|Beth, Anna, and Kim went to a book fair. Beth had two books less than Anna while Kim had four more books than Anna. Beth had $20 with her and was now left with $8. If all books are priced at $4, how much, in dollars, did Kim spend on her books?|36|
|41|4 friends are going on a road trip. Their names are Alex, Bethany, Carlos, and Drew. They drive at a rate of 65, 75, 60, and 50 mph, respectively. Alex drives for 2 hours, Bethany for 4, and Carlos and Drew each drive for 3 hours. They are using a car with a fuel effciiency of 20 miles per gallon of gas. If, along their route, gas costs $3 per gallon, how much money (in dollars) will they need to spend on gas? Assume they begin their journey at a gas station with an empty tank of gas.|114|
-----
|No.|Question|Answer|
|---|---|---|
|42|The Genco Olive Oil Company has received ninety-nine orders for ninety- nine barrels of olive oil each. Out of those shipped, 33 orders were sent back due to clerical or product errors. How many total barrels of olive oil were not returned?|6534|
|43|There is a very large room that has 4 tables, 1 sofa and 2 chairs that have 4 legs each. There are also 3 tables with 3 legs each, 1 table with 1 leg, and 1 rocking chair with 2 legs. How many legs of tables are there in the room?|26|
|44|A classroom has 24 students, and the teacher has arranged a field trip. If the cost per student for the trip is $15 and the teacher already has $120 from a class fund, how many more dollars does the teacher need to cover the total cost of the trip for all students?|240|
|45|Rachel and Shauna go out to dinner. Dinner costs $68.25 in total (without taxes). Rachel’s meal costs 1/3 of the total price, while Shauna’s meal costs 2/3 of the total price. How much did Shauna’s meal cost (round to the nearest dollar)?|46|
|46|Olivia owns a local hotel and needs to drive up business. She is planning to give a special deal to anyone who signs up for a membership card. Her idea is to give them 20% off their first night and 10% off on every night they stay after that. If her first new customer pays $616 for their stay, and each night costs $140 before discounts, how many nights did they stay at the hotel?|5|
|47|Johnny has 8 green balls. He has fvie fewer than twice that number in red balls. How many total balls does Johnny have?|19|
|48|30 students are in a class. 1/5 of them are 12 years old, 1/3 are 13 years old. 1/10 of them are 11 years old. How many of them are not 11, 12, or 13 years old?|11|
|49|Francis loves sandwiches. He gets his usual from his favorite deli: two “Big Boy” sandwiches, and a glass-bottled soda. A “Big Boy” costs $15.25 and the soda costs $3.75. His friend Lars calls him and asks for a double-sweet soda that’s $4.75. If Francis pays all of this with $40 and asks for his change back in only quarters, how many quarters will he get?|4|
|50|A factory needs to produce 960 pieces of toy boats. They are only able to produce 1/6th of their goal a day. 5 toy boats make up a case and 4 cases make up a box. If a toy shop comes to pick up what is available on the fourth day and finds an extra 8 boxes left for them that were forgotten from a previous pickup, how many boxes of toy boats will they be able to take?|40|
-----
**F** **Additional Plots From Log-Likelihood Experiments**
Figure 10: Log-likelihood of models on GSM1k (ours). As expected, we observe almost no correlation
between the model’s probability of generating GSM1k and its level of overfit. This is because GSM1k
is newly created and not on the internet.
Figure 11: Difference between log-likelihood of GSM1k (ours) and GSM8k.
-----
**G** **Bar Chart of Performance Gaps Between GSM8k and GSM1k Across All**
**Model Accuracies**
Figure 12: Models with over 70% accuracy on GSM8k. We observe that some models (e.g. Mistral,
Phi) are overfit, while other models show little to no evidence of overfitting.
Figure 13: Comparison of models with between 40 and 70% accuracy on GSM8k. We observe that
all models seem to fall below the line in this regime of model performance, though some models (e.g.
Llama-2-70b) do much better than others.
-----
Figure 14: Models with less than 40% accuracy on GSM8k.
-----
| [
"Dylan, Slack",
"Jeff, Da",
"Dean, Lee",
"Vaughn, Robinson",
"Catherine, Wu",
"Will, Song",
"Sean, Hendryx",
"Tiffany, Zhao",
"Pranav, Raja",
"Qin, Lyu",
"Russell, Kaplan",
"Michele, Lunati",
"Summer, Yue",
"Hugh, Zhang"
] | 2024-05-01T00:00:00 | NeurIPS 2024 | true | 46 | 1 | null | https://arxiv.org/abs/2405.00332v3 | https://arxiv.org/abs/2405.00332 | https://www.semanticscholar.org/paper/ef62f95c16f668f031d649799cbd79081c6d2b0f |
Cumulative Reasoning with Large Language Models | Despite the recent advancements in language models (LMs), their ability to solve complex problems remains limited. This paper introduces Cumulative Reasoning (CR), a novel approach that utilizes LMs cumulatively and iteratively, mirroring human thought processes for problem-solving. CR decomposes tasks into smaller, manageable components and leverages previous propositions for effective composition, significantly enhancing problem-solving capabilities. We demonstrate CR's superiority through several complex reasoning tasks: it outperforms existing methods in logical inference tasks with up to a 9.3% improvement, achieving 98.04% accuracy on the curated FOLIO wiki dataset. In the Game of 24, it achieves 98% accuracy, marking a 24% improvement over the prior state-of-the-art. Additionally, CR sets new state-of-the-art on the MATH dataset, achieving a 4.2% increase from previous methods and a 43% relative improvement in the most challenging problems. By extending CR to incorporate a code environment without external aids like retrieval or web browsing, we further harness the computational and logical reasoning capabilities of LMs, achieving a remarkable 72.2% accuracy on the MATH dataset and outperforming the PAL/PoT method by 38.8%. Our work not only sets new state-of-the-art but also paves the way toward more sophisticated AI reasoning methods. The code is available at https://github.com/iiis-ai/cumulative-reasoning. | Cumulative Reasoning is introduced, a novel approach that utilizes LMs cumulatively and iteratively, mirroring human thought processes for problem-solving, and sets new state-of-the-art on the MATH dataset, paving the way toward more sophisticated AI reasoning methods. | ### Cumulative Reasoning With Large Language Models
Yifan Zhang[∗][1] Jingqin Yang[∗][1] Yang Yuan[1][,][2] Andrew Chi-Chih Yao[1][,][2]
1IIIS, Tsinghua University
2Shanghai Qizhi Institute
{yuanyang,andrewcyao}@tsinghua.edu.cn
Abstract
While language models are powerful and versatile, they often fail to address highly
complex problems. This is because solving complex problems requires deliberate
thinking, which has been only minimally guided during training. In this paper,
we propose a new method called Cumulative Reasoning (CR), which employs
language models in a cumulative and iterative manner to emulate human thought
processes. By decomposing tasks into smaller components, CR streamlines the
problem-solving process, rendering it both more manageable and effective. For
logical inference tasks, CR consistently outperforms existing methods with an
improvement up to 9.3%, and achieves the astonishing accuracy of 98.04% on
the curated FOLIO wiki dataset. In the context of the Game of 24, CR achieves
an accuracy of 98%, which signifies a substantial enhancement of 24% over the
previous state-of-the-art method. Finally, on the MATH dataset, we establish new
state-of-the-art results with 58.0% overall accuracy, surpassing the previous best
approach by a margin of 4.2%, and achieving 43% relative improvement on the
hardest level 5 problems (22.4% → 32.1%) [†].
1 Introduction
Despite the remarkable advances made by large language models (LLMs) in a variety of applications (Devlin et al., 2018; Radford et al., 2018, 2019; Brown et al., 2020; Raffel et al., 2020; OpenAI,
2023), they still struggle to provide stable and accurate answers when faced with highly complex
tasks. For instance, it has been observed that language models have difficulty directly generating
correct answers for high school math problems (Lightman et al., 2023).
This shortfall may be anticipated, considering the training approach adopted by LLMs. Specifically,
they are trained to sequentially predict the next token based on the given context, without a pause for
deliberate thoughts. As elucidated by Kahneman (2011), our cognitive processing processes comprise two distinct systems: System 1 is fast, instinctive, and emotional; System 2 is slow, deliberate,
and logical. Currently, LLMs align more closely with System 1, thereby potentially explaining their
limitations in confronting complex tasks.
In response to these limitations, several methods have been proposed to mimic human cognitive
processes. These include the Chain-of-Thought (CoT) that prompts the model to offer step-by-step
solutions (Wei et al., 2022), and the Tree-of-Thought (ToT) that models the solving process as a
thought search tree (Yao et al., 2023; Long, 2023). In addition, dedicated datasets have been created
to provide stepwise guidance in model training (Lightman et al., 2023). Nevertheless, these methods
do not have a site for storing intermediate results, assuming that all the thoughts form a chain or a
tree, which does not fully capture the human thinking process.
∗Equal Contribution.
[†The code is available at https://github.com/iiis-ai/cumulative-reasoning.](https://github.com/iiis-ai/cumulative-reasoning)
Preprint. Under review.
-----
In this paper, we propose a new method termed Cumulative Reasoning (CR), which presents a more
general characterization of the thinking process. CR employs three distinct LLMs: the proposer,
verifier, and reporter. The proposer keeps proposing potential propositions, which were verified by
one or more verifiers, and the reporter decides when to stop and report the solution.
CR significantly amplifies the power of language models in addressing complex tasks, achieved by
decomposing each task into atomic and manageable steps. Despite the computational infeasibility
of enumerating the exponentially numerous possible complex tasks, CR ensures that each individual
step can be efficiently learned and resolved. This strategic decomposition effectively transforms an
otherwise unmanageable exponential problem into a sequence of solvable tasks, thereby providing
a robust solution to the original problem.
Our empirical analyses include three components. In the first experiment, we tackled logical inference tasks like FOLIO wiki (pertaining to first-order logic) and AutoTNLI (associated with higherorder logic). On these datasets, CR consistently surpassed current methodologies, showcasing an
enhancement of up to 9.3%. Additionally, a rigorous refinement of the FOLIO dataset generated
the “FOLIO wiki curated,” on which CR recorded a remarkable accuracy of 98.04%. In the second
experiment, which revolved around the Game of 24, CR achieved an accuracy of 98%. Remarkably, this represents a significant improvement of 24% when compared to the prior state-of-the-art
method, ToT (Yao et al., 2023). In the last experiment, we established new state-of-the-art results
on the renowned MATH dataset (Hendrycks et al., 2021), achieving 58.0% overall accuracy with
a margin of 4.2% over the Complex-CoT with PHP method (Fu et al., 2022; Zheng et al., 2023).
Noteworthy, our method achieves 43% relative improvement on the hardest level 5 problems (22.4%
→ 32.1%).
2 Preliminaries
2.1 Propositional logic
Propositional logic, the most fundamental system of logic, encompasses elements p, q, r and a variety of operations. These include “and” (p ∧ q), “or” (p ∨ q), “implies” (p ⇒ q), and “not” (¬p). The
constants true and false are denoted as 1 and 0 respectively. This system adheres to the following
rules:
x ∧ x = x, x ∨ x = x, 1 ∧ x = 1, 0 ∨ x = 0, x ∧ (y ∨ x) = x = (x ∧ y) ∨ x.
and distributive laws:
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z), x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z).
In a Boolean algebra, every element x has a complement ¬x and the following holds true:
x ∧¬x = 0, x ∨¬x = 1, ¬¬x = x.
2.2 Higher-order logic
Building upon propositional logic, first-order logic (FOL) introduces universal quantification (∀)
and existential quantification (∃) to describe more intricate propositions. For instance, the statement
“∀xDog(x) ⇒ Animal(x)” translates to “for every x, if x is a dog, then it is also an animal”.
Higher-order logic (HOL) represents a sophisticated formalism that permits quantification over functions and predicates, an ability that contrasts sharply with FOL, which restricts quantification to individual objects. The distinctive characteristics of HOL, as opposed to FOL, can be elaborated as
follows (Mineshima et al., 2015):
Quantification over Functions: Higher-order logic (HOL) allows for lambda expressions, such as
λy.report attribute(y, report), whereby functions themselves become the subject of quantification.
An illustration of this is found in the expression “a representative who reads this report.” Here,
quantification spans the predicates representing both the representative and the reading of the report,
a phenomenon captured as a higher-order function. Unlike HOL, FOL is incapable of extending
quantification to functions or predicates.
Generalized Quantifiers: The introduction of generalized quantifiers, such as “most,” serves as
another demarcation line between HOL and FOL. These quantifiers are capable of accepting predi
-----
cates as arguments, enabling the representation of relations between sets, a feat that transcends the
expressive capacity of FOL.
Modal Operators: Employing modal operators like “might” signifies a transition towards HOL.
These operators, applicable to propositions, give rise to multifaceted expressions that defy easy
reduction to the confines of FOL.
Attitude Verbs and Veridical Predicates: The integration of attitude verbs, such as “believe,” and
veridical predicates like “manage,” injects an additional layer of complexity necessitating the use
of HOL. These linguistic constructs can engage with propositions as arguments, interacting with
the truth values of those propositions in subtle ways that demand reasoning extending beyond the
capabilities of FOL.
2.3 Illustrative example
Consider the following example adapted from the FOLIO dataset (Han et al., 2022), where empirically only the text statements (excluding logical propositions) will be given:
1. All monkeys are mammals: ∀x(Monkey(x) ⇒ Mammals(x)).
2. An animal is either a monkey or a bird: ∀x(Animal(x) ⇒ (Monkey(x) ∨ Bird(x))).
3. All birds fly: ∀x(Bird(x) ⇒ Fly(x)).
4. If something can fly, then it has wings: ∀x(Fly(x) ⇒ Wings(x)).
5. Rock is not a mammal, but Rock is an animal: ¬Mammal(Rock) ∧ Animal(Rock).
The question is: does Rock have wings? We have the following derivations:
#### e f d b
a c
1 2 3 4 5
Figure 1: Illustration of our logical derivation
a. The contrapositive of (1) is: ∀x(¬Mammals(x) ⇒¬Monkey(x)).
b. (a) and (5) ⇒¬Monkey(Rock) ∧ Animal(Rock).
c. (2) and (5) ⇒ (Monkey(Rock) ∨ Bird(Rock))
d. (b) and (c) ⇒ Bird(Rock).
e. (3) and (d) ⇒ Fly(Rock).
f. (4) and (e) ⇒ Wings(Rock).
While the derivation can be treated as a general “chain of thought” from (a) to (f ), its internal
structure is neither a chain nor a tree. Instead, it is a directed acyclic graph (DAG), with each
directed edge as one step of derivation. For examples of higher-order logic, see Appendix A.
3 Our Method
3.1 Cumulative Reasoning (CR)
Our CR algorithm uses three distinct types of LLMs:
-----
1. Proposer. This model suggests the next step based on the current context.
2. Verifier(s). This model or set of models scrutinizes the accuracy of the step put forward by
the proposer. If the step is deemed correct, it will be added to the context.
3. Reporter. This model determines when the reasoning process should be concluded, by
accessing whether the current conditions can directly lead to the final solution.
See Figure 2 for an illustration. In each iteration, the proposer initiates the process by proposing
one or a few new claim(s) based on existing predicates. Subsequently, the verifier(s) evaluate the
proposal, determining whether the claim(s) can be retained as a new predicate. Finally, the reporter
decides if it is the optimal time to cease the thought process and deliver the answer.
Ideally, the proposer should be implemented using a language model pretrained on the corresponding
derivation tasks. Verifier(s) should be capable of translating the derivations to appropriate formal
systems and verifying them using symbolic reasoning modules such as a propositional logic solver
or a formal math prover. However, one can also use general foundational models like GPT-4 or
LLaMA, with different prompts for these roles.
|Col1|a 1 2 3|Verify: OK|a b 1 2 3|
|---|---|---|---|
|Propose||Propose||
|Propose 1 2 3 1 2 3 Propose|Col2|Col3|
|---|---|---|
|Verify: Fail Propose d a c a c Verify: OK Verify: OK Report Propose 1 2 3 1 2 3|||
||d a c 1 2 3||
|||a c 1 2 3|
|Report|||
## 1 2 3
## d
a c
1 2 3
Figure 2: An illustration of CR Reasoning for a 3-premises problem.
3.2 Compare with CoT and ToT
CR clearly generalizes CoT (Wei et al., 2022), in the sense that if there are no verifiers, and the
proposer keeps proposing the next steps until the end, CR becomes the standard chain of thought.
However, in CR the overall thinking process is not necessarily a chain or a tree, it can be a DAG.
Therefore, CR can be used for solving more complex problems.
At first glance, CR is similar to the ToT, which solves the problems with a thought search tree (Yao
et al., 2023; Long, 2023). However, our method is more general in the sense that it stores all the
historical correct reasoning results in memory, which can be a DAG. By contrast, ToT will not store
the information from other branches for exploration at the current search branch.
4 Experiments
Our experimental framework is based on the Microsoft guidance library (Lundberg et al., 2023),
which offers the flexibility to intertwine generation, prompting, and logical control in a seamless
flow that aligns with language models. We consider the following LLMs:
- GPT-3.5-turbo
- GPT-4
- LLaMA-13B
-----
- LLaMA-65B
Our Proposer, Verifier(s), and Reporter in CR are implemented using the same LLM with different
few-shot prompts. This approach ensures a broad application scope and simplifies implementation.
For optimal results, future work could consider the application of a Proposer pretrained on taskspecific corpus and Verifier(s) aided by symbolic formal systems. We denote n as the number of
generated intermediate propositions, and k as the number of majority voting times. We set the
temperature t = 0.1 by default and t = 0.7 for majority voting. We also remark that both gpt-3.5turbo and gpt-4 operate as chat-format APIs from OpenAI.
[Problem Description]
- Example ID: 687
- Premises:
1. It is not true that some giant language models do not have good performance.
2. All language models with good performance are used by some researchers.
3. If a language model is used by some researchers, it is popular.
4. If BERT is a giant language model, then GPT-3 is also a giant language model.
5. BERT is a giant language model.
- Hypothesis: GPT-3 is popular.
- Label: [True]
[Chain-of-Thought Reasoning by GPT-4]
- Reasoning: “From the premises, we have:
1.¬(∃x)(Giant(x) ∧¬GoodPerformance(x))
2.∀x(GoodPerformance(x) → UsedByResearchers(x))
3.∀x(UsedByResearchers(x) → Popular(x))
4.BERT → GPT-3
5.BERT
Now we know that BERT is a giant language model, so GPT-3 is also a giant language
model (from premises 4 and 5). However, the premises do not provide information about
GPT-3’s performance or if it is used by researchers. Therefore, we cannot conclude if GPT3 is popular.”
- Prediction: [Unknown] (Wrong)
[Cumulative-Reasoning by GPT-4]
- Generated Propositions:
1. “GPT-3 is a giant language model with good performance.”
2. “GPT-3 is used by some researchers.”
- Reasoning: “We know that GPT-3 is a giant language model with good performance, and
it is used by some researchers. According to the premises, if a language model is used by
some researchers, it is popular. Therefore, GPT-3 is popular.”
- Prediction: [True] (Correct)
Figure 3: An example from the FOLIO dataset, with solutions generated by CoT and CR. CoT will
generate the answer directly through a chain of thought. By contrast, CR will first generate a few
propositions, and conclude with the help of the generated propositions.
4.1 FOLIO wiki
FOLIO dataset (Han et al., 2022) is a first-order logical inference dataset for reasoning in natural
language. The label of each problem can be “True”, “False”, or “Unknown”. See Figure 3 for an
example. We observed that while the Chain-of-Thought reasoning process can generate useful intermediary results, it tends to flounder midway, failing to arrive at the correct conclusion. Conversely,
the CR initially spawns two beneficial propositions and leverages them to successfully solve the
-----
Table 1: Results for various reasoning approaches on FOLIO-wiki dataset.
Model Method Acc. ↑ (%) Error ↓ (%)
- [Random] 33.33 66.67
Direct 44.75 55.25
CoT 49.06 (+4.31) 50.94 (-4.31)
CoT-SC (k = 16) 52.43 (+7.68) 47.57 (-7.68)
CR (ours, n = 2) 53.37 (+8.62) 46.63 (-8.62)
Direct 67.42 32.58
CoT 67.42 (+0.00) 32.58 (-0.00)
CoT-SC (k = 16) 70.79 (+3.37) 29.21 (-3.37)
CR (ours, n = 2) 72.10 (+4.68) 27.90 (-4.68)
Direct 62.92 37.08
CoT 64.61 (+1.69) 35.39 (-1.69)
CoT-SC (k = 16) 63.33 (+0.41) 36.67 (-0.41)
CR (ours, n = 2) 73.03 (+10.11) 26.97 (-10.11)
Direct 80.52 19.48
CoT 84.46 (+3.94) 15.54 (-3.94)
CoT-SC (k = 16) 85.02 (+4.50) 14.98 (-4.50)
CR (ours, n = 2) 87.45 (+6.93) 12.55 (-6.93)
LLaMA-13B
LLaMA-65B
GPT-3.5-turbo
GPT-4
problem at hand. For a deeper dive into specific examples of the FOLIO dataset, we refer to Appendix B.1.
The FOLIO dataset is a composite of 1435 examples, wherein 52.5% of these instances have been
crafted drawing upon knowledge from randomly selected Wikipedia pages. This approach guarantees the infusion of abundant linguistic variations and a rich vocabulary within the corpus. The
residual 47.5% of the examples have been penned in a hybrid style, rooted in a variety of complex
logical templates. Acknowledging that contemporary LLMs are pretrained on a considerable volume of a standard human-written corpus, we direct our experiments towards those examples derived
from Wikipedia, hereby referred to as FOLIO-wiki. Once a handful of examples are moved aside
for few-shot prompts and those examples without source labels for validations are excluded, we are
left with a testable collection of 534 examples.
Our experimental design employs the LLaMA base model and GPT APIs directly, circumventing
the need for fine-tuning with logical inference datasets and thus ensuring a faithful comparison. The
results, displayed in Table 1, reveal that CR consistently surpasses Direct (standard Input-Output
prompt), CoT, and CoT-SC, with a performance margin spanning up to 8.42%. Notably, GPT-4
paired with Cumulative Reasoning (CR) achieves an accuracy rate of 87.45%, outperforming GPT4 with CoT-SC, which reports an accuracy rate of 85.02%.
4.2 FOLIO wiki curated
The accuracy of 87.45% does not seem to be as competitive as human beings, so we carefully
reviewed the FOLIO-wiki dataset. It turns out that many instances inside the dataset are problematic
in the following sense:
1. Missing common knowledge or contradictory to common knowledge; (9 in total, Example ID
No. 34, 62, 162, 167, 228, 268, 526, 677, 679)
2. Overly ambiguous problems failing to provide unequivocal answers; (37 in total, Example ID
No. 141, 215, 216, 223, 252, 261, 298, 321, 330, 396, 402, 409, 411, 431, 432, 456, 457, 482, 483, 496,
563, 572, 599, 624, 629, 641, 654, 660, 673, 682, 698, 750)
3. Inherent inconsistencies presented within the premises; (2 in total, Example ID No. 640, 643)
4. Vague premises or typographical errors; (2 in total, Example ID No. 314, 315)
5. Incorrect answers. (24 in total, Example ID No. 9, 46, 52, 84, 100, 144, 273, 276, 299, 310, 322, 345,
367, 437, 452, 453, 464, 557, 573, 578, 605, 632, 671, 715)
We note that except for the first class, all the rest should be removed from the dataset. The first class
is because foundation models were trained with common knowledge, but the problem answer based
-----
on FOL systems gives an unnatural answer. See Example ID No. 679 below (see more examples in
Appendix B.2):
[Problem Description]
- Example ID: 669
- Premises:
1. Zaha Hadid is a British-Iraqi architect, artist and designer.
2. Zaha Hadid was born on 31 October 1950 in Baghdad, Iraq.
3. Hadid was a visiting professor of Architectural Design at the Yale School of Architecture.
4. Max is an aspiring architecture student, and he plans to apply to Yale School of Architecture.
- Hypothesis: Hadid was born in 1982.
- Label: [Unknown]
- Explanation: We can see that Zaha Hadid was born on 31 October 1950 in Baghdad,
Iraq. This directly contradicts the hypothesis that Hadid was born in 1982. It is common
knowledge that people are born only once, and it is impossible for someone to be born in
two different years.
Therefore, we removed all 74 such problematic instances, leaving the remaining 460 examples as a
curated collection. The results in Table 2 indicate that the application of GPT-4 in conjunction with
our method (CR) commands an astounding accuracy of 98.04% and maintains an error rate as minimal as 1.96%. This level of performance is almost twice as effective compared to the combination
of GPT-4 and CoT-SC, which scored an accuracy of 96.09% and an error rate of 3.91%.
Table 2: Results for various reasoning approaches on FOLIO-wiki-curated dataset.
Model Method Acc. ↑ (%) Error ↓ (%)
- [Random] 33.33 66.67
Direct 49.13 50.87
CoT 52.17 (+3.04) 47.83 (-3.04)
CoT-SC (k = 16) 53.70 (+4.57) 46.30 (-4.57)
CR (ours, n = 2) 55.87 (+6.74) 44.13 (-6.74)
Direct 74.78 25.22
CoT 74.13 (-0.65) 25.87 (-0.65)
CoT-SC (k = 16) 79.13 (+4.35) 20.87 (-4.35)
CR (ours, n = 2) 79.57 (+4.79) 20.43 (-4.79)
Direct 69.57 30.43
CoT 70.65 (+1.08) 29.35 (-1.08)
CoT-SC (k = 16) 69.32 (-0.25) 30.68 (+0.25)
CR (ours, n = 2) 78.70 (+9.13) 21.30 (-9.13)
Direct 89.57 10.43
CoT 95.00 (+5.43) 5.00 (-5.43)
CoT-SC (k = 16) 96.09 (+6.52) 3.91 (-6.52)
CR (ours, n = 2) 98.04 (+8.47) 1.96 (-8.47)
LLaMA-13B
LLaMA-65B
GPT-3.5-turbo
GPT-4
4.3 AutoTNLI
Experiment Setting. AutoTNLI (Kumar et al., 2022) is a Tabular Natural Language Inference
(TNLI) dataset extended from INFOTABS (Gupta et al., 2020), which can be seen as a higher-order
logical inference dataset due to its inherent complexity lies in natural language inference formalism.
It contains 1,478,662 table-hypothesis pairs with the corresponding label (Entail or Neutral) that indicates whether the given table entails the hypothesis. We treat the tabular content within AutoTNLI
as a set of premises (In fact, the tables within the AutoTNLI dataset are exactly provided in the
form of premises), enabling a direct transference of our algorithm applied to the FOLIO dataset.
Our experimentation encompassed two models, LLaMA-13B, and LLaMA-65B, each subjected to
-----
assessment using Direct, CoT, CoT-SC, and CR methodologies. Due to the extensive magnitude of
the AutoTNLI dataset, we only take the first 1000 table-hypothesis pairs for evaluation.
Evaluation Results. As shown in Table 3, both LLaMA-13B and LLaMA-65B models reveal that
CR delivers a significant enhancement in performance compared to CoT, with a relative improvement reaching up to 9.3% on the LLaMA-65B model. This data emphasizes the clear advantage of
CR over CoT and CoT-SC techniques in the framework of the AutoTNLI dataset.
Table 3: Results for various reasoning approaches on AutoTNLI dataset.
Model Method Acc. ↑ (%) Error ↓ (%)
- [Random] 50.00 50.00
Direct 52.6 47.4
CoT 54.1 (+1.5) 45.9 (-1.5)
CoT-SC (k = 16) 52.1 (-0.5) 47.9 (+0.5)
CR (ours, n = 4) 57.0 (+5.4) 43.0 (-5.4)
Direct 59.7 40.3
CoT 63.2 (+3.5) 36.8 (-3.5)
CoT-SC (k = 16) 61.7 (+2.0) 38.3 (-2.0)
CR (ours, n = 4) 72.5 (+12.8) 27.5 (-12.8)
LLaMA-13B
LLaMA-65B
4.4 Game of 24
The Game of 24 is a puzzle in which players must combine four specified integers using basic
arithmetic operations (addition, subtraction, multiplication, division) to get the number 24.
[Illustrative example for Game of 24]
- Numbers: [3, 3, 7, 7]
- Arithmetic Operations: [+, −, ×, /, (, )]
- Solution:
(3 + 3/7) × 7 = 24
Settings and Baselines. To ensure fairness, we adopt exactly identical task settings as Tree of
Thoughts (ToT) (Yao et al., 2023) on Game of 24. We use the set of 100 Games of 24 collected
by Yao et al. (2023) which was been used to evaluate the performance of ToT. In each game, we
consider the game to be successfully solved if and only if the output is a valid equation that reaches
24 and only uses given numbers each exactly once. We quantify the accuracy (success rate) across
100 games as a main evaluative metric.
In this experiment, we compare CR with variant prompt algorithms, including standard Input-Output
prompting (Direct), Chain-of-Thought prompting (CoT), and CoT-SC by aggregating the majority
outcome from 100 sampled CoT trials (designated as k = 100), and Tree of Thoughts (ToT) with a
breadth-first search width set at 5 (indicated as b = 5).
CR Setup. Within our CR algorithm, we maintain a set of “reached states”, denoted by S. Initially,
S only contains the start state s which represents 4 input numbers without any operation. In each
iteration, a state u is randomly selected from S. This selected state u is passed to the Proposer, which
randomly picks two remaining numbers within u and combines them through a basic arithmetic
operation (+-*/) to obtain a new number, thereby generating a new state v. The Proposer is instructed
to try to avoid taking duplicated operations. Subsequently, the Verifier scrutinizes the arithmetic
operation proposed by the Proposer and evaluates the newly generated state v. Then v is inserted to
S if the Verifier thinks that the operation from u to v is legitimate and it is potential for v to achieve
24. Upon the Verifier identifying a state t that unequivocally 24, the Reporter devises a solution
based on the path from the state s to state t and produces the final answer. The algorithm terminates
when the Reporter outputs the final answer or the number of iterations exceeds a limit of L. In the
experiments, we set the default value of L to 50.
Following Yao et al. (2023), our algorithm runs b concurrent branches and only selects the first
answer for these branches that utilizes each input number exactly once for evaluation. Due to the
-----
prohibitive cost of GPT-4, we only test our CR algorithm with b = 1 to b = 5. As shown in Table 4,
we find that CR outperforms ToT by a large margin of 24%, from 74% to 98%, with much fewer
states visited.
Compare with ToT. Interestingly, in the context of Game of 24, our CR algorithm and ToT algorithm are very similar. Their primary distinction is that, in CR, each iteration of the algorithm
generates at most one newly reached state, while ToT produces a multitude of candidate states per
iteration, filtering and retaining a subset of states. This implies that ToT explores a larger number
of invalid states compared to CR. Moreover, ToT employs a fixed-width and fixed-depth search
tree, while CR allows the LLM to determine the search depth autonomously, and performs different
search widths on different layers of the search tree.
Table 4: Results for various approaches on Game of 24 using GPT-4. The average number of visited
states for ToT is computed from the experimental logs available in its official GitHub repository.
Method Acc. ↑ (%) # Avg. visited states ↓
Direct 7.3 1
CoT 4.0 1
CoT-SC (k = 100) 9.0 100
Direct (best of 100) 33 100
CoT (best of 100) 49 100
ToT (b = 5) 74 61.72
CR (ours, b = 1) 84 (+10) 11.68 (-50.04)
CR (ours, b = 2) 94 (+20) 13.70 (-48.02)
CR (ours, b = 3) 97 (+23) 14.25 (-47.47)
CR (ours, b = 4) 97 (+23) 14.77 (-46.95)
CR (ours, b = 5) 98 (+24) 14.86 (-46.86)
4.5 MATH
The MATH dataset (Hendrycks et al., 2021) serves as a benchmark for assessing AI models’ mathematical reasoning capabilities, encompassing a broad spectrum of mathematical problems across
various subdomains such as Algebra and Geometry. Fig. 4 shows an illustrative example from the
MATH dataset, and Fig. 5 shows the corresponding solutions generated by Complex CoT and CR.
In our experiments, we assessed the performance of Complex CoT and our method (CR), both with
and without Progressive-Hint Prompting (PHP) (Zheng et al., 2023). For a fair evaluation, we reproduced the results of Complex CoT (w/ PHP) on a subset of 500 test examples, adhering to Lightman
et al. (2023), since the other parts of the test dataset (4500 examples) may have been utilized for
model training by OpenAI. The difficulty spans from level 1 (simplest) to level 5 (hardest).
It is important to note that for our method (CR), we employed 4-shot prompting (4 examples for
few-shot prompting) due to GPT-4’s context length constraints (8k by default). While the model
occasionally exceeds the context length with 8-shot prompting, it generally demonstrates superior
performance. Future experiments will explore the utilization of GPT-4-32k.
From Table 5, our method (CR) distinguishes itself by achieving significant advancements in performance across various mathematical subdomains, outperforming Complex CoT by a margin of 5.4%.
The enhancements are particularly pronounced in the Number Theory, Probability, PreAlgebra, and
Algebra categories. In comparison to the Complex CoT approach, even when restricted to 4-shot
prompting due to GPT-4’s context length constraints, CR demonstrates its robustness and effectiveness. It is also evident that the PHP method further amplifies the performance of both Complex CoT
and CR, establishing new state-of-the-art results with an overall accuracy of 58.0% using CR with
PHP, with a margin of 4.2% over Complex CoT with PHP. Additionally, the “Iters” metric elucidates
that CR, when synergized with PHP strategies, reaches self-consistent answers with fewer iterations.
From Table 6, it is evident that consistent performance boost across different difficulty levels signifies the robustness of the CR methodology in handling a diverse range of mathematical problems.
The performance increase of 9.7% at level 5—which translates to a substantial relative improvement of 43%—compared to the baseline Complex CoT approach without PHP, underscores CR’s
effectiveness in handling the most challenging problems in the dataset.
-----
Table 5: Comparative performance on the MATH dataset using GPT-4. We adopted a default temperature setting of t = 0.0, consistent with prior research settings (greedy decoding). PHP denotes
the application of the progressive-hint prompting. “Iters” represents the average number of LLM
interactions, and Overall reflects the overall results across MATH subtopics.
MATH Dataset ([∗] denotes using 500 test examples subset following Lightman et al. (2023))
w/ PHP
InterAlgebra Precalculus Geometry NumTheory Probability PreAlgebra Algebra Overall
CoT (OpenAI, 2023) ✗ - - - - - - - 42.50
✗ 23.4 26.7 36.5 49.6 53.1 71.6 70.8 50.36
✓ 26.3 29.8 41.9 55.7 56.3 73.8 74.3 53.90
(Iters) 3.2414 3.2435 3.2233 3.1740 2.8122 2.3226 2.4726 2.8494
✗ 29.9 33.9 34.1 46.8 47.4 62.1 70.7 48.80
✓ 28.9 30.4 43.9 53.2 50.0 68.5 84.1 53.80
(Iters) 2.7629 2.4643 2.7805 2.7581 2.4474 2.3780 2.5484 2.59
✗ 28.9 (-1.0) 30.4 (-3.5) 39.0 (+4.9) 54.8 (+8.0) 57.9 (+10.5) 71.8 (+9.7) 79.3 (+8.6) 54.20 (+5.40)
✓ 32.0 (+3.1) 35.7 (+5.3) 43.9 (+0.0) 59.7 (+6.5) 63.2 (+13.2) 71.8 (+3.3) 86.6 (+2.5) 58.00 (+4.20)
(Iters) 2.6598 2.4821 2.5122 2.2903 2.2105 2.2195 2.3548 2.40 (-0.19)
Complex CoT, 8-shot
(Zheng et al., 2023)
Complex CoT[∗]
(repro., 8-shot)
CR[∗]
(ours, 4-shot)
Table 6: Comparative performance on the MATH dataset using GPT-4 for different difficulty levels.
MATH Dataset ([∗] denotes using 500 test examples subset)
w/ PHP
Level 5 Level 4 Level 3 Level 2 Level 1 Overall
CoT (OpenAI, 2023) ✗ - - - - - 42.50
Complex CoT[∗] ✗ 22.4 38.3 62.9 72.2 79.1 48.80
(repro., 8-shot) ✓ 23.9 43.8 63.8 86.7 83.7 53.80
CR[∗] ✗ 32.1 (+9.7) 43.0 (+4.7) 62.9 (+0.0) 78.9 (+6.7) 83.7 (+4.6) 54.20 (+5.40)
(ours, 4-shot) ✓ 27.3 (+3.4) 50.0 (+6.2) 70.9 (+7.1) 86.7 (+0.0) 90.7 (+7.0) 58.00 (+4.20)
5 Related Work
Large Language Models Language models have evolved into extremely large-scale neural networks (Devlin et al., 2018; Raffel et al., 2020; Radford et al., 2018, 2019; Brown et al., 2020;
OpenAI, 2023), which have shown impressive results across various tasks. GPT-3 (Brown et al.,
2020) and its successors, such as Gopher (Rae et al., 2021), PaLM (Chowdhery et al., 2022),
GLaM (Du et al., 2022), Chinchilla (Hoffmann et al., 2022), Megatron–Turing NLG (Smith et al.,
2022), LaMDA (Thoppilan et al., 2022), OPT (Zhang et al., 2022), LLaMA (Touvron et al., 2023),
PaLM 2 (Anil et al., 2023) and GPT-4 (OpenAI, 2023), have demonstrated that large auto-regressive
language models can achieve high-quality results without extensive task-specific data collection or
parameter updates.
Reasoning with LLM An extensive range of studies highlights the benefits of equipping neural
networks with the capacity to generate intermediate steps, which is a capability that notably enhances
reasoning performance across a broad spectrum of applications (Zaidan et al., 2007; Yao et al., 2021;
Hase & Bansal, 2021; Yang et al., 2022; Wu et al., 2022; Zhou et al., 2022). Morishita et al. (2023)
improved the reasoning abilities of language models by using a synthetic corpus derived from formal
logic theory. A comprehensive analysis of process-based versus outcome-based approaches on the
GSM8K task was conducted by Uesato et al. (2022), and Lightman et al. (2023) further advanced
this field by meticulously collecting the PRM-800K dataset containing step-by-step supervision.
Additionally, a considerable breadth of research is committed to amplifying the reasoning capabilities of machine learning systems by leveraging symbolic systems, including knowledge graphs (Mihaylov & Frank, 2018; Bauer et al., 2018; Kundu et al., 2018; Wang et al., 2019; Lin et al., 2019;
Ding et al., 2019; Feng et al., 2020; Wang et al., 2022a) and mathematical provers (Jiang et al.,
2022).
Chain-of-Thought Prompting In the pioneering work on chain-of-thought reasoning, Wei et al.
(2022) emphasized the importance of incorporating multi-step reasoning paths before generating
definitive answers. In a progression from this, Wang et al. (2022b) introduced self-consistency, a
sophisticated decoding strategy destined to supersede the rudimentary greedy decoding employed in
CoT prompting. Advancing this further, Zhou et al. (2022) sought to tackle the complexities faced
by CoT prompting in addressing tasks necessitating solutions beyond the complexity scope of the
exemplars used in the prompts. Creswell & Shanahan (2022) showcased a method for enhancing
-----
[Problem Description]
- Example ID: test/intermediate algebra/1350.json
- Level: 5
- Subject: Intermediate Algebra
- Problem: Consider the polynomial
f (x) = anx[n] + an−1x[n][−][1] + · · · + a2x[2] + a1x + a0,
where the polynomial has integer coefficients and its roots are distinct integers.
Given an = 2 and a0 = 66, the inquiry is to determine the least possible value of |an−1|.
[Ground Truth Solution]
- Solution: Since f (x) has integer coefficients, the Integer Root Theorem asserts that any
integer roots of f (x) must divide the constant term 66 = 2 · 3 · 11. Consequently, the
potential integer roots of f (x) are
±1, ± 2, ± 3, ± 6, ± 11, ± 22, ± 33, ± 66.
Additionally, given that all roots of f (x) are integers, they are necessarily members of the
aforementioned list.
We proceed to utilize Vieta’s formulas. The roots ofwhich evaluates to either− an2−1 . To minimize |an 33−1| or, we aim to reduce the absolute value of the root sum, ensuring −33. Simultaneously, the sum of these roots is f (x) yield a product of (− −1)ana[n]n−·1a[a]n=[0] [,]
that the product of the roots remains 33 or −33.
Considering two distinct scenarios:
Case 1: One of the roots is 33 or −33. In this scenario, the only other viable roots are ±1.
Here, the root sum’s absolute value is at least 32.
Case 2: One root is 11 or −11, and another is 3 or −3. The only other plausible roots
remain ±1, leading to a root sum’s absolute value of at least 11 − 3 − 1 = 7. This is a more
optimal outcome than Case 1. For an absolute root sum of 7, we find |an−1| = 7|an| = 14.
From our analysis, we deduce that |an−1| ≥ 14. Verifying this, we observe that equality is
attained by
f (x) = 2(x + 11)(x − 3)(x − 1)
= 2x[3] + 14x[2] − 82x + 66,
which is consistent with having integer coefficients and integer roots. Thus, the minimal
achievable value for |an−1| is 14 .
- Answer: 14
Figure 4: An example from the MATH dataset (Hendrycks et al., 2021; Lightman et al., 2023).
reasoning quality, conducting a beam search throughout the reasoning trace space. Fu et al. (2022)
highlights the importance of increasing reasoning complexity inside the few-shot prompts for better
performance.
More recently, Li et al. (2023) brought forth DIVERSE, which generates a spectrum of prompts
to scrutinize various reasoning trajectories for an identical question, utilizes a verifier to weed out
incorrect answers using a weighted voting scheme, and inspects each reasoning step in isolation
rather than as a collective chain. Yao et al. (2023) proposed a framework for language model inference, Tree-of-Thought (ToT). ToT enhances the problem-solving abilities of language models
by facilitating deliberate decision-making, contemplating multiple reasoning paths, and performing
self-evaluative choices to determine subsequent actions. Taking an iterative approach, Zheng et al.
(2023) advocated for recurrent invocations of LLMs, leveraging prior answers as contextual hints to
inform subsequent iterations. Lastly, Feng et al. (2023) underscored the theoretical prowess of CoT
in addressing intricate real-world tasks like dynamic programming.
-----
[Chain-of-Thought Reasoning by GPT-4]
- Reasoning: “Let’s think step by step:
1. The roots of the polynomial are distinct integers, and the leading coefficient is 2, so
the roots must be divisors of [66]2 [= 33][.]
2. The divisors of 33 are ±1, ±3, ±11, ±33.
3. The sum of the roots of the polynomial is − anan−1 [.]
4. To minimize |an−1|, we should select the roots to be as close to zero as possible.
5. The smallest possible sum of the roots is −1 − 3 + 11 + 33 = 40,
6. Thus, the smallest possible value of |an−1| is 2 × 40 = 80 .”
- Answer: 80 (Wrong)
[Cumulative-Reasoning by GPT-4]
- Generated Hints:
1. “Hint 1: Recall Vieta’s formulas, which relate the coefficients of a polynomial to the
sums and products of its roots.”
2. “Hint 2: The product of the roots of the polynomial is equal to − a[a]n[0] [.”]
3. “Hint 3: The sum of the roots of the polynomial is equal to − ana−n 1 [.”]
4. “Hint 4: Since the roots are distinct integers, consider the factors of − a[a]n[0] [and their]
sums.”
- Generated Simple Questions and Answers:
Question 1: “What is the product of the roots of the polynomial f (x)?”
Answer 1: “The product of the roots of the polynomial is − a[a]n[0] [=][ −] [66]2 [=][ −][33][.”]
Question 2: “What are the possible sets of distinct integer roots of the polynomial?”
Answer 2: “The possible sets of distinct integer roots are {−33, 1}, {−11, 3}, and
{−3, 11, −1}.”
Question 3: “What are the sums of the roots for each possible set of roots?”
Answer 3: “The sums of the roots are −32, −8, and 7 for the sets {−33, 1}, {−11, 3}, and
{−3, 11, −1}, respectively.”
Question 4: “What is the least possible value of |an−1|?”
Answer 4: “The least possible value of |an−1| is 2 × 7 = 14.”
- Solution: “Let’s think step by step. Given that an = 2 and a0 = 66, and the roots of the
polynomial are distinct integers, the least possible value of |an−1| is 14 .”
- Answer: 14 (Correct)
Figure 5: Solutions for the example Fig. 4 from the MATH dataset, generated by CoT and CR. CoT
will generate the answer directly through a chain of thought. By contrast, CR will first generate a
few hints, then several simple and foundational questions, and then answer them by self, and finally
conclude with the help of the generated hints and question-answer pairs.
6 Conclusion
In this paper, we propose CR that employs language models iteratively and cumulatively. The main
idea behind our algorithm is decomposing the complex task into smaller steps, and maintaining a
thinking context for all the intermediate results. Experimental results show that our method achieves
state-of-the-art performance for logical inference tasks, the Game of 24, and MATH word problems.
Given its inherent generality, our framework holds promising potential for addressing a wider array
of mathematical challenges.
-----
References
Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey,
P., Chen, Z., et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. 10
Bauer, L., Wang, Y., and Bansal, M. Commonsense for generative multi-hop question answering
tasks. arXiv preprint arXiv:1809.06309, 2018. 10
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam,
P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural
information processing systems, 33:1877–1901, 2020. 1, 10
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung,
H. W., Sutton, C., Gehrmann, S., et al. Palm: Scaling language modeling with pathways. arXiv
preprint arXiv:2204.02311, 2022. 10
Cooper, R., Crouch, D., Van Eijck, J., Fox, C., Van Genabith, J., Jaspars, J., Kamp, H., Milward,
D., Pinkal, M., Poesio, M., et al. Using the framework. Technical report, Technical Report LRE
62-051 D-16, The FraCaS Consortium, 1996. 16
Creswell, A. and Shanahan, M. Faithful reasoning using large language models. arXiv preprint
arXiv:2208.14271, 2022. 10
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional
transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 1, 10
Ding, M., Zhou, C., Chen, Q., Yang, H., and Tang, J. Cognitive graph for multi-hop reading comprehension at scale. arXiv preprint arXiv:1905.05460, 2019. 10
Du, N., Huang, Y., Dai, A. M., Tong, S., Lepikhin, D., Xu, Y., Krikun, M., Zhou, Y., Yu, A. W., Firat,
O., et al. Glam: Efficient scaling of language models with mixture-of-experts. In International
Conference on Machine Learning, pp. 5547–5569. PMLR, 2022. 10
Feng, G., Gu, Y., Zhang, B., Ye, H., He, D., and Wang, L. Towards revealing the mystery behind
chain of thought: a theoretical perspective. arXiv preprint arXiv:2305.15408, 2023. 11
Feng, Y., Chen, X., Lin, B. Y., Wang, P., Yan, J., and Ren, X. Scalable multi-hop relational reasoning
for knowledge-aware question answering. arXiv preprint arXiv:2005.00646, 2020. 10
Fu, Y., Peng, H., Sabharwal, A., Clark, P., and Khot, T. Complexity-based prompting for multi-step
reasoning. arXiv preprint arXiv:2210.00720, 2022. 2, 11
Gupta, V., Mehta, M., Nokhiz, P., and Srikumar, V. INFOTABS: inference on tables as semistructured data. CoRR, abs/2005.06117, 2020. 7
Han, S., Schoelkopf, H., Zhao, Y., Qi, Z., Riddell, M., Benson, L., Sun, L., Zubova, E., Qiao,
Y., Burtell, M., et al. Folio: Natural language reasoning with first-order logic. arXiv preprint
arXiv:2209.00840, 2022. 3, 5
Hase, P. and Bansal, M. When can models learn from explanations? a formal framework for understanding the roles of explanation data. arXiv preprint arXiv:2102.02201, 2021. 10
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv preprint
arXiv:2103.03874, 2021. 2, 9, 11
Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. d. L.,
Hendricks, L. A., Welbl, J., Clark, A., et al. Training compute-optimal large language models.
arXiv preprint arXiv:2203.15556, 2022. 10
Jiang, A. Q., Welleck, S., Zhou, J. P., Li, W., Liu, J., Jamnik, M., Lacroix, T., Wu, Y., and Lample, G. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. ArXiv,
abs/2210.12283, 2022. 10
Kahneman, D. Thinking, fast and slow. macmillan, 2011. 1
-----
Kumar, D., Gupta, V., Sharma, S., and Zhang, S. Realistic data augmentation framework for enhancing tabular reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural
Language Processing, Online and Abu Dhabi, December 2022. Association for Computational
Linguistics. 7, 16
Kundu, S., Khot, T., Sabharwal, A., and Clark, P. Exploiting explicit paths for multi-hop reading
comprehension. arXiv preprint arXiv:1811.01127, 2018. 10
Li, Y., Lin, Z., Zhang, S., Fu, Q., Chen, B., Lou, J.-G., and Chen, W. Making language models better
reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association
for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333, 2023. 11
Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J.,
Sutskever, I., and Cobbe, K. Let’s verify step by step. arXiv preprint arXiv:2305.20050, 2023. 1,
9, 10, 11
Lin, B. Y., Chen, X., Chen, J., and Ren, X. Kagnet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151, 2019. 10
Long, J. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291, 2023. 1,
4
Lundberg, S., Ribeiro, M. T. C., Viggiano, D., Rafael, J., Amemiya, R., and et. al. Microsoft
[guidance library. https://github.com/microsoft/guidance, 2023. 4](https://github.com/microsoft/guidance)
Mihaylov, T. and Frank, A. Knowledgeable reader: Enhancing cloze-style reading comprehension
with external commonsense knowledge. arXiv preprint arXiv:1805.07858, 2018. 10
Mineshima, K., Mart´ınez-G´omez, P., Miyao, Y., and Bekki, D. Higher-order logical inference
with compositional semantics. In Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing, pp. 2055–2061, 2015. 2, 16
Morishita, T., Morio, G., Yamaguchi, A., and Sogawa, Y. Learning deductive reasoning from synthetic corpus based on formal logic. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato,
S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning,
volume 202 of Proceedings of Machine Learning Research, pp. 25254–25274. PMLR, 23–29 Jul
2023. 10
OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. 1, 10
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al. Improving language understanding
by generative pre-training. openai.com, 2018. 1, 10
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are
unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 1, 10
Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., Aslanides, J., Henderson, S.,
Ring, R., Young, S., et al. Scaling language models: Methods, analysis & insights from training
gopher. arXiv preprint arXiv:2112.11446, 2021. 10
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J.
Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of
Machine Learning Research, 21(1):5485–5551, 2020. 1, 10
Smith, S., Patwary, M., Norick, B., LeGresley, P., Rajbhandari, S., Casper, J., Liu, Z., Prabhumoye,
S., Zerveas, G., Korthikanti, V., et al. Using deepspeed and megatron to train megatron-turing nlg
530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990, 2022. 10
Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., Jin, A., Bos,
T., Baker, L., Du, Y., et al. Lamda: Language models for dialog applications. arXiv preprint
arXiv:2201.08239, 2022. 10
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozi`ere, B., Goyal,
N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv
preprint arXiv:2302.13971, 2023. 10
-----
Uesato, J., Kushman, N., Kumar, R., Song, F., Siegel, N., Wang, L., Creswell, A., Irving, G., and Higgins, I. Solving math word problems with process-and outcome-based feedback. arXiv preprint
arXiv:2211.14275, 2022. 10
Wang, X., Kapanipathi, P., Musa, R., Yu, M., Talamadupula, K., Abdelaziz, I., Chang, M., Fokoue,
A., Makni, B., Mattei, N., et al. Improving natural language inference using external knowledge
in the science questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 33, pp. 7208–7215, 2019. 10
Wang, X., Liu, K., Wang, D., Wu, L., Fu, Y., and Xie, X. Multi-level recommendation reasoning
over knowledge graphs with reinforcement learning. In Proceedings of the ACM Web Conference
2022, pp. 2098–2108, 2022a. 10
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou,
D. Self-consistency improves chain of thought reasoning in language models. arXiv preprint
arXiv:2203.11171, 2022b. 10
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-ofthought prompting elicits reasoning in large language models. Advances in Neural Information
Processing Systems, 35:24824–24837, 2022. 1, 4, 10
Wu, T., Terry, M., and Cai, C. J. Ai chains: Transparent and controllable human-ai interaction by
chaining large language model prompts. In Proceedings of the 2022 CHI conference on human
factors in computing systems, pp. 1–22, 2022. 10
Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., and Yang, D. Seqzero: Few-shot compositional
semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381,
2022. 10
Yao, H., Chen, Y., Ye, Q., Jin, X., and Ren, X. Refining language models with compositional
explanations. Advances in Neural Information Processing Systems, 34:8954–8967, 2021. 10
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of thoughts:
Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
1, 2, 4, 8, 11
Zaidan, O., Eisner, J., and Piatko, C. Using “annotator rationales” to improve machine learning for
text categorization. In Human language technologies 2007: The conference of the North American
chapter of the association for computational linguistics; proceedings of the main conference, pp.
260–267, 2007. 10
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li,
X., Lin, X. V., et al. Opt: Open pre-trained transformer language models. arXiv preprint
arXiv:2205.01068, 2022. 10
Zheng, C., Liu, Z., Xie, E., Li, Z., and Li, Y. Progressive-hint prompting improves reasoning in
large language models. arXiv preprint arXiv:2304.09797, 2023. 2, 9, 10, 11
Zhou, D., Sch¨arli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Bousquet, O., Le, Q.,
and Chi, E. Least-to-most prompting enables complex reasoning in large language models. arXiv
preprint arXiv:2205.10625, 2022. 10
-----
A Illustrative example on higher-order logic
[Modified Example FraCas-317]
- Premises:
1. Most of the representatives who read the report have a positive attitude towards it.
2. No two representatives have read it at the same time, and they may have different
opinions about it.
3. No representative took less than half a day to read the report.
4. There are sixteen representatives.
- Hypothesis: It took the representatives more than a week to read the report, and most found
it valuable.
- Label: [True]
- Higher-Order Logic Premises:
1. most(λx.representative(x) ∧ reads(x, report), λx.has positive attitude(x, report))
2.
¬∃x, y (x ̸= y ∧ representative(x) ∧ representative(y)∧
read at same time(x, y, report)) ∧
∀x, y (x ̸= y ∧ representative(x) ∧ representative(y) →
may(λz.different opinions(z, x, y, report)))
3. ¬∃x(representative(x) ∧ took less than half day(λy.read(y, report)))
4. count(λx.representative(x)) = 16
- Higher-Order Logic Hypothesis:
∃t(total reading time(report) > 7 ∧ time unit(t, day))∧
most(λx.representative(x), λx.found valuable(x, report))
[Cumulative-Reasoning by GPT-4]
- Generated Propositions:
1. “Using P1 and P2, we can deduce Q1: Every representative read the report at a different time, and most representatives found the report valuable.”
2. “Using P3 and Q1, we deduce Q2: The total reading time must be at least 8 days.
(since 16 representatives each took at least half a day).”
3. “Using Q2 and P4, we deduce Q3: The representatives took at least 8 days to read the
report.”
- Reasoning: “Based on the intermediate propositions Q1, Q2, and Q3, the hypothesis that
it took the representatives more than a week to read the report, and most found it valuable,
is confirmed as True.”
- Prediction: [True] (Correct)
Here we present a refined example derived from the FraCas dataset to illustrate higher-order logic
inference. It is noteworthy that the FraCas dataset (Cooper et al., 1996) is dedicated to the realm of
higher-orderlogic inference. This characterization also applies to a majority of the Natural Language
Inference (NLI) datasets (Kumar et al., 2022), which encompass their internal syntax, semantics,
and logic. The intricate linguistic components such as quantifiers, plurals, adjectives, comparatives,
verbs, attitudes, and so on, can be formalized with Combinatory Categorial Grammar (CCG) along
with the formal compositional semantics (Mineshima et al., 2015).
-----
B More on Datasets
B.1 More FOLIO Examples
[Example ID: 603]
- Hypothesis: Hyraxes lay eggs.
- Premises:
1. The only types of mammals that lay eggs are platypuses and echidnas.
2. Platypuses are not hyrax.
3. Echidnas are not hyrax.
4. No mammals are invertebrates.
5. All animals are either vertebrates or invertebrates.
6. Mammals are animals.
7. Hyraxes are mammals.
8. Grebes lay eggs.
9. Grebes are not platypuses and also not echidnas.
- Label: [False]
[Chain-of-Thought Reasoning by GPT-4]
- Reasoning: “The premises state that the only types of mammals that lay eggs are platypuses
and echidnas. Hyraxes are mammals but are neither platypuses nor echidnas. Since the
conclusion is about hyraxes laying eggs but there’s no direct information in the premises
on this topic, the conclusion cannot be determined with certainty.”
- Prediction: [Unknown] (Wrong)
[Cumulative-Reasoning by GPT-4]
- Generated Propositions:
1. “All mammals are either vertebrates or invertebrates.”
2. “Hyraxes do not lay eggs, as they are neither platypuses nor echidnas.”
- Reasoning: “We can deduce that the only types of mammals that lay eggs are platypuses
and echidnas. Hyraxes are mammals, but they are neither platypuses nor echidnas. There-
fore, hyraxes do not lay eggs.”
- Prediction: [False] (Correct)
[Example ID: 546]
- Hypothesis: Extractive models are not always faithful.
- Premises:
1. A summarization model is always faithful if it uses content from the input documents.
2. Extractive models are a kind of summarization models.
3. Extractive model can only use content from the input documents.
- Label: [False]
[Chain-of-Thought Reasoning by GPT-4]
- Reasoning: “The premises state that a summarization model is always faithful if it uses
content from the input documents. Extractive models are a kind of summarization model,
and they can only use content from the input documents. However, the premises do not
provide any information about whether extractive models are always faithful or not.”
- Prediction: [Unknown] (Wrong)
-----
[Cumulative-Reasoning Reasoning by GPT-4]
- Generated Propositions:
1. “All extractive models are faithful summarization models since they use content from
the input documents.”
2. “All faithful summarization models use content from the input documents.”
- Reasoning: “We can infer that extractive models are always faithful because they only
use content from the input documents and satisfy the condition for a summarization model
to be faithful. Therefore, the conclusion that extractive models are not always faithful is
incorrect.”
- Prediction: [False] (Correct)
[Example ID: 404]
- Hypothesis: No coursing is banned.
- Premises:
1. Greyhound racing is a competitive sport where spectators bet on greyhounds.
2. Coursing is a part of Greyhound racing.
3. Some competitive sports where spectators bet on things are banned.
4. Coursing involves spectators betting on a hare being pursued by greyhounds.
5. Hares are small game.
6. If a competitive sport involves spectators betting on small game, then it is banned.
- Label: [False]
[Chain-of-Thought Reasoning by GPT-4]
- Reasoning: “The premises state that coursing is a part of greyhound racing, and it involves
spectators betting on a hare being pursued by greyhounds. Hares are small game, and if
a competitive sport involves spectators betting on small game, then it is banned. It’s also
mentioned that some competitive sports where spectators bet on things are banned. How-
ever, the premises do not clearly state the status of coursing being banned or not banned.”
- Prediction: [Unknown] (Wrong)
[Cumulative-Reasoning Reasoning by GPT-4]
- Generated Propositions:
1. “If a competitive sport involves spectators betting on hares, then it is banned.”
2. “Coursing is a competitive sport that involves spectators betting on small game.”
- Reasoning: “We know that coursing is a competitive sport involving spectators betting on
small game, specifically hares. The premises also state that if a competitive sport involves
spectators betting on small game, then it is banned. Thus, we can deduce that there is
a possibility that coursing is banned. The hypothesis claims that no coursing is banned,
which contradicts the deduction that coursing might be banned.”
- Prediction: [False] (Correct)
[Example ID: 609]
- Hypothesis: Allison has room for broccoli.
- Premises:
1. If you have room for dessert, you have room for broccoli.
2. Everyone at Luis’s dinner party has room for dessert, including Luis.
3. Mauricia does not have room for broccoli.
4. Luis’s dinner party is the first ever dinner party that Allison has attended.
5. Gustave has room for both broccoli and asparagus.
6. Broccoli and asparagus are both vegetables.
- Label: [True]
-----
[Chain-of-Thought Reasoning by GPT-4]
- Reasoning: “The premises state that if you have room for dessert, you have room for
broccoli. Everyone at Luis’s dinner party has room for dessert, including Luis. However,
we do not have information specifically about Allison having room for dessert or not, even
though it’s her first dinner party. Thus, we can’t conclude with certainty if Allison has room
for broccoli.”
- Prediction: [Unknown] (Wrong)
[Cumulative-Reasoning by GPT-4]
- Generated Propositions:
1. “Gustave has room for vegetables.”
2. “Everyone at Luis’s dinner party, including Mauricia, has room for dessert.”
- Reasoning: “We know that:
1. If one has room for dessert, they have room for broccoli.
2. Everyone at Luis’s dinner party, including Luis and Allison, has room for dessert.
Using these premises, we can infer that since everyone at Luis’s dinner party has room for
dessert, they also have room for broccoli. Therefore, Allison, who is at Luis’s dinner party,
must have room for broccoli.”
- Prediction: [True] (Correct)
B.2 More examples on problems excluded from FOLIO wiki curated
Type 1 Error: Missing common knowledge or contradictory to common knowledge
[Example ID: 34]
- Premises:
1. The Croton River watershed is the drainage basin of the Croton River.
2. The Croton River is in southwestern New York.
3. Kings are male.
4. Water from the Croton River watershed flows to the Bronx.
5. The Bronx is in New York.
- Hypothesis: Water from the Croton River flows to the Bronx.
- Label: [Unknown]
- Wrong Type: [Type 1: Missing common knowledge or contradictory to common knowl-
edge in the premises]
- Explanation: We understand that the Croton River is in southwestern New York, and the
Bronx is also located in New York. It is stated that water from the Croton River watershed
flows to the Bronx, and the Croton River watershed is the drainage basin of the Croton River.
It is common knowledge that water from a river flows to its drainage basin. Therefore, it is
true that water from the Croton River flows to the Bronx.
[Example ID: 268]
- Premises:
1. Bernarda Bryson Shahn was a painter and lithographer.
2. Bernarda Bryson Shahn was born in Athens, Ohio.
3. Bernarda Bryson Shahn was married to Ben Shahn.
4. People born in Athens, Ohio are Americans.
- Hypothesis: Bernarda Bryson Shahn was born in Greece.
- Label: [Unknown]
- Wrong Type: [Type 1: Missing common knowledge or contradictory to common knowl-
edge in the premises]
- Explanation: We know that Bernarda Bryson Shahn was born in Athens, Ohio. It is com-
mon knowledge that Greece is not in Ohio. It also states that people born in Athens, Ohio,
are Americans. Thus, it is false to conclude that Bernarda Bryson Shahn was born in
Greece.
-----
[Example ID: 62]
- Premises:
1. The Golden State Warriors are a team from San Francisco.
2. The Golden State Warriors won the NBA finals.
3. All teams attending the NBA finals have more than thirty years of history.
4. Boston Celtics are a team that lost the NBA finals.
5. If a team wins the NBA finals, then they will have more income.
6. If a team wins or loses at the NBA finals, then they are attending the finals.
- Hypothesis: The Golden State Warriors will have more income for gate receipts.
- Label: [True]
- Wrong Type: [Type 1: Missing common knowledge or contradictory to common knowledge in the premises]
- Explanation: We know that the Golden State Warriors won the NBA finals and that if a
team wins the NBA finals, they will have more income. Therefore, we can infer that the
Golden State Warriors will have more income. However, the hypothesis mentions ’more
income for gate receipts,’ and there is no information about gate receipts on the premises.
Type 2 Error: Overly ambiguous problems failing to provide unequivocal answers
[Example ID: 496]
- Premises:
1. Some fish may sting.
2. Stonefish is a fish.
3. It stings to step on a stonefish.
4. Stonefish stings cause death if not treated.
5. To treat stonefish stings, apply heat to the affected area or use an antivenom.
- Hypothesis: If you step on a stonefish and apply heat to the affected area, stings will cause
death.
- Label: [Unknown]
- Wrong Type: [Type 2: Overly ambiguous problems failing to provide unequivocal an-
swers]
- Explanation: The premises state that applying heat to the affected area or using antivenom
can treat stonefish stings. Thus, if heat is applied to the affected area, it should help treat
the sting and prevent death. However, it is not certain that applying heat to the affected
area will prevent death, as it is possible that the sting is too severe to be treated with heat.
[Example ID: 432]
- Premises:
1. Vic DiCara plays guitar and bass.
2. The only style of music Vic DiCara plays is punk music.
3. Vic DiCara played in the band Inside Out.
- Hypothesis: If you step on a stonefish and apply heat to the affected area, stings will cause
death.
- Label: [Unknown]
- Wrong Type: [Type 2: Overly ambiguous problems failing to provide unequivocal an-
swers]
- Explanation: We know that Vic DiCara played in the band Inside Out and the only style of
music he plays is punk music. This information implies that Inside Out played punk music
while Vic DiCara was a member. However, it is not certain that Inside Out was a punk band,
as it is possible that the band played a different style of music before Vic DiCara joined.
-----
[Example ID: 673]
- Premises:
1. Cancer biology is finding genetic alterations that confer selective advantage to cancer
cells.
2. Cancer researchers have frequently ranked the importance of substitutions to cancer
growth by P value.
3. P values are thresholds for belief, not metrics of effect.
- Hypothesis: Cancer researchers tend to use the cancer effect size to determine the relative
importance of the genetic alterations that confer selective advantage to cancer cells.
- Label: [Unknown]
- Wrong Type: [Type 2: Overly ambiguous problems failing to provide unequivocal answers]
- Explanation: We can deduce that cancer researchers tend to use P values, not effect sizes,
to rank the importance of genetic alterations. Thus, the hypothesis contradicts the premises.
However, it is still possible that cancer researchers use the cancer effect size to determine
the relative importance of the genetic alterations that confer selective advantage to cancer
cells.
Type 3 Error: Inherent inconsistencies presented within the premises
[Example ID: 640]
- Premises:
1. William Dickinson was a British politician who sat in the House of Commons.
2. William Dickinson attended Westminster school for high school and then the University of Edinburgh.
3. The University of Edinburgh is a university located in the United Kingdom.
4. William Dickinson supported the Portland Whigs.
5. People who supported the Portland Whigs did not get a seat in the Parliament.
- Hypothesis: William Dickinson did not get a seat in the Parliament.
- Label: [True]
- Wrong Type: [Type 3: Inherent inconsistencies presented within the premises]
- Explanation: We have a contradiction. On one hand, we have information that William
Dickinson supported the Portland Whigs, and people who supported the Portland Whigs
did not get a seat in the Parliament. On the other hand, another premise states that William
Dickinson was a British politician who sat in the House of Commons, which implies that he
did get a seat in the Parliament.
[Example ID: 643]
- Premises:
1. William Dickinson was a British politician who sat in the House of Commons.
2. William Dickinson attended Westminster school for high school and then the University of Edinburgh.
3. The University of Edinburgh is a university located in the United Kingdom.
4. William Dickinson supported the Portland Whigs.
5. People who supported the Portland Whigs did not get a seat in the Parliament.
- Hypothesis: William Dickinson sat in the House of Commons.
- Label: [True]
- Wrong Type: [Type 3: Inherent inconsistencies presented within the premises]
- Explanation: We have a contradiction. On one hand, we have information that William
Dickinson supported the Portland Whigs, and people who supported the Portland Whigs
did not get a seat in the Parliament. On the other hand, another premise states that William
Dickinson was a British politician who sat in the House of Commons, which implies that he
did get a seat in the Parliament.
Type 4 Error: Vague premises or typographical errors
-----
[Example ID: 314]
- Premises:
1. Palstaves are a type of early bronze axe.
2. Commonly found in northern, western and south-western Europe, palstaves are cast
in moulds.
3. John Evans is an archeologist who popularized the term ”palstave”.
4. A paalstab is not an axe, but rather a digging shovel.
- Hypothesis: John Evans Popularized the term paalstab.
- Label: [Unknown]
- Wrong Type: [Type 4: Vague premises or typographical errors]
- Explanation: What is palstave and paalstab? Were they misspelled?
[Example ID: 315]
- Premises:
1. Palstaves are a type of early bronze axe.
2. Commonly found in northern, western and south-western Europe, palstaves are cast
in moulds.
3. John Evans is an archeologist who popularized the term ”palstave”.
4. A paalstab is not an axe, but rather a digging shovel.
- Hypothesis: There is an axe that is commonly found in Western Europe.
- Label: [Unknown]
- Wrong Type: [Type 4: Vague premises or typographical errors]
- Explanation: We can see that palstaves are a type of early bronze axe and they are commonly found in northern, western, and south-western Europe. Therefore, it is true that there
is an axe that is commonly found in Western Europe. However, the premises also state that
a paalstab is not an axe, but rather a digging shovel. Was paalstab the same thing as palstaves?
Type 5 Error: Incorrect answers
[Example ID: 9]
- Premises:
1. Palstaves are a type of early bronze axe.
2. Pierre de Rigaud de Vaudreuil built Fort Carillon.
3. Fort Carillon was located in New France.
4. New France is not in Europe.
- Hypothesis: Fort Carillon was located in Europe.
- Label: [Unknown]
- Wrong Type: [Type 5: Incorrect answers]
- Explanation: We know that Fort Carillon was located in New France, and New France is
not in Europe. Therefore, Fort Carillon was not located in Europe.
-----
[Example ID: 632]
- Premises:
1. New York City is on the East Coast.
2. Seattle is on the West Coast.
3. If a person from a city on the East coast is traveling to a city on the west coast, they
will be on a long flight.
4. Most passengers on flights to Seattle from New York City are not in first class.
5. People on long flights are uncomfortable unless they’re in first class.
- Hypothesis: Some people flying from New York City to Seattle will be uncomfortable.
- Label: [False]
- Wrong Type: [Type 5: Incorrect answers]
- Explanation: We can deduce the following: 1. A person traveling from New York City to
Seattle will be on a long flight (since New York City is on the East Coast and Seattle is on
the West Coast). 2. Most passengers on flights from New York City to Seattle are not in first
class. 3. People on long flights are uncomfortable unless they’re in first class. Given this
information, we can conclude that some people flying from New York City to Seattle will be
uncomfortable, as most of them are not in first class and long flights cause discomfort for
those not in first class.
[Example ID: 671]
- Premises:
1. Westworld is an American science fiction-thriller TV series.
2. In 2016, a new television series named Westworld debuted on HBO.
3. The TV series Westworld is adapted from the original film in 1973, which was written
and directed by Michael Crichton.
4. The 1973 film Westworld is about robots that malfunction and begin killing the human
visitors.
- Hypothesis: Michael Crichton has directed a film about robots.
- Label: [Unknown]
- Wrong Type: [Type 5: Incorrect answers]
- Explanation: We can deduce that Michael Crichton wrote and directed the 1973 film West-
world, which is about robots that malfunction and begin killing the human visitors. Thus, it
is true that Michael Crichton has directed a film about robots.
-----
| [
"Yifan, Zhang",
"Jingqin, Yang",
"Yang, Yuan",
"Andrew Chi-Chih, Yao"
] | 2023-08-08T00:00:00 | null | false | 46 | 5 | null | https://arxiv.org/abs/2308.04371v4 | https://arxiv.org/abs/2308.04371 | https://www.semanticscholar.org/paper/507acddb0b7f36b83fd7c8bff2f121eb506ac8fb |
HOList: An Environment for Machine Learning of Higher-Order Theorem Proving (extended version) | N/A | This work provides an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment and presents a deep reinforcement learning driven automated theorem provers, DeepHOL, with strong initial results on this benchmark. | null | [
"Sarah, Loos",
"Markus, Rabe",
"Kshitij, Bansal",
"Christian, Szegedy",
"Stewart, Wilcox"
] | 2019-01-01T00:00:00 | ICML 2019 | true | 45 | 7 | [
"HOL Light"
] | https://www.semanticscholar.org/paper/9ef2e09a9e16e176e19c3fdc3b6ee22c5d3f3c97 | null | https://www.semanticscholar.org/paper/9ef2e09a9e16e176e19c3fdc3b6ee22c5d3f3c97 |
Learning From Mistakes Makes LLM Better Reasoner | Large language models (LLMs) recently exhibited remarkable reasoning capabilities on solving math problems. To further improve their reasoning capabilities, this work explores whether LLMs can LEarn from MistAkes (LEMA), akin to the human learning process. Consider a human student who failed to solve a math problem, he will learn from what mistake he has made and how to correct it. Mimicking this error-driven learning process, LEMA incorporates mistake-correction data pairs during fine-tuning LLMs. Specifically, we first collect inaccurate reasoning paths from various LLMs, and then employ GPT-4 as a ''corrector'' to identify the mistake step, explain the reason for the mistake, correct the mistake and generate the final answer. In addition, we apply a correction-centric evolution strategy that effectively expands the question set for generating correction data. Experiments across various LLMs and reasoning tasks show that LEMA effectively improves CoT-alone fine-tuning. Our further ablations shed light on the non-homogeneous effectiveness between CoT data and correction data. These results suggest a significant potential for LLMs to improve through learning from their mistakes. Our code, models and prompts are publicly available at https://github.com/microsoft/LEMA. | Experiments show that LEMA effectively improves CoT-alone fine-tuning and sheds light on the non-homogeneous effectiveness between CoT data and correction data, suggesting a significant potential for LLMs to improve through learning from their mistakes. | ## Learning From Mistakes Makes LLM Better Reasoner
**Shengnan An[∗♢][,][♣], Zexiong Ma[∗♡][,][♣], Zeqi Lin[†][♣], Nanning Zheng[†][♢],**
**Jian-Guang Lou[♣], Weizhu Chen[♣]**
_♢IAIR, Xi’an Jiaotong University, ♣Microsoft Corporation, ♡Peking University_
_♢{an1006634493@stu, nnzheng@mail}.xjtu.edu.cn,_
_♡[email protected], ♣{Zeqi.Lin, jlou, wzchen}@microsoft.com_
**Abstract**
Large language models (LLMs) recently exhibited remarkable reasoning
capabilities on solving math problems. To further improve their reasoning
capabilities, this work explores whether LLMs can LEarn from MistAkes
(LEMA), akin to the human learning process. Consider a human student
who failed to solve a math problem, he will learn from what mistake he
has made and how to correct it. Mimicking this error-driven learning process, LEMA incorporates mistake-correction data pairs during fine-tuning
LLMs. Specifically, we first collect inaccurate reasoning paths from various
LLMs, and then employ GPT-4 as a “corrector” to identify the mistake step,
explain the reason for the mistake, correct the mistake and generate the
final answer. In addition, we apply a correction-centric evolution strategy
that effectively expands the question set for generating correction data.
Experiments across various LLMs and reasoning tasks show that LEMA effectively improves CoT-alone fine-tuning. Our further ablations shed light
on the non-homogeneous effectiveness between CoT data and correction
data. These results suggest a significant potential for LLMs to improve
through learning from their mistakes. Our code, models and prompts are
[publicly available at Github Link.](https://github.com/microsoft/LEMA)
**Question: Tina makes $18.00 an hour. If she works more than 8 hours per shift, she is** CoT-Alone Fine-Tuning LEMA
eligible for overtime, which is paid by your hourly wage + 1/2 your hourly wage. If she works
10 hours every day for 5 days, how much money does she make? 90
85.4 84.2 83.5
**LLM**
_Sampling Rationales_ 80 77.9
**Rationales with Mistakes:**
Step 1: Tina makes $18.00 an hour for 8 hours, which is 8 * $18.00 = $144.00. 70
Step 2: She makes $27.00 an hour for the 2 hours of overtime, which is 2 * $27.00 = $54.00.
Step 3: So for 5 days, she makes $144.00 + $54.00 = $198.00.
Step 4: The answer is 198. Performance on GSM8K (%)
60
MetaMath WizardMath LLaMA-2 LLaMA
70B 70B 70B 65B
**GPT-4 As Corrector**
_Identifying Mistake and Making Correction_ 30
**CorrectionIncorrect Step: Step 3.:** 26.9 27.1
Explanation: Step 3 only calculates the earnings for one day, but not for the entire five days.Correct Solution: 25 25.0
...
Step 3: For one day, she makes $144.00 + $54.00 = $198.00. 20.8
Step 4: For 5 days, she makes $198.00 * 5 = $990.00. 20
Step 5: The answer is 990.
_Fine-Tuning on Mistake-Correction Data_ Performance on MATH (%)
15
**LLM** MetaMath70B WizardMath70B LLaMA-270B LLaMA65B
Figure 1: Left: Process of LEarning from MistAkes (LEMA). Right: Performance of LEMA
on GSM8K and MATH.
_∗_ Work done during the internship at Microsoft.
† Corresponding authors.
-----
**1** **Introduction**
_Mistakes are the portals of discovery._
_—James Joyce_
With exponential growth in data size and model scale, contemporary large language models (Brown et al., 2020; Zhang et al., 2022; Hoffmann et al., 2022; Smith et al., 2022; OpenAI,
2023b; Anil et al., 2023) have demonstrated significant advancements on various NLP tasks,
particularly in mathematical problem solving that necessitates complex chain-of-thought
(CoT) reasoning (Wei et al., 2022; Wang et al., 2022; Li et al., 2023b; Shi et al., 2023; Qin et al.,
2023; Lightman et al., 2023). In terms of performance on challenging mathematical tasks like
GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), proprietary large language
models, including GPT-4 (OpenAI, 2023b) and PaLM-2 (Anil et al., 2023), have attained
notable results. However, open-source LLMs such as LLaMA-2 (Touvron et al., 2023b) still
have much room for improvement.
To further improve the CoT reasoning capabilities of open-source LLMs for tackling mathematical tasks, a common approach is to fine-tune these models using annotated/generated
question-rationale data pairs (referred to as CoT data), which directly teach the model
how to perform CoT reasoning on these tasks (Magister et al., 2022; Huang et al., 2022; Ho
et al., 2022; Li et al., 2022; Yuan et al., 2023; Luo et al., 2023; Yu et al., 2023; Li et al., 2023a;
Liang et al., 2023; Ranaldi & Freitas, 2024). While this straightforward learning process
has exhibited its effectiveness, this study investigates whether the reasoning capabilities
of LLMs can be further improved through a backward learning process, i.e., learning from
the mistakes that LLMs have made. The insight of learning from mistakes comes from the
learning process of human students. Consider a student who is just beginning to learn math.
Beyond learning from golden knowledge and examples in books, he also does exercises.
After failing to solve a problem, he will learn what mistakes he made and how to correct
them. By learning from the mistakes he has made, his reasoning capability will be further
improved. Inspired by this error-driven learning process, this work explores whether the
reasoning capabilities of LLMs can also benefit from understanding and correcting mistakes.
To this end, we first generate mistake-correction data pairs (referred to as correction data)
and then inject these correction data into the CoT fine-tuning process (Figure 1). For
generating correction data, we employ multiple LLMs, including the LLaMA and GPT
series models, to collect inaccurate reasoning paths (i.e., with incorrect final answers). We
then use GPT-4 as the “corrector” to generate corrections for these inaccurate reasoning
paths. The generated corrections contain three pieces of information: (1) the incorrect step in
the original solution, (2) an explanation of why this step is incorrect, and (3) how to correct
the original solution to arrive at the correct final answer. After filtering out corrections
with incorrect final answers, our human evaluation reveals that our correction data exhibits
adequate quality for the subsequent fine-tuning stage. In addition to using the original
training questions to generate correction data, we also consider extending the question
sets to scale up our correction data. Inspired by the evolution techniques for CoT data (Xu
et al., 2023; Yu et al., 2023; Li et al., 2023a), we apply a correction-centric evolution strategy:
compared to randomly selecting seed questions for evolution, our correction-centered
evolution focuses more on moderately difficult questions for expanding the correction data.
We blend the generated correction data with the CoT data and then fine-tune LLMs to
perform LEarning from MistAkes (LEMA).
Our experiments on five open-source LLMs and five challenging reasoning tasks demonstrate the effectiveness of LEMA. Compared to fine-tuning on CoT data alone, LEMA
consistently improves the performance across various LLMs and tasks. For instance, LEMA
with LLaMA-2-70B (Touvron et al., 2023b) achieves 83.5% on GSM8K and 25.0% on MATH,
while fine-tuning on CoT data alone yields 81.4% and 23.6%, respectively. By incorporating our correction-centric evolution strategy on MATH, LEMA with LLaMA-2-70B can be
further improved from 25.0% to 29.3%. Moreover, LEMA can also enhance specialized
LLMs such as WizardMath (Luo et al., 2023) and MetaMath(Yu et al., 2023). In addition to
math tasks, LEMA also benefits commonsense reasoning, improving the performance of
LLaMA-2-70B on CSQA (Talmor et al., 2019) from 84.2% to 85.3%.
-----
_CoT Prompt_ _Correction Prompt_
Reasoning Model Corrector Model
Question-Answer Pairs Inaccurate Reasoning Paths Corrections
(e.g., LLaMA2) (i.e., GPT-4)
_Expanding_
**Mistake-Correction Pairs**
_Evolution Prompt_
Correction-Centric Evolution Model _Sampling Seed Questions_
(i.e., GPT-4)
Figure 2: Process of generating and expanding correction data.
Beyond these impressive results, our ablation studies on correction data shed further light.
In controlling the training data sizes and training tokens to be the same, our experimental results reveal that mixing CoT and correction data outperforms a single data source.
These results indicate the non-homogeneous effectiveness of CoT data and correction data.
Moreover, compared with randomly selecting seed questions, our correction-centric evolution better improves the performance of LEMA. It demonstrates that moderately difficult
questions are more suitable for expanding the correction data.
**2** **Methodology**
LEMA consists of three primary stages: generating correction data, correction-centric evolution, and fine-tuning.
2.1 Correction Data Generation
Figure 2 briefly illustrates the process of generating correction data. Given a question-answer
example (qi, ai) ∈Q, a corrector model Mc, and a reasoning model Mr, we generate the
path to the questionmistake-correction data pair qi, and ci ( denotes the correction forqi ⊕ _ri, ci) ∈C, where_ _ri represents an inaccurate reasoningri._
e e
**Collecting Inaccurate Reasoning Paths.** We first sample multiple reasoning paths for each e
question qi using the reasoning model Mr and retain paths not achieving the correct final
answer ai,
_ri ∼Mr(Pr ⊕_ _qi),_ Ans(ri) ̸= ai, (1)
where Pr is the few-shot prompt instructing the model to perform CoT reasoning, and
Ans( ) extracts the final answer from the reasoning path.e e
_·_
**Generating Corrections for Mistakes.** For question qi and the inaccurate reasoning path
_ri, we employ the corrector model Mc to generate a correction and check the final answer_
in the correction,
_ci_ _c(_ _c_ _qi_ _ri),_ Ans(ci) = ai, (2)
e _∼M_ _P_ _⊕_ _⊕_
where Pc contains 4 annotated mistake-correction examples to guide the corrector model
what kind of information should be contained in the generated corrections. Figure 3 brieflye
illustrates Pc. Specifically, the annotated corrections comprises three pieces of information:
- Incorrect Step: which step in the original reasoning path has made a mistake.
- Explanation: explain what kind of mistake has been made in this step.
- Correct Solution: how to revise the original reasoning path to achieve the correct answer.
**Human Evaluation for Generated Corrections.** Before generating data on a large scale,
we first manually assess the quality of the generated corrections. We take LLaMA-2-70B
as Mr, utilize GPT-4 as Mc, and generate 50 mistake-correction data pairs based on the
GSM8K training set. We classify the corrections into three quality levels.
-----
**Prompt For Generating Corrections**
For the following math word problems, the original solutions may contain errors. Please identify the incorrect step in each solution, explain why it is incorrect,
and provide the correct solution starting from that step.
**Question: James creates a media empire. He creates a movie for $2000. Each DVD cost $6 to make. He sells it for 2.5 times that much. He sells 500 movies a**
day for 5 days a week. How much profit does he make in 20 weeks?
**Original Solution: Step 1, 500 movies a day, 5 days a week, for 20 weeks, he sells 500 * 5 * 20 = 50000 movies. Step 2, Each movie he sells for $6 * 2.5 =**
$15. Step 3, Thus, he makes a profit of $15 - $6 = $9 per movie. Step 4, Therefore, he makes a profit of $9 * 50000 = $450000. Step 5, The answer is 450000.
**Incorrect Step: Step 4.**
**Explanation: The error in Step 4 is that it does not take into account the initial cost of creating the movie ($2000). The correct solution should subtract this**
amount from the total profit calculated in Step 4.
**Correct Solution: … Step 4, Therefore, he makes a profit of $9 * 50000 - $2000 = $448000. Step 5, The answer is 448000.**
… (Another 3 annotated examples)
**Question: 𝒒𝒊, Original Solution: 𝒓𝒊, Incorrect Step: {Completion}**
Figure 3: A brief illustration of our prompt for generating corrections, containing the
incorrect step in the original solution, the reason of mistake, and the corrected step .
- Excellent: the corrector successfully identifies the incorrect step in _ri, provides a rea-_
sonable explanation, and the corrected reasoning path exhibits high continuity with the
pre-steps in the original reasoning path[1].
e
- Good: the corrector successfully identifies the incorrect step in _ri, provides a reasonable_
explanation, while the corrected reasoning path has minor issues in continuity.
- Poor: the corrector fails to identify the incorrect step in _ri or provides unreasonable e_
explanations.
e
Appendix B.1 lists several examples under each quality level. Our evaluation finds that 35
out of 50 generated corrections are of excellent quality, 11 are good, and 4 are poor. Based on
this human evaluation, we suppose the overall quality of corrections generated with GPT-4
is sufficient for the further fine-tuning stage. We generate corrections on a large scale and
take all corrections that have correct final answers for fine-tuning LLMs. We provide further
analysis on the choice and behavior of corrector model in Section D.6.
2.2 Correction-Centric Evolution
After building up the data generation pipeline, we explore how to scale up our correction
data. We consider that expanding the question-answer set Q is a promising direction, as it
primarily determines the correction data diversity.
Inspired by the recent success of evolution techniques on CoT augmentation (Xu et al.,
2023; Yu et al., 2023; Li et al., 2023a), we explore how to effectively apply the evolution
method to expand our correction data. The “evolution” means to generate a set of new
question-answer pairs from the given seed questions by prompting powerful LLMs.
The general evolution method for CoT augmentation randomly selects seed questions to
evolve. However, this strategy does not well suit the nature of our correction data, as too
simple or too challenging questions are less valuable for evolving and collecting correction
information.
- For too simple questions, the reasoning models such as LLaMA can already solve them.
Evolving these questions may not be effective for collecting mistakes.
- For too challenging questions, the most powerful LLMs still cannot handle them. Evolving
these questions may lead to much inaccurate information in corrections.
Therefore, we apply a correction-centric evolution strategy which more focuses on moderately difficult questions: we only sample seed questions that occur in our correction data C, rather
than randomly sampling from the entire set Q,
_qˆi ∼Me(Pe ⊕_ _qi),_ _qi ∈C,_ (3)
1The high continuity means that the corrected reasoning steps follow the pre-steps generated before
the identified mistake step.
-----
Table 1: Our main experimental results (%) on four mathematical reasoning tasks (GSM8K,
MATH, SVAMP and ASDiv) and one commonsense reasoning task (CSQA). Appendix D.1
and D.2 illustrate the performance variances during training.
Tasks
Model Training
GSM8K MATH SVAMP ASDiv CSQA
CoT Fine-Tuning 81.4 23.6 80.3 80.7 84.2
LLaMA-2-70B (Touvron et al., 2023b)
+ Learning From Mistakes 83.5 (+2.1) 25.0 (+1.4) 81.6 (+1.3) 82.2 (+1.5) 85.3 (+1.1)
CoT Fine-Tuning 76.2 19.7 71.9 77.4 83.1
LLaMA-65B (Touvron et al., 2023a)
+ Learning From Mistakes 77.9 (+1.7) 20.8 (+1.1) 72.8 (+0.9) 77.7 (+0.3) 84.0 (+0.9)
CoT Fine-Tuning 68.8 19.1 67.4 73.9 78.1
CodeLLaMA-34B (Rozi`ere et al., 2023)
+ Learning From Mistakes 71.7 (+2.9) 20.4 (+1.3) 72.0 (+4.6) 74.4 (+0.5) 80.8 (+2.7)
CoT Fine-Tuning 62.9 12.2 58.0 67.8 80.4
LLaMA-2-13B (Touvron et al., 2023b)
+ Learning From Mistakes 65.7 (+2.8) 12.6 (+0.4) 62.0 (+4.0) 71.1 (+3.3) 81.9 (+1.5)
CoT Fine-Tuning 52.6 8.7 53.0 63.8 76.9
LLaMA-2-7B (Touvron et al., 2023b)
+ Learning From Mistakes 54.1 (+1.5) 9.4 (+0.7) 54.1 (+1.1) 65.5 (+1.7) 78.8 (+1.9)
where qi is the seed question, and Me and Pe are the LLM and prompt for evolving
questions, respectively. Appendix B.3 illustrates our Pe.
The underlying principle of this strategy is straightforward. If one question frequently
appears in correction data, it means that this question is not well solved by many reasoning
models, but its inaccurate reasoning paths can be well handled by the corrector model.
2.3 Fine-Tuning LLMs
After generating the correction data, we fine-tune LLMs to examine whether these correction
data can facilitate CoT reasoning. We compare the results under two settings:
- Fine-Tuning on CoT Data Alone. In addition to the annotated data in each task, we
additionally take CoT data augmentation following existing methods (Yuan et al., 2023;
Li et al., 2023a; Yu et al., 2023). We generate more reasoning paths for each question in
the training sets with GPT-4 and filter out paths with wrong final answers. We apply this
CoT data augmentation to set up strong fine-tuning baselines that only utilize CoT data.
- Fine-Tuning on CoT Data + Correction Data. We fine-tune LLMs on both CoT data and
generated mistake-correction data. This setting is referred to as LEMA.
Appendix B.2 shows the input-output formats of CoT data and correction data used for
fine-tuning and evaluation.
**3** **Experimental Setup**
3.1 Tasks
We undertake experiments on five challenging reasoning tasks, including four mathematical
reasoning tasks (GSM8K, MATH, SVAMP and ASDiv) and one commonsense reasoning
task (CSQA)[2]. For GSM8K, MATH and CSQA, we generate correction data based on their
training sets. For SVAMP and ASDiv, we take the same training data for GSM8K.
**GSM8K (Cobbe et al., 2021) contains high quality linguistically diverse grade school math**
word problems. It has 7,473 training examples with CoT and 1,319 test cases.
**MATH (Hendrycks et al., 2021) examines math reasoning on solving challenging competi-**
tion mathematics problems. It contains 7,500 training CoT data and 5,000 test cases.
**SVAMP (Patel et al., 2021) consists of questions with short NL narratives as state descrip-**
tions. For evaluation on SVAMP, we use the same training data as for GSM8K and take all
1,000 examples in SVAMP as test cases.
2Appendix C.1 contains basic statics about the tasks and data.
-----
CoT Fine-Tuning LEMA
85 80 75 70 60
83.5
81.4 +0.982.3 82.5 +1.0 76.2 +1.177.3 77.0 +0.977.9 70.8 70.7 71.7+1.0 65.7
on GSM8K (%) 80 75 70 68.8 +2.0 65 64.1 63.8 +1.9 55 53.6 54.1
62.9 +1.2 52.6 52.7 +0.5
+0.1
75 70 65 60 50
Performance 32K 45K 32K 45K 32K 45K 32K 45K 32K 45K
LLaMA-2-70B LLaMA-65B CodeLLaMA-34B LLaMA-2-13B LLaMA-2-7B
Figure 4: Performances of LEMA and CoT-alone fine-tuning with controlled data sizes (32K
and 45K) on GSM8K. See Table 2 for results with controlled number of training tokens.
**ASDiv (Miao et al., 2020) is a diverse math dataset in terms of both language patterns and**
problem types for evaluating. For evaluation on ASDiv, we use the same training data as for
GSM8K and test on 2,084 examples in ASDiv[3].
**CSQA (Talmor et al., 2019) is a question answering dataset for commonsense reasoning.**
It has 9,741 examples in the training set and 1,221 examples in the dev set. As it does not
contain any CoT annotation, we first annotate 4 CoT examples (detailed in Appendix C.3),
then take its training set to augment CoT data and generate correction data.
3.2 Data Construction
**CoT Data.** For GSM8K (also SVAMP and ASDiv), the CoT data contains all training examples of GSM8K and 24,948 augmented reasoning paths. We first generate 30,000 reasoning
paths with GPT-4 and filter out 5,052 paths with wrong final answers or unexpected format[4].
For MATH, the CoT data contains all training examples and 12,509 augmented reasoning
paths. We sample 30,000 reasoning paths with GPT-4 and filter out 17,491 paths. For CSQA,
we generate 15,000 reasoning paths with GPT-4 and then filter out 4,464 paths.
**Correction Data.** We utilize multiple LLMs to collect inaccurate reasoning paths, including
LLaMA-2 (Touvron et al., 2023b), WizardLM (Xu et al., 2023), WizardMath (Luo et al., 2023),
Text-Davinci-003 (OpenAI, 2023c), GPT-3.5-Turbo (OpenAI, 2023a) and GPT-4 (OpenAI,
2023b). We take GPT-4 as the corrector model. Finally, we collect 12,523, 6,306, 7,241 mistakecorrection pairs based on the training sets of GSM8K, MATH and CSQA, respectively.
**Correction-Centric Evolution.** We take 10K bootstrap samples from the questions in our
correction data. We utilize GPT-4 to evolve the questions. To generate “ground-truth”
answers for the evolved questions, we utilize GPT-4 to sample three answers for each
question and conduct a majority voting. The question that leads to three different answers
will be filtered. Note that the evolved data will only be used in Section 4.2.
3.3 Fine-Tuning and Evaluation
We fine-tune multiple open-source LLMs in the LLaMA (Touvron et al., 2023a), LLaMA2 (Touvron et al., 2023b), CodeLLaMA (Roziere et al., 2023), WizardMath (Luo et al., 2023)`
and MetaMath (Yu et al., 2023) families. We utilize QLoRA[5] (Hu et al., 2022; Dettmers et al.,
2023) by default to conduct parameter-efficient fine-tuning (PEFT) for these models. We set
low-rank dimension as 64 and dropout rate as 0.05. We set learning rate as 0.0001 for LLMs
3The original ASDiv contains 2,305 examples and we filter out non-numerical examples, detailed
in Appendix C.2.
4The unexpected format means that the final answer is failed to be extracted from the path with
the regular expression.
[5https://github.com/artidoro/qlora.](https://github.com/artidoro/qlora)
-----
Model Data Acc (%)
CoT-5.8M 82.1
LLaMA-2-70B
LEMA-5.8M **83.5 (+1.4)**
CoT-5.8M 64.2
LLaMA-2-13B
LEMA-5.8M **65.7 (+1.5)**
Table 2: Performances with the same size of training tokens (5.8M) on GSM8K.
Model Acc (%)
WizardMath-70B (Luo et al., 2023) 81.6
WizardMath-70B + LEMA **84.2 (+2.6)**
MetaMath-70B (Yu et al., 2023) 82.3
MetaMath-70B + LEMA **85.4 (+3.1)**
Table 3: Performances of LEMA with specialized
LLMs on GSM8K.
larger than (or equal to) 34B and 0.0002 for LLMs smaller than 34B. We set batch size as 96,
train for 2,000 steps, and save checkpoints for every 100 training steps.
For evaluation, we evaluate the performance of all saved checkpoints based on vLLM
library[6] (Kwon et al., 2023) and report the accuracy of the best checkpoint. During inference,
we set temperature as 0 (i.e., greedy decoding) and max sample length as 2,048. To clarify
the influence from random disturbances during training, we provide the performances of
the best three checkpoints in Appendix D.1 and the performance curves during the whole
training processes in Appendix D.2. We do not add demonstration examples into the prompt
for both fine-tuning and evaluation by default. All evaluations are conducted under the
same CoT instruction. For models trained with LEMA, we do not generate corrections
during evaluations. All our experiments can be conducted on 4 x A100 GPU stations.
**4** **Results and Analysis**
We focus on two main research questions in this section. More results and analysis are
contained in Appendix D.
4.1 Can LLMs Learn From Mistakes?
**LEMA effectively improves CoT-alone fine-tuning.** Table 1 shows the main experimental
results on five challenging reasoning tasks. Compared to fine-tuning on CoT data alone, incorporating correction data during fine-tuning brings improvements across all five backbone
LLMs and five tasks. It demonstrates that LEMA can effectively facilicate CoT fine-tuning.
Note that SVAMP and ASDiv can be regarded as two out-of-distribution tasks as the training
data is constructed based on GSM8K. The gains on these two tasks reflect that LEMA has a
certain extent of generalizability in the out-of-distribution scenarios.
**The effectiveness of CoT data and correction data are non-homogeneous.** If the effectiveness of the two data sources are homogeneous, the gains in Table 1 will be diminished
if the data sizes of two fine-tuning settings are controlled as the same. To further validate
the effectiveness of correction data, we conduct two ablation studies with controlled data
**sizes. In default settings, we have about 32K examples for CoT-alone fine-tuning and 45K**
examples for LEMA. Here are another two controlled settings:
- LEMA-32K. We keep the 13K correction data and randomly remove 13K CoT data.
- CoT-45K. To expand CoT data, we extract the corrected CoT from each correction example.
Figure 4 shows that LEMA can still bring gains for four out of five backbone LLMs under
the same data size. It means that these LLMs do learn extra information from our correction
data that is not provided by the CoT data. The only exception is for LLaMA-2-7B. It indicates
that a stronger backbone model can more effectively learn from mistakes.
Despite controlling the training data sizes to be the same, we also investigate the training**token efficiency of LEMA compared with CoT-alone fine-tuning. Notice that the target-side**
length of correction data is generally longer than CoT data, so LEMA will have slightly
more training tokens than CoT-alone fine-tuning under the same data size. Specifically,
[6https://github.com/vllm-project/vllm.](https://github.com/vllm-project/vllm)
-----
LEMA + Full Fine-Tuning LEMA + QLoRA Fine-Tuning Trend Lines (Logarithmic)
LLaMA-2-70B
Llemma-34B
|32 30 28.9 29.3 28 26.5 26.6 26 25.3 25.0 24 0 10000 20000 30000 40000|38 35.8 36 34.9 34 32.5 32.7 32 31.5 30.7 30 0 10000 20000 30000 40000|
|---|---|
LEMA LEMA + General Evolution LEMA + Correction-Centric Evolution
34.9
35 33.8
31.5 +2.3 +3.4
28.9
25.3 27.0 +3.6
25 +1.7
Accuracy (%)
15
LLaMA-2-70B Llemma-34B
(a)
(b)
Figure 5: Performance of LEMA on MATH with evolution strategies. (a) Compare general
and correction-centric evolution strategies (full fine-tuning). (b) The performance trend of
LEMA with QLoRA or full fine-tuning. X-axis is the number of sampled questions.
CoT-45K has 5.4M training tokens and LEMA-45K has 5.8M (a ∼7% relative increment). To
conduct the comparison under the same size of training tokens, we construct CoT-5.8M by
sampling more reasoning paths (following Section 2.3) to add into CoT-45K.
Table 2 shows that LEMA still outperforms CoT-alone fine-tuning with the same number of
training tokens. Note that this comparison is under an unfavorable setup for LEMA as it increases the training samples for CoT-alone fine-tuning. The improvements in Table 2 further
support the non-homogeneous effectiveness of CoT data and correction data. Moreover,
we notice that augmenting more reasoning paths for LLaMA-2-70B does not continuously
boost the model performance on GSM8K. To validate this, we further expand CoT-5.8M to
CoT-6.8M and have a 82.2% accuracy. Such an observation is in line with the Yu et al. (2023).
We suppose that this is because sampling too many reasoning paths for the same question
will only bring redundant information to the training.
**A stronger backbone model can be more effective at learning from mistakes.** As evidenced in Table 1, LLaMA-2-70B has the highest baseline performances in CoT alone
fine-tuning, while maintaining significant improvements in all five tasks (an accuracy gain
of over 1%) with the help of LEMA. In contrast, for other four less powerful models in
Table 1, the improvements from LEMA are occasionally less significant. This comparison,
along with the performance of LLaMA-2-7B in Figure 4, suggests that the inherent strength
of backbone LLMs can influence how well the models can learn from mistakes.
**LEMA can also facilitate specialized LLMs.** To adapt generally pre-trained LLMs into the
math domain, there have been several specialized LLMs such as WizardMath (Luo et al.,
2023) and MetaMath (Yu et al., 2023). We also apply LEMA on these specialized LLMs to
further examine its effectiveness. As these models have been already trained on a large
amount of CoT data designed for math tasks, we directly compare LEMA with the results
reported in the original papers for these specialized models. Table 3 shows that LEMA can
further improve these specialized LLMs. Appendix D.3 contains detailed comparisons.
4.2 How Beneficial Is Correction-Centric Evolution?
Figure 5a and Figure 5b demonstrate further improvements on the performance of LEMA
with incorporating the correction-centric evolution strategy to expand the correction data.
**Correction-centric evolution can more effectively improve LEMA.** Figure 5a shows
the performance of LEMA with incorporating different evolution strategies. Besides the
correction-centric evolution introduced in Section 2.2, we also compare with the general
evolution strategy applied in previous work (Xu et al., 2023; Yu et al., 2023; Li et al., 2023a).
For a fair comparison, the number of seed questions is kept the same for both evolution
strategies (i.e., 10K). We also tried the Llemma (Azerbayev et al., 2023) model which has
-----
been pre-trained on a math-related corpus (such as arXiv papers). We fully fine-tune LLMs
as the correction data scale has been much increased[7].
There are two primary conclusions. First, LEMA can effectively benefit from evolution
techniques. It indicates that the performance of LEMA can be further improved by incorporating existing data augmentation techniques. Second, the correction-centric evolution
outperforms the general evolution. It demonstrates that moderately difficult questions are
more suitable for expanding the correction data.
**Evolution techniques can better facilitate LEMA under full fine-tuning.** To explore the
scaling trend of LEMA, we apply the correction-centric evolution on another 10K sampled
seed questions (detailed in Appendix C.5). Figure 5b shows the performance trends of
LEMA as the question set expands. It shows that if only the original question-answer pairs
in MATH are used (i.e., the initial points in each line), there is no significant difference in
the performances of LEMA between QLoRA and full fine-tuning. However, as the question
set expands, the performance with full fine-tuning improves significantly, while QLoRA
fine-tuning increases only slightly. It indicates that the parameter-efficient fine-tuning can
only “digest” a limited scale of correction data. Appendix D.5 provides further analysis.
**5** **Related Work**
**LLMs with CoT reasoning.** Wei et al. (2022) uncovered the emergence of CoT reasoning
capability for extremely large language models, and this reasoning capability was then
examined in various reasoning-related domains including logical reasoning (Creswell et al.,
2022; Pan et al., 2023; Lei et al., 2023), commonsense reasoning (Talmor et al., 2019; Geva et al.,
2021; Ahn et al., 2022), and math reasoning (Miao et al., 2020; Koncel-Kedziorski et al., 2016;
Patel et al., 2021; Cobbe et al., 2021; Hendrycks et al., 2021). The impressive performance of
LLMs in these domains has spurred the research community to further investigate methods
for effectively harnessing and enhancing CoT reasoning for LLMs (Wang et al., 2022; Zhou
et al., 2022; Creswell & Shanahan, 2022; Li et al., 2023b; Lightman et al., 2023).
**Enhancing CoT reasoning for solving mathematical problems.** There has been much
work dedicated to enhancing the performance of LLMs in solving mathematical problems
from various perspectives. Some studies explored the voting or verification methods
based on sampling multiple reasoning paths (Wang et al., 2022; Li et al., 2023b; Lightman
et al., 2023). Some methods considered to generate executable programs to obtain the final
answer or to integrate plug-in tools that facilitate the execution of external APIs during
intermediate steps (Jie & Lu, 2023; Wang et al., 2023a; Yue et al., 2023; Azerbayev et al.,
2023; Gou et al., 2023). Some work collected math-related corpus such as arXiv papers for
pre-training better base models for math (Azerbayev et al., 2023; Wang et al., 2023d). Some
work focused on augmenting existing datasets, which expanded training sets or provided
external annotations (Magister et al., 2022; Huang et al., 2022; Ho et al., 2022; Li et al., 2022;
Luo et al., 2023; Yu et al., 2023; Li et al., 2023a; Liang et al., 2023; Liu et al., 2023a). From the
perspective of the techniques used, this work follows the data augmentation approach.
**Data augmentation for mathematical tasks.** With the help of advanced LLMs (e.g., GPT-4
and GPT-3.5-Turbo), various methods have been proposed to generate more CoT data for
mathematical tasks: Yuan et al. (2023) proposed rejection sampling for augmenting CoT
data; Xu et al. (2023) evolved the math questions in the training sets; Li et al. (2023a) applied
both query augmentation and response augmentation; Yu et al. (2023) used self-verification
and FOBAR to generate CoT with high diversity. While the effectiveness of CoT data has
been well studied, how to improve math reasoning with other auxiliary data is still underexplored. To this end, there are some preliminary explorations: Azerbayev et al. (2023) and
Yue et al. (2023) found that code data can facilitate math reasoning; Liu et al. (2023b) and
Wang et al. (2023e) constructed re-ranking data or verification data to make the model judge
the quality of reasoning paths. This work takes a further step toward leveraging auxiliary
7Appendix C.4 contains the settings for full fine-tuning.
-----
data: we propose and examine the effectiveness of mistake-correction data, which informs
the model what kind of mistakes could be made in CoT reasoning and how to correct them.
**6** **Conclusion**
This work explores whether the reasoning capabilities of LLMs can be further improved
by learning from mistakes. Experimental results and in-depth analysis demonstrate the
effectiveness and potential of learning from mistakes.
**Ethics Statement**
Due to the utilization of pre-trained language models, this work could be exposed to some
potential risks of ethical issues on general deep learning models (such as social bias and
privacy breaches). We hope that the idea of learning from mistakes would facilitate the
development of responsible AI models, for instance, on training LLMs to recognize and
modify risky generated contents.
**Reproducibility Statement**
We open source our training code, evaluation scripts and fine-tuned checkpoints to facilitate
further explorations on learning from mistakes. For generating the training data, we provide
all our prompts used for data generation.
**Acknowledgments**
Shengnan An and Nanning Zheng were supported in part by NSFC under grant No.
62088102. Thank Chen Li at IAIR, Xi’an Jiaotong University for his valuable comments on
this work.
**References**
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David,
Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog,
Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui
Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov,
Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada,
Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre
Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia,
Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as i can, not as i say:
Grounding language in robotic affordances, 2022.
Alibaba. Alibaba open sources qwen, a 7b parameter ai model, 2023. URL
```
https://www.maginative.com/article/alibaba-open-sources-qwen-a-7b-\
parameter-ai-model/.
```
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning
Zheng, and Jian-Guang Lou. Input-tuning: Adapting unfamiliar inputs to frozen pretrained models, 2022.
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu,
Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav
Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan
Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob
Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks,
Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha
-----
Chowdhery, Cl´ement Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin,
Mark D´ıaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus
Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand,
Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz,
Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim
Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li,
Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick
Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam
Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie
Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter,
Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby,
Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter,
Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang,
John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui
Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and
Yonghui Wu. Palm 2 technical report, 2023.
[Anthropic. Model card and evaluations for claude models, 2023. URL https://www-files.](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf)
```
anthropic.com/production/images/Model-Card-Claude-2.pdf.
```
Kourosh Hakhamaneshi Artur Niederfahrenhorst and Rehaan Ahmad. Finetuning llms: Lora or full-parameter? an in-depth analysis with llama 2,
2023. [URL https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-\](https://www.anyscale.com/blog/ fine-tuning-llms-lora-or-full-\ parameter-an-in-depth-analysis-\ with-llama-2)
```
parameter-an-in-depth-analysis-\with-llama-2.
```
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,
Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language
model for mathematics, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla
Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
Language models are few-shot learners. Advances in neural information processing
systems, 33:1877–1901, 2020.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers
to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Antonia Creswell and Murray Shanahan. Faithful reasoning using large language models.
arXiv preprint arXiv:2208.14271, 2022.
Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting
large language models for interpretable logical reasoning. In The Eleventh International
Conference on Learning Representations, 2022.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient
finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did
Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning
Strategies. Transactions of the Association for Computational Linguistics, 9:346–361, 04
[2021. ISSN 2307-387X. doi: 10.1162/tacl a 00370. URL https://doi.org/10.1162/tacl_](https://doi.org/10.1162/tacl_a_00370)
```
a_00370.
```
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan,
and Weizhu Chen. Tora: A tool-integrated reasoning agent for mathematical problem
solving, 2023.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math
dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets
and Benchmarks Track (Round 2), 2021.
-----
Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning
teachers. arXiv preprint arXiv:2212.10071, 2022.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai,
Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark,
et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556,
2022.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,
Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In
[International Conference on Learning Representations, 2022. URL https://openreview.](https://openreview.net/forum?id=nZeVKeeFYf9)
```
net/forum?id=nZeVKeeFYf9.
```
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and
Jiawei Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610,
2022.
Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying
Song, and Denny Zhou. Large language models cannot self-correct reasoning yet, 2023.
Zhanming Jie and Wei Lu. Leveraging training data in few-shot prompting for numerical
reasoning. arXiv preprint arXiv:2305.18170, 2023.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of
the North American Chapter of the Association for Computational Linguistics: Human
Language Technologies, pp. 1152–1157, San Diego, California, June 2016. Association for
[Computational Linguistics. doi: 10.18653/v1/N16-1136. URL https://aclanthology.](https://aclanthology.org/N16-1136)
```
org/N16-1136.
```
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu,
Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large
language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th
Symposium on Operating Systems Principles, 2023.
Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu,
Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, and Sushant Prakash. Rlaif:
Scaling reinforcement learning from human feedback with ai feedback, 2023.
Bin Lei, Chunhua Liao, Caiwen Ding, et al. Boosting logical reasoning in large language
models through a new framework: The graph of thought. arXiv preprint arXiv:2308.08614,
2023.
Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient
prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural
Language Processing, pp. 3045–3059, 2021.
Chengpeng Li, Zheng Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang
Wang, and Chang Zhou. Query and response augmentation cannot help out-of-domain
math reasoning generalization. arXiv preprint arXiv:2310.05506, 2023a.
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang,
Jing Qian, Baolin Peng, Yi Mao, et al. Explanations from large language models make
small reasoners better. arXiv preprint arXiv:2210.06726, 2022.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen.
Making language models better reasoners with step-aware verifier. In Proceedings of the
61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long
Papers), pp. 5315–5333, 2023b.
Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, and
Ashwin Kaylan. Let gpt be a math tutor: Teaching math word problem solvers with
customized exercise generation. arXiv preprint arXiv:2305.14386, 2023.
-----
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee,
Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023.
Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan Kulkarni, Yuanzhi Li, Anh Nguyen,
Rachel Ward, and Yi Zhang. Tinygsm: achieving ¿80
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, and Peter J. Liu. Improving
large language model fine-tuning for solving math problems, 2023b.
Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj
Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced
unlearning. In Advances in Neural Information Processing Systems, 2022.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo
Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering
mathematical reasoning for large language models via reinforced evol-instruct. arXiv
preprint arXiv:2308.09583, 2023.
Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei
Severyn. Teaching small language models to reason. arXiv preprint arXiv:2212.08410,
2022.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and
developing English math word problem solvers. In Proceedings of the 58th Annual
Meeting of the Association for Computational Linguistics, pp. 975–984, Online, July 2020.
Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL
```
https://aclanthology.org/2020.acl-main.92.
```
[OpenAI. Gpt-3.5 turbo fine-tuning and api updates, 2023a. URL https://openai.com/](https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-\ api-updates)
```
blog/gpt-3-5-turbo-fine-tuning-and-\api-updates.
```
OpenAI. Gpt-4 technical report, 2023b.
[OpenAI. Openai documentation: Models, 2023c. URL https://platform.openai.com/](https://platform.openai.com/docs/models/gpt-3-5)
```
docs/models/gpt-3-5.
```
Liangming Pan, Alon Albalak, Xinyi Wang, and William Yang Wang. Logic-lm: Empowering
large language models with symbolic solvers for faithful logical reasoning. arXiv preprint
arXiv:2305.12295, 2023.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve
simple math word problems? In Proceedings of the 2021 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language
Technologies, pp. 2080–2094, Online, June 2021. Association for Computational Linguis[tics. doi: 10.18653/v1/2021.naacl-main.168. URL https://aclanthology.org/2021.](https://aclanthology.org/2021.naacl-main.168)
```
naacl-main.168.
```
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and
Diyi Yang. Is chatgpt a general-purpose natural language processing task solver? arXiv
preprint arXiv:2302.06476, 2023.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward
model, 2023.
Leonardo Ranaldi and Andre Freitas. Aligning large and small language models via chain-ofthought reasoning. In Yvette Graham and Matthew Purver (eds.), Proceedings of the 18th
Conference of the European Chapter of the Association for Computational Linguistics
(Volume 1: Long Papers), pp. 1812–1827, St. Julian’s, Malta, March 2024. Association for
[Computational Linguistics. URL https://aclanthology.org/2024.eacl-long.109.](https://aclanthology.org/2024.eacl-long.109)
-----
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,`
Yossi Adi, Jingyu Liu, Tal Remez, Jer´ emy Rapin, Artyom Kozhevnikov, Ivan Evtimov,´
Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong,
Alexandre Defossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas´
Usunier, Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models
for code, 2023.
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi,
Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason
Wei. Language models are multilingual chain-of-thought reasoners. In The Eleventh
[International Conference on Learning Representations, 2023. URL https://openreview.](https://openreview.net/forum?id=fR3wGCk-IXp)
```
net/forum?id=fR3wGCk-IXp.
```
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari,
Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al.
Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative
language model. arXiv preprint arXiv:2201.11990, 2022.
Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. Gpt-4 doesn’t know it’s
wrong: An analysis of iterative prompting for reasoning problems, 2023.
Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan
Yang, Ning Ding, Xingzhi Sun, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Exploring
the impact of model scaling on parameter-efficient tuning, 2023.
Xianghui Sun, Yunjie Ji, Baochang Ma, and Xiangang Li. A comparative study between
full-parameter and lora-based fine-tuning on chinese instruction data for instruction
following large language model, 2023.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A
question answering challenge targeting commonsense knowledge. In Proceedings of the
2019 Conference of the North American Chapter of the Association for Computational
Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp.
4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
[doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421.](https://aclanthology.org/N19-1421)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux,
Timothee Lacroix, Baptiste Rozi´ ere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien`
Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and
efficient foundation language models, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2:
Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. Can large language
models really improve by self-critiquing their own plans?, 2023.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang,
Linqi Song, Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in
llms for enhanced mathematical reasoning, 2023a.
Ruida Wang, Wangchunshu Zhou, and Mrinmaya Sachan. Let’s synthesize step by step:
Iterative dataset synthesis with large language models by extrapolating errors from small
models, 2023b.
Xinyi Wang, Lucas Caccia, Oleksiy Ostapenko, Xingdi Yuan, and Alessandro Sordoni. Guiding language model reasoning with planning tokens. arXiv preprint arXiv:2310.05707,
2023c.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang,
Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought
reasoning in language models. In The Eleventh International Conference on Learning
Representations, 2022.
-----
Zengzhi Wang, Rui Xia, and Pengfei Liu. Generative ai for math: Part i – mathpile: A
billion-token-scale pretraining corpus for math, 2023d.
Zhaoyang Wang, Shaohan Huang, Yuxuan Liu, Jiahai Wang, Minghui Song, Zihan Zhang,
Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. Democratizing
reasoning ability: Tailored learning from large language model, 2023e.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V
Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language
models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao,
and Daxin Jiang. Wizardlm: Empowering large language models to follow complex
instructions, 2023.
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Chao Yin, Chenxu Lv, Da Pan, Dian
Wang, Dong Yan, Fan Yang, et al. Baichuan 2: Open large-scale language models. arXiv
preprint arXiv:2309.10305, 2023.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T
Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own
mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou.
Scaling relationship on learning mathematical reasoning with large language models.
arXiv preprint arXiv:2308.01825, 2023.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu
Chen. Mammoth: Building math generalist models through hybrid instruction tuning,
2023.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen,
Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained
transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale¨
Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, et al. Least-to-most prompting
enables complex reasoning in large language models. In The Eleventh International
Conference on Learning Representations, 2022.
-----
This is the Appendix of the paper: Learning From Mistakes Makes LLM Better Reasoner.
**A** **Discussion**
Here, we discuss further about the insights from our exploration on learning from mistakes.
A.1 LLMs for Self-Correction
Recently, much work has investigated the behavior of advanced LLMs (e.g., GPT-4) on
correcting mistakes generated by themselves (Valmeekam et al., 2023; Stechly et al., 2023;
Huang et al., 2023). We also conduct further analysis on self-correction performance based
on our correction data (detailed in Appendix D.6). These work and our analysis drew the
same conclusion: the most powerful LLMs by now still struggle to perform self-correction.
To achieve more reliable utilization of self-correction, we think that there are mainly three
directions. (1) Inject external supervision to verify the correcting process, such as using
the labeled final answers (which is applied in our work) or incorporating human feedback.
(2) Train a process-based verifier to judge the quality of self-correction process. Lightman
et al. (2023) has demonstrated the great potential of verifier-based method. (3) Develop
trust-worth LLMs that can at least honestly tell us what it can solve and what does not.
A.2 Training with Feedback
To align the behavior of LLMs with human expectations, existing work has tried to collect
feedback for the model-generated contents and inject these feedback back into the model
through various techniques, such as PPO (Lu et al., 2022), RLHF (OpenAI, 2023b) and
DPO (Rafailov et al., 2023). To reduce human efforts on annotation, some recent work
tried to use LLMs to generate feedback, such as RLAIF (Lee et al., 2023). From this view,
LEMA can also be regarded as injecting the feedback from more powerful LLMs (i.e., GPT-4)
into smaller models (e.g., LLaMA). We highlight one difference here: the injection process
of LEMA is just implemented with instruction-based fine-tuning rather than RL-based
methods. It sheds light that for large pre-trained models, it can directly and effectively
learn from the comparison between unexpected and expected contents through the inputoutput fine-tuning process. This can much save the researchers effort to specially design the
learning algorithms.
A.3 Learning From The World Model
Recent advancements in LLMs have enabled them to perform a step-by-step approach in
problem-solving. However, this multi-step generation process does not inherently imply
that LLMs possess strong reasoning capabilities, as they may merely emulate the superficial
behavior of human reasoning without genuinely comprehending the underlying logic and
rules necessary for precise reasoning. This incomprehension results in mistakes during
the reasoning process and necessitates the assistance of a “world model” that possesses a
consciousness prior about the logic and rules governing the real world. From this perspective, our LEMA framework employs GPT-4 as a “world model” to teach smaller models in
adhering to these logic and rules, rather than merely mimicking the step-by-step behavior.
**B** **Additional Examples**
B.1 Examples in Human Evaluation
Figure 13 illustrates the quality levels of three example corrections.
-----
**Input-Output Format of CoT Data**
**Input:**
Please solve the following math problem.
Question: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and
May?
Answer: Let's think step by step.
**Output:**
Natalia sold 48 / 2 = 24 clips in May. Natalia sold 48 + 24 = 72 clips altogether in April and May. The answer is 72.
**Input-Output Format of Correction Data**
**Input:**
For the following math problem, the original solution is incorrect. Please identify the incorrect step, explain why it is incorrect, and correct the original solution
starting from the incorrect step.
Question: Courtney liked to collect marbles. She kept them in mason jars. One jar had 80 marbles. Her second jar had twice that amount. She just started her
third jar which currently has 1/4 the amount of her first jar. How many marbles does she have in total?
Original Solution:
{
Step 1: 80 marbles in the first jar, 2 x 80 = 160 marbles in the second jar.
Step 2: The third jar has 1/4 of the first jar, which is 80 / 4 = 20 marbles.
Step 3: So, Courtney has 160 + 20 = 180 marbles in total.
Step 4: The answer is 180.
}
Incorrect Step:
**Output:**
Step 3.
Explanation: The calculation in Step 3 is incorrect because it only adds the marbles from the second and third jars. The marbles from the first jar should also be
included in the total.
Correct Solution:
{
…
Step 3: So, Courtney has 80 (first jar) + 160 (second jar) + 20 (third jar) = 260 marbles in total.
Step 4: The answer is 260.
}
_Note: The ellipsis here are tokens that were actually used in the fine-tuning process, as we suppose that simply copying pre-steps is not much informative for_
_learning._
Figure 6: The input-output formats for our CoT data and correction data, respectively.
The input part serves as a prompt and only the loss in the output part participates in the
back-propagation.
**Prompt For Question Evolution**
I want you to act as a math teacher. You need to create relevant math questions for the given question. The created math question must be reasonable, without
unknown variables, and must be understood to answer in pure number by humans. Ensure that the created questions are **diverse** from the original one.
Ensure that the created questions use **different natural language expressions**.
**#Given Question#:** 𝒒𝒊
**#Created Question#:**
Figure 7: The prompt for evolving questions.
B.2 Input-Output Formats for Fine-Tuning
Figure 6 illustrate the input-output formats of CoT data and correction data, respectively.
Note that during the fine-tuning process, the input part serves as a prompt and only the
loss in the output part participates in the back-propagation.
B.3 Evolution Prompt
Figure 7 illustrates our prompt used for evolving new questions from the given seed
question.
-----
Table 4: Statistics of data sizes for the five tasks in our experiments (without question
evolution).
Task CoT Data Correction Data Test Data
GSM8K (Cobbe et al., 2021) 32,421 12,523 1,319
MATH (Hendrycks et al., 2021) 20,009 6,306 5,000
SVAMP (Patel et al., 2021) - - 1,000
ASDiv (Miao et al., 2020) - - 2,084
CSQA (Talmor et al., 2019) 10,536 7,241 1,221
**Annotated CoT Example For CSQA**
Please answer the following commonsense question. Please first explain each candidate answer, then select only one answer that is most relevant to the
question and provide reasons.
**Question: The fox walked from the city into the forest, what was it looking for?**
**Candidate Answers: pretty flowers, hen house, natural habitat, storybook, dense forest.**
**Explanations:**
Pretty flowers - This answer suggests that the fox might be looking for beautiful flowers. However, it is not common for a fox to seek out flowers, as they are
carnivorous animals and do not have a particular interest in plants.
Hen house - This answer implies that the fox might be searching for a hen house to find food, such as chickens. Foxes are known to prey on small animals, and
a hen house would be a potential source of food for them.
Natural habitat - This answer suggests that the fox is looking for its natural habitat, which is typically a forest or wooded area. Foxes are wild animals and are
more comfortable in their natural environment than in a city.
Storybook - This answer implies that the fox is searching for a storybook, which is an inanimate object and not something a fox would be interested in or
capable of finding.
Dense forest - This answer is similar to the natural habitat option, suggesting that the fox is looking for a dense forest, which could be a part of its natural
habitat.
**Reasons and Final Answer:**
The most relevant answer to the question is that the fox was looking for its natural habitat. This is because foxes are wild animals and are more comfortable in
their natural environment, such as a forest, than in a city. The other candidate answers either do not align with the natural behavior of a fox (pretty flowers,
storybook) or are too specific (hen house, dense forest) without enough context to support them as the most likely answer.
The answer is natural habitat.
Figure 8: One annotated CoT example for CSQA.
**C** **More Details For Experimental Setup**
C.1 Data Statistics
Table 4 illustrates basic statics about the tasks and data (without question evolution).
C.2 Evaluation on ASDiv
As mentioned in our setup, the original version of ASDiv contains 2,305 questions and part
of them lead to non-numerical answers. For instance, for the question “Mrs. Hilt has two
pennies, two dimes, and two nickels. Jacob has four pennies, one nickel, and one dime. Who
has more money?”, the answer is the string value “Mrs. Hilt”; for the question “Tessa has 4
apples. Anita gave her 5 more. She needs 10 apples to make a pie. Does she have enough
to make a pie?”, the answer is a Boolean value “False”. As our models are trained on data
derived from GSM8K where questions are all leading to numerical answers, it is reasonable
that these models can not generate non-numerical answers. Therefore, for evaluation on
ASDiv, we filter out questions with non-numerical answers and finally leave 2,084 questions.
Specifically, for the question-answer pair in ASDiv, it will be filtered out if the answer can
not be successfully recognized by the Python function float(·).
C.3 Data Construction For CSQA
The original training examples in CSQA only contain the labeled final answers without
rationales. Therefore, we need to generate CoT for the training examples. We first annotate
rationales for four training examples. Figure 8 shows one annotated example. Specifically,
the CoT contain three parts: the explanation to each candidate answers, the predicted final
answer, and the reason to choose this answer. Then, we utilize GPT-4 to generate rationales
for other training examples and filter out rationales that do not contain the correct final
-----
Table 5: Performances of the best three checkpoints saved during the fine-tuning process
and the average of three results.
GSM8K MATH
Model Training
1st / 2nd / 3rd Avg. 1st / 2nd / 3rd Avg.
CoT Fine-Tuning 81.4 / 81.3 / 81.1 81.3 23.6 / 23.2 / 23.2 23.2
LLaMA-2-70B (Touvron et al., 2023b)
+ Learning From Mistakes 83.5 / 83.4 / 83.2 83.4 (+2.1) 25.0 / 25.0 / 24.6 24.9 (+1.7)
CoT Fine-Tuning 76.2 / 76.2 / 75.7 76.0 19.7 / 19.7 / 19.2 19.5
LLaMA-65B (Touvron et al., 2023a)
+ Learning From Mistakes 77.9 / 77.3 / 77.2 77.5 (+1.5) 20.8 / 20.3 / 20.2 20.4 (+0.9)
CoT Fine-Tuning 68.8 / 68.5 / 68.2 68.5 19.1 / 19.0 / 18.9 19.0
CodeLLaMA-34B (Rozi`ere et al., 2023)
+ Learning From Mistakes 71.7 / 71.0 / 70.9 71.2 (+2.7) 20.4 / 20.2 / 20.0 20.2 (+1.2)
CoT Fine-Tuning 62.9 / 62.7 / 62.7 62.8 12.2 / 11.9 / 11.8 12.0
LLaMA-2-13B (Touvron et al., 2023b)
+ Learning From Mistakes 65.7 / 65.2 / 65.0 65.3 (+2.5) 12.6 / 12.6 / 12.4 12.5 (+0.5)
CoT Fine-Tuning 52.6 / 52.5 / 52.5 52.5 8.7 / 8.5 / 8.5 8.6
LLaMA-2-7B (Touvron et al., 2023b)
+ Learning From Mistakes 54.1 / 53.7 / 53.6 53.8 (+1.3) 9.4 / 8.9 / 8.8 9.0 (+0.4)
Figure 9: The performance curves of LLaMA-2-70B during 2,000 fine-tuning steps.
answers. For generating correction data, we do not require GPT-4 to explicitly identify the
position of mistake. It is because the CoT for commonsense questions does not exhibit a clear
step-wise manner, and our ablation study on math tasks have showed that this information
is less influential to the final performance.
C.4 Full Fine-Tuning Setting
For fully fine-tuning LLaMA-2-70B and Llemma-34B, the learning rate is 1e-5 and the batch
size is 128. We fine-tune LLaMA-2-70B for 3 epochs and Llemma-34B for 2 epochs. The
evaluation results are reported on the final checkpoints. Other setting are kept the same in
Section 3.3.
C.5 Another Round of Correction-Centric Evolution
To explore the scaling trend of LEMA, we take another round of correction-centric evolution
to expand correction data. The second round takes the same 10K seed questions as the first
round. The only difference is that we replace the vanilla model as the fine-tuned models
from the first round to collect inaccurate reasoning paths.
**D** **More Results and Analysis**
D.1 Performances of Best Three Checkpoints
Table 5 shows the performances of the best three checkpoints saved during the fine-tuning
process along with the average of three results. It demonstrates that our main results are
not caused by soem random disturbances during training.
-----
Table 6: Math reasoning performances of various LLMs.
Model GSM8K MATH
_closed-source models_
GPT-4 (OpenAI, 2023b) 92.0 42.5
Claude-2 (Anthropic, 2023) 88.0 -
Flan-PaLM-2 (Anil et al., 2023) 84.7 33.2
GPT-3.5-Turbo (OpenAI, 2023a) 80.8 34.1
PaLM-2 (Anil et al., 2023) 80.7 34.3
_open-source models_
LLaMA-2-7B (Touvron et al., 2023b) 14.6 2.5
Baichuan-2-7B (Yang et al., 2023) 24.5 5.6
SQ-VAE-7B (Wang et al., 2023c) 40.0 7.0
RFT-7B (Yuan et al., 2023) 50.3 -
Qwen-7B (Alibaba, 2023) 51.6 -
LLaMA-2-7B + LEMA (ours) 54.1 9.4
WizardMath-7B (Luo et al., 2023) 54.9 10.7
WizardMath-7B + LEMA (ours) 55.9 11.9
LLaMA-2-13B (Touvron et al., 2023b) 28.7 3.9
SQ-VAE-13B (Wang et al., 2023c) 50.6 8.5
Baichuan-2-13B (Yang et al., 2023) 52.8 10.1
RFT-13B (Yuan et al., 2023) 54.8 -
WizardMath-13B (Luo et al., 2023) 63.9 14.0
LLaMA-2-13B + LEMA (ours) 65.7 12.6
MetaMath-13B (Yu et al., 2023) 72.3 22.4
MetaMath-13B + LEMA (ours) 73.2 22.7
LLaMA-2-70B (Touvron et al., 2023b) 56.8 13.5
RFT-70B (Yuan et al., 2023) 64.8 -
WizardMath-70B (Luo et al., 2023) 81.6 22.7
MuggleMath-70B (Li et al., 2023a) 82.3 -
MetaMath-70B (Yu et al., 2023) 82.3 26.6
LLaMA-2-70B + LEMA (ours) 83.5 25.0
WizardMath-70B + LEMA (ours) 84.2 **27.1**
MetaMath-70B + LEMA (ours) **85.4** 26.9
D.2 Training Curves
Figure 9 shows the performance curves of LLaMA-2-70B during 2,000 fine-tuning steps.
It shows that adding correction data leads to clear improvements during training. These
consistent improvements demonstrate that the effectiveness of our correction data is robust
to the random disturbances during training.
D.3 Comparison with SOTA Models
Table 6 contains the comparison with more SOTA models. Another interesting finding
in Table 6 is that the performance of LLaMA-2-70B + LEMA can be comparable with
MuggleMath-70B (Li et al., 2023a) and MetaMath-70B (Yu et al., 2023). Note that these two
specialized LLMs also take the LLaMA-2-70B as the backbone model while their training
data sizes are much larger than LEMA: MuggleMath has ∼220K CoT data and MetaMath
has ∼400K CoT data, while LEMA only has ∼70K CoT + correction data for math problems.
This comparison further supports the non-homogeneous effectiveness between CoT data
and correction data.
D.4 Ablations of Correction Information
**The explanations and corrected reasoning paths play important roles in LEMA.** As
introduced in Section 2.1, our correction data mainly contains three pieces of information:
the mistake step (M.S.), the corrected solution (C.S.), and the explanation to the mistake
(Exp.). To evaluate their individual contribution to the LEMA performance, we separately
omit each information in our correction data. Figure 12 shows the results: the performance of
LEMA drops significantly without the corrected solution or the explanation, while omitting
-----
Inaccurate Reasoning Paths Corrections with Correct Final Answers Success Rate
8,000 50
40
6,000
30
4,000
20
2,000
10
Success Rate (%)
Number of Examples
0 0
Level 1 Level 2 Level 3 Level 4 Level 5 Level 1 Level 2 Level 3 Level 4 Level 5
Figure 10: Statistics of generated correction data according to different difficulty levels in
MATH. Left: The number of collected inaccurate reasoning paths and generated corrections
with correct final answers under different difficulty levels. Right: The success rate for
correcting inaccurate reasoning paths under different difficulty levels.
|Col1|CoT Fine-Tun|
|---|---|
||0.35 0.30 0.25 0.20 0.15 0.10 LLaMA-2-70B LLaMA-65B|
|PPL ∆||
|||
CoT Fine-Tuning LEMA
0.35 0.25
0.30 0.20
PPL∆ 0.250.20 0.150.10
0.15 0.05
0.10 LLaMA-2-70B LLaMA-65B 0.00 LLaMA-2-70B LLaMA-65B
GSM8K MATH
Figure 11: The differences between the PPLs (∆PPL) on mistaken CoT and correct CoT. A
higher difference indicate that the model can better avoid the mistakes.
the mistake step shows less influence to the performance. We suppose it is because the
corrected solution and the explanation have implicitly informed which step is incorrect.
Therefore, it could be less influential to make the model explicitly identify the position of
mistake.
D.5 Additional Analysis to LEMA
**LEMA can still bring improvements to CoT fine-tuning if the distributions of questions**
**are controlled the same.** In our default setting, correction data contains more challenging
questions that can not be easily solved by various LLMs. This leads to a distribution shift
on the difficulty of questions in training data. As Wang et al. (2023b) indicated that this
distribution shift can also benefit fine-tuning LLMs, we also mitigate the influence from
question distribution shift to further clarify the effectiveness of LEMA. Our ablation setting
CoT-45K can be used to clarify this point: its additional CoT data are just converted from
correction data, thus the question distributions of CoT-45K and our default LEMA-45K
are exactly the same. Therefore, the results in Figure 4 under 45K data size demonstrate
that LEMA still outperforms CoT-alone fine-tuning when the influence from question
distribution shift is kept the same.
**QLoRA fine-tuning cannot fully “digest” a large amount of correction data.** As shown
in Figure 5b, as the correction data expands, the gap between full-fine-tuning and QLoRA
fine-tuning increases. Such an observation is not well aligned with the conclusions of some
existing work. Some work indicated that if the model size is large enough, parameterefficient fine-tuning (PEFT) can achieve comparable performance with fine-tuning (Lester
et al., 2021; An et al., 2022; Sun et al., 2023; Su et al., 2023; Artur Niederfahrenhorst & Ahmad,
2023). We suppose the property of correction data causes the inconsistency in observations.
Specifically, correction data is just auxiliary data that do not directly contribute to the in-task
training. We suppose that models with PEFT can “eat” a large amount of correction data but
cannot fully “digest” them. As a results, the training on correction data with PEFT might
not effectively contribute to the forward reasoning process.
-----
LEMA LEMA w/o M.S. LEMA w/o C.S. LEMA w/o Exp. CoT Alone
84 83.5 26
8382 83.2-0.3 82.3-1.2 82.5-1.0 81.4 2524 25.0 24.9-0.1 24.1-0.9 24.4-0.6 23.6
Performance (%) 81 23
80 22
GSM8K MATH
Figure 12: Performance of LEMA with ablations on correction information. The backbone
LLM is LLaMA-2-70B. For each ablation setting, we mark the influence on performance
compared to the default setting of LEMA.
**The comparison learned in the correction data also influences the CoT generation.** During training on the correction data, LLMs could be aware of the comparison between the
correct and incorrect CoT. We suppose such kind of comparison can take effect during CoT
generation. Based on this intuition, we evaluate the differences between PPLs defined as
follows,
1
∆PPL( ; θ) = ∑ [PPL(ri _qi; θ)_ PPL(ri _qi; θ)],_
_C_ _|_ _−_ _|_
_||C||_ (qi,ri,ci)∈C
where is a set of correction data, θ represents the model parameters after fine-tuning,e
e
_C_
PPL(y|x; θ) returns the perplexity on y with x as the context, _ri is one mistaken CoT for the_
question qi, and ri is the correct CoT extracted from the correction ci. We calculate ∆PPL
for fine-tuned LLaMA-2-70B and LLaMA-65B, based on the correction data for GSM8K
e
and MATH. Figure 11 shows ∆PPL for different fine-tuned models. It shows that LEMA
consistently leads to a higher ∆PPL than CoT-alone fine-tuning.
D.6 Further Analysis on Corrector
In our default setting, we take GPT-4 as the corrector model and our human evaluation in
Section 2.1 supports this choice. In the following, we provide further analysis on the choice
and behavior of the corrector model. Specifically, we want to answer the following research
questions: RQ1: Can we use a less powerful model as the corrector model? RQ2: How
well does GPT-4 perform in self-correction? RQ3: How well does GPT-4 correct inaccurate
reasoning paths for challenging questions?
**Less powerful models are not suitable for generating corrections.** Despite GPT-4, we
have also tried leveraging GPT-3.5-Turbo as the corrector model and assess the quality
of generated corrections. We take another round of human evaluation on 20 corrections
generated by GPT-3.5-Turbo and find that nearly half are of poor quality. Therefore, we
just call GPT-4 for correction generation although it is much more expensive than GPT-3.5Turbo. We believe it is a valuable research direction to explore how to generate high-quality
corrections without GPT-4.
**GPT-4 can correct its own mistakes but with a low success rate.** Specifically, for 2,696
inaccurate reasoning paths generated by GPT-4 on MATH training set, we finally get 217
corrections with correct final answers. It means that GPT-4 only achieves 8.0% success
rate for self-correction. Compared with this low success rate for self-correction, GPT-4 can
more effectively correct mistakes from less powerful models, such as LLaMA-2-70B (37.5%
success rate on MATH) and GPT-3.5-Turbo (26.9% success rate on MATH). One possible
reason for the low success rate of self-correction is that the mistakes generated by GPT-4 are
from more challenging questions, thus these mistakes are naturally harder for correcting.
**GPT-4 still struggles to correct inaccurate reasoning paths for challenging questions.**
The math problems in MATH can be categorized into five levels of difficulty: Level 1 for the
easiest problems and Level 5 for the most challenging ones. Figure 10 shows statistics of our
correction data on MATH according to different difficulty levels. As the difficulty increased
from Level 1 to Level 5, the number of collected inaccurate reasoning paths increased, while
-----
**Generated Correction (Excellent)**
**Question: Courtney liked to collect marbles. She kept them in mason jars. One jar had 80 marbles. Her second jar had twice that amount. She just started her**
third jar which currently has 1/4 the amount of her first jar. How many marbles does she have in total?
**Original Solution:**
Step 1: 80 marbles in the first jar, 2 x 80 = 160 marbles in the second jar.
Step 2: The third jar has 1/4 of the first jar, which is 80 / 4 = 20 marbles.
Step 3: So, Courtney has 160 + 20 = 180 marbles in total.
Step 4: The answer is 180.
**Incorrect Step: Step 3.**
**Explanation: The calculation in Step 3 is incorrect because it only adds the marbles from the second and third jars. The marbles from the first jar should also**
be included in the total.
**Correct Solution:**
…
Step 3: So, Courtney has 80 (first jar) + 160 (second jar) + 20 (third jar) = 260 marbles in total.
Step 4: The answer is 260.
**Generated Correction (Good)**
**Question: Johnny is a dog walker. He can walk 3 dogs at once. He gets paid $15 for a 30-minute walk and $20 for a 60-minute walk. Johnny works for 4 hours**
per day. If he always walks the maximum number of dogs possible and 6 dogs have 60-minute walks per day, how much money does he make in a week where
he works 5 days?
**Original Solution:**
Step 1: 4 hours per day * 5 days = 20 hours per week.
Step 2: Johnny can walk 3 dogs at once, so he can walk 20 / 3 = 6 three-dog walks per day.
…
Step 6: The answer is $360.
**Incorrect Step: Step 2.**
**Explanation: The incorrect step is Step 2. The calculation used in this step is not correct for finding the number of three-dog walks per day. Instead, we should**
calculate the number of 30-minute and 60-minute walks Johnny does.
**Correct Solution:**
…
Step 2: Johnny works for 4 hours per day, which is 240 minutes. He walks 6 dogs for 60 minutes each, which takes 120 minutes. This leaves 120 minutes for
30-minute walks.
…
Step 6: The answer is 1500.
_Note: The explanation is reasonable and the final answer is correct. But the new solution has issues on continuity: the new Step 2 does not exactly follow the_
_original Step 1._
**Generated Correction (Poor)**
**Question: Tom, Tim, and Paul are collecting photos of cars. Paul has 10 photos more than Tim. Tim has one hundred photos less than the total amount of**
photos which is 152. How many photos does Tom have?
**Original Solution:**
Step 1: 152 - 100 = 52.
Step 2: Tom has 52 photos.
Step 3: The answer is 52.
**Incorrect Step: Step 1.**
**Explanation: The step is incorrect because it subtracts 100 from the total number of photos (152) to find the number of photos Tim has. However, the problem**
states that Tim has 100 photos less than the total amount of photos, not the total amount of photos minus 100.
**Correct Solution:**
…
Step 5: The answer is 38.
_Note: Step 1 is correct and the mistake step is Step 2, because Tim, not Tom, has 52 photos._
Figure 13: Some examples of generated corrections and their quality levels under our human
evaluation.
the number of correct corrections (i.e., corrections for which the final answer is correct) first
increases and then decreases. We also calculate the success rate for correcting mistakes
under each difficulty level, dividing the number of correct corrections by the total number
of collected reasoning paths. Figure 10 shows that the success rate significantly drops with
increasing the difficulty. These statistics reveals that there is still huge room for improving
contemporary LLMs on correcting mistakes.
-----
| [
"Shengnan, An",
"Zeqi, Lin",
"Zexiong, Ma",
"Nanning, Zheng",
"Jian-Guang, Lou",
"Weizhu, Chen"
] | 2023-01-01T00:00:00 | null | false | 45 | 4 | null | https://arxiv.org/abs/2310.20689 | https://arxiv.org/abs/2310.20689 | https://www.semanticscholar.org/paper/98b607e7cb84e1a5c87c8a49562ae35435e6722d |
Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning | Recent years have seen impressive performance of transformer-based models on different natural language processing tasks. However, it is not clear to what degree the transformers can reason on natural language. To shed light on this question, this survey paper discusses the performance of transformers on different reasoning tasks, including mathematical reasoning, commonsense reasoning, and logical reasoning. We point out successes and limitations, of both empirical and theoretical nature. | This survey paper discusses the performance of transformers on different reasoning tasks, including mathematical reasoning, commonsense reasoning, and logical reasoning. | # Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning
**Chadi Helwe** [email protected]
**Chlo´e Clavel** [email protected]
**Fabian Suchanek** [email protected]
_T´el´ecom Paris, Institut Polytechnique de Paris, France_
**Abstract**
Recent years have seen impressive performance of transformer-based models on different
natural language processing tasks. However, it is not clear to what degree the transformers can reason on natural language. To shed light on this question, this survey paper
discusses the performance of transformers on different reasoning tasks, including mathematical reasoning, commonsense reasoning, and logical reasoning. We point out successes
and limitations, of both empirical and theoretical nature.
**1. Introduction**
In recent years, language models have achieved impressive results on a variety of natural
language processing (NLP) tasks, such as recognizing textual entailment, machine reading
comprehension, and machine translation. Most of these language models are based on variants of the transformer architecture [Vaswani et al., 2017], for example BERT [Devlin et al.,
2019], T5 [Raffel et al., 2020a], and GPT-3 [Brown et al., 2020]. These models depend entirely on the attention mechanism, and thus eliminate the need for recurrent computations
used by LSTMs [Hochreiter and Schmidhuber, 1997] and GRUs [Cho et al., 2014]. They can
easily learn long-range dependencies, and the computation can be parallelized efficiently.
Today’s models contain millions or even billions of parameters. The models are pre-trained
on large unlabeled corpora, and then later fine-tuned to tackle a specific NLP task. For
example, the pre-trained BERT can reply to questions such as the following:
**Context:** The iPhone is produced by [MASK].
**Expected answer:** Apple
**Model answer:** Apple
However, this performance is deceiving: If we introduce a trap word, the pre-trained BERT
model replies completely differently:
**Context:** Samsung. The iPhone is produced by [MASK].
**Expected answer:** Apple
**Model answer:** Samsung
Here, the BERT model got distracted by the additional word (a technique called misprim_ing [Kassner and Sch¨utze, 2020]). Thus, the question arises to what degree such models_
really “understand” the natural language text, and to what degree they merely respond to
statistical cues. This question is of utmost importance, because if we start relying on such
-----
language models, there is the danger that we obtain good responses only in common test
settings, and completely abstruse replies in less common settings.
In this survey paper, we shed light on this question by investigating some of the most
complex natural language tasks: those that involve reasoning. That is, we look at test data
sets that have explicitly been designed to test the limitations of transformer-based models,
and we investigate to what degree the models really “understand” these tasks. While several
survey papers have focused on transformer-based models and on their applications [Rogers
et al., 2020b, Qiu et al., 2020, Xia et al., 2020, Yates et al., 2021], the capabilities of
transformer-based models in reasoning tasks have so far not been surveyed. Our paper is
organized as follows. In Section 2, we describe some common pitfalls that all models need
to handle in order to reason on natural language text. Section 3 analyzes the performance
of transformer-based models on different reasoning tasks. In Section 4, we describe the
theoretical limitations of the transformer architecture, and show, in Sections 4.2-4.3, that
they impact natural language reasoning. We conclude in Section 5. The appendix contains
a detailed list of basic models (Appendix A) and challenging datasets (Appendix B), as well
as of the model performances (Appendix C).
**2. Common Pitfalls for BERT-like Models**
We discuss here some common pitfalls that any approach needs to handle in order to reason
on natural language. Our discussion focuses on BERT, but the phenomena may affect other
transformer-based models as well.
**2.1 Negation**
The pre-trained BERT model cannot differentiate between positive and negative statements. As an example, take this sentence from the LAnguage Model Analysis (LAMA)
dataset [Petroni et al., 2019], where BERT performs well:
**Context:** Marcel Oopa died in the city of [MASK].
**Expected answer:** Paris
**Model answer:** Paris (-2.3), Lausanne (-3.3), Brussels (-3.3)
When Kassner and Sch¨utze [2020] added the word “not”, BERT delivered the exact same
top-ranked result:
**Context:** Marcel Oopa did not die in the city of [MASK].
**Expected answer:** any city different from Paris
**Model answer:** Paris (-2.4), Helsinki (-3.5),Warsaw (-3.5)
This phenomenon was also confirmed by Ettinger [2020]. Kassner and Sch¨utze [2020] show
that BERT can be fine-tuned to pay attention to the negation. Thus, it is essential to add
examples with negation to the training set. Niven and Kao [2019] point out that these
examples should be diverse enough to not rely only on the word “not”, and Hosseini et al.
[2021] propose an unlikelihood objective function to learn to differentiate between positive
and negative statements.
-----
**2.2 Mispriming**
The ability to distinguish useful from distracting contexts is an essential building block for
any reasoning task. We have already seen an example of mispriming in the introduction.
Mispriming can, in principle, affect any task, and thus also reasoning in particular.
Interestingly, mispriming works only when the distracting word is of the same type as
the expected answer (companies, in our example). The pre-trained BERT is not easily
misled by primes of other types [Niven and Kao, 2019]. Misra et al. [2020] also show that
the problem of mispriming can be overcome by providing more context. In the following
sentence, for example, the mispriming fails:
**Context:** Samsung. The iPhone was produced by [MASK],
whose CEO was Steve Jobs
**Expected answer:** Apple
**Model answer:** Apple
This shows that, although there is some dependency on misprimes, their power decreases
when sentences provide more context.
**2.3 Pattern Heuristics**
Fine-tuned BERT models have a tendency to learn simple pattern-based heuristics. For
example, BERT can be trained to perform well on textual entailment in the MNLI dataset
[Williams et al., 2018]:
**Premise:** The actor and the professor mentioned the lawyer.
**Hypothesis:** The professor mentioned the lawyer.
**Expected answer:** Entailment
**Model answer:** Entailment
To better understand the performance of BERT, McCoy et al. [2019b] designed the HANS
(Heuristic Analysis for NLI Systems) dataset. HANS makes BERT fail as follows:
**Premise:** The doctors advised the presidents and the tourists.
**Hypothesis:** The presidents advised the tourists.
**Expected answer:** Non entailment
**Model answer:** Entailment
This shows that the model learned the “lexical overlap heuristic”, which assumes that a
premise entails all hypotheses constructed from words in the premise. This problem can be
addressed by adding more HANS-like examples to the training dataset.
**2.4 Word Order**
Different studies [Ettinger, 2020, Sankar et al., 2019, Pham et al., 2020, Gupta et al., 2021]
have shown that BERT-like models are unperturbed by grammatically incorrect sentences:
If presented with a sentence of randomly shuffled words, they will still reply correctly. This
insensitivity to order can also mislead textual entailment. For example, the pre-trained
BERT fine-tuned on the MNLI dataset fails to provide the correct answer in the following
-----
case [McCoy et al., 2019b,a]:
**Premise:** The doctor visited the lawyer
**Hypothesis:** The lawyer visited the doctor
**Expected answer:** Non entailment
**Model answer:** Entailment
This issue can be solved by augmenting the training set with modified word order instances
with their respective labels or by fine-tuning the model on sensitive word ordering tasks
such as CoLA [Warstadt et al., 2019].
**3. Types of Reasoning with Transformer-based Models**
**3.1 Horn Rule Reasoning**
A rather simple way of logical reasoning is to infer a conclusion from a set of premises and
rules. Transformer-based models are able to perform such kind of reasoning [Clark et al.,
2020, Talmor et al., 2020, Betz et al., 2020] without any external knowledge, if both the
rules and the facts are mentioned explicitly in the text. They can even generate the proofs
[Saha et al., 2020, Tafjord et al., 2020, Gontier et al., 2020]. Here is an example from the
ParaRules dataset [Clark et al., 2020]:
**Context:** Fact 1: Erin is young.
Fact 2: Erin is not kind.
Fact 3: Peter is nice.
Rule 1: If someone is young and not kind then they are big.
**Question:** Is Erin big?
**Expected answer:** _Conclusion: Erin is big._
_Proof: (Fact 1 & Fact 2) →_ Rule 1 → Conclusion
In this task, the best model, a fine-tuned T5-11B, achieves an accuracy above 95% in proof
generation and question answering. A transformer-based model can thus solve the task
nearly perfectly.
**3.2 Commonsense Reasoning**
Commonsense reasoning is any reasoning task that requires background knowledge that
humans commonly have. For example, the instruction “Can you do a Napoleon for the
camera?” requires commonsense reasoning to realize that the word “Napoleon” expresses
a specific pose [Bender and Koller, 2020]. Several studies have shown that BERT learned a
certain amount of commonsense knowledge during pre-training [Petroni et al., 2019, Davison et al., 2019, Bosselut et al., 2019, Zhou et al., 2020b, Cui et al., 2020]. Consider, for
example, the LAMA dataset [Petroni et al., 2019], which asks:
**Context:** Ravens can [MASK]
**Expected answer:** fly
**Model answer:** fly
The model (the pre-trained BERT-large) is able to recall such commonsense knowledge.
-----
This good performance has prompted the research community to develop datasets that
specifically probe the commonsense reasoning of transformer models. Prominent datasets
are COSMOS QA [Huang et al., 2019], CommonsenseQA [Talmor et al., 2019], the Winograd Schema Challenge [Levesque et al., 2012], SWAG [Zellers et al., 2018], ReCoRD [Zhang
et al., 2018], CODAH [Chen et al., 2019], and PIQA [Bisk et al., 2020]. Transformer-based
models can indeed achieve a high performance (often > 75%) on these datasets, but only
with additional methods. These include data augmentation techniques [Yang et al., 2020],
multi-task learning [Lourie et al., 2021], and fusing knowledge graphs into language models
[Xu et al., 2021]. The following is an example from the CommonsenseQA dataset [Talmor
et al., 2019]:
**Question:** Bats have many quirks, with the exception of ?
**Expected Answer:** Laying eggs
**Model w/o knowledge graph fusing:** Eating bugs
**Model w/ knowledge graph fusing:** Laying eggs
The above example shows that providing the model with information from a knowledge
graph helps the model to correctly answer the question. However, several studies [Forbes
et al., 2019, Zhou et al., 2020b, Lin et al., 2020, Boratko et al., 2020, Singh et al., 2021] show
that when the datasets are specifically changed to target the weaknesses of transformerbased models (for example, by adversarial instances), the models fail. Here is an example
from the COM2SENSE dataset [Singh et al., 2021], which asks the model to judge whether
a given sentence is logically coherent or not:
**Context:** Expecting ten fish in the net, Sammy was
thrilled to see forty fish swimming in there.
**Expected answer:** Coherent
**Model answer:** Coherent
The authors created a counterpart to this question by modifying a few words:
**Context:** Expecting ten fish in the net, Sammy was
thrilled to see five fish swimming in there.
**Expected answer:** Incoherent
**Model answer:** Coherent
When the model (UnifiedQA-3B [Khashabi et al., 2020], a multi-task trained model) is
tricked this way, it fails to predict correctly. This shows that the model can fall prey to
relatively simple modifications, and does not really reason.
**3.3 Event-based Commonsense Reasoning**
Some commonsense reasoning tasks are concerned with the usual sequence of events. For
example, the TIMEDIAL dataset [Qin et al., 2021] evaluates temporal reasoning capabilities in dialogs. The TORQUE dataset [Ning et al., 2020] asks temporal relation questions
such as which events have already finished, given a short passage of text. In a similar spirit,
the MCTACO dataset [Zhou et al., 2019] asks:
-----
**Context:** Mr. Barco has refused US troops or advisers but has accepted US military aid.
**Question:** What happened after Mr. Barco accepted the military aid?
**Choices:** (A) the aid was denied, (B) he received the aid, (C) things started to progress
The best model is a fine-tuned BERT model that uses normalization to convert numerical
expressions such as “30 months” to “2.5 years”. It achieves an F1-score of 69.9% (while
human performance has an F1-score of 87.1%). In the same spirit, Zhou et al. [2021] developed TRACIE, a temporal reasoning textual entailment dataset that asks whether one
event preceded another one. The authors use distant supervision from Wikipedia, and
a symbolic reasoning model called SymTime. This approach predicts the end time of an
event by having two transformer models that predict the start time and the duration of this
event and symbolically compare them against the prediction of another start time event.
With this, the authors achieve an accuracy of about 71% (with variations for different subtasks). Like the “normal” commonsense tasks, event-based tasks can be solved rather
well by transformer-based models. However, this works mainly when symbolic machinery
(such as date normalization and symbolic reasoning) or background knowledge (such as
Wikipedia) is added. Human performance, in the high nineties, remains unachieved.
**3.4 Implicit Reasoning**
We now turn to implicit reasoning tasks, where (different from the tasks in Section 3.1), the
rules and facts are not given explicitly. Many of these tasks can be solved by transformerbased models. Here is an example from the SNLI dataset [Bowman et al., 2015]:
**Premise:** Three girls take cover under their umbrellas.
**Hypothesis:** Nobody has umbrellas.
**Expected answer:** Contradiction
**Model answer:** Contradiction
In this task, a RoBERTa-large model, trained with a few-shot learning method [Wang et al.,
2021a], achieves an accuracy of 93.1%. However, these datasets contain superficial cues that
the models can take advantage of [Schlegel et al., 2020, Huang and Zhu, 2021, Lin et al.,
2021]. To adequately evaluate the understanding of a model, several more challenging logical reasoning tasks have been designed, which mostly take the form of machine reading
comprehension. LogiQA [Liu et al., 2020b], for example, is a multiple choice dataset translated from the National Civil Servants Examination of China:
**Context:** David knows Mr. Zhang’s friend Jack, and Jack knows David’s friend Ms. Lin.
Everyone of them who knows Jack has a master’s degree,
and everyone of them who knows Ms. Lin is from Shanghai.
**Question:** Who is from Shanghai and has a master’s degree?
**Choices:** (A) David (B) Jack (C) Mr. Zhang (D) Ms. Lin
The best language model is a pre-trained RoBERTa model [Liu et al., 2019] fine-tuned
on the training set and has an accuracy of 35.31% (while the best human performance is
96%) [Liu et al., 2020b]. Several other benchmarks in this vein also show bad performance:
ReClor [Yu et al., 2020], QuAIL [Rogers et al., 2020a], ConTRoL [Liu et al., 2020a], Strate
-----
gyQA [Geva et al., 2021], AR-LSAT [Zhong et al., 2021], and CLUTRR [Sinha et al., 2019].
This shows that transformer-based models are currently unable to build a representation
of a longer text and draw a logical conclusion from it. This weakness can be remedied to
some degree by adding symbolic representations on top of RoBERTa, such as graph-based
modules [Huang et al., 2021, Ouyang et al., 2021], or logical information [Wang et al.,
2021b]. Other approaches develop neuro-symbolic methods, which teach reasoning strategies by gradient-based optimisation [Minervini et al., 2020], or combine probabilistic logic
programming with neural networks [Manhaeve et al., 2018]. Integrating logical information
into RoBERTa pushes the performance on the easier questions of ReClor to 81.4%. However, the more difficult questions of these datasets incur performances of 50%-60%. The
same is true for comparison-based tasks. The RICA dataset [Zhou et al., 2020a], for example, asks:
**Context:** A prindag is smaller than a flurberg,
so a flurberg is [MASK] likely to contain a prindag.
**Expected answer:** more
Pre-trained and fine-tuned language models such as GPT-2 [Radford et al., 2019] and
RoBERTa achieve a dismal performance of 50% on unseen inferences. Thus, these models
are unable to learn comparisons between (fictitious) objects.
**3.5 Mathematical Reasoning**
Mathematical reasoning is the process of reasoning about different mathematical aspects
such as arithmetic operations, numerical comparison, counting, and sorting. The level of
complexity can range from solving simple mathematical equations to proving theorems. The
following is an example of a math problem that is not linguistically complex, taken from
the DeepMind mathematics dataset [Saxton et al., 2019]:
**Context:** Calculate -841880142.544 + 411127
**Expected answer:** -841469015.544
This task can be solved by GPT-3 [Henighan et al., 2020]. Along the same line, Lample and
Charton [2019] show that a transformer network can compute function integrals, and solve
differential equations. The next more complex problems are math word problems (MWP),
which consist of a short text that describes a mathematical problem (such as a one-unknown
math problem) and a question. This task requires a model to extract relevant information
from the text to perform mathematical reasoning to solve it. The most prominent MWP
datasets are MAWPS [Koncel-Kedziorski et al., 2016] and ASDiv-A [Miao et al., 2020] (both
for one-unknown arithmetic problems). However, these datasets can be solved by models
even when the order of the words is modified and when the questions are omitted, proving
that the models rely on heuristic patterns found in the problem narrative. To remove these
artifacts, Patel et al. [2021] developed the SVAMP dataset, which applies simple variations
to ASDiv-A. The following is an example:
-----
**Context:** Jack had 8 pens and Mary had 5 pens.
Mary gave 3 pens to Jack.
**Question:** How many pens does Jack have now?
**Expected answer:** 8 + 3 = 11
On this dataset, a trained model achieves an accuracy of around 65%. Among the different
mathematical operators (+,-,/,*), the model accuracy ranges from 65.3% for divisions to
35.8% for multiplications. Also, the performance drops drastically when the equations have
more than two numbers or more than one operator.
An even more complicated dataset is MATH [Hendrycks et al., 2021], which consists of
competition problems in mathematics:
**Context:** Tom has a red marble, a green marble, a blue marble,
and three identical yellow marbles.
**Question:** How many different groups of two marbles can Tom choose?
**Expected answer:** There are two cases here: either Tom chooses
two yellow marbles (1 result), or he chooses two marbles of different
colors (( [4]
2 [)= 6 results). The total number of distinct pairs]
of marbles Tom can choose is 1 + 6 = 7
Here, the best model, a fine-tuned GPT-2 model, achieves an accuracy of only 6.9%.
Another dataset at the boundary of what is currently feasible is the IsarStep benchmark [Li et al., 2021], which is concerned with mathematical proofs:
**Context:** 2b[2] = a[2] _⇒_ [Missing Proposition] ⇒∃ _c ∈_ Z. a = 2c
**Expected answer:** a is even
The authors developed a hierarchical transformer model, which outperforms all the other
tested baselines with an accuracy of 22.8% for the top-1 prediction, and an accuracy of
35.2% for the top-10 predictions. Other mathematical theorem proving datasets in the
same spirit are HOList [Bansal et al., 2019] and MetaMathStep [Polu and Sutskever, 2020].
In conclusion, these tasks show that transformer-based models cannot be trained to “understand” mathematical word problems and to “generate” mathematical proofs. In contrast to
simple mathematical problems (as the example we mentioned above), which are not linguistically complex, such challenging tasks require more than huge transformer-based models
to achieve high performance.
**3.6 Summary**
In all of these reasoning tasks, transformer-based models rarely achieve human performance.
That is not surprising, given that they are general-purpose tools that feed mainly from
training data, and lack any symbolic machinery that is commonly considered essential for
such tasks. In fact, it is impressive that the models perform so well at all.
Among the different reasoning tasks, we find that when the transformer-based models
are explicitly given all the information required to perform deductive reasoning, such as
facts and rules, the models can easily learn logical reasoning. However, when this information is stated only implicitly in the text or in the supervision, the models struggle. In
-----
terms of commonsense reasoning, transformer-based models have a certain degree of commonsense knowledge learned during pre-training. However, they can be easily disrupted
with adversarial commonsense instances. They are also limited in logical reasoning over
events and physical commonsense.
We thus see that the strength of transformer-based models comes from two components:
simple patterns in the training data, combined with background knowledge from the pretraining. This combination allows the models to perform well on tasks such as Horn Rule
Reasoning (where the model learns a pattern on the training data), simple commonsense
reasoning (where the answer was learned from the pretraining), and simple mathematical
calculations (where the model learns a pattern during training). However, when these elements are absent, the models struggle. We have seen several commonsense datasets that
specifically avoid patterns, or use adversarial patterns. Here, the models fail. In particular,
textual understanding remains out of reach for now, if the tasks are sufficiently different
from each other to avoid patterns, and if fictional entities are used (for which no background
knowledge is available). Mathematical reasoning, too, falls into this category, if the tasks
do not follow a pattern.
This does not mean that the tasks would be unsolvable in general: Several studies [Huang
et al., 2021, Ouyang et al., 2021, Wang et al., 2021b, Yang et al., 2020, Lourie et al., 2021,
Xu et al., 2021] show that the addition of symbolic knowledge (such as date normalization,
quasi-logical reasoning, and graph-based modules) and the use of supplementary techniques
such as data augmentation, multi-task learning, and knowledge-base fusion improve the
performance. Such tools may thus hold the key to address even the harder reasoning
problems.
**4. Impossible Reasoning Tasks**
We have seen that transformer-based models can be trained and tuned to work on reasoning
tasks, and that some tasks require additional machinery. In this section, we turn to tasks
that BERT will never be able to solve without additional machinery, no matter the amount
of tuning.
**4.1 Theoretical Limitations of Transformers**
Hahn [2020] studied the theoretical limitations of transformers. The main limitations come
from the fact that self-attention does not have the same level of expressiveness as recurrent
models such as LSTMs. In particular, transformers cannot emulate a stack and finite-state
automata. Based on this insight, Hahn proved that transformer-based networks cannot
model two languages, known as Parity and Dyck-2. Parity is the set of bit strings where
the number of 1s is even. Dyck-2 is the language of strings that are balanced sequences
of round brackets “()” and square brackets “[]”. Hahn shows that for any transformer
network, we can find an integer N such that strings of these languages longer than N
cannot be recognized. That is: to recognize such strings, the number of heads and layers
of the model would have to increase with the input length N . Bhattamishra et al. [2020b]
have verified these results in practice, and showed that when the input length is bounded
during training, LSTM can generalize to longer instances, whereas transformer architectures
cannot. Other theoretical limitations are studied in Bhattamishra et al. [2020a]. These
-----
limitations are of rather theoretical nature. And yet, they have a very concrete impact on
natural language reasoning. To show this, we designed two experiments: the light switch
task and the cake task. All our datasets and the code of the experiments can be found at
```
https://github.com/dig-team/FailBERT.
```
**4.2 Light Switch Task**
Our first task puts the Parity language into practice. The input is a word of the language
```
((a|b) and )*(a|b), with a =“I ate a pizza” and b =“I operated the light switch”, i.e.,
```
the input is a sentence that describes a sequence of these two activities. Assuming that
the light is off in the beginning, the goal is to determine whether the light is on in the end
(which corresponds to an odd number of switches). This is a simple reasoning task that a
school child can solve.
**Context:** I operated the light switch, and I ate a pizza, and I ate a pizza.
**Expected answer:** ON
We fine-tuned a pre-trained RoBERTa model for 50 iterations on 20k examples, each containing up to 20 a’s and b’s. On the training and validation datasets, the model achieves
an F-score > 0.99. However, when testing on examples that contain more than 20 a’s and
_b’s, we obtain on average a random precision of 0.50. This confirms that the theoretical_
limitation of the transformer-based model has practical implications for natural language
reasoning.
**4.3 Cake Task**
Our next task puts the Dyck language into practice. The input to the task is a word of
the language S → _ϵ|SS|aSa[′]|bSb[′], where a =“I add a peanut layer to my cake”, a[′]_ =“I eat
a peanut layer from my cake”, b =“I add a chocolate layer to my cake”, and b[′] =“I eat a
chocolate layer from my cake” (with the conjunction “and” in suitable places). The goal is
to determine whether this sequence of steps is possible and the cake is gone. Again, this is
a simple reasoning task that a child can solve on a sheet of paper (or with suitable baking
tools).
**Context:** I add a peanut layer and I eat a peanut layer.
**Expected answer:** Yes
**Context:** I add a peanut layer and I eat a chocolate layer.
**Expected answer:** No
We fine-tuned a pre-trained RoBERTa model on 24k examples, each with up to 20 items,
and with nesting depths up to 15, for 50 iterations. Again, the model achieves an F-score
_> 0.99 on the training and validation sets. However, when testing on examples that contain_
more than 20 items, and on depths larger than 15, we obtain, as expected, on average
a dismal F-score of 0.55. This shows again that the theoretical limitation of BERT-like
models lead to very concrete limitations on natural language reasoning.
10
-----
**5. Conclusion**
This survey paper has shown that transformer-based models can perform a shallow level
of reasoning on textual data but lack deeper reasoning capabilities. The first stumbling
stones are some common pitfalls for BERT-like models: word order, negation, shallow
patterns, and priming problems. The models have to be explicitly trained to deal with
these. We have then discussed several reasoning tasks – from the simple Horn rule reasoning
to the more complex commonsense, textual understanding, and mathematical tasks. On
these tasks, the performance of transformer-based models is significantly behind human
performance. One promising direction of research here is to add symbolical knowledge
to the system – an approach that has been pursued with success on some of the tasks.
However, we have also recalled that transformer-based models have theoretical limitations
in that they cannot model the languages Dyck-2 and Parity. We have shown on small
reasoning tasks that these theoretical limitations, too, can hinder reasoning on natural
language. Further research could explore how different types of positional encodings (such
as learned embeddings, sinusoidal embeddings, or CAPE [Likhomanenko et al., 2021]) and
different attention mechanisms (such as saturated attention [Merrill et al., 2021]) could help
the models overcome even these limitations.
**Acknowledgements. This work was partially funded by ANR-20-CHIA-0012-01 (“NoRDF”).**
**References**
Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. Holist:
An environment for machine learning of higher order logic theorem proving. In Interna_tional Conference on Machine Learning, 2019._
Emily M Bender and Alexander Koller. Climbing towards nlu: On meaning, form, and
understanding in the age of data. In Annual Meeting of the Association for Computational
_Linguistics, 2020._
Gregor Betz, Christian Voigt, and Kyle Richardson. Critical thinking for language models.
_arXiv preprint arXiv:2009.07185, 2020._
Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. On the ability of self-attention
networks to recognize counter languages. In Conference on Empirical Methods in Natural
_Language Processing, 2020a._
Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. On the practical ability of recurrent neural networks to recognize hierarchical languages. In International Conference on
_Computational Linguistics, 2020b._
Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial
_Intelligence, 2020._
11
-----
Michael Boratko, Xiang Li, Tim O’Gorman, Rajarshi Das, Dan Le, and Andrew McCallum.
Protoqa: A question answering dataset for prototypical common-sense reasoning. In
_Conference on Empirical Methods in Natural Language Processing, 2020._
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz,
and Yejin Choi. Comet: Commonsense transformers for automatic knowledge graph
construction. In Annual Meeting of the Association for Computational Linguistics, 2019.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large
annotated corpus for learning natural language inference. In Conference on Empirical
_Methods in Natural Language Processing, 2015._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla
Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini
Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya
Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner,
Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models
are few-shot learners. In Advances in Neural Information Processing Systems, 2020.
Michael Chen, Mike D’Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. Codah: An
adversarially-authored question answering dataset for common sense. In Workshop on
_Evaluating Vector Space Representations for NLP, 2019._
Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi
Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Conference on Empirical
_Methods in Natural Language Processing, 2014._
Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over
language. arXiv preprint arXiv:2002.05867, 2020.
Leyang Cui, Sijie Cheng, Yu Wu, and Yue Zhang. Does bert solve commonsense task via
commonsense knowledge? arXiv preprint arXiv:2008.03945, 2020.
Joe Davison, Joshua Feldman, and Alexander Rush. Commonsense knowledge mining from
pretrained models. In Conference on Empirical Methods in Natural Language Processing
_and the International Joint Conference on Natural Language Processing, 2019._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of
deep bidirectional transformers for language understanding. In Conference of the North
_American Chapter of the Association for Computational Linguistics, 2019._
Allyson Ettinger. What bert is not: Lessons from a new suite of psycholinguistic diagnostics
for language models. Transactions of the Association for Computational Linguistics, 8:
34–48, 2020.
Maxwell Forbes, Ari Holtzman, and Yejin Choi. Do neural language representations learn
physical commonsense? arXiv preprint arXiv:1908.02899, 2019.
12
-----
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant.
Did aristotle use a laptop? a question answering benchmark with implicit reasoning
strategies. Transactions of the Association for Computational Linguistics, 2021.
Nicolas Gontier, Koustuv Sinha, Siva Reddy, and Chris Pal. Measuring systematic generalization in neural proof generation with transformers. Advances in Neural Information
_Processing Systems, 2020._
Ashim Gupta, Giorgi Kvernadze, and Vivek Srikumar. Bert & family eat word salad:
Experiments with text understanding. In AAAI Conference on Artificial Intelligence,
2021.
Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. The argument
reasoning comprehension task: Identification and reconstruction of implicit warrants. In
_Conference of the North American Chapter of the Association for Computational Lin-_
_guistics, 2018._
Michael Hahn. Theoretical limitations of self-attention in neural sequence models. Trans_actions of the Association for Computational Linguistics, 2020._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the
math dataset. arXiv preprint arXiv:2103.03874, 2021.
Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson,
Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for
autoregressive generative modeling. arXiv preprint arXiv:2010.14701, 2020.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation,
1997.
Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, and
Aaron Courville. Understanding by understanding not: Modeling negation in language
models. In Conference of the North American Chapter of the Association for Computa_tional Linguistics, 2021._
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos QA: Machine
reading comprehension with contextual commonsense reasoning. In Conference on Em_pirical Methods in Natural Language Processing and the International Joint Conference_
_on Natural Language Processing, 2019._
Shanshan Huang and Kenny Q Zhu. Statistically profiling biases in natural language reasoning datasets and models. arXiv preprint arXiv:2102.04632, 2021.
Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, and Xiaodan Liang. Dagn: Discourseaware graph network for logical reasoning. In Conference of the North American Chapter
_of the Association for Computational Linguistics, 2021._
13
-----
Nora Kassner and Hinrich Sch¨utze. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Annual Meeting of the Association for
_Computational Linguistics, 2020._
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark,
and Hannaneh Hajishirzi. Unifiedqa: Crossing format boundaries with a single qa system.
In Conference on Empirical Methods in Natural Language Processing, 2020.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
Mawps: A math word problem repository. In Conference of the North American Chapter
_of the Association for Computational Linguistics: Human Language Technologies, 2016._
Guillaume Lample and Fran¸cois Charton. Deep learning for symbolic mathematics. In
_International Conference on Learning Representations, 2019._
Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge.
In Conference on the Principles of Knowledge Representation and Reasoning, 2012.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed,
Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence
pre-training for natural language generation, translation, and comprehension. In Annual
_Meeting of the Association for Computational Linguistics, 2019._
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence Paulson. Isarstep: a benchmark for high-level
mathematical reasoning. In International Conference on Learning Representations, 2021.
Tatiana Likhomanenko, Qiantong Xu, Ronan Collobert, Gabriel Synnaeve, and Alex Rogozhnikov. Cape: Encoding relative positions with continuous augmented positional
embeddings. arXiv preprint arXiv:2106.03143, 2021.
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. Birds have four legs?! numersense: Probing numerical commonsense knowledge of pre-trained language models.
In Conference on Empirical Methods in Natural Language Processing, 2020.
Jieyu Lin, Jiajie Zou, and Nai Ding. Using adversarial attacks to reveal the statistical bias
in machine reading comprehension models. arXiv preprint arXiv:2105.11136, 2021.
Hanmeng Liu, Leyang Cui, Jian Liu, and Yue Zhang. Natural language inference in context–
investigating contextual reasoning over long texts. In AAAI Conference on Artificial
_Intelligence, 2020a._
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. Logiqa:
A challenge dataset for machine reading comprehension with logical reasoning. arXiv
_preprint arXiv:2007.08124, 2020b._
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy,
Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized
bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
14
-----
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark. In AAAI
_Conference on Artificial Intelligence, 2021._
Robin Manhaeve, Sebastijan Dumanˇci´c, Angelika Kimmig, Thomas Demeester, and Luc
De Raedt. Deepproblog: Neural probabilistic logic programming. Advances in Neural
_Information Processing Systems, 2018._
R Thomas McCoy, Junghyun Min, and Tal Linzen. Berts of a feather do not generalize
together: Large variability in generalization across models with similar test set performance. In Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks
_for NLP, 2019a._
Tom McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing
syntactic heuristics in natural language inference. In Annual Meeting of the Association
_for Computational Linguistics, 2019b._
William Merrill, Yoav Goldberg, Roy Schwartz, and Noah A Smith. On the power of
saturated transformers: A view from circuit complexity. arXiv preprint arXiv:2106.16213,
2021.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and
developing english math word problem solvers. In Annual Meeting of the Association for
_Computational Linguistics, 2020._
Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, and Tim
Rockt¨aschel. Learning reasoning strategies in end-to-end differentiable proving. In Inter_national Conference on Machine Learning, 2020._
Kanishka Misra, Allyson Ettinger, and Julia Rayz. Exploring bert’s sensitivity to lexical
cues using tests from semantic priming. In Conference on Empirical Methods in Natural
_Language Processing, 2020._
Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, and Dan Roth. Torque:
A reading comprehension dataset of temporal ordering questions. In Conference on Em_pirical Methods in Natural Language Processing, 2020._
Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In Annual Meeting of the Association for Computational Linguistics,
2019.
Siru Ouyang, Zhuosheng Zhang, and Hai Zhao. Fact-driven logical reasoning. arXiv preprint
_arXiv:2105.10334, 2021._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve
simple math word problems? In Conference of the North American Chapter of the
_Association for Computational Linguistics, 2021._
15
-----
Fabio Petroni, Tim Rockt¨aschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang
Wu, and Alexander Miller. Language models as knowledge bases? In Conference on Em_pirical Methods in Natural Language Processing and the International Joint Conference_
_on Natural Language Processing, 2019._
Thang M Pham, Trung Bui, Long Mai, and Anh Nguyen. Out of order: How important
is the sequential order of words in a sentence in natural language understanding tasks?
_arXiv preprint arXiv:2012.15180, 2020._
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem
proving. arXiv preprint arXiv:2009.03393, 2020.
Lianhui Qin, Aditya Gupta, Shyam Upadhyay, Luheng He, Yejin Choi, and Manaal Faruqui.
Timedial: Temporal commonsense reasoning in dialog. arXiv preprint arXiv:2106.04571,
2021.
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pretrained models for natural language processing: A survey. Science China Technological
_Sciences, 2020._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.
Language models are unsupervised multitask learners. OpenAI blog, 2019.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael
Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning
with a unified text-to-text transformer. Journal of Machine Learning Research, 2020a.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael
Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning
with a unified text-to-text transformer. Journal of Machine Learning Research, 2020b.
Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. Getting closer to
ai complete question answering: A set of prerequisite real tasks. In AAAI Conference on
_Artificial Intelligence, 2020a._
Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in bertology: What we know
about how bert works. Transactions of the Association for Computational Linguistics,
2020b.
Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, and Mohit Bansal. Prover: Proof
generation for interpretable reasoning over rules. In Conference on Empirical Methods in
_Natural Language Processing, 2020._
Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, and Yoshua
Bengio. Do neural dialog systems use the conversation history effectively? an empirical
study. In Annual Meeting of the Association for Computational Linguistics, 2019.
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning
_Representations, 2019._
16
-----
Viktor Schlegel, Marco Valentino, Andr´e Freitas, Goran Nenadic, and Riza Theresa BatistaNavarro. A framework for evaluation of machine reading comprehension gold standards.
In Language Resources and Evaluation Conference, 2020.
Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-Lin Wu, Xuezhe Ma, and
Nanyun Peng. Com2sense: A commonsense reasoning benchmark with complementary
sentences. arXiv preprint arXiv:2106.00969, 2021.
Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. Clutrr:
A diagnostic benchmark for inductive reasoning from text. In Conference on Empirical
_Methods in Natural Language Processing and the International Joint Conference on Nat-_
_ural Language Processing, 2019._
Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark. Proofwriter: Generating implications, proofs, and abductive statements over natural language. _arXiv preprint_
_arXiv:2012.13048, 2020._
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa:
A question answering challenge targeting commonsense knowledge. In Conference of the
_North American Chapter of the Association for Computational Linguistics, 2019._
Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, and Jonathan Berant. Leap-ofthought: Teaching pre-trained models to systematically reason over implicit knowledge.
_arXiv preprint arXiv:2006.06609, 2020._
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N
Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances
_in Neural Information Processing Systems, 2017._
Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. Does it make
sense? and why? a pilot study for sense making and explanation. In Annual Meeting of
_the Association for Computational Linguistics, 2019._
Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. Entailment as few-shot
learner. arXiv preprint arXiv:2104.14690, 2021a.
Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming
Zhou, and Nan Duan. Logic-driven context extension and data augmentation for logical
reasoning of text. arXiv preprint arXiv:2105.03659, 2021b.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability
judgments. Transactions of the Association for Computational Linguistics, 2019.
Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for
sentence understanding through inference. In Conference of the North American Chapter
_of the Association for Computational Linguistics, 2018._
Patrick Xia, Shijie Wu, and Benjamin Van Durme. Which* bert? a survey organizing
contextualized encoders. In Conference on Empirical Methods in Natural Language Pro_cessing, 2020._
17
-----
Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, and Xuedong Huang.
Fusing context into knowledge graph for commonsense reasoning. In Annual Meeting of
_the Association for Computational Linguistics, 2021._
Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras,
Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, and Doug Downey. G-daug: Generative
data augmentation for commonsense reasoning. In Conference on Empirical Methods in
_Natural Language Processing, 2020._
Andrew Yates, Rodrigo Nogueira, and Jimmy Lin. Pretrained transformers for text ranking:
Bert and beyond. In ACM International Conference on Web Search and Data Mining,
2021.
Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. Reclor: A reading comprehension
dataset requiring logical reasoning. In International Conference on Learning Representa_tions, 2020._
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial
dataset for grounded commonsense inference. In Conference on Empirical Methods in
_Natural Language Processing, 2018._
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag:
Can a machine really finish your sentence? In Annual Meeting of the Association for
_Computational Linguistics, 2019._
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin
Van Durme. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018.
Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin,
Ming Zhou, and Nan Duan. Ar-lsat: Investigating analytical reasoning of text. arXiv
_preprint arXiv:2104.06598, 2021._
Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. “going on a vacation” takes
longer than “going for a walk”: A study of temporal commonsense understanding. In
_Conference on Empirical Methods in Natural Language Processing and the International_
_Joint Conference on Natural Language Processing, 2019._
Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth.
Temporal reasoning on implicit events from distant supervision. In North American
_Chapter of the Association for Computational Linguistics, 2021._
Pei Zhou, Rahul Khanna, Bill Yuchen Lin, Daniel Ho, Xiang Ren, and Jay Pujara. Can bert
reason? logically equivalent probes for evaluating the inference capabilities of language
models. arXiv preprint arXiv:2005.00782, 2020a.
Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. Evaluating commonsense in
pre-trained language models. In AAAI Conference on Artificial Intelligence, 2020b.
18
-----
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio
Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In IEEE international conference on
_computer vision, 2015._
**Appendix**
This appendix describes the basic transformer-based models (Appendix A), the datasets
mentioned above that are particularly challenging (Appendix B), and the performances of
the models (Appendix C).
**Appendix A. Models**
**A.1 The Transformer Model**
The transformer model [Vaswani et al., 2017] is a neural network architecture that is based
entirely on the attention mechanism. Thereby, it eliminates the need for recurrent computation used by LSTMs [Hochreiter and Schmidhuber, 1997] and GRUs [Cho et al., 2014]. Also,
it easily learns long-range dependencies, and it allows the computation to be performed in
parallel. The transformer achieved state of the art results in machine translation.
**A.2 BERT**
BERT [Devlin et al., 2019] is a pre-trained language mode that consists of a stack of transformer blocks. BERT was pre-trained on two large corpora: The Books Corpus [Zhu et al.,
2015] (800M words) and Wikipedia (2500M words). BERT was pre-trained on two tasks:
Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).
The task of MLM consists of training the model to predict a masked word given the other
words in a sentence. The dataset is constructed by choosing 15% of its tokens to be masked,
and by replacing 80% of them with the [MASK] token, 10% with a random token, and 10%
with the original token. The BERT model is trained to predict the masked word based on
the context of the sentences. The task of NSP consists of training the model to learn the
relationship between two sentences by taking as input two sentences and predicting if one
sentence follows the other.
The BERT-base model consists of 12 layers of transformer blocks with a hidden size of
768. It has 110M parameters. The BERT-large model consists of 24 layers of transformer
blocks with a hidden size of 1024, and has 340M parameters.
**A.3 RoBERTa**
RoBERTa [Liu et al., 2019] is an improved BERT model, which achieves better results than
BERT on different NLP tasks. The model was pre-trained longer and on a larger dataset
than BERT, by including three more datasets, namely the CommonCrawl News dataset of
63 million articles, the Web text corpus, and the Stories Corpus from Common Crawl. The
authors pre-trained the model on longer sequences, removed the NSP task, and introduced
dynamic masking (a masking technique to dynamically change the masked tokens after each
training epoch). Both variants of RoBERTa, RoBERTa-base and RoBERTa-large, have an
19
-----
architecture that is similar to the one of BERT-base and BERT-large, respectively, but use
more parameters.
**A.4 BART**
BART [Lewis et al., 2019] is a denoising autoencoder for pre-training sequence-to-sequence
models. The model is composed of an encoder and a decoder. The encoder is a bidirectional
encoder such as BERT, and the decoder is GPT, an autoregressive decoder. Different
pre-training objectives were tested, such as token masking, token infilling, and sentence
permutations. The effectiveness of such pre-training objectives depends on the end tasks.
The BART-base model consists of 6 encoders and 6 decoders. It has 140M parameters. The
BART-large model consists of 12 encoders and 12 decoders, and has 400M parameters.
**A.5 GPT-N Models**
GPT is a neural language model that is pre-trained to predict the next word given all the
previous words. The model consists of a stack of transformer decoders. GPT exists in
different versions: GPT-1 was the original model. GPT-2 is a GPT model with 1.5 billion
parameters, while GPT-3 can have up to 175 billion parameters.
**A.6 T5**
T5 is a text-to-text transfer transformer [Raffel et al., 2020b]. It uses a unified architecture
that can be trained on a variety of NLP problems. Each problem is formulated as a text-totext approach. It consists of an encoder-decoder architecture that is similar to the BERT
model. However, T5 uses a causal self-attention and a fill-in-the-blank denoising pre-training
objective. There are different T5 models with different sizes: The smallest version of T5
consists of 12 layers of transformer blocks with a hidden size of 512. It has 60M parameters.
The largest T5 model consists of 24 layers of transformer blocks with a hidden size of 1024.
It has 11B parameters.
**Appendix B. Datasets**
**B.1 ParaRules**
ParaRules [Clark et al., 2020] is a dataset that serves to evaluate deductive reasoning capabilities in language models. It consists of 40K synthetic questions. These instances were
generated for 2K paraphrased facts, which were acquired by crowdworkers. Here is an example:
**Context:** Harry can do magic.
Muggles cannot do magic.
If a person can do magic then they can vanish.
Mr Dursley is a Muggle.
**Question:** Can Harry vanish ?
**Expected answer:** True
20
-----
**B.2 ProtoQA**
ProtoQA [Boratko et al., 2020] is a question-answer dataset that is designed to evaluate
commonsense reasoning capabilities in prototypical situations. A prototypical situation is
represented as a question that can have multiple common answers. Here is an example with
its possible answers:
**Question:** Name a profession where you might be fired if you lost your voice
**Expected answers:** Radio host, Teacher
The dataset is split into 9762 questions for training, 52 for validation, and 102 for testing.
**B.3 COM2SENSE**
The COM2SENSE dataset [Singh et al., 2021] was designed to evaluate the commonsense reasoning capabilities in language models. The dataset includes 4K natural language
true/false statements, with each sample paired with its complementary counterpart. The
task consists of asking a model to judge whether a given sentence is logically coherent or
not:
**Context:** Expecting ten fish in the net, Sammy was
thrilled to see forty fish swimming in there.
**Expected answer:** Coherent
The authors created a counterpart to this question by modifying a few words:
**Context:** Expecting ten fish in the net, Sammy was
thrilled to see five fish swimming in there.
**Expected answer:** Incoherent
**B.4 CODAH**
The CODAH dataset [Chen et al., 2019] was designed to target the weaknesses of the stateof-the-art language models. The dataset was adversarially-constructed by allowing crowd
workers to receive feedback from a pre-trained model and use this information to create
challenging commonsense questions. The dataset consists of 2801 questions. The following
is an example from the dataset:
**Context:** A man on his first date wanted to break the ice. He
**Choices:** (A) drank all of his water.
(B) threw the ice at the wall.
(C) looked at the menu.
**(D) made a corny joke.**
21
-----
**B.5 CATS**
The CATs dataset [Zhou et al., 2020b] reframes 6 different commonsense reasoning benchmarks to evaluate pre-trained transformer-based models on word-level and sentence-level
tasks. These 6 different benchmarks are Sense Making [Wang et al., 2019], the Winograd
Schema Challenge [Levesque et al., 2012], SWAG [Zellers et al., 2018], HellaSwag [Zellers
et al., 2019], Sense Making with Reasoning [Wang et al., 2019], and the Argument Reasoning Comprehension Task [Habernal et al., 2018]. Also, they created a new task called
Conjunction Acceptability to evaluate logical commonsense-knowledge in language models.
Here is an example from CATs:
**Choices:** (A) Money can be used for buying cars.
(B) Money can be used for buying stars.
**Expected Answer:** (A)
Here, the model has to differentiate between statements that make sense and statements
that don’t.
**B.6 PIQA**
The PIQA dataset [Bisk et al., 2020] is a benchmark to evaluate the physical commonsense
capabilities of language models. It consists of a set of questions, where each question has
two possible answers, but only one is correct. The training set has around 16000 instances,
while the validation set and the testing sets have around 2000 and 3000 examples, respectively. The following is an instance of the dataset:
**Context:** To make a hard shelled taco,
**Choices:** (A) put seasoned beef, cheese, and lettuce onto the hard shell.
**(B) put seasoned beef, cheese, and lettuce into the hard shell.**
**B.7 TIMEDIAL**
TIMEDIAL [Qin et al., 2021] is a dataset to test temporal commonsense reasoning in dialogs. It consists of 1.1K dialogs represented as multiple-choice cloze tasks. This task
requires deep reasoning capabilities, such as performing different arithmetic operations over
temporal expressions with a need for commonsense reasoning. Here is an example:
**Context:** A: How long do you want the house? All summer ?
B: No, just for six weeks.
A: I’m afraid I can only rent it for two months.
B: My holiday is only, [MASK] but I think my brother
and his family would take it for the other two weeks .
**Choices:** (A) six decades
**(B) 45 days**
**(C) six weeks**
(D) two months
22
-----
**B.8 TORQUE**
The TORQUE dataset [Ning et al., 2020] is a reading comprehension dataset concerning
temporal ordering. It consists of 21K questions, split into 80% for training, 5% for validation, and 15% for testing. Here is an example:
**Context:** Heavy snow is causing disruption to transport across the UK,
with heavy rainfall bringing flooding to the south-west of England.
Rescuers searching for a woman trapped in a landslide at her home
in Looe, Cornwall, said that had found a body.
**Question:** What events have already finished?
**Expected answers:** searching, trapped, landslide, said, found
**B.9 MCTACO**
The MCTACO dataset [Zhou et al., 2019] was designed to evaluate temporal commonsense
in transformer-based models. The dataset consists of 13K questions, split into 30% for
development and 70% for testing. Here is an example:
**Context:** Mr. Barco has refused US troops or advisers but has accepted US military aid.
**Question:** What happened after Mr. Barco accepted the military aid?
**Choices:** (A) The aid was denied
**(B) He received the aid**
**(C) Things started to progress**
In the above example, two answers are correct to the same questions.
**B.10 TRACIE**
TRACIE [Zhou et al., 2021] is a temporal reasoning textual entailment dataset. It consists
of 5.5K instances, split into 20% for training and 80% for testing. Each instance has a
hypothesis that is querying either about the start time of an event or about the end time
of an event. Here is an example:
**Premise:** Tom needed to get braces. He was afraid of them.
The dentist assured him everything would be fine.
Tom had them on for awhile. Once removed he felt it was worth it.
**Hypothesis:** Tom avoids foods he can’t eat with braces
starts before the braces are removed.
**Expected answer:** Entailment
23
-----
**B.11 RICA**
RICA [Zhou et al., 2020a] is a dataset of cloze questions that can be used to assess commonsense reasoning capabilities. To build this dataset, the authors first created commonsense
axioms such as ”Larger objects can contain smaller objects” and then translated them into
commonsense statements. RICA consists of 16000 commonsense statements, split into 80%
for training, 10% for validation, and 10% for testing. The task is to guess the comparator,
which is masked in the input sentence, as here:
**Context:** A prindag is smaller than a flurberg, so a flurberg
is [MASK] likely to contains a prindag.
**Expected answer:** more
**B.12 LogiQA**
LogiQA [Liu et al., 2020b] is a multiple-choice machine reading comprehension dataset.
This task assesses the logical deductive ability of language models for the case where the
correct answer to a question is not explicitly included in the passage. The corpus includes
8678 paragraph-question pairs translated from the National Civil Servants Examination of
China. Each question has one correct answer from a choice of four possible answers, as
here:
**Context:** David knows Mr. Zhang’s friend Jack, and Jack knows David’s friend Ms. Lin.
Everyone of them who knows Jack has a master’s degree,
and everyone of them who knows Ms. Lin is from Shanghai.
**Question:** Who is from Shanghai and has a master’s degree?
**Choices:** (A) David (B) Jack (C) Mr. Zhang (D) Ms. Lin
The dataset is split into 80% for training, 10% for validation, and 10% for testing.
**B.13 ReCLOR**
ReCLOR [Yu et al., 2020] is a multiple-choice machine reading comprehension dataset that
tests logical reasoning. The corpus consists of questions retrieved from standardized exams
such as LSAT and GMAT. It consists of 6138 paragraph-question pairs. Here is an example:
24
-----
**Context:** Heavy rains during Centralia’s corn planting season prevented some farmers
there from planting corn. It is now the planting season for soybeans,
another of Centralia’s principal crops, and those fields originally intended
for corn are dry enough for planting. Nonetheless, even though soybean prices
are unusually high at present, the farmers will leave most of these fields empty
rather than plant them with soybeans, since
**Question:** Which of the following most logically completes the passage below ?
**Choices:** (A) some Centralian farmers anticipate serious financial losses due
to the extremely wet spring planting season.
(B) the extensive rains have led to an increase in the price of corn.
**(C) chemicals that were used to prepare the fields for corn**
**planting would stunt the growth of soybeans.**
(D) many centralian farmers grow both corn and soybeans.
To adequately evaluate a model without allowing it to take advantage of artifacts in the
corpus, the authors split the testing set into two sets: the EASY set where the instances
are biased and the HARD set where they are not.
**B.14 AR-LSAT**
AR-LSAT [Zhong et al., 2021] is a machine reading comprehension dataset that can be
used to evaluate logical reasoning capabilities. The dataset was constructed by selecting
the analytical reasoning section of 90 LSAT exams from 1991 to 2016. It consists of 2046
multiple-choice questions. Here is an example:
**Context:** A professor must determine the order in which five of her students
— Fernando, Ginny, Hakim, Juanita, and Kevin —
will perform in an upcoming piano recital.
Each student performs one piece, and no two performances overlap.
The following constraints apply:
Ginny must perform earlier than Fernando.
Kevin must perform earlier than Hakim and Juanita.
Hakim must perform either immediately before or immediately after Fernando
**Question:** If Juanita performs earlier than Ginny, then which one of the following could be true?
**Choices:** (A) Fernando performs fourth.
**(B) Ginny performs second.**
(C) Hakim performs third.
(D) Juanita performs third.
(E) Kevin performs second.
**B.15 QuAIL**
QuAIL [Rogers et al., 2020a] is a machine reading comprehension dataset. It assesses verbal
reasoning capabilities across 4 different domains: fiction, news, blogs, and user stories. The
25
-----
corpus consists of 15K questions for 800 passages. The testing dataset comprises 15% of
the questions, and different approaches were evaluated. Due to the size of the passages, we
cannot show an example here.
**B.16 StrategyQA**
StrategyQA [Geva et al., 2021] is a boolean QA benchmark that can be used to evaluate
a model’s reasoning capabilities. The model has to perform implicit decomposition of the
question into reasoning steps in order to answer a question correctly. Here is an example:
**Question:** Did Aristotle use a laptop?
**Implicit Reasoning Steps:** 1. When did Aristotle live?
2. When was the laptop invented?
3. Is #2 before #1?
**Expected answers:** No
The dataset is composed of 2780 instances, where each instance consists of a strategy
question, a decomposition into reasoning steps, and Wikipedia paragraphs that answer
each reasoning step.
**B.17 ConTROL**
ConTRoL [Liu et al., 2020a] is a dataset of 8325 context-hypothesis pairs to evaluate a
models’ contextual reasoning capabilities over long texts. It is a passage-level textual entailment task that consists of context-hypothesis pairs. Here is an example:
**Premise:** Ten new television shows appeared during the month of September.
Five of the shows were sitcoms, three were hourlong dramas, and two were news-magazine shows.
By January, only seven of these new shows were still on the air.
Five of the shows that remained were sitcoms.
**Hypothesis:** At least one of the shows that were cancelled was an hour-long drama.
**Expected answer:** Entailment
**B.18 CLUTRR**
CLUTRR [Sinha et al., 2019] is a benchmark dataset to evaluate the inductive reasoning
capabilities of models. The task requires a model to infer the kinship between characters
in short stories. Here is an example:
**Context:** Kristin and her son Justin went to visit her mother Carol
on a nice Sunday afternoon. They went out for a movie
together and had a good time.
**Question:** How is Carol related to Justin ?
**Expected answer:** Carol is the grandmother of Justin.
26
-----
CLUTRR is a synthetic dataset. For each experiment, 5000 instances were generated for
training and 100 for testing.
**B.19 SVAMP**
SVAMP [Patel et al., 2021] is a dataset that was created by varying instances of ASDiv-A
(a dataset of one-unknown arithmetics problems). It contains 1000 tasks. To solve these
tasks, a model needs a certain level of reasoning capability. It also has to be sensitive to
the question. Here is an example:
**Context:** Jack had 8 pens and Mary had 5 pens.
Mary gave 3 pens to Jack.
**Question:** How many pens does Jack have now?
**Expected answer:** 8 + 3 = 11
**B.20 MATH**
MATH [Hendrycks et al., 2021] is a dataset that consists of 12500 competition mathematics
problems. It is split into 7500 problems for training and 5000 for testing. Each instance is a
description of the problem with a question, the step-by-step solution, and the final answer.
Here is an example from the dataset:
**Context:** Tom has a red marble, a green marble, a blue marble,
and three identical yellow marbles.
How many different groups of two marbles can Tom choose?
**Expected answer:** There are two cases here:
either Tom chooses two yellow marbles (1 result),
or he chooses two marbles of different colors (( [4]
2 [)= 6 results).]
The total number of distinct pairs of marbles Tom can choose is
1 + 6 = 7
**B.21 IsarSTEP**
IsarStep [Li et al., 2021] is a mathematical reasoning benchmark. It was built by collecting
formal proofs written in Isabelle from the Archive of Formal Proofs and from the standard
library of Isabelle/HOL. In this task a model needs to predict the missing intermediate
proposition in a proof. Here is an example for the proof that _√2 is not a rational number,_
where the missing intermediate proposition is a is even:
**Context:** 2b[2] = a[2] _⇒_ [Missing Proposition] ⇒∃ _c ∈_ Z. a = 2c
**Expected answer:** a is even
The dataset is split into 820K examples for training, 5000 for validation, and 5000 for
testing.
27
-----
**B.22 HOList**
HOList [Bansal et al., 2019] is an automated theorem proving dataset for higher-order logic.
The benchmark includes 29465 theorems and their proofs, split into 60% for training, 20%
for validation, and 20% for testing. Two tasks can be evaluated in HOList: (1) proving
each theorem in the dataset, and (2) predicting the tactic and the arguments of the tactic
that were used in the human proof. A tactic can be a previously proven theorem or a list
of previously proven theorems.
**B.23 MetaMathStep**
MetaMathStep [Polu and Sutskever, 2020] is a benchmark for automated theorem proving.
The dataset evaluates the capabilities of a language model to generate a proof for a given
statement. The dataset contains 3 million proof steps for around 38000 theorems, which
are split into 36K for training, 1K for validation, and 1K for testing.
**Appendix C. Model Performances on Selected Datasets**
|Type of Reasoning|Dataset|Model|Performance|Metric|
|---|---|---|---|---|
|Horn Reasoning (Section: 3.1)|ParaRules|RoBERTa|98.8%|Accuracy|
||ParaRules|PROVER (Based on RoBERTa)|98.4%|Accuracy|
||ParaRules|PRoofWriter (Based on T5)|99.1%|Accuracy|
|Commonsense Reasoning (Section: 3.2)|ProtoQA|GPT-2|71.2%|Accuracy|
||CODAH|BERT|69.5%|Accuracy|
||CATS|RoBERTa|67%|Accuracy|
||PIQA|RoBERTa|77%|Accuracy|
|Event-based Commonsense Reasoning (Section: 3.3)|MCTACO|BERT|69.9%|F1-Score|
||TRACIE|SymTime (Based on T5)|80.6%|F1-Score|
||TORQUE|RoBERTa|75.2%|F1-Score|
|Implicit Reasoning (Section: 3.4)|LogiQA|RoBERTa|35.31%|Accuracy|
||LogiQA|DAGN|39.32%|Accuracy|
||LogiQA|FOCAL REASONER (Based on RoBERTa)|40.25%|Accuracy|
||ReClor EASY|RoBERTa|75.50%|Accuracy|
||ReClor HARD|RoBERTa|54.3%|Accuracy|
||ReCLor EASY|DAGN (Based on RoBERTa)|75.91%|Accuracy|
||ReClor HARD|DAGN (Based on RoBERTa)|44.46%|Accuracy|
||ReCLor EASY|FOCAL REASONER (Based on RoBERTa)|77.05%|Accuracy|
||ReClor HARD|FOCAL REASONER (Based onRoBERTa)|44.64%|Accuracy|
||ReClor EASY|RoBERTa with Logical Data Augmentation|81.4%|Accuracy|
||ReClor HARD|RoBERTa with Logical Data Augmentation|62.5%|Accuracy|
||AR-LSAT|RoBERTa|23.1%|Accuracy|
||QuAIL|BERT|55.9%|Accuracy|
||StrategyQA|RoBERTa|66%|Accuracy|
||ConTRoL|BART|60.95%|Accuracy|
||CLUTTR|GAT|77%|Accuracy|
||RICA|GPT-2|50%|Accuracy|
||RICA|RoBERTa|50%|Accuracy|
|Mathematical Reasoning (Section: 3.5)|SVAMP|RoBERTa Embeddings + Graph2Tree|65%|Accuracy|
||MATH|GPT-2|6.9%|Accuracy|
||IsarStep|Hierarchical Transformer|22.8%|Accuracy|
Table 1: Model Performances on Selected Datasets
28
-----
| [
"Chadi, Helwe",
"Chloé, Clavel",
"Fabian M., Suchanek"
] | 2021-06-22T00:00:00 | null | false | 45 | 1 | null | https://openreview.net/forum?id=Ozp1WrgtF5_ | null | https://www.semanticscholar.org/paper/8424082e3bf4792462eb112d7ebcecf5b0dc3613 |
Holophrasm: a neural Automated Theorem Prover for higher-order logic | I propose a system for Automated Theorem Proving in higher order logic using deep learning and eschewing hand-constructed features. Holophrasm exploits the formalism of the Metamath language and explores partial proof trees using a neural-network-augmented bandit algorithm and a sequence-to-sequence model for action enumeration. The system proves 14% of its test theorems from Metamath's set.mm module. | Holophrasm exploits the formalism of the Metamath language and explores partial proof trees using a neural-network-augmented bandit algorithm and a sequence-to-sequence model for action enumeration. | ## Holophrasm: a neural Automated Theorem Prover for higher-order logic
**Daniel P.Z. Whalen**
Stanford University
[email protected]
**Abstract**
I propose a system for Automated Theorem Proving in higher order logic using deep learning and eschewing hand-constructed features. Holophrasm exploits
the formalism of the Metamath language and explores partial proof trees using
a neural-network-augmented bandit algorithm and a sequence-to-sequence model
for action enumeration. The system proves 14% of its test theorems from Metamath’s set.mm module.
**1** **Introduction**
Formalized mathematics arises from a desire for rigor. A formal proof of a theorem is a proof that is
complete: every step follows directly from previous steps and known theorems in an algorithmicallyverifiable manner.
A number of corpora have been developed for formalized mathematics in various formalisms. Large
datasets of formal proofs include Metamath [1], the Mizar Mathematical Library [2], Flyspeck [3],
the Archive of Formal Proofs [4], the Coq standard library [5], and the HOL Light library [6].
These databases cover wide swaths of mathematics. The Metamath set.mm module, for example,
is a collection of theorems and proofs constructing mathematics from ZFC. The module includes a
number of important theorems, including Zorn’s Lemma from set theory, the theorem of quadratic
reciprocity from number theory, and Sylow’s theorems from group theory.
The time cost of formalizing proofs is substantial, and so tools to assist in construction of the formal
proofs have arisen. Interactive Theorem Provers automate the technical steps of theorem-proving,
leaving the creative steps to the user. Over time, the techniques from Interactive Theorem Provers
have been extended to Automated Theorem Provers, complete non-interactive tools for the generation of formal proofs. Rapid advances are being made in Automated Theorem Proving, and recent
systems now permit proofs of 40% of the theorems in the Mizer Mathematical Library [7]. These
proof systems generally consist of multiple modules, one of which is Premise Selection: the identification of relevant axioms and theorems. Premise Selection has shown promise as a target for
machine-learning techniques [8–10] and more recently deep learning [11] — the first application of
deep learning to Automated Theorem Proving.
While most of the current research has focused on the Mizar Mathematical Library, I demonstrate
that the tree structure of Metamath proofs is exploitable by modern tree exploration techniques.
Holophrasm[1] takes a novel approach to Automated Theorem Proving. The system uses a variant
of UCT [12], an algorithm for the tree-based multi-armed bandit problem, to search the space of
partial proof trees of a Metamath theorem. Recent developments in machine learning have made
such searches accessible. Action enumeration is made viable by sequence-to-sequence models [13].
In parallel, algorithms developed for go-playing AIs describe how neural networks can be used to
1Here, “holophrasm” is the notion that a complicated idea can be conveyed by a simple theorem-vector.
-----
guide tree exploration [14, 15]. Those techniques have been adapted here to create a complete,
non-interactive system for proving Metamath propositions.
**2** **The Metamath Format and Data Set**
The Metamath language is designed for automated theorem verification, utilizing metatheorems and
proper substitutions as the standard proof step. The exact specification of the language is given in
section 4 of [1], but the relevant details are summarized below
**2.1** **Metatheorems**
A theorem in the Metamath database is a proposition if it has a proof or an axiom if it does not.
The notion of axiom here is general and includes what are traditionally known as axioms, but also
includes definitions and the production rules for expressions.
We separate the axioms and propositions in set.m by their type, which is “set,” “class,” “wff,”
or “⊢”. Axioms of non-“⊢”-type describe the production rules for a context-free grammar with
the types as syntactic categories. An expression of the given type is a string of the corresponding
syntactic category. These axioms along with the free variables as additional terminal symbols provide a unique parse tree for every expression, and they will be referred to as constructor axioms.
Henceforth I will conflate the notion of an expression and its parse tree.
Propositions of type “set,” “class,” and “wff,” are ignored to maintain uniqueness of the parse trees.
Axioms and propositions of “⊢” type are assertions that an expression of “wff”-type is true, that the
expression can be proved from the axioms and given hypotheses. These theorems will be used as
nodes in proof trees.
A theorem T of “ ⊢ ”-type consists of a number of elements
_• aT, an assertion, which is an expression of “wff”-type._
_• eT, a set of hypotheses, each an expression of “wff”-type._
_• fT, a set of free variables that appear in the assertion and hypotheses and a type for each._
_• dT, a set of unordered pairs of disjoint variables from fT ._
The disjoint variables satisfy (x, x) ̸∈ _dT for all x ∈_ _fT, and represent pairs of variables which can_
not share any variables after a proper substitution.
**2.2** **Proper Substitution**
Consider a context proposition C that is to be proven and an expression a of “wff”-type, either the
assertion of C or an intermediate step in the proof of C.
An application of particular theorem, T, to prove a consists of a set of substitutions, φ, into the free
variables fT . In these substitutions, earch variable is replaced by an expression of the same type
built out of the constructor axioms extended by additional terminal symbols for the free variables of
_C. The application requires that the assertion of T after substitution matches a, that is φ(aT ) = a._
By performing this process we reduce the problem of proving a to the problem of proving all of the
hypotheses φ(eT ). This process is illustrated in figure 2.1.
The disjointness property adds a restriction on the allowable substitutions: for every pair (x, y) ∈
_dT, for every free variable z ∈_ _φ(x)_ _∩_ _fC, and every free variable w ∈_ _φ(y)_ _∩_ _fC it must be the case_
that (z, w) _dC._
_∈_
For a fixed expression, context, and theorem, substitutions that satisfy these properties are called
_viable. For a fixed expression and context, a theorem is called viable if it permits a viable set of_
substitutions.
I divide the variables in fT into two types. Constrained variables are variables that appear in aT .
_Unconstrained variables are variables that appear in some hypothesis h ∈_ _eT but not aT . The substi-_
tutions in φ are called constrained substitutions or unconstrained substitutions if they apply to con
-----
strained or unconstrained variables respectively. Constrained substitutions are notable in that, given
_a and T_, the constrained substitutions are exactly those fixed by the requirement that φ(aT ) = a.
**2.3** **Proof Trees**
The application of a theorem proves an expression, but
also provide a set of hypotheses which must be proven in
turn. This naturally gives a proof a tree structure. The assertion of the theorem is the root node, and its hypotheses
are leaves, because they are assumed to be true without
proof. Here I define the notion of a proof tree, but I modify the natural structure slightly to permit compatibility
with the notion of a partial proof tree introduced in section 3.1.
A proof tree of an expression a of “wff”-type in context
_C is a bipartite tree with two types of nodes, red nodes_
and blue nodes. Red nodes are labeled by an an expression of “wff”-type, which is an intermediate step in the
proof, and the root node is labeled by a. Unless its label
is a hypothesis in eC, in which case the node is a leaf, red
nodes always have exactly one child, a blue node. Blue
nodes are labelled by a pair (T, φ) of a theorem and viable
substitutions for that theorem into the parent expression.
Blue nodes have one child red node, φ(h), for each hypothesis h ∈ _eT . If such a tree exists, it is a proof of_
_a._
**3** **Proof Tree Exploration**
The problem I wish to solve is as follows: given a context theorem C, find a proof tree for that theorem’s assertion. The algorithm does so by considering a supertree of
potential proofs steps and by using tree exploration techniques to search for the subtree that is a valid proof-tree.
The algorithm will refer to three neural networks, payoff,
**generative, and predictive which are described in sec-**
tion 4.
**3.1** **Partial proof trees**
A partial proof tree is an extension of the the notion of
a proof tree The following changes are made: Red nodes
are permitted to have no children even if they are not hypotheses. Red nodes are permitted to have multiple multiple child blue nodes, each a potential approach for proving the expression.
A red node is said to be proven if any of its children have
been proven or if its label is one of the initial hypotheses.
A blue node is said to the proven if all of its children have
been proven.
**Modus Ponens (ax-mp)**
hypotheses: `` φ
_`` φ φ )_
assertion: _`_
**Double Modus Ponens (mp2b)**
hypotheses: `` ↵
_`` ↵_ _)) β_
_`` β β ) ) γ_
assertion: _`` γ_
**Proof**
_γ_ assertion
**Modus Ponens**
unconstrained
_φφ ! ! β_ substitution
_γ_ constrained
_!_ substitution
_[β]_ _β[β][ )] ) γ[ γ]_
**Modus Ponens**
_φφ ! ! ↵_
_!! β_ hypotheses
_↵_ _↵[↵]_ _[)]) β[ β]_
Figure 1: Examples of an axiom, ax**mp and a proposition, mp2b.** The
red nodes are expressions, and the blue
nodes each describe a theorem and substitutions that will prove the parent.
The subtree of a proven red node is necessarily a supertree of a valid proof tree for its expression.
In particular, the subtree can be pruned by removing all of the children of red nodes except for one
proven blue child. Such a pruned subtree must be valid proof tree.
-----
**3.2** **Exploration**
Similarly to UCT [12], the algorithm builds a partial proof tree over a series of passes. Each pass
traverses the tree downward. At a red node, the traversal chooses either to create a new child or
to proceed to the highest valuation child blue node. At a blue node, the traversal proceeds to the
worst-performing child. The pass continues until a new child blue node is created, whereupon its
red children are created and valued. The process repeats until the root node has been proven.
In order to to perform this exploration, each node maintains additional state, which is updated whenever the node’s children are updated.
Red nodes, a have an initial payoff, ya which is the output of the payoff network, evaluated as soon
as the node is created. They additionally have a total payoff, xa, which is the the sum of the initial
payoff of the node and the total payoffs of its children, a visit count, na, which is 1 plus the sum of
the visit counts of its children, and an average payoff, which is xa/na.
Blue nodes keep track of their least promising child, which is the unproven child with the lowest
average payoff. The total payoff and visit count of a blue node are the corresponding values of its
least promising child. Blue nodes also have a value vb, which indicates how likely this substitution
is to be applicable and is given by the relevance and generative networks.
**3.3** **Visiting Nodes**
In standard UCT, when the traversal reaches a leaf, the leaf spawns a new child for each available
action at that node, but doing so is impractical in this context. In this variant, the number of actions
available at red nodes can be infinite, since there are infinitely many unconstrained substitutions that
could be made into some theorems. The difficult part of the calculation is determining viable actions
rather than calculating payoff. To this end, we attempt to maintain the number of children of a red
node, a to 3
_⌈_ _[n][a]_ _[⌉][, so that more actions are considered after consecutive visits.]_
When a red node is visited for the first time, the node calculates its initial payoff but does not
generate any children. When the node is visited later, it checks if it has sufficiently many children.
If so the algorithm visits an extant chid as described in section 3.4. If not, the algorithm creates a
new child as described in section 3.5.
When a blue node is visited, it immediately visits its least promising child.
**3.4** **Visiting current children of red nodes**
To determine which child is visited from a red node, each extant child b is assigned a priority,
_xb_
+ β [v][b] + α
_nb_ _nb_
log na
_nb_
for constants β and α. The highest priority child is then visited.
The _n[x][b]b_ [+][ α] logn nb _a_ arises in the standard UCB algorithm as the upper confidence bound and the
correction _n[v]q[b]b_ [encourages exploring propositions with a high probability first [15]. During the ex-]
periment, α and β were assigned the values 1.0 and 0.5 respectively.
**3.5** **Expansion of red nodes**
The children of a red node are constructed by assigning to them a theorem and substitutions. Each
pair b = (T, φ) of theorem T and substitution φ is assigned a value vb = _pbest theorempT_ _pT, best substitutionpT,φ_ [.]
Here pT is the probability that T is the next theorem to apply, as given by the relevance network.
_pbest theorem is correspondingly the probability of the best theorem. For a fixed T_, pT,φ is the probability of those substitutions as given by the generative network. pT,best substitution is correspondingly the
probability of the best substitution. Children of the red node are added from this expansion queue in
decreasing order of value.
-----
Evaluation of the the relevance and generative network are performed in a just-in-time manner,
evaluating relevance during the second visit to a node and evaluating generative only when a previously unconsidered theorem is due to be added as a child.
When a blue node is added to the child in this way, the algorithm immediately visits each of the blue
node’s children once to estimate their payoffs.
**3.6** **Other details**
In practice a few additional changes can be made to make the algorithm more efficient.
**Circularity: Any attempt to create a red node with the same expression as one its ancestors fails:**
the parent blue node is removed from the expansion queue of its parent and the next pair (T, φ) is
added instead.
**Node Death: While the theoretical number of actions from a given red node is infinite, in practice**
the number of actions is limited by the beam search width of the generative network. This may lead
to instances where a red node has no children and its expansion queue is empty. Such a red node
is called dead. A blue node is said to be dead if any of its children are dead. Dead blue nodes are
removed from the graph and their ancestors are subsequently checked for death.
**Multiprocessing: The algorithm can be run efficiently in parallel. When doing so, the different**
threads traverse the proof tree asyncronously. Following [15], the priority function for a blue node b
from a red node is modified by replacing the _n[x][b]b_ [term with] _nb+xbγtb_ [, where][ t][b][ is the number of threads]
currently exploring a descendent of b and γ is a constant, chosen here to be 3.
**Generative length limits: When evaluating the generative network, outputs with more than 75**
tokens in total across all unconstrained substitutions are discarded during the beam search. The
beam search returns no substitutions if all items in the beam reached the size limit. If so, a dummy
child is added to the red node with a payoff of 0 to discourage further exploration of this node
**Last step: When a red node is added when proving the context C, the viable theorems are deter-**
mined. For the viable theorems, T, if there are viable substitutions φ such that φ(eT ) ⊂ _eC then_
that theorem and those substitutions are immediately added as a blue node.
**4** **Networks**
Three distinct neural networks were used in the algorithm. The payoff network estimates the payoff
of red nodes. The relevance network predicts which theorems will be useful at a given step. The
**generative network generated unconstrained substitutions.**
**4.1** **Tokenization**
A token is created for each constructor axiom. A number of dummy variables are created and
assigned tokens, the minimum number such that for each theorem the numbers of “set”, “wff”,
“class” free variables are at most the number of dummy variables of the corresponding type. Five
special tokens are added, ‘EOH’ for the end of a hypothesis, ‘EOS’ for the end of a section, ‘START’
for the start of sequence generation, ‘UV’ for an unconstrained variable, and ‘TARGET’ for a target
unconstrained variable.
The inputs are modified for each iteration by randomly replacing the free variables that appear with
distinct dummy variables of the corresponding type. Each expression is tokenized by reading the
tokens of the constructor axioms in its parse tree in a depth-first pre-order. Hypotheses are separated
by the ‘EOH’ token. If multiple different components are inputted, such as an assertion and set of
hypotheses, they are separated by the ‘EOS’ token. The other three special tokens are used only by
the generative network and are are described later.
**4.2** **Neural Networks**
The networks all share a number of similar features. In general, the embedding vectors for tokens
were inputted into 2 layers of bidirectional GRUs with internal biases for the GRU units and hidden
-----
layer dimensions of size 128. GRU weights were permitted to vary between different sections of
input and output, but the token embedding vectors were shared. The embedding vectors were augmented with four additional dimensions describing the graph structure of the input, the depth of the
node, the degree of the node, the degree of its parent, and its position into the degree of its parent. All
fully-connected layers used leaky RELUs with α = 0.01 and had dimension 128 unless otherwise
specified. Weights were regularized by their L2-norm with a regularization factor of 10[−][4].
**4.3** **Payoff Network**
The payoff network takes as input an expression, a, and a set of hypotheses eC, which are fed into
the GRUs. The network attempts to predict whether the expression is provable from the hypotheses.
The outputs of both sides of the bidirectional network are concatenated and fed through two fully
connected layers with leaky RELU units and a fully connected layer with a sigmoid to obtain the
classification probability, px.
The network is trained on known proof steps as positive examples and on incorrect proof steps
generated by the relevance and generative networks as negative examples. During training, the
cross-entropy loss is minimized.
**4.4** **Relevance Network**
The relevance network takes as input an expression, a, and a set of hypotheses eC, and attempts
to classify the next proposition that will used in the proof of a. The relevance network is designed
as two parallel networks. The first parallel branch takes a and eC as inputs and returns a 128dimensional expression-vector v. The second parallel branch is evaluated separately for all theorem
_T of “⊢”-type, inputs aT and eT, and returns a 128-dimensional theorem-vector wT . The probabil-_
ities are computed as the softmax over theorems T, pT = softmax(lT ), where lT = v[T] _WwT for_
some weight matrix W . This structure permits generalizability to new theorems while simultaneously allowing for the theorem vectors to be precomputed and cached.
The network is trained using a negative-sampling loss with four negative samples; at each iteration,
only five theorems are considered: the correct theorem TC and four incorrect theorems TW,i chosen
uniformly at random from the viable theorems for a. The training loss is computed as log σ(lTC )
_−_ _−_
_i_ [log][ σ][(][−][l][T]W,i [)][ and is minimized.]
P
**4.5** **Generative Network**
Given an expression, a, a set of hypotheses, eC, and a theorem, T, to be applied, the generative network uses a sequence-to-sequence model [13] with an intermediate fully-connected layer to create
expressions for the unconstrained substitutions.
To execute the network, an unconstrained variable in fT is chosen uniformly at random to be the
target. A set of substitutions φ is generated as follows. For v ∈ _fT a constrained variable, φ(v)_
is the expression needed for φ(aT ) = a. φ additionally maps the target unconstrained variable to
the ‘TARGET’ special token and the other unconstrained variables to the ‘UV’ special token. The
sequence-to-sequence model is used to generate an expression for φ applied to the target variable.
_φ is updated to include this as a substitution, a new target unconstrained variable is chosen, and the_
process repeats until all variables have substitutions.
The network takes as inputs φ(eT ) and eC. A fully-connected layer is applied to the outputs of
each direction and the result is used as the initial state of the GRUs for the sequence-to-sequence
output. An attentional model is added, following the general model of [16]. The output sequence
is initialized with the ‘START’ token. The outputs of the last GRU layer are fed through a fullyconnected layer with RELU nonlinearity and then a fully-connected layer with softmax nonlinearly
to obtain token probabilities. During training, the total cross-entropy loss of the output tokens is
minimized.
During execution, multiple outputs are given, following the beam search technique of [13]. The
tokens which can be included are restricted by a number of filters. Only constructor axioms defined
before the current context may be used. No “wff” or “class” variables may be added unless they
appear elsewhere in the context. At most one new such set variable is considered during selection
-----
for a given token. Furthermore, no token may be added if doing so would violate the disjoint variable
conditions.
**5** **Experiment**
**5.1** **Dataset**
The theorems of the Metamath set.mm module are used as the data set, discarding axioms and
keeping only propositions of “⊢”-type. Of these propositions, 21786 were selected as a training set,
2711 as a validation set and 2720 as a test set. The proofs are expanded into a full proof tree, and
each step of “⊢”-type was recorded.
The relevance network was trained and evaluated on all of the proof steps for the propositions. The
expansion of the propositions into proof steps provides 1.2M training proof steps, 120k validation
proof steps and 158k testing proof steps.
The generative network was trained on the proof steps where a proposition was applied that had
at least one unconstrained variable. This constraint leaves 426k training proof steps, 38k validation
proof steps, and 56k testing proof steps.
Data for the payoff network was generated by including all of the proof steps as positive examples excluding duplicates. Additionally, negative examples were generated by using the trained
**relevance and generative networks to predict the best two proposition/substitution pairs using the**
valuation described in section 3.5. The hypotheses generated by applying these propositions were
included as negative examples after removing all the the hypotheses that were equivalent to positive
examples. There were 587k positive and 960k negative training examples, 69k positive and 113k
negative validation examples, and 74k positive and 120k negative test examples.
**5.2** **Training**
Network weights were initialized with Xavier initialization [17]. Training for each network was done
using stochastic gradient descent with a batch size of 100, with an Adam optimizer [18]. Learning
rates started at 10[−][4], were decayed by a factor of 2 each time the validation loss failed to decrease,
and training was ended after the validation loss failed to decrease for three consecutive epochs.
**5.3** **Performance**
The neural networks are trained separately. The relevance network is tested, selecting from all
viable propositions. On the test data relevance obtains a 55.3% top-1 accuracy, a 72.8% top-5
accuracy and an 87.4% top-20 accuracy.
The generative network has a perplexity of 2.08 on the test set, when selecting from 1083 tokens. I
also measure the probability that a beam search creates the correct substitutions for all unconstrained
variables as one of the results. On the test set, generative achieves an accuracy of 39.1% with a beam
width of 1, 51.3% with a beam width of 5, and 57.5% with a beam width of 20.
The payoff network achieves a classification accuracy of 77.6% on the test set. For comparison, a
baseline prediction of negative achieves a 62.1% accuracy.
The system as a whole was tested on each test theorem by expanding the proof trees for 10000
passes or until 5 minutes had passed. Multiple attempts were made for each proposition with the
beam search width set to 1, 5, or 20. Under these parameters, the system finds proofs for 388 of
2720, or 14.3% of the test propositions. The system works particularly well on the initial part of the
database, finding proofs for 45.1% of the 457 test propositions in the first 5000 theorems.
In the cases that a valid proof is generated, the system works quickly. The discovered proofs were
created with a median of 17 passes.
-----
**6** **Discussion**
In this paper I have proposed a nonconventional approach to Automated Theorem Proving for
higher-order logic, and tested performance on the Metamath set.mm module. While the system
does not achieve state-of-the-art performance, it is the first effective complete Automated Theorem
Prover to not exploit hand-crafted features.
Holophrasm takes a unconventional approach to automated theorem proving, attempting to emulate
the processes and intuition of human proof exploration. A number of new techniques and novel
adaptions of current technologies have been introduced:
_• tree-based bandit algorithms for proof exploration_
_• tree-reduction during exploration passes to permit actions to have multiple subtrees_
_• deep networks for estimating statement provability_
_• the theorem-vector encoding for rapid theorem selection_
_• sequence-to-sequence models for enumeration from an infinite set of actions_
While the results of Holophrasm are not directly comparable to current results on the Mizar dataset,
the developments show promise as generalizable techniques. They highlight the feasibility of deep
learning as an approach to Automated Theorem Proving.
**Acknowledgments**
I am grateful to Yu-hsin Chen, Zach DeVito, and Matthew Fisher for invaluable discussions during
the planning stages of this project.
**References**
[1] Norman Megill. Metamath: A computer language for pure mathematics. 1997.
[2] Roman Matuszewski and Piotr Rudnicki. Mizar: the first 30 years. Mechanized mathematics and its
_applications, 4(1):3–24, 2005._
[3] Thomas C Hales. Introduction to the Flyspeck project. In Dagstuhl Seminar Proceedings. Schloss
Dagstuhl-Leibniz-Zentrum f¨ur Informatik, 2006.
[4] Mauro Jaskelioff and Stephan Merz. Proving the correctness of Disk Paxos. Archive of Formal Proofs,
[June 2005. ISSN 2150-914x. http://isa-afp.org/entries/DiskPaxos.shtml, Formal](http://isa-afp.org/entries/DiskPaxos.shtml)
proof development.
[[5] The Coq development team. The Coq proof assistant reference manual. INRIA, 2012. URL http:](http://coq.inria.fr)
[//coq.inria.fr. Version 8.4.](http://coq.inria.fr)
[6] John Harrison. HOL Light: a tutorial introduction. In International Conference on Formal Methods in
_Computer-Aided Design, pages 265–269. Springer, 1996._
[7] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. Journal of Automated Reasoning, 55(3):
245–256, 2015.
[8] Jesse Alama, Tom Heskes, Daniel K¨uhlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise selection for
mathematics by corpus analysis and kernel methods. Journal of Automated Reasoning, 52(2):191–213,
2014.
[9] Daniel K¨uhlwein, Twan van Laarhoven, Evgeni Tsivtsivadze, Josef Urban, and Tom Heskes. Overview
and evaluation of premise selection techniques for large theory mathematics. In International Joint Con_ference on Automated Reasoning, pages 378–392. Springer, 2012._
[10] Daniel K¨uhlwein, Jasmin Christian Blanchette, Cezary Kaliszyk, and Josef Urban. Mash: machine learning for Sledgehammer. In Interactive Theorem Proving, pages 35–50. Springer, 2013.
[11] Alex A Alemi, Francois Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath-Deep
sequence models for premise selection. arXiv preprint arXiv:1606.04442, 2016.
[12] Levente Kocsis and Csaba Szepesv´ari. Bandit based Monte-Carlo planning. In Machine Learning: ECML
_2006, pages 282–293. Springer, 2006._
[13] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In
_Advances in neural information processing systems, pages 3104–3112, 2014._
-----
[14] Chris J Maddison, Aja Huang, Ilya Sutskever, and David Silver. Move evaluation in Go using deep
convolutional neural networks. arXiv preprint arXiv:1412.6564, 2014.
[15] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche,
Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game
of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[16] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based
neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[17] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural
networks. In Aistats, volume 9, pages 249–256, 2010.
[18] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint_
_arXiv:1412.6980, 2014._
-----
| [
"Daniel, Whalen"
] | 2016-08-09T00:00:00 | null | false | 44 | 15 | [
"MetaMath"
] | http://arxiv.org/abs/1608.02644 | null | https://www.semanticscholar.org/paper/2ab3c53b08d4180c8642207ee09738f8df193b92 |
MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | The recently released GPT-4 Code Interpreter has demonstrated remarkable proficiency in solving challenging math problems, primarily attributed to its ability to seamlessly reason with natural language, generate code, execute code, and continue reasoning based on the execution output. In this paper, we present a method to fine-tune open-source language models, enabling them to use code for modeling and deriving math equations and, consequently, enhancing their mathematical reasoning abilities. We propose a method of generating novel and high-quality datasets with math problems and their code-based solutions, referred to as MathCodeInstruct. Each solution interleaves $\textit{natural language}$, $\textit{code}$, and $\textit{execution results}$. We also introduce a customized supervised fine-tuning and inference approach. This approach yields the MathCoder models, a family of models capable of generating code-based solutions for solving challenging math problems. Impressively, the MathCoder models achieve state-of-the-art scores among open-source LLMs on the MATH (45.2%) and GSM8K (83.9%) datasets, substantially outperforming other open-source alternatives. Notably, the MathCoder model not only surpasses ChatGPT-3.5 and PaLM-2 on GSM8K and MATH but also outperforms GPT-4 on the competition-level MATH dataset. The proposed dataset and models will be released upon acceptance. | This paper proposes a method to fine-tune open-source language models, enabling them to use code for modeling and deriving math equations and, consequently, enhancing their mathematical reasoning abilities, and yields the MathCoder models, a family of models capable of generating code-based solutions for solving challenging math problems. | # MATHCODER: SEAMLESS CODE INTEGRATION IN LLMS FOR ENHANCED MATHEMATICAL REASONING
**Ke Wang[1,4]** _[∗]_ **Houxing Ren[1][∗]** **Aojun Zhou[1][∗]** **Zimu Lu[1][∗]** **Sichun Luo** **[3][∗]**
**Weikang Shi** **[1][∗]** **Renrui Zhang** **[1]** **Linqi Song[3]** **Mingjie Zhan[1][†‡]** **Hongsheng Li[1,2]** _[‡]_
1Multimedia Laboratory (MMLab), The Chinese University of Hong Kong
2Shanghai AI Laboratory 3City University of Hong Kong 4Nanjing University
{wangk.gm, renhouxing, aojunzhou, zmjdll}@gmail.com
[email protected]
ABSTRACT
The recently released GPT-4 Code Interpreter has demonstrated remarkable
proficiency in solving challenging math problems, primarily attributed to its
ability to seamlessly reason with natural language, generate code, execute code,
and continue reasoning based on the execution output. In this paper, we present
a method to fine-tune open-source language models, enabling them to use code
for modeling and deriving math equations and, consequently, enhancing their
mathematical reasoning abilities. We propose a method of generating novel and
high-quality datasets with math problems and their code-based solutions, referred
to as MathCodeInstruct. Each solution interleaves natural language, code, and
_execution results. We also introduce a customized supervised fine-tuning and_
inference approach. This approach yields the MathCoder models, a family
of models capable of generating code-based solutions for solving challenging
math problems. Impressively, the MathCoder models achieve state-of-the-art
scores among open-source LLMs on the MATH (45.2%) and GSM8K (83.9%)
datasets, substantially outperforming other open-source alternatives. Notably, the
MathCoder model not only surpasses ChatGPT-3.5 and PaLM-2 on GSM8K and
MATH but also outperforms GPT-4 on the competition-level MATH dataset. The
proposed dataset and models will be released upon acceptance.
1 INTRODUCTION
Recently, closed-source large language models (LLMs) such as GPT-4 (OpenAI, 2023) and PaLM2 (Anil et al., 2023), paired with methods such as Chain-of-Thought (CoT) (Wei et al., 2022) and
Program-Aided Language models (PAL) (Gao et al., 2023), have shown remarkable performance
on mathematical reasoning tasks. In contrast, current open-source LLMs (Touvron et al., 2023;
Penedo et al., 2023; Zhang et al., 2022) still lag significantly behind in this area. Even Llama-270B (Touvron et al., 2023), one of the most potent open-source models, only scores 56.8% and
13.5% respectively on GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) datasets,
remarkably lower than GPT-4 Code Interpreter[1], which scores 97% and 69.7% (Zhou et al., 2023a).
To narrow the gap between open-source and closed-source models in math problem solving, recent
works, such as the WizardMath (Luo et al., 2023) and RFT (Yuan et al., 2023), have tried to
tune open-source models with math problems and CoT solutions, achieving a significant gain in
performance compared to their base model, Llama-2. On the other hand, methods such as PAL (Gao
et al., 2023), PoT (Chen et al., 2022), and CSV (Zhou et al., 2023a) encourage code usage in solving
math problems, showing promising improvements when paired with closed-source models like GPT3.5, GPT-4 and GPT-4 Code Interpreter. In particular, GPT-4 Code Interpreter surpasses the previous
_∗Equal contribution._
_†Project leader._
_‡Corresponding author._
[1https://openai.com/blog/chatgpt-plugins#code-interpreter](https://openai.com/blog/chatgpt-plugins##code-interpreter)
-----
**Table 1: Comparison with different Instruction-following datasets: G**
and M are the abbreviation for the training subset of GSM8K and
MATH dataset. The baseline datsets include recent RFT-u13b (Yuan
et al., 2023) and WizardMath (Luo et al., 2023).
**Datasets** **Seed** **Annotation** **Available**
RFT-100k G Llama ✓
WizardMath-96k G+M GPT-4 ✗
Ours-49k G+M GPT-4 ✓
GPT-4 +
Ours-80k G+M ✓
Self-distillation
**Figure 1:** Performance comparison
80
MathCoder 73.1
70 WizardMath (SoTA)
Llama-1 RFT
60
50.7
50 46.2
Accuracy 40
30 24.8
20 17.8
10
7B 70B
between MathCoder, WizardMath, and
Llama-1 RFT models with different
model sizes.
SOTA by a clear margin. Recent study (Zhou et al., 2023a) shows that this excellent performance can
be attributed to its ability to generate and assess the execution results of a chain of code interlaced
with natural language reasoning steps. However, existing open-source models fail to benefit from
this sophisticated mechanism since they lag behind closed-source models in both code generation
and natural language reasoning. Therefore, we still lack an effective recipe to deliver open-source
_models to solve math problems in a manner similar to GPT-4 Code Interpreter._
In this paper, leveraging the strengths of GPT-4 Code Interpreter (Zhou et al., 2023a), we
introduce a simple yet effective framework, MathCoder, designed to enhance the mathematical
reasoning capabilities of open-source models. This framework can be categorized into two parts:
(1) math instruction-following dataset construction and (2) customized supervised fine-tuning.
_Specifically, the instruction-following dataset, termed as MathCodeInstruct, consists exclusively of_
80k math problems and their corresponding solutions. Each solution is interwoven with natural
_language for reasoning, code for execution, and execution results._ The comparison between
MathCodeInstruct and other math instruction-tuning datasets is shown in Tab. 1.
MathCodeInstruct is created in two steps. The first step is collecting GPT-4 Code Interpreterstyle solutions for the GSM8K and MATH training sets. GSM8K and MATH are two important
datasets of math problems for improving and evaluating models’ mathematical abilities, which
consist of grade school math word problems and challenging competition mathematics problems,
respectively. Using this data, we trained our initial models, termed MathCoder-Initial. The second
step is to augment more math problems by using an innovative prompt named problem interpolation,
which asks the LLM to generate questions with difficulty levels that fall between the provided
MATH and GSM8K problems. This paradigm generates problems that bridge the gap between
the grade-school-level problems in GSM8K and the challenging high-school-level problems in
MATH, thus enhancing the dataset’s generalization capability. We use MathCoder-Initial to generate
solutions for these new problems. Combining this new data with those from the first step, we finetune the base Llama-2 models, reaching a score that outperforms the SOTA by a clear margin on
GSM8K and MATH. Concurrently with our work, MAmmoTH (Yue et al., 2023) also creates a
dataset consisting of math problems and model-generated solutions. However, their solutions consist
of either only code or only natural language reasoning steps, which is notably different from our
dataset of GPT-4 Code Interpreter-style solutions.
Regarding the supervised fine-tuning stage, we propose an effective training and inference pipeline
to ensure that our fine-tuned model can behave in a manner similar to the GPT-4 Code Interpreter.
We use special tokens (<|text|>, <|code|>, <|execution|>) to identify if a part of the training data
is natural language, code, or execution results. With this deliberately created training corpus, the
model learns to generate interleaved natural language and code divided by special tokens. During
inference, we can use the special tokens to detect code blocks and utilize Jupyter Notebooks for code
execution. We append the result of on-the-fly execution to the previous predictions of the model.
Then, the model continues to autoregressively predict the next token based on this new version of
the input, which includes the execution result at the end. In this way, the model would be able to
"see" the execution results and continue its reasoning accordingly.
We use MathCodeInstruct to fine-tune popular open-source Llama-2 and CodeLlama (Rozière
et al., 2023) models, creating a family of models named MathCoder. Experimental results show that
-----
on this data, producing the MathCoder-Initial. New problems are created using our novel prompt (detailed
**(a) Creating Dataset**
GSM8K
& MATH
(pro+sol, 49k)
GSM8K train& MATH train (pro) GPT-4 (Inference) CodeLlama34B(SFT) MathCoder34B -Initial MathCodeInstruct(pro+sol, 80k)
Problem MathCoder-Initial Problem
Interpolation 34B Interpolation
(pro) (Inference) (pro+sol, 31k) _pro: problem_
_sol: solution_
**(b) Supervised Fine-Tuning (SFT)**
MathCode Llama-27B, 13B, 70B(SFT) MathCoder-L7B, 13B, 70B
Instruct
(pro+sol, 80k) CodeLlama7B, 13B, 34B(SFT) MathCoder-CL7B, 13B, 34B
**Figure 2: The process of dataset creation and model fine-tuning. (a) First, solutions for problems in the**
GSM8K and MATH datasets are collected from the GPT-4. Then, we fine-tune the CodeLlama-34B model
examples in Appendix C), and their solutions are generated using MathCoder-Initial. (b) Finally, the new
problems and solutions are combined with the existing training data to create the final dataset, which we use to
fine-tune the base Llama-2 model, producing our final MathCoder model.
the models with our proposed dataset and training framework achieve significant improvement on
various mathematical reasoning benchmarks, as depicted in Fig. 1.
This paper’s main contributions can be summarized in three key aspects:
- To the best of our knowledge, this is the first systematic study that explicitly integrates
natural language reasoning, code generation, and feedback from execution results into
open-source pre-trained large language models, aiming at enhancing their mathematical
reasoning abilities.
- We have constructed a high-quality mathematical instruction tuning dataset,
MathCodeInstruct. This dataset comprises existing math problems from GSM8K
and MATH, with GPT-4 Code Interpreter-style solutions, and newly formulated ones via
our novel problem interpolation prompting strategy.
- We have produced a family of models, MathCoder. We fine-tune Llama-2 and CodeLlama
models on our dataset, producing a family of models with not only high accuracy on the
GSM8K and MATH, but also a good performance on other out-of-domain datasets like
Mathematics and SimulEq.
2 MATHCODER: SPECIALIZING LLAMA FOR MATHEMATICAL REASONING
In this section, we first introduce the methodology on creating MathCodeInstruct in Sec. 2.1.
Subsequently, we detail the supervised fine-tuning (SFT) and inference methods in Sec. 2.2.
2.1 MATHCODEINSTRUCT DATASET
Our MathCodeInstruct dataset can be expressed as D = {D0, D1}, where D0 denotes the
seed data and D1 is the data generated with the proposed prompting method, named problem
_interpolation prompting. Fig. 2 (a) outlines the process for creating MathCodeInstruct Dataset._
**Seed data D0. First, we obtain solutions for the GSM8K and MATH training sets from the GPT-4.**
The data can be expressed in (solution, question) pairs as {(yi, xi)}i[N]=1[. Each solution][ y][i][ contains]
three kinds of components: natural language (text) for reasoning L, code for execution C, and
-----
Input for MathCoder
<|system|><|text|>Below is a math problem. Please solve it step by step.<|endofblock|><|endofmessage|>
<|user|><|text|>A group of friends went on a road trip across two cities. In the first city, they spent half of the money they had plus an additional $50.
In the second city, they spent half of what was left plus an additional $20, leaving them with $40. How much money did they start with before the road
trip?<|endofblock|><|endofmessage|>
(a) CoT example (c) LCE solution from MathCoder
Let’s call the amount of money they started with x. <amount of money the group of friends started with.|assistant|> <|text|> To solve this problem, let’s set x as the
1. In the first city, they spent half ( _[x]2_ [) plus $50. What remains is] 1. In the first city, they spent half of what they had ( _[x]2_ [) plus an]
_x2. Simplifying this gives −_ _x2_ [+ 50]. _x −_ _[x]2_ _[−]_ [50 =][ x]2 _[−]_ [50][.] _x_ additional $50.2. After this, they are left with3. In the second city, they spend half of what’s left plus an additional x − _x2_ [+ 50].
3. In the second city, they spent half of what remained ( x 2 _[−]2_ [50] ) plus $20.4. Finally, they are left with $40.
$20. What remains is _[x]2_ 2 _[−]2_ [50] + 20 . The equation can be set up as follows:
4. Simplifying this gives5. They are left with $40, so[−][x]2 [50][−][x]4 _[ −][50][−][ −][45 = 40][x]4_ [+ 25][.] _[ −]_ [20 =] _[ x]4_ _[−]_ [45][.] _x −_ x2 + 50 _−_ _x −_ _x22_ [+ 50] + 20 = 40
6. Solving for x gives _[x]4_ [= 85][ and][ x][ = 340][.]
Let’s go ahead and solve this equation using code.<|endofblock|>
(b) PoT example <|code|> from sympy import symbols, Eq, solve
# Define the variable and equation
# Define a variable to represent the starting amount of money
x = symbols(‘x’)
starting_amount = 0
# Loop to find the starting amount
# Equation based on the problem description
while True:
equation = Eq(x - (x/2 + 50) - ((x - (x/2 + 50)) / 2 + 20), 40)
amount = starting_amount
# Spent in the first city: half of the money they had plus $50
# Solve the equation
amount -= (amount / 2 + 50)
initial_amount = solve(equation, x)
# Spent in the second city: half of what was left plus $20
initial_amount[0]< endofblock >
amount -= (amount / 2 + 20) _|_ _|_
# Check if they are left with $40
< execution > 340< endofblock >
if amount == 40: _|_ _|_ _|_ _|_
break
starting_amount += 1 <|text|> Thus, the group of friends started with $340 before the road
starting_amount > > > 340 trip.<|endofblock|><|endofmessage|>
**Figure 3: Example of CoT (Wei et al., 2022), PoT (Gao et al., 2023; Chen et al., 2022) and LCE solution with**
special token. In contrast to CoT, which consists solely of natural language, and PoT, which consists solely
of code, our LCE solution intertwines natural language, code, and execution results. <|text|>, <|code|>, and
<|execution|> are special tokens that denote natural language, code, and execution results respectively.
_execution results E, where L is the natural language reasoning step, C is the Python code the model_
generates when its reasoning leads to some complex computation that it needs code to solve, and
**E is the output of the code. E is assessed by the model so a new L can be generated. All three**
kinds of components are closely chained together in the solutions, with each component influencing
the component that comes after. An integral solution yi can be expressed as (L, C, E, L, C, E, ...).
An example is shown in Fig. 3 (c). We call solutions in this format Natural Language, Code, and
**Execution (LCE) solutions. We put some case studies in Appendix H to demonstrate the advantage**
of LCE.
We filter the seed data D0 = ({(yi, xi)}), making sure that each solution yi provides the same
answer as the ground truth answer so that the quality of the dataset is further assured. Then, we finetune the CodeLlama-34B using the seed data D0, producing our initial MathCoder model, named
MathCoder-Initial.
**Problem interpolation prompting D1. Using the initial MathCoder model, we can generate LCE**
solutions for new problems. We observed a large gap in difficulty between grade-school-level
GSM8K problems and challenging competition MATH problems. To bridge this gap, we present
a novel prompting method (see details in Appendix C), which provides a powerful LLM like GPT-4
with a relatively simple problem drawn from the GSM8K training set, paired with a difficult problem
drawn from the MATH, and ask the model to generate a new problem with difficulty between the
two. GPT-4 generated completely novel intermediate-level problems, instead of just copying the
problems from GSM8k and MATH. We then use GPT-4 to evaluate the new problems, and the
results are shown in Fig. 4. We can see that 83.2% of the new problems are more difficult than
-----
MATH/GSM8K more difficult Tie Interpolation more difficult
MATH vs.
Interpolation
|95.6|3.3|Col3|
|---|---|---|
GSM8K vs.
Interpolation
|Col1|15.3|83.2|
|---|---|---|
**Figure 4: Difficulty comparison of interpolation problems against MATH and GSM8K using GPT-4. The**
evaluation prompt and examples are shown in Appendix E.
GSM8K, and 95.6% are easier than MATH, indicating that the problems generated in this way are
appropriate in difficulty.
We also investigated using only GSM8K to create difficult problems, but we found that the new
problems were too similar to the original ones, and the large gap to MATH still exists (more
information can be found in Appendix F).
**Self-distillation. We primarily use self-distillation due to the high cost of using GPT-4. As we**
observed that our MathCoder-Initial can already generate well-structured LCE-format solutions,
we generated the solutions of D1 with MathCoder-Initial. This does not affect the experiment’s
ability to assess the efficacy of problem interpolation prompting, because if solutions generated
by a model weaker than GPT-4 can improve performance, then using GPT-4 might yield even
greater improvements but leading to much more financial cost. Further discussion can be found
in Appendix D. Given that we do not have ground truth answers for the new problems, we then
generate n different LCE solutions as depicted in (Wang et al., 2023a) for each new problem with
our initial MathCoder models, keeping only those solutions for which all n answers match (n is set
to 3 in this paper), thus ensuring our dataset’s quality.
Combining the new data D1 with the seed data D0 yields the MathCodeInstruct dataset D =
_D0, D1_ . We fine-tune the base Llama-2 (Touvron et al., 2023) and CodeLlama (Rozière et al.,
_{_ _}_
2023) models using MathCodeInstruct to derive our final MathCoder models. For clarity, we
refer to the supervised fine-tuning of base Llama-2 as "MathCoder-L" and that of CodeLlama as
"MathCoder-CL", as shown in Fig. 2 (b).
2.2 SUPERVISED FINE-TUNING AND INFERENCE
**Supervised Fine-tuning. In order to identify the three kinds of components in LCE solutions,**
as illustrated in Fig. 3 (c), we enclose them with special tokens. Reasoning language starts with
<|text|>, while math code and execution results start with <|code|> and <|execution|> respectively.
All components end with <|endofblock|>. These tokens help the model understand the difference
between each component and create LCE solutions during inference. After the special tokens are
added, all components are concatenated to form the solution, which is preceded by the original math
question to form an instance of training data. In order to make the training more efficient, several
instances are concatenated together to form a single input, while cross-question masking is used to
ensure only tokens in the same instance are visible.
During supervised fine-tuning, we apply a standard cross-entropy loss following Alpaca (Taori
et al., 2023). The loss is only computed on reasoning language and math code since they are the
components of the training data generated by the LLM. In particular, we zero-out the loss on tokens
from execution results, as the model would not need to predict these tokens.
**Inference. After supervised fine-tuning, the model has learned to output natural language and**
_code enclosed by special tokens._ We can identify the end of each component by looking for
<|endofblock|>, and determine which component it is by examining the first token of the component.
When a code generation is encountered, we utilize a Jupyter Notebook for real-time code execution,
allowing the variables defined in previous code blocks to be used in subsequent ones. After
execution, the execution results are concatenated following the previous math code block. The
model then continues to autoregressively generate the next reasoning language block, forming the
chain of thoughts in the LCE format, until it reaches the final answer. This process ensures that the
model behaves similarly to the GPT-4 Code Interpreter.
-----
3 EXPERIMENTS
3.1 DATASETS AND IMPLEMENTATION DETAILS
**Datasets.** We evaluate the MathCoder on five datasets, including two in-domain datasets:
GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021); and three out-of-domain datasets:
SVAMP (Patel et al., 2021), Mathematics (Saxton et al., 2019), and SimulEq (Kushman et al., 2014).
We regard GSM8K and MATH as in-domain because their training sets are used for our supervised
fine-tuning, while SVAMP, Mathematics, and SimulEq are out-of-domain because their training
sets are not used in our fine-tuning. The extensive assortment of assessment datasets encompasses
mathematical challenges from elementary, high school, and collegiate levels, covering various
subjects like geometry, formal logic, and even commonsense reasoning. The selection of these
datasets aims at providing a thorough evaluation of the models’ ability to generalize to unknown
circumstances and diverse fields of mathematics.
**Implementation Details.** Different base LLMs of varying sizes are tested, including Llama2 (7B, 13B, and 70B) and CodeLlama (7B, 13B, and 34B). During training, we use a uniform
learning rate of 2 × 10[−][5] and a context length of 2048, and we set the batch size as 128 with
different ratios of gradient accumulation steps and per-device train batch size, considering the model
size. Additionally, we used a cosine scheduler for three epochs in total with a 50-step warmup period. To efficiently train the computationally intensive models, we simultaneously employ
DeepSpeed training with ZeRO-3 stage (Rajbhandari et al., 2020) and flash attention (Dao et al.,
2022). The 7B, 13B, and 34B/70B models are trained on 8, 16, and 32 NVIDIA A800 80GB GPUs,
respectively. The text-generation-inference framework of Hugging Face is used for inference with
greedy decoding and max new tokens of every block set 512, and one to four GPUs are used as
needed. We allow up to 32 LCE blocks in every solution.
**Baselines.** We compare the proposed MathCoders with the following competitive baselines.
Closed-Source Models: we consider three closed-source models, including ChatGPT-3.5 Brown
et al. (2020), GPT-4 (OpenAI, 2023), GPT-4 Code Interpreter (Zhou et al., 2023a), and PaLM2 (Anil et al., 2023). Open-Source Models: we compare with Llama-2 (Touvron et al., 2023),
WizardMath (Luo et al., 2023), Llama-1 RFT (Yuan et al., 2023), and Galactica (Taylor et al., 2022).
For baselines, CoT prompting (Wei et al., 2022) and few-shot in-context-learning (Dong et al., 2023)
are used to maximize their performance while our MathCoders are always evaluated without extra
prompting and under zero-shot setting (Kojima et al., 2023).
3.2 MAIN RESULTS
**Comparison between MathCoder and SOTA open-source models. The experiment results in**
Tab. 2 show that our method outperforms other open-source competitive math-solving models with
a clear advantage, achieving state-of-the-art results across all datasets. However, a substantial
performance gap still exists compared to the state-of-the-art closed-source method GPT-4 Code
Interpreter. Our observations are as follows: (1) MathCoder-L-7B outperforms WizardMath-70B.
Even the smallest version of MathCoder, MathCoder-L-7B, outperforms the largest WizardMath
model, WizardMath-70B, on three out of five datasets, achieving a significant gain (+4.5%) in the
average score, as shown in Tab. 2. This is likely attributed to the fact that WizardMath is trained
solely on CoT data, while MathCoder is trained on our proposed LCE solutions. This demonstrates
the advantage of using solutions that interleave natural language, code, and execution (LCE blocks),
significantly enhancing the model’s ability to perform complex computations. (2) Additionally,
it is worth noting that while the code ability of CodeLlama-34B significantly outperforms that of
Llama-2-70B, in the case of MathCoder models, we observed that models based on Llama-2-70B
(73.1%) can outperform CodeLlama-34B (70.2%). This contrasts with the findings in the concurrent
work, MAmmoTH (Yue et al., 2023). The main reason for this disparity might be that Llama-270B exhibits better natural language reasoning ability, and the MathCodeInstruct dataset can
enhance language models’ code generation ability for math problem-solving.
**Comparison between Llama-2 and CodeLlama.** Tab. 3 shows that MathCoder-CL with
CodeLlama as the base model brings a substantial improvement compared to MathCoder-L with
Llama-2 as the base model. MathCode-CL-7B and MathCoder-CL-13B demonstrate an accuracy
improvement of 4.1% and 3.0% respectively, compared to the corresponding MathCoder-L models
-----
**Table 2:** Model evaluation on in-domain (GSM8K & MATH) and out-of-domain datasets (SVAMP,
Mathematics & SimulEq). + incidates improvement w.r.t. the best open source model. SVA. stands for SVAMP,
Mat. stands for Mathematics, and Sim. stands for SimulEq.
|Model Base Size|In-Domain GSM8K MATH|Out-of-Domain SVA. Mat. Sim.|Average|
|---|---|---|---|
**Closed-Source Model**
|ChatGPT-3.5 (Zhao et al., 2023) - - GPT-4 Code (Zhou et al., 2023a) - - PaLM-2 (Anil et al., 2023) - -|80.8 34.1 97.0 69.7 80.7 34.3|- - - - - - - - -|- - -|
|---|---|---|---|
**Open-Source Model**
|Llama-1 RFT (Yuan et al., 2023) Llama-1 34B|56.5 7.4|55.4 7.6 12.8|27.9|
|---|---|---|---|
|7B WizardMath (Luo et al., 2023) Llama-2 13B 70B|54.9 10.7 63.9 14.0 81.6 22.7|36.1 9.3 12.8 51.9 14.1 14.9 71.8 17.1 37.9|24.8 31.8 46.2|
|7B MathCoder-L Llama-2 13B 70B|64.2 23.3 +9.3 +12.6 72.6 29.9 +8.7 +15.9 83.9 45.1 +2.3 +22.4|71.5 46.9 47.5 +35.4 +37.6 +34.7 76.9 54.7 62.3 +25.0 +40.6 +47.4 84.9 74.4 77.0 +13.1 +57.3 +39.1|50.7 +25.9 59.2 +27.4 73.1 +26.9|
|7B MathCoder-CL CodeLlama 13B 34B|67.8 30.2 +12.9 +19.5 74.1 35.9 +10.2 +21.9 81.7 45.2 +0.1 +22.5|70.7 55.8 49.6 +34.6 +46.5 +36.8 78.0 62.5 60.7 +26.1 +48.4 +45.8 82.5 75.9 65.8 +10.7 +58.8 +27.9|54.8 +30.0 62.2 +30.4 70.2 +24.0|
**Table 3: Model performance comparison for MathCoders with CodeLlama and Llama-2 as base.**
|Size|GSM8K MATH SVAMP Mathematics SimulEq|Average|
|---|---|---|
|MathCoder-CL-7B vs. MathCoder-L-7B|+3.6 +6.9 -0.8 +8.9 +2.1|+4.1|
|---|---|---|
|MathCoder-CL-13B vs. MathCoder-L-13B|+1.5 +6.0 +1.1 +7.8 -1.6|+3.0|
|---|---|---|
of the same size. The potentially superior coding and reasoning capability of CodeLlama can be
attributed to its additional training on code data (Rozière et al., 2023). This extended training
provides CodeLlama with a deeper understanding of programming concepts and patterns, allowing
it to excel in coding-related tasks and exhibit more advanced math reasoning abilities.
**Comparison among different subjects across various levels.** MATH dataset problems are
categorized with difficulty levels ranging from 1 to 5, covering seven different math subjects,
including algebra, prealgebra, number theory, counting and probability, precalculus, intermediate
algebra, and geometry. In Fig. 5, we present the performance comparison of MathCoder-L (7B, 13B)
and MathCoder-CL (7B, 13B), grouped by these levels and subjects. More results are shown in
Appendix G. We find that MathCoder achieves higher scores in algebra and prealgebra problems.
However, when it comes to geometry problems, MathCoder struggles to achieve high scores,
especially for problems with higher difficulty levels. This suggests that code plays a more significant
role in computationally intensive questions.
3.3 ABLATION STUDY
**Analysis of the influence of problem interpolation.** We conducted an experiment to study the
influence of the portion of MathCodeInstruct questions created using the proposed problem
interpolation. The experiment uses CodeLlama-34B as the base model. The experimental results in
Tab. 4 validate that problem interpolation brings a significant improvement across all five datasets.
We also conducted an experiment where we generated 31k data samples using GSM8K or MATH
as the seed data separately with equal portions. The results are presented in Tab. 8. As shown,
experiments involving problem interpolation yield an average accuracy that is 3.6 percentage points
-----
MathCoder-L-7B
(Overall: 0.233)
L1 L2 L3 L4 L5
MathCoder-CL-7B
(Overall: 0.3024)
L1 L2 L3 L4 L5
MathCoder-L-13B
(Overall: 0.2992)
L1 L2 L3 L4 L5
MathCoder-CL-13B
(Overall: 0.3588)
L1 L2 L3 L4 L5
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0.0
algebra
prealgebra
number
theory
counting and
probability
precalculus
intermediate
algebra
geometry
|0.72|0.51|0.4|0.27|0.13|
|---|---|---|---|---|
|0.52|0.53|0.41|0.3|0.14|
|0.57|0.28|0.26|0.15|0.05|
|0.51|0.46|0.17|0.08|0.07|
|0.32|0.15|0.09|0.04|0|
|0.29|0.21|0.17|0.08|0.04|
|0.29|0.3|0.16|0.13|0.01|
|0.76|0.61|0.49|0.37|0.21|
|---|---|---|---|---|
|0.69|0.64|0.49|0.38|0.22|
|0.67|0.43|0.32|0.29|0.17|
|0.59|0.5|0.19|0.15|0.05|
|0.4|0.21|0.12|0.1|0.03|
|0.35|0.3|0.18|0.13|0.09|
|0.29|0.35|0.27|0.1|0.01|
|0.77|0.63|0.5|0.4|0.21|
|---|---|---|---|---|
|0.59|0.56|0.5|0.42|0.19|
|0.77|0.41|0.34|0.2|0.16|
|0.67|0.48|0.24|0.14|0.06|
|0.42|0.21|0.13|0.09|0|
|0.4|0.25|0.18|0.14|0.06|
|0.37|0.39|0.2|0.17|0.02|
|0.83|0.67|0.57|0.45|0.29|
|---|---|---|---|---|
|0.73|0.69|0.52|0.45|0.24|
|0.77|0.49|0.42|0.32|0.21|
|0.74|0.58|0.22|0.16|0.18|
|0.44|0.34|0.23|0.15|0.04|
|0.44|0.38|0.24|0.17|0.1|
|0.24|0.48|0.25|0.18|0.05|
**Figure 5: Performance comparison of MathCoder-L (7B, 13B) and MathCoder-CL (7B, 13B) on the MATH**
dataset by levels and subjects. We can see that the improved accuracy from MathCoder-L to MathCoder-CL
comes primarily from subjects that require precise calculations like algebra and number theory.
**Table 4: Influence of the interpolation problems in MathCodeInstruct (as shown in Tab. 1) based on**
CodeLlama-34B.
|Train set Samples|GSM8K MATH SVAMP Mathematics SimulEq|Average|
|---|---|---|
|GSM8K+MATH 49k|77.3 44.0 78.6 71.6 59.3|66.2|
|GSM8K+MATH+Interpolation 80k|81.7 45.2 82.5 75.9 65.8 +4.4 +1.2 +3.9 +4.3 +6.4|70.2 +4.0|
higher compared to those without it. These results indicate that by employing problem interpolation,
we can generate problems with intermediate difficulty levels, thereby increasing the diversity of the
problem set. This expands the diversity of the problems and ultimately enhances the performance
of the model. Further experiments on the effects of different amounts of data created with problem
interpolation are presented in Tab. 9 (Appendix B).
**Analysis of LCE solutions compared to code-only or natural-language-only solutions.** To
analyze the advantages brought by the LCE solutions, consisting of interleaved natural language,
code, and execution results, we trained a new model with solutions consisting of code-only. We
use the results of WizardMath 7B Luo et al. (2023), which was trained on natural language, to
represent the performance of natural-language-only solutions. The results are shown in Tab. 5 and
Tab. 6. As can be seen, the LCE solution produces the highest average accuracy, surpassing the codeonly solution by 17.9 percentage points and the natural-language-only solution by 25.9 percentage
points.
**Analysis of Code Execution. To demonstrate the effect of code execution, both in training time**
and execution time, we have done further experiments. The results and analysis are presented in
Appendix A.
4 RELATED WORK
**Instruction Tuning. Instruction tuning is a method of enhancing LLMs’ instruction following**
abilities, thus aligning language models with more useful objectives and human preferences. A
long line of previous works (Ye et al., 2021; Longpre et al., 2023; Sanh et al., 2021; Wang et al.,
2022b; Wei et al., 2021; Chung et al., 2022; Longpre et al., 2023) is focused on enhancing LLMs’
instruction following abilities in general. With the emergence of models like GPT-3 and GPT-4,
recent studies (Wang et al., 2022a; 2023b; Zhou et al., 2023b; Peng et al., 2023; Xu et al., 2023)
have started to utilize synthetic instructions generated by these powerful models to tune smaller
models. Compared to these works, our instruction tuning is focused on using high-quality solutions
for math problems generated by models to improve our LLM’s math-solving ability. Another related
work is presented in (Luo et al., 2023), but their method did not use code to solve math problems,
distinguishing our work from theirs.
**Mathematical Reasoning. There are various benchmark datasets (Hendrycks et al., 2020; Ling**
et al., 2017; Hendrycks et al., 2021) to measure a model’s mathematical reasoning abilities. Recently,
many works have focused on enhancing LLMs’ ability to solve math problems, reaching high scores
-----
**Table 5: Comparison between LCE and code-only solutions. Results of LCE-format and Only Code and**
Execution are acquired from models trained based on CodeLlama-7B.
|Solution Format|GSM8K MATH SVAMP Mathematics SimulEq|Average|
|---|---|---|
|LCE-format (ours)|67.8 30.2 70.7 55.8 49.6|54.8|
|Only Code and Execution|50.2 20.2 61.6 39.8 12.8 -17.6 -10.0 -9.1 -16.0 -36.8|36.9 -17.9|
**Table 6: Comparison between LCE and natural-language-only solutions. Both the LCE format model and**
WizardMath (Luo et al., 2023) are finetuned from Llama-2-7B.
|Solution Format|GSM8K MATH SVAMP Mathematics SimulEq|Average|
|---|---|---|
|LCE-format (ours)|64.2 23.3 71.5 46.9 47.5|50.7|
|Only Natural Language (WizardMath 7B)|54.9 10.7 36.1 9.3 12.8 -9.3 -12.6 -35.4 -40.3 -34.7|24.8 -25.9|
on these benchmarks. Many of them apply Chain-of-Thought (Wei et al., 2022; Kojima et al., 2023;
Wang et al., 2023a; Fu et al., 2022) to improve LLMs’ multistep reasoning capability. Another line
of works (Gao et al., 2023; Chen et al., 2022; Zhou et al., 2023a) utilize code to compensate for
LLMs’ limitations in doing complex math computations. Our work takes inspiration from these two
lines of work, as we believe both Chain-of-Thought and code generation (Li et al., 2023a; Rozière
et al., 2023) are essential to solving math problems. There are also works focused on math-related
pre-training (Lewkowycz et al., 2022; Taylor et al., 2022) to improve a model’s general reasoning
capability. We combine natural language and code seamlessly in our dataset, thus providing a
method to train models more efficiently in solving math problems.
**Distillation.** Distillation (Hinton et al., 2015) often involves transferring knowledge from a
larger, more powerful model to a smaller, weaker one (Taori et al., 2023; Zheng et al., 2023;
Cobbe et al., 2021). Recent research (Li et al., 2023b; Wang et al., 2022a; Allen-Zhu & Li,
2020) has demonstrated the plausibility of self-distillation, achieving performance improvements
by distilling the model itself. Our approach can also be viewed as a form of self-distillation, as the
solutions generated by MathCoder-Initial, which is built on CodeLlama-34B, are used to fine-tune
CodeLlama-34B, resulting in MathCoder-CL-34B.
5 CONCLUSION AND LIMITATION
In this paper, we present MathCoder, an open-source large language model designed for math
reasoning, bridging the gap between natural language understanding and computational problemsolving. MathCoder incorporates math instruction-following dataset construction. By utilizing
the GSM8K and MATH datasets as seed data, we leverage the GPT-4 to generate problems
encompassing reasoning, code generation, and program execution. Additionally, we propose a
problem interpretation method to create intermediate-level problems. Furthermore, we introduce
a customized supervised fine-tuning approach, where the training loss is only applied to natural
language and code. Our empirical study demonstrates that MathCoder achieves state-of-theart performance in five math datasets among open-source LLMs, with scores of 83.9% on the
GSM8K dataset and 45.2% on the MATH dataset. It is worth noting that MathCoder outperforms
closed-source models like ChatGPT-3.5 and PaLM-2 on the GSM8K and MATH datasets and even
outperforms GPT-4 on the MATH dataset.
However, our work does have certain limitations that warrant further exploration in future research.
First, since we rely on the GPT-4 for data generation, MathCoder’s capabilities are inherently
constrained by the capabilities of this model and unable to solve theorem-proving problems.
Additionally, as a series of uni-modal models, MathCoder still faces challenges in solving complex
geometry problems, which we acknowledge and plan to address in our future investigations.
-----
6 ACKNOWLEDGMENTS
This project is funded in part by National Key R&D Program of China Project 2022ZD0161100,
and in part by General Research Fund of Hong Kong RGC Project 14204021.
7 AUTHOR CONTRIBUTION STATEMENT
Hongsheng Li and Mingjie Zhan led the project. Ke Wang, Aojun Zhou and Zimu Lu were
responsible for proposing the methodology, conducting experiments, and contributing to the
manuscript. Houxing Ren developed the codebase for code generation and offered suggestions
for its improvement. Weikang Shi, and Sichun Luo assisted with manuscript writing and actively
participated in discussions. Additionally, Renrui Zhang and Linqi Song also contributed to these
discussions.
REFERENCES
Zeyuan Allen-Zhu and Yuanzhi Li. Towards understanding ensemble, knowledge distillation and
self-distillation in deep learning. arXiv preprint arXiv:2012.09816, 2020.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report.
arXiv preprint arXiv:2305.10403, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
arXiv:2211.12588, 2022.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language
models. arXiv preprint arXiv:2210.11416, 2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and
memory-efficient exact attention with io-awareness, 2022.
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu,
Lei Li, and Zhifang Sui. A survey on in-context learning, 2023.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting
for multi-step reasoning. arXiv preprint arXiv:2210.00720, 2022.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. In International Conference on Machine
Learning, pp. 10764–10799. PMLR, 2023.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong
Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv,
abs/2009.03300, 2020. [URL https://api.semanticscholar.org/CorpusID:](https://api.semanticscholar.org/CorpusID:221516475)
[221516475.](https://api.semanticscholar.org/CorpusID:221516475)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
preprint arXiv:2103.03874, 2021.
-----
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural
[network. ArXiv, abs/1503.02531, 2015. URL https://api.semanticscholar.org/](https://api.semanticscholar.org/CorpusID:7200347)
[CorpusID:7200347.](https://api.semanticscholar.org/CorpusID:7200347)
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners, 2023.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically
solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), pp. 271–281, 2014.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving
quantitative reasoning problems with language models. Advances in Neural Information
Processing Systems, 35:3843–3857, 2022.
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao
Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii,
Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João
Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee,
Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang,
Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan
Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha
Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav
Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra,
Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor,
Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean
Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. Starcoder: may the
source be with you!, 2023a.
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and
Mike Lewis. Self-alignment with instruction backtranslation. ArXiv, abs/2308.06259, 2023b.
[URL https://api.semanticscholar.org/CorpusID:260866107.](https://api.semanticscholar.org/CorpusID:260866107)
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale
generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th
Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pp. 158–167, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi:
[10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015.](https://aclanthology.org/P17-1015)
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V
Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective
instruction tuning. arXiv preprint arXiv:2301.13688, 2023.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo
Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering
mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint
arXiv:2308.09583, 2023.
OpenAI. Gpt-4 technical report, 2023.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math
word problems? arXiv preprint arXiv:2103.07191, 2021.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli,
Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb
dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv
preprint arXiv:2306.01116, 2023.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning
with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
-----
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations
toward training trillion parameter models, 2020.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton,
Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez,
Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and
Gabriel Synnaeve. Code llama: Open foundation models for code, 2023.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai,
Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training
enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021.
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
[https://github.com/tatsu-lab/stanford_alpaca, 2023.](https://github.com/tatsu-lab/stanford_alpaca)
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia,
Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for
science. arXiv preprint arXiv:2211.09085, 2022.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2023a. URL
[https://openreview.net/forum?id=1PL1NIMMrw.](https://openreview.net/forum?id=1PL1NIMMrw)
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions.
arXiv preprint arXiv:2212.10560, 2022a.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei,
Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al.
Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv
preprint arXiv:2204.07705, 2022b.
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi
Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels
go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751,
2023b.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du,
Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint
arXiv:2109.01652, 2021.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi,
Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language
models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.),
[Advances in Neural Information Processing Systems, 2022. URL https://openreview.](https://openreview.net/forum?id=_VjQlMeSB_J)
[net/forum?id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and
Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions.
arXiv preprint arXiv:2304.12244, 2023.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. Crossfit: A few-shot learning challenge for cross-task
generalization in nlp. arXiv preprint arXiv:2104.08835, 2021.
-----
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling
relationship on learning mathematical reasoning with large language models. arXiv preprint
arXiv:2308.01825, 2023.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning, 2023.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen,
Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained
transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, and Qizhe Xie. Automatic model selection with
large language models for reasoning, 2023.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica.
Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code
interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023a.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat,
Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206,
2023b.
-----
**Table 7: Ablation study of with/without code execution during inference and of the loss with/without execution**
results in training stage.
|Include Actual Experiment execution results code execution for training in inference|GSM8K MATH SVAMP Mathematics Simuleq|Average|
|---|---|---|
|#1 Yes No|54.1 16.9 69.6 20.6 14.2|35.1|
|#2 Yes Yes|79.9 45.9 81.9 74.2 63.6 +25.8 +29.0 +12.3 +53.6 +49.4|69.1 +34.0|
|#3 No Yes|81.7 45.2 82.5 75.9 65.8 +1.8 -0.7 +0.6 +1.7 +2.1|70.2 +1.1|
A ANALYSIS IF CODE EXECUTION
**Analysis of actual code execution in the inference stage. We investigate the impact of code**
execution in the inference stage and report the results in Tab. 7. We conduct this investigation using
CodeLlama-34B as the base model and train the models on our 80k MathCodeInstruct dataset.
Tab. 7 (#1) and Tab. 7 (#2) use the same model, trained with the cross-entropy loss computed on
not only natural language and code, but also the execution results. In this way, this model learns to
predict the execution results. In Tab. 7 (#1), the code execution results are predicted by the model
itself, while in Tab. 7 (#2), the execution result is returned from a Python code interpreter. From
the comparison between Tab. 7 (#1) and Tab. 7 (#2), we can see that Tab. 7 (#2) outperforms Tab. 7
(#1) across all five datasets, showing an improvement of 34.0% in the average accuracy score. This
indicates that actual code execution in the inference stage has a significant impact on the model’s
performance. This study shows that the model failed to predict correct execution results for many
programs and that actually executing the code using an external tool can significantly improve the
accuracy while doing complex computations. This finding validates the significance of integrating
code execution when solving math problems with LLMs, in line with previous closed-source GPT-4
Code Interpreter (Zhou et al., 2023a).
**Analysis of execution results in the training stage. Based on the observation that actual code**
execution contributes a lot to the model’s performance, we investigate not forcing the model to
predict the correct execution result. Tab. 7 (#3) is the performance of MathCoder-CL-34B, which
ignores execution results when computing the loss, so that the model does not learn to estimate
the execution results and the learning task at the supervised fine-tuning stage becomes simpler.
Compared to Tab. 7 (#2), Tab. 7 (#3) improves the accuracy across four out of five datasets, resulting
in a rise in the average accuracy from 69.1% to 70.2%, which aligns with the hypothesis that by
computing the loss only on natural language and code, the model can focus more on the math
problem-solving skills itself, thus making the supervised fine-tuning more effective.
B ADDITIONAL EXPERIMENTS
In this section, we present the results of additional ablation studies in Tab. 5, Tab. 6, Tab. 8, and
Tab. 9.
B.1 COMPARISON BETWEEN LCE FORMAT AND NATURAL-LANGUAGE-ONLY OR
CODE-ONLY FORMAT
Tab. 5 and Tab. 6 compares our LCE solution with two other common solution formats: code-only
and natural-language-only. LCE solution produces the highest average accuracy, surpassing codeonly solution by 17.9%, and natural-language-only solution by 25.9%.
B.2 ANALYSIS OF USING SINGLE DATASET AND USING PROBLEM INTERPOLATION
Tab. 8 presents a comparison between using problems from both the GSM8K and MATH datasets for
problem interpolation, and using problems from only a single dataset for data augmentation. As can
be seen, experiments with problem interpolation produce an average accuracy that is 3.6 percentage
points higher than those without it.
-----
B.3 ANALYSIS OF USING DIFFERENT NUMBER OF PROBLEM INTERPOLATION SAMPLES
Tab. 9 demonstrates the impact of using different numbers of problem interpolation samples. As
presented in the table, the average accuracy continues to rise as the number of problem interpolation
samples increases.
**Table 8: Ablation study of problem interpolation using CodeLlama-7B.**
**Training Data** **GSM8K** **MATH** **SVAMP** **Mathematics** **SimulEq** **Average**
w/ interporation 67.8 30.2 70.7 55.8 49.6 54.8
w/o interporation 61.9 29.1 70.9 50.5 43.4 51.2 (-3.6)
**Table 9: Ablation study of different numbers of problem interpolation samples. 49k is the number of D0 data.**
0, 11k, 31k, and 51k denote different numbers of problem interpolation samples.
**Data Size** **GSM8K** **MATH** **SVAMP** **Mathematics** **SimulEq** **Average**
49k 50.6 22.9 53.2 46.0 29.6 40.5
49k+11k 56.4 26.8 64.9 47.6 40.7 47.3 (+6.8)
49k+31k 67.8 30.2 70.7 55.8 49.6 54.8 (+14.3)
49k+51k 68.0 32.6 70.9 60.1 52.7 56.9 (+16.4)
C DATASET EXAMPLES
In this part, we include two examples that show the process of creating MathCodeInstruct.
Fig. 6 shows an example with only one LCE block, while Fig. 7 shows an example with three LCE
blocks.
D SOLUTIONS OF PROBLEM INTERPOLATION SAMPLES GENERATED WITH
GPT4
To validate that replacing data generated by MathCoder-Initial with solutions generated by GPT4
can further improve accuracy, we generated solutions using GPT4 with additional funding, trained
the model, and presented the results in Tab. 10. As expected, employing GPT4-generated data led
to even better performance.
E EXAMPLES OF DIFFICULTY COMPARISON
We show five examples of using GPT-4 to evaluate the complexity of problems in
MathCodeInstruct. Fig. 8 and Fig. 9 are two examples that the newly generated interpolation
problems are more difficult than the origin GSM8K problems, and Fig. 10 is an example that the
origin MATH problem is more difficult than the newly generated interpolation problem. These two
situations are the most common (83.2% and 95.6%).
Fig. 11 shows an example that the newly generated interpolation problem ties with the origin
GSM8K problem, which situation accounts for 15.3% of all problems.
Fig. 12 shows an uncommon example that the origin GSM8K problem is slightly more difficult than
the newly generated interpolation problem according to GPT-4, which situation accounts for less
than 3% of all problems.
-----
**Table 10: Comparison between GPT4-generated data and MathCoder-Initial-generated data.**
|Base Model|Data Composition|GSM8K MATH SVAMP Mathematics SimulEq|Average|
|---|---|---|---|
|CodeLlama 7B|49k (GPT-4) + 31k (MathCoder-Initial)|67.8 30.2 70.7 55.8 49.6|54.8|
|---|---|---|---|
|CodeLlama 7B|80k (GPT-4)|68.4 31.2 76.3 61.6 52.5|58.0 (+3.2)|
|---|---|---|---|
|CodeLlama 34B|49k (GPT-4) + 31k (MathCoder-Initial)|81.7 45.2 82.5 75.9 65.8|70.2|
|---|---|---|---|
|CodeLlama 34B|80k (GPT-4)|82.2 47.6 84.1 79.2 69.7|72.6 (+2.4)|
|---|---|---|---|
F CREATING PROBLEMS USING ONLY GSM8K
Fig. 13, Fig. 14, Fig. 15, Fig. 16 and Fig. 17 are five examples that utilize problems from the train set
of GSM8K to generate new problems which are more difficult than the origin ones. Compared with
the problems generated by our interpolation method, we can see that the new problems generated in
this way are much more similar to the raw GSM8K problems, sometimes just changing the name
of some variables or scaling the value. These problems are only slightly more complicated than the
raw problems, if not equally difficult, and are still much simpler than those from the MATH dataset.
In contrast to using just GSM8K, introducing problems from the MATH dataset in the interpolation
method shows the model (GPT-4 here) a route to generate more challenging problems. Hence, the
newly generated problems are similar to the problems in the GSM8K and the problems in the MATH.
Consequently, these interpolation problems can narrow the difficulty gap between the two datasets.
G MORE EXPERIMENT RESULTS
We show the performance comparison of all MathCoders, MathCoder-L (7B, 13B, 70B) and
MathCoder-CL (7B, 13B, 34B), on the MATH dataset by levels and subjects in Fig. 18. We can
see that the improved accuracy from MathCoder-L to MathCoder-CL comes primarily from subjects
requiring precise calculations like algebra, number theory, and counting and probability.
H CASE STUDY WITH COT, POT AND LCE
We compare our LCE solutions with CoT solutions and PoT solutions. Fig. 19 is an example of a
problem in number theory, and Fig. 20 is an example of a problem in algebra. CoT and PoT failed
to solve the problem in both cases, but LCE succeeded.
Fig. 21, Fig. 22, and Fig. 23 are three solutions to one problem in geometry. The CoT solutions
successfully figured out the coordinates of D, C, and E but failed to calculate the area, while the PoT
could not interpret the conditions in the problem. Compared with them, we can see that our LCE
solutions not only can utilize the conditions in the problems correctly but also will not make errors
in calculation.
-----
(a) Interpolation Prompt:
Please create a new problem, following the given example, "Example 1" is an easy problem and
"Example 2" is much more difficulty than "Example 1", the new problem should be harder than
"Example 1" and simpler than "Example 2".
Example 1: Marissa is hiking a 12-mile trail. She took 1 hour to walk the first 4 miles, then another
hour to walk the next two miles. If she wants her average speed to be 4 miles per hour, what speed (in
miles per hour) does she need to walk the remaining distance?
Example 2: The two solutions of equation x[2] + bx + 48 = 0 are in the ratio of 3 to 1 for some values
of b. What is the largest possible value of b?
(b) According new problem generated by GPT-4:
A group of friends went on a road trip across three cities. In the first city, they spent half of the money
they had plus an additional $50. In the second city, they spent half of the remaining money plus an
additional $30. In the third city, they spent half of what was left plus an additional $20, leaving them
with $40. How much money did they start with before the road trip?
(c) Solution generated by MathCoder-Initial without extra prompt:
Let’s break this problem down step by step:
1. Let x be the amount of money they started with.
_x_
2. After visiting the first city, they spent _[x]2_ [+ 50][. So, the remaining amount is][ x][ −] 2 [+ 50] .
3. In the second city, they spent half of the remaining money plus $30. This means they spent
_x−(_ _[x]22[+50][)]_ + 30. After this expenditure, they would have x − _x2_ [+ 50] _−_ _x−(_ _[x]22 [+50][)]_ + 30 left.
4. In the third city, they spent half of what was left from the above amount plus $20. After this, they
were left with $40.
Using the above information, we can set up an equation to solve for x:
_x_ _x_ _x2_ [+ 50] _x −_ _x2_ [+ 50] _−_ _x−(_ _[x]22[+50][)]_ + 30
_x_ _−_ + 30 + 20
_−_ 2 [+ 50] _−_ 2 ! _−_ 2
[= 40]
Let’s solve this equation to find the value of x.
from sympy import symbols, Eq, solve
# Define the variable
x = symbols(’x’)
# Define the equation based on the problem statement
equation = Eq(x - (x/2 + 50) - (x - (x/2 + 50))/2 - 30 - (x - (x/2 + 50) - (x - (x/2 + 50))/2
- 30)/2 - 20, 40)
# Solve the equation
initial_money = solve(equation)[0]
> > > 700
The group of friends started with $700 before the road trip.
**Figure 6: An example of the process of creating MathCodeInstruct. Firstly, "Example 1" and "Example**
2" are randomly chosen from the train set of GSM8K and MATH respectively. Then a new problem is generated
by GPT-4 using the interpolation prompt. Finally, we use our initial MathCoder to generate LCE-style solution
for the new problem.
-----
(a) Interpolation Prompt:
Please create a new problem, following the given example, "Example 1" is an easy problem and
"Example 2" is much more difficulty than "Example 1", the new problem should be harder than
"Example 1" and simpler than "Example 2".
Example 1: A bus comes by Jerry’s bus stop every 20 minutes starting at exactly 5:13 a.m. If Jerry
shows up at exactly 8:35 a.m., how many minutes will he have to wait for the next bus?
Example 2: Bella’s grandfather is confined to a wheelchair. He is coming to visit her. Bella wants
to build a wheelchair ramp. Her research shows that there must be 3.5 meters of ramp for every 30
centimeters of elevation. The distance from the ground to the front doorstep of Bella’s house is 9 cm.
What must the length of the ramp be?
(b) According new problem generated by GPT-4:
Liam is a bird watcher and records the pattern of a particular species of bird migration. Every 15 days,
the birds fly south 250 miles and then fly back north 100 miles. If this pattern continues, how far south
from their original position will the birds be after 75 days?
(c) Solution generated by MathCoder-Initial without extra prompt:
Let’s solve the problem step by step:
1. Calculate the net distance the birds have moved from their original position in 15 days.
2. Use the net distance to determine how far the birds will have moved after 75 days.
Let’s start by calculating the net distance the birds fly south in 15 days. The net distance can be found
by subtracting the northward distance from the southward distance.
south_distance_15_days, north_distance_15_days = 250, 100
net_distance_15_days = south_distance_15_days - north_distance_15_days
> > > 150
The birds have moved a net distance of 150 miles south from their original position in 15 days.
Now, to determine how far the birds will have moved after 75 days, we need to:
Calculate how many 15-day cycles are within 75 days. Multiply the number of 15-day cycles by the
net distance moved in 15 days to get the total distance moved in 75 days. Let’s find out how many
15-day cycles are within 75 days.
total_days = 75
number_of_cycles = total_days // 15
> > > 5
There are 5 cycles of 15 days within the span of 75 days.
Now, let’s calculate the total distance the birds will have moved after 75 days by multiplying the
number of 15-day cycles by the net distance moved in one 15-day cycle.
total_distance_75_days = number_of_cycles*net_distance_15_days
> > > 750
After 75 days, the birds will be 750 miles south from their original position.
**Figure 7: An example of the process of creating MathCodeInstruct. Firstly, "Example 1" and "Example**
2" are randomly chosen from the train set of GSM8K and MATH respectively. Then a new problem is generated
by GPT-4 using the interpolation prompt. Finally, we use our initial MathCoder to generate LCE-style solution
for the new problem.
-----
(a) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Zain has 10 more of each coin than Emerie. If Emerie has six quarters, seven dimes, and
five nickels, how many coins does Zain have?"
Problem 2: "In a marathon, every runner must run through several checkpoints. In the first segment,
Angela runs 15 meters more than Miguel. For each subsequent segment, Angela runs 5 meters less
than she did in the previous segment, while Miguel runs consistently. If Miguel runs 50 meters in the
first segment, how many meters does Angela run in total when she completes 5 segments?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(b) Judgement from GPT-4:
Let’s break down both problems step by step to assess their complexity.
**Problem 1:**
Emerie has:
- Quarters: 6
- Dimes: 7
- Nickels: 5
Zain has 10 more of each coin than Emerie:
- Quarters: 6 + 10 = 16
- Dimes: 7 + 10 = 17
- Nickels: 5 + 10 = 15
To find out how many coins Zain has, we just need to add these numbers together:
16 + 17 + 15 = 48 coins
**Problem 2:**
Given:
- Miguel runs consistently at 50 meters per segment.
- Angela runs 50 + 15 = 65 meters in the first segment.
- Angela runs 5 meters less in each subsequent segment.
Using the concept of arithmetic series:
_Sn =_ _[n]2 [(2][a][1][ + (][n][ −]_ [1)][d][)]
For Angela, the first term, a1, is 65 meters (15 meters more than Miguel’s 50 meters), and the common
difference, d, is -5 meters, as she runs 5 meters less in each subsequent segment. She completes 5
segments, so n = 5.
Let’s plug in the values to find out how many meters Angela runs in total for the 5 segments:
_S5 = [5]_ = 275 meters
2 [(2][ ·][ 65 + (5][ −] [1)][ · −][5) = 5]2 [(130][ −] [20) = 5]2 [(110) = 550]2
**Conclusion:**
Problem 1 involves simple arithmetic and requires the solver to add a constant number of coins to the
known number of coins that Emerie has. There’s no series or sequential decrease/increase involved.
Problem 2, however, involves understanding an arithmetic progression. It needs understanding of
sequences and series, and it may require more steps to solve, such as identifying the first term, the
common difference, and then applying the formula to find the sum of the series. Therefore, based on
these considerations, it could be argued that "Problem 2" is more difficult.
**Figure 8: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct.**
"Problem 2" is in MathCodeInstruct and "Problem 1" is the problem from GSM8K that was used to
generate "Problem 2".
-----
(a) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Peter has 15 birds. 1/3 are ducks. The rest are chickens and require special feed that costs
$2 per bird. How much does it cost to feed the chickens?"
Problem 2: "In a rectangular garden, the length is 4 times the width. If the total area of the garden is
increased by 120 square feet by increasing the width by 5 feet while keeping the length constant, what
is the original area of the garden?"
You answer should be one of "Problem 1", "Problem 2", and "Tie".
(b) Judgement from GPT-4:
Let’s break down the two problems to understand the complexity of each.
**Problem 1:**
Peter has 15 birds, and 1/3 are ducks. So the number of chickens is [2]3
_[×][ 15 = 10][ chickens.]_
The cost to feed the chickens is $2 per chicken, so:
Cost = 10 chickens × $2/chicken = $20
**Problem 2:**
Let the original width of the garden be w and the length be 4w.
The original area of the garden is 4w[2].
If the width is increased by 5 feet, the new width is w + 5.
The new area is then 4w(w + 5) = 4w[2] + 20w.
Given the total area of the garden is increased by 120 square feet:
4w[2] + 20w − (4w[2]) = 120
20w = 120
_w = 6 feet_
So, the original area is:
4w[2] = 4(6)[2] = 144 square feet
**Comparison:**
Problem 1 involves basic arithmetic and a simple fraction, making it relatively straightforward.
Problem 2 involves solving a quadratic equation and requires a better understanding of algebraic
concepts and the area of rectangles.
Thus, Problem 2 is more difficult than Problem 1.
**Figure 9: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct.**
"Problem 2" is in MathCodeInstruct and "Problem 1" is the problem from GSM8K that was used to
generate "Problem 2".
-----
(a) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "he two solutions of the equation x[2] + bx + 48 = 0 are in the ratio of 3 to 1 for some
values of b. What is the largest possible value of b?"
Problem 2: "In a rectangular garden, the length is 4 times the width. If the total area of the garden is
increased by 120 square feet by increasing the width by 5 feet while keeping the length constant, what
is the original area of the garden?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(b) Judgement from GPT-4:
The two problems provided, at their core, involve algebraic manipulation, but Problem 1 involves
quadratic equations and a bit more conceptual understanding of quadratic roots, whereas Problem 2 is
more straightforward and involves linear equations. Based on this, it can be said that "Problem 1" is
more difficult compared to "Problem 2". Here’s a brief explanation and solution for both problems to
illustrate this point:
**Problem 1:**
Given that the roots of x[2] + bx + 48 = 0 are in the ratio 3 : 1, let the roots be 3k and k. The sum of
the roots is 4k, and the product of the roots is 48.
Using Vieta’s formulas:
- 3k + k = 4k = −b
- 3k ∗ _k = 3k[2]_ = 48
Solving for k in the second equation:
_k[2]_ = 16 =⇒ _k = ±4_
So, the roots are 12 and 4 or −12 and −4. Thus, the possible values of b are −16 or 16.
So the largest possible value of b is 16.
**Problem 2:**
Let the width of the rectangular garden be w, then the length would be 4w.
The original area is:
_Aoriginal = l ∗_ _w = 4w ∗_ _w = 4w[2]_
If the width is increased by 5 feet, the new width is w + 5, and the new area is:
_Anew = l ∗_ (w + 5) = 4w ∗ (w + 5) = 4w[2] + 20w
Given that the total area of the garden is increased by 120 square feet:
_Anew = Aoriginal + 120_
4w[2] + 20w = 4w[2] + 120
Solving for w:
20w = 120 =⇒ _w = 6_
So, the original area of the garden is:
_Aoriginal = 4w[2]_ = 4 ∗ 36 = 144 square feet
Problem 1 requires an understanding of quadratic equations, Vieta’s formulas, and consideration of
multiple possibilities, making it more challenging compared to Problem 2, which involves simpler
algebraic manipulations with linear equations.
**Figure 10: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct.**
"Problem 2" is in MathCodeInstruct and "Problem 1" is the problem from MATH that was used to
generate "Problem 2".
-----
(a) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Kara mixes cups of sugar and water in the ratio of 1:2 to make one liter of lemonade.
Calculate the total number of cups of sugar used if she used 84 cups to make lemonade."
Problem 2: "A cylindrical water tank has two compartments. The first compartment is half the height
of the second compartment. When filled, the first compartment holds 60 liters of water. If a pipe
pumps water into the tank at a rate of 10 liters per minute and starts filling the second compartment,
how many minutes will it take to fill the entire tank?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(b) Judgement from GPT-4:
Let’s solve each problem one by one and then determine which one is more difficult.
**Problem 1:**
Kara mixes cups of sugar and water in the ratio of 1 : 2.
So, for every 1 cup of sugar, there are 2 cups of water.
The ratio of sugar to the total mixture is:
Sugar : Total = 1 : (1 + 2) = 1 : 3
She used 84 cups to make lemonade, so:
Cups of sugar used = [1]
3
_[×][ 84 = 28][ cups]_
**Problem 2:**
The first compartment is half the height of the second compartment but holds 60 liters of water.
So, the second compartment will hold 2 × 60 liters of water, which is 120 liters of water.
The total volume of the tank, including both compartments, is:
60 liters + 120 liters = 180 liters
The pipe pumps water at a rate of 10 liters per minute, so to fill the entire tank, it will take:
180 liters
Time =
10 liters/min [= 18][ minutes]
**Conclusion:**
Both problems can be solved with basic arithmetic and understanding of ratios, and neither requires
advanced mathematical concepts. Therefore, it can be considered a "Tie" in terms of difficulty.
**Figure 11: An example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct.**
"Problem 2" is in MathCodeInstruct, and "Problem 1" is the problem from GSM8K that was used to
generate "Problem 2". It is an example of a tie.
-----
(a) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "When Jeffrey walks, for every three steps forward, he takes two steps backwards.
Therefore, if the distance between the house and the mailbox is 66 steps, what is the total number of
steps Jeffrey takes when he goes from the house to the mailbox?"
Problem 2: "For every four apples Jake eats, he plants three apple seeds. If at the end of the month he
has consumed 24 apples, how many seeds has he planted?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(b) Judgement from GPT-4:
Let’s analyze each problem step by step:
**Problem 1:**
Every 3 steps forward and 2 steps back mean Jeffrey effectively moves 1 step for every 5 steps he
takes (3 steps forward - 2 steps back = 1 step net movement).
To cover 66 steps:
Steps needed = (Total distance / Net movement per cycle) × Steps per cycle
Steps needed = (66 steps / 1 step) × 5 = 330 steps
**Problem 2:**
For every 4 apples Jake eats, he plants 3 seeds.
So, for 24 apples:
Seeds planted = (24 apples / 4) × 3 seeds = 6 × 3 = 18 seeds
To determine which is more difficult, it seems both problems involve similar basic math operations
(division and multiplication).
However, the first problem introduces an extra layer of complexity with the two different types of
movements (forward and backward) compared to the direct ratio in the second problem.
Given this analysis, Problem 1 seems slightly more difficult than Problem 2 due to this added layer of
understanding needed to solve it.
**Figure 12: A example of using GPT-4 to evaluate the complexity of problems in MathCodeInstruct and**
it is an uncommon example that the problem from GSM8K is slightly more difficult than the interpolation
problem. "Problem 2" is in MathCodeInstruct and "Problem 1" is the problem from GSM8K that was
used to generate "Problem 2".
-----
(a) Prompt:
Please create a new problem, following the given "Example" and the new problem should be harder
than the "Example".
Example: "Josh’s mom gives him $20 to go shopping at the mall. He buys a hat for $10 and a pencil
for $2. Then he buys four cookies. If each cookie costs $1.25, how much money does Josh have left?"
(b) According new problem generated by GPT-4:
Rebecca’s dad gives her $50 to spend at the amusement park. She buys a ticket for $25 and a souvenir
for $6. After riding several rides, she decides to buy six balloons for her friends. If each balloon costs
$2.50, how much money does Rebecca have left?
(c) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Josh’s mom gives him $20 to go shopping at the mall. He buys a hat for $10 and a pencil
for $2. Then he buys four cookies. If each cookie costs $1.25, how much money does Josh have left?"
Problem 2: "Rebecca’s dad gives her $50 to spend at the amusement park. She buys a ticket for $25
and a souvenir for $6. After riding several rides, she decides to buy six balloons for her friends. If
each balloon costs $2.50, how much money does Rebecca have left?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(d) Judgement from GPT-4:
Let’s solve both problems step-by-step:
**Problem 1: Josh’s initial amount: $20**
He buys a hat: $20 - $10 = $10 left
He buys a pencil: $10 - $2 = $8 left
Cost of four cookies = 4 x $1.25 = $5
Amount after buying cookies: $8 - $5 = $3
So, Josh has $3 left.
**Problem 2: Rebecca’s initial amount: $50**
She buys a ticket: $50 - $25 = $25 left
She buys a souvenir: $25 - $6 = $19 left
Cost of six balloons = 6 x $2.50 = $15
Amount after buying balloons: $19 - $15 = $4
So, Rebecca has $4 left.
In terms of difficulty, both problems involve subtraction and multiplication. However, Problem 2 has
slightly larger numbers and one more multiplication operation than Problem 1. Hence, you could argue
that:
**Answer: Problem 2 is more difficult than Problem 1.**
**Figure 13: An example of using GPT-4 to create problems based only on the problems from GSM8K and then**
evaluate the complexity of the newly generated problems. "Problem 2" is the new problem, and "Problem 1" is
the problem from GSM8K that was used to generate "Problem 2".
-----
(a) Prompt:
Please create a new problem, following the given "Example" and the new problem should be harder
than the "Example".
Example: "Kara mixes cups of sugar and water in the ratio of 1:2 to make one liter of lemonade.
Calculate the total number of cups of sugar used if she used 84 cups to make lemonade."
(b) According new problem generated by GPT-4:
Liam mixes cups of honey, water, and lemon juice in the ratio of 2:5:3 to make one gallon of honey
lemonade. Calculate the total number of cups of honey used if he used 120 cups to make honey
lemonade.
(c) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Kara mixes cups of sugar and water in the ratio of 1:2 to make one liter of lemonade.
Calculate the total number of cups of sugar used if she used 84 cups to make lemonade."
Problem 2: "Liam mixes cups of honey, water, and lemon juice in the ratio of 2:5:3 to make one gallon
of honey lemonade. Calculate the total number of cups of honey used if he used 120 cups to make
honey lemonade."
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(d) Judgement from GPT-4:
Let’s solve each problem to determine which one is more difficult:
**Problem 1:**
**If the ratio of sugar to water is 1:2, then for every 3 cups (1 cup of sugar + 2 cups of water), 1 liter of**
lemonade is made.
**So, if she used 84 cups to make lemonade, then:**
Number of liters of lemonade made = 84 cups / 3 = 28 liters
**Therefore, the total number of cups of sugar used is:**
28 liters * 1 cup of sugar = 28 cups of sugar
**Problem 2:**
**If the ratio of honey to water to lemon juice is 2:5:3, then for every 10 cups (2 cups of honey + 5 cups**
of water + 3 cups of lemon juice), 1 gallon of honey lemonade is made.
**So, if he used 120 cups to make honey lemonade, then:**
Number of gallons of honey lemonade made = 120 cups / 10 = 12 gallons
**Therefore, the total number of cups of honey used is:**
12 gallons * 2 cups of honey = 24 cups of honey
**Comparison:**
Both problems involve a similar approach to solve. Both require understanding ratios and
proportionally dividing the given total cups by the sum of the ratio’s components.
Verdict: Tie. Both problems are of similar difficulty.
**Figure 14: An example of using GPT-4 to create problems based only on the problems from GSM8K and then**
evaluate the complexity of the newly generated problems. "Problem 2" is the new problem, and "Problem 1" is
the problem from GSM8K that was used to generate "Problem 2".
-----
(a) Prompt:
Please create a new problem, following the given "Example" and the new problem should be harder
than the "Example".
Example: "Hillary always buys the Wednesday, Thursday and Friday editions of the local newspaper
for $0.50 each. On Sunday, she spends $2.00 to get that copy. How much does she spend on the
newspaper over 8 weeks?"
(b) According new problem generated by GPT-4:
Joseph purchases his favorite magazines on Monday, Tuesday, and Saturday every week. The Monday
edition costs $1.25, the Tuesday edition costs $1.50, and the Saturday edition costs $2.75. Additionally,
he buys a special weekly edition every Sunday for $3.50. How much does Joseph spend on purchasing
his favorite magazines over 12 weeks?
(c) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Hillary always buys the Wednesday, Thursday and Friday editions of the local newspaper
for $0.50 each. On Sunday, she spends $2.00 to get that copy. How much does she spend on the
newspaper over 8 weeks?"
Problem 2: "Joseph purchases his favorite magazines on Monday, Tuesday, and Saturday every week.
The Monday edition costs $1.25, the Tuesday edition costs $1.50, and the Saturday edition costs
$2.75. Additionally, he buys a special weekly edition every Sunday for $3.50. How much does Joseph
spend on purchasing his favorite magazines over 12 weeks?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(d) Judgement from GPT-4:
Let’s break down each problem:
**Problem 1:**
1. Hillary buys the newspaper on Wednesday, Thursday, and Friday for $0.50 each. So, the cost for
these 3 days = 3 * $0.50 = $1.50
2. On Sunday, she spends $2.00.
3. The total she spends in a week = $1.50 + $2.00 = $3.50
4. Over 8 weeks, she spends = 8 * $3.50 = $28.00
**Problem 2:**
1. Joseph buys magazines on Monday for $1.25, Tuesday for $1.50, and Saturday for $2.75. So, the
cost for these 3 days = $1.25 + $1.50 + $2.75 = $5.50
2. Additionally, he buys a special edition on Sunday for $3.50.
3. The total he spends in a week = $5.50 + $3.50 = $9.00
4. Over 12 weeks, he spends = 12 * $9.00 = $108.00
In terms of mathematical complexity, both problems involve the same operations: multiplication and
addition. However, the second problem has more purchases and higher values involved, which could
make it slightly more challenging for some people to solve. Also, the fact that Joseph’s purchases
extend over 12 weeks, as opposed to Hillary’s 8 weeks, adds a layer of complexity.
So, based on the given information, the answer would be: Problem 2.
**Figure 15: An example of using GPT-4 to create problems based only on the problems from GSM8K and then**
evaluate the complexity of the newly generated problems. "Problem 2" is the new problem, and "Problem 1" is
the problem from GSM8K that was used to generate "Problem 2".
-----
(a) Prompt:
Please create a new problem, following the given "Example" and the new problem should be harder
than the "Example".
Example: "Peter has 15 birds. 1/3 are ducks. The rest are chickens and require special feed that costs
$2 per bird. How much does it cost to feed the chickens?"
(b) According new problem generated by GPT-4:
Maria has 36 animals. 1/4 of them are rabbits. The rest are cats, and each cat needs a special food that
costs $3 per cat per day. If Maria feeds the cats for 5 days, how much will she spend on cat food?
(c) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Peter has 15 birds. 1/3 are ducks. The rest are chickens and require special feed that costs
$2 per bird. How much does it cost to feed the chickens?"
Problem 2: "Maria has 36 animals. 1/4 of them are rabbits. The rest are cats, and each cat needs a
special food that costs $3 per cat per day. If Maria feeds the cats for 5 days, how much will she spend
on cat food?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(d) Judgement from GPT-4:
Let’s break down the steps required to solve each problem:
**Problem 1:**
1. Calculate the number of ducks: 15 × [1]3 [= 5][.]
2. Subtract the number of ducks from the total to find the number of chickens: 15 − 5 = 10.
3. Calculate the cost to feed the chickens: 10 × $2 = $20.
**Problem 2:**
1. Calculate the number of rabbits: 36 × 4[1] [= 9][.]
2. Subtract the number of rabbits from the total to find the number of cats: 36 − 9 = 27.
3. Calculate the daily cost to feed the cats: 27 × $3 = $81.
4. Multiply the daily cost by 5 days: 5 × $81 = $405.
While both problems require a similar initial approach of finding the number of animals not explicitly
stated, Problem 2 has an additional step of multiplying by the number of days to get the final cost.
Based on the number of steps and calculations required, Problem 2 is slightly more difficult than
Problem 1.
**Figure 16: An example of using GPT-4 to create problems based only on the problems from GSM8K and then**
evaluate the complexity of the newly generated problems. "Problem 2" is the new problem, and "Problem 1" is
the problem from GSM8K that was used to generate "Problem 2".
-----
(a) Prompt:
Please create a new problem, following the given "Example" and the new problem should be harder
than the "Example".
Example: "Zain has 10 more of each coin than Emerie. If Emerie has six quarters, seven dimes, and
five nickels, how many coins does Zain have?"
(b) According new problem generated by GPT-4:
Xavier has 15 more of each coin than Elara. If Elara has nine quarters, eleven dimes, eight nickels,
and three pennies, how many coins does Xavier have?
(c) Evaluation Prompt:
Which problem is more difficult?
Problem 1: "Zain has 10 more of each coin than Emerie. If Emerie has six quarters, seven dimes, and
five nickels, how many coins does Zain have?"
Problem 2: "Xavier has 15 more of each coin than Elara. If Elara has nine quarters, eleven dimes,
eight nickels, and three pennies, how many coins does Xavier have?"
You answer should be one of "Problem 1", "Problem 2" and "Tie".
(d) Judgement from GPT-4:
Let’s break down each problem step by step:
**Problem 1:**
Emerie has: 6 quarters, 7 dimes, 5 nickels
For each type of coin, Zain has 10 more than Emerie.
Thus:
- Zain has 6 + 10 = 16 quarters
- Zain has 7 + 10 = 17 dimes
- Zain has 5 + 10 = 15 nickels
Total number of coins Zain has = 16 + 17 + 15 = 48 coins.
**Problem 2:**
Elara has: 9 quarters, 11 dimes, 8 nickels, 3 pennies
For each type of coin, Xavier has 15 more than Elara.
Thus:
- Xavier has 9 + 15 = 24 quarters
- Xavier has 11 + 15 = 26 dimes
- Xavier has 8 + 15 = 23 nickels
- Xavier has 3 + 15 = 18 pennies
Total number of coins Xavier has = 24 + 26 + 23 + 18 = 91 coins.
Now, let’s evaluate the complexity:
Problem 1 has 3 types of coins and a difference of 10 for each coin.
Problem 2 has 4 types of coins and a difference of 15 for each coin.
Considering that "more difficult" might mean more steps or operations to solve, Problem 2 has more
types of coins and a larger difference, so it requires slightly more computation.
Answer: Problem 2 is more difficult.
**Figure 17: An example of using GPT-4 to create problems based only on the problems from GSM8K and then**
evaluate the complexity of the newly generated problems. "Problem 2" is the new problem, and "Problem 1" is
the problem from GSM8K that was used to generate "Problem 2".
-----
|0.72|0.51|0.4|0.27|0.13|
|---|---|---|---|---|
|0.52|0.53|0.41|0.3|0.14|
|0.57|0.28|0.26|0.15|0.05|
|0.51|0.46|0.17|0.08|0.07|
|0.32|0.15|0.09|0.04|0|
|0.29|0.21|0.17|0.08|0.04|
|0.29|0.3|0.16|0.13|0.01|
|0.76|0.61|0.49|0.37|0.21|
|---|---|---|---|---|
|0.69|0.64|0.49|0.38|0.22|
|0.67|0.43|0.32|0.29|0.17|
|0.59|0.5|0.19|0.15|0.05|
|0.4|0.21|0.12|0.1|0.03|
|0.35|0.3|0.18|0.13|0.09|
|0.29|0.35|0.27|0.1|0.01|
|0.77|0.63|0.5|0.4|0.21|
|---|---|---|---|---|
|0.59|0.56|0.5|0.42|0.19|
|0.77|0.41|0.34|0.2|0.16|
|0.67|0.48|0.24|0.14|0.06|
|0.42|0.21|0.13|0.09|0|
|0.4|0.25|0.18|0.14|0.06|
|0.37|0.39|0.2|0.17|0.02|
|0.83|0.67|0.57|0.45|0.29|
|---|---|---|---|---|
|0.73|0.69|0.52|0.45|0.24|
|0.77|0.49|0.42|0.32|0.21|
|0.74|0.58|0.22|0.16|0.18|
|0.44|0.34|0.23|0.15|0.04|
|0.44|0.38|0.24|0.17|0.1|
|0.24|0.48|0.25|0.18|0.05|
|0.9|0.77|0.75|0.6|0.41|
|---|---|---|---|---|
|0.77|0.81|0.68|0.53|0.34|
|0.87|0.49|0.64|0.46|0.26|
|0.77|0.58|0.43|0.38|0.2|
|0.46|0.46|0.35|0.18|0.01|
|0.62|0.38|0.31|0.2|0.13|
|0.39|0.49|0.42|0.23|0.08|
|0.93|0.78|0.7|0.62|0.39|
|---|---|---|---|---|
|0.83|0.8|0.66|0.55|0.33|
|0.83|0.58|0.57|0.49|0.36|
|0.77|0.66|0.51|0.27|0.26|
|0.61|0.42|0.36|0.2|0.1|
|0.48|0.35|0.28|0.19|0.1|
|0.39|0.44|0.36|0.22|0.05|
|0.92|0.76|0.73|0.62|0.37|
|---|---|---|---|---|
|0.81|0.79|0.68|0.57|0.38|
|0.8|0.64|0.61|0.46|0.31|
|0.74|0.62|0.49|0.31|0.22|
|0.6|0.48|0.28|0.17|0.07|
|0.58|0.45|0.32|0.22|0.11|
|0.5|0.5|0.44|0.2|0.07|
|0.44|0.32|0.27|0.17|0.07|
|---|---|---|---|---|
|0.53|0.45|0.34|0.21|0.11|
|0.3|0.22|0.13|0.09|0.04|
|0.49|0.37|0.25|0.1|0.03|
|0.33|0.12|0.02|0.03|0.01|
|0.25|0.15|0.07|0.02|0.02|
|0.45|0.26|0.14|0.06|0.01|
MathCoder-L-7B MathCoder-CL-7B
(Overall: 0.233) (Overall: 0.3024)
algebra 0.72 0.51 0.4 0.27 0.13 0.76 0.61 0.49 0.37 0.21
prealgebra 0.52 0.53 0.41 0.3 0.14 0.69 0.64 0.49 0.38 0.22
1.0
numbertheory 0.57 0.28 0.26 0.15 0.05 0.67 0.43 0.32 0.29 0.17
counting andprobability 0.51 0.46 0.17 0.08 0.07 0.59 0.5 0.19 0.15 0.05
precalculus 0.32 0.15 0.09 0.04 0 0.4 0.21 0.12 0.1 0.03
intermediatealgebra 0.29 0.21 0.17 0.08 0.04 0.35 0.3 0.18 0.13 0.09
geometry 0.29 0.3 0.16 0.13 0.01 0.29 0.35 0.27 0.1 0.01
MathCoder-L-13B MathCoder-CL-13B 0.8
(Overall: 0.2992) (Overall: 0.3588)
algebra 0.77 0.63 0.5 0.4 0.21 0.83 0.67 0.57 0.45 0.29
prealgebra 0.59 0.56 0.5 0.42 0.19 0.73 0.69 0.52 0.45 0.24
numbertheory 0.77 0.41 0.34 0.2 0.16 0.77 0.49 0.42 0.32 0.21
counting andprobability 0.67 0.48 0.24 0.14 0.06 0.74 0.58 0.22 0.16 0.18
precalculus 0.42 0.21 0.13 0.09 0 0.44 0.34 0.23 0.15 0.04
intermediatealgebra 0.4 0.25 0.18 0.14 0.06 0.44 0.38 0.24 0.17 0.1 0.6
geometry 0.37 0.39 0.2 0.17 0.02 0.24 0.48 0.25 0.18 0.05
MathCoder-L-70B MathCoder-CL-34B
(Overall: 0.442) (Overall: 0.452)
algebra 0.9 0.77 0.75 0.6 0.41 0.93 0.78 0.7 0.62 0.39
prealgebra 0.77 0.81 0.68 0.53 0.34 0.83 0.8 0.66 0.55 0.33
numbertheory 0.87 0.49 0.64 0.46 0.26 0.83 0.58 0.57 0.49 0.36 0.4
counting andprobability 0.77 0.58 0.43 0.38 0.2 0.77 0.66 0.51 0.27 0.26
precalculus 0.46 0.46 0.35 0.18 0.01 0.61 0.42 0.36 0.2 0.1
intermediatealgebra 0.62 0.38 0.31 0.2 0.13 0.48 0.35 0.28 0.19 0.1
geometry 0.39 0.49 0.42 0.23 0.08 0.39 0.44 0.36 0.22 0.05
MathCode-CL-34B with loss on execution results MathCode-CL-34B without code execution 0.2
(Overall: 0.459) (Overall: 0.169)
algebra 0.92 0.76 0.73 0.62 0.37 0.44 0.32 0.27 0.17 0.07
prealgebra 0.81 0.79 0.68 0.57 0.38 0.53 0.45 0.34 0.21 0.11
numbertheory 0.8 0.64 0.61 0.46 0.31 0.3 0.22 0.13 0.09 0.04
counting andprobability 0.74 0.62 0.49 0.31 0.22 0.49 0.37 0.25 0.1 0.03
precalculus 0.6 0.48 0.28 0.17 0.07 0.33 0.12 0.02 0.03 0.01
intermediatealgebra 0.58 0.45 0.32 0.22 0.11 0.25 0.15 0.07 0.02 0.02 0.0
geometry 0.5 0.5 0.44 0.2 0.07 0.45 0.26 0.14 0.06 0.01
L1 L2 L3 L4 L5 L1 L2 L3 L4 L5
**Figure 18: Performance comparison of MathCoeder-L (7B, 13B, 70B) and MathCoder-CL (7B, 13B, 34B)**
on the MATH dataset by levels and subjects. The improved accuracy from MathCoder-L to MathCoder-CL
comes primarily from subjects that require precise calculations like algebra, number theory, and counting and
probability.
-----
Problem: MATH/test/number-theory/1191
<|user|><|text|> If n is an integer, 1 ≤ _n ≤_ 2010, how many fractions 2010n[2] [yield repeating decimals?][ <][|][endofblock][|][><][|][endofmessage][|][>]
(a) CoT solution for MATH/test/number-theory/1191 (c) LCE solution for MATH/test/number-theory/1191
To determine whether a fraction _[a]b_ [produces a repeating decimal,] < assistant > < text > Alright, for a fraction to yield a repeating
you need to look at the denominator, b. A fraction will produce a decimal, its denominator (after simplification) should contain prime| _|_ _|_ _|_
repeating decimal if and only if, after reducing it to simplest form, factors other than 2 and 5.
the denominator has a prime factor other than 2 or 5.
Let’s solve the problem step by step: So, to solve this problem, we need to follow these steps:
**Step 1: Prime Factorization of 2010**
1. Factorize 2010 to get its prime factors.
2010 = 2 3 5 67
_×_ _×_ _×_ 2. For each integer n from 1 to 2010, compute n[2].
**Step 2: Repeating Decimal Criteria For** 2010n[2] [to be a repeating] 3. Divide4. Check if the simplified denominator has prime factors other than 2 n[2] by 2010 and simplify the fraction.
decimal, the denominator, after reducing the fraction to the simplest and 5.
form, must have a prime factor other than 2 or 5.
**Step 3: Counting Possible Values of n To get a denominator with** If the denominator after simplification has only 2’s and 5’s as its
a prime factor other than 2 or 5 in reduced form, n should not be prime factors, then it will yield a terminating decimal. Otherwise, it
divisible by 3, 67, or any multiple of them. So, we can find the will yield a repeating decimal.
possible values of n as follows:
between 1 and 2010.a) There are 2010/3 = 670 numbers that are multiples of 3 Let’s proceed with these steps. <|endofblock|>
between 1 and 2010.b) There are 2010/67 = 30 numbers that are multiples of 67 <|code|>
201c) There are overlapping numbers which are multiples of. There are 2010/201 = 10 such numbers. 3 × 67 = from math import gcd
**Step 4:** Applying the Inclusion-Exclusion Principle To avoid # Step 1: Factorize 2010 to get its prime
counting the overlapping numbers twice, we apply the inclusion- factors.\\
exclusion principle: def prime_factors(n):\\
i = 2\\
Count = 670 + 30 − 10 = 700 − 10 = 690 factors = {}\\
**Step 5: Subtract from Total Possible Values of n There are 2010** while i * i <= n:\\
possible values for n in total. To find the count of n yielding if n % i:
repeating decimals: i += 1
else:
2010 − 690 = 1320 n //= ifactors[i] = factors.get(i, 0) +
So, there are 1320 ✗ fractions 2010n[2] [that yield repeating decimals.] if n > 1: 1
factors[n] = factors.get(n, 0) + 1
return factors
(b) PoT solution for MATH/test/number-theory/1191
factors_2010 = prime_factors(2010)
from sympy.ntheory import factorint
# Step 2, 3 and 4
# Given range count_repeating = 0
n_range = range(1, 2011) for n in range(1, 2011):
# Given denominator n_square = n**2
denominator = 2010 common_factor = gcd(n_square, 2010)
# Initialize counter for fractions with denominator = 2010 // common_factor
repeating decimals
repeating_decimal_count = 0 # Check if the simplified denominator has
# Factor the denominator to find the prime prime factors other than 2 and 5
factorization factors_denominator = prime_factors(
denominator_factors = factorint(denominator) denominator)
# Loop over the range to find the fractions if 2 in factors_denominator:
with repeating decimals del factors_denominator[2]
for n in n_range: if 5 in factors_denominator:
# Factor the numerator del factors_denominator[5]
numerator_factors = factorint(n**2)
# Check if each factor of the denominator if factors_denominator: # if there are
is present in the numerator with at other prime factors left
least the same power count_repeating += 1
repeating_decimal = any(
denominator_factors[p] > count_repeating
numerator_factors.get(p, 0) for p in < endofblock >
denominator_factors) _|_ _|_
# If the fraction has repeating decimal, < execution > 2000 < endofblock >
increase the counter _|_ _|_ _|_ _|_
if repeating_decimal:
repeating_decimal_count += 1 <|text|> There are 2000 ✓ fractions of the form 2010n[2]
repeating_decimal_count that yield repeating decimals when 1 _≤_ _n_ _≤_ 2010.
> > > 2009 ✗ <|endofblock|><|endofmessage|>
**Figure 19: Example of CoT, PoT and LCE solution with special token. The problem is from the test set of**
MATH in number theory with id 1191. In contrast to CoT, which consists solely of natural language, and PoT,
which consists solely of code, our LCE solution intertwines natural language, code, and execution results.
-----
Problem: MATH/test/algebra/2477
<|user|><|text|> Consider the infinite arithmetic sequence A with first term 5 and common difference −2. Now define the infinite sequence B so that
the k[th] term of B is 2 raised to the k[th] term of A. Find the sum of all of the terms of B. <|endofblock|><|endofmessage|>
(a) CoT solution for MATH/test/algebra/2477 (c) LCE solution for MATH/test/algebra/2477
Alright. Let’s start by understanding both sequences step by step.
< assistant > < text > Sure. Let’s solve the problem step by step.
**Sequence A** **Step 1: Define Sequence|** _|_ _|_ _|_ _A Given that the first term of sequence A_
This is an infinite arithmetic sequence. The first term,the common difference, d, is -2. _a1, is 5, and_ is 5 and the common difference is -2, the k[th] term of A, denoted as
So, we can describe the k[th] term of A as: _Ak, can be found using the formula:_
_ak = 5 −_ 2(k − 1) _Ak = 5 −_ 2(k − 1)
_ak = 5 −_ 2k + 2 _Ak = 7 −_ 2k
_ak = 7 −_ 2k **Step 2: Define Sequence B The k[th]** term of sequence B, denoted
**Sequence B** as Bk, is defined as 2 raised to the k[th] term of A:
The sequence B is defined such that the k[th] term of B is 2 raised
to the k[th] term of A: _Bk = 2[Ak]_
_bk = 2[ak]_ = 2[7][−][2][k] Substitute the value of Ak into the equation for Bk:
**Sum of SequenceSince B is an infinite sequence, we are to find the sum of all terms of B** _Bk = 2[7][−][2][k]_
_B:_
**Step 3: Find Sum of Sequence B To find the sum of all terms of**
sequence B, we need to find the sum to infinity of this sequence.
_SB = 2[7]_ + 2[5] + 2[3] + 2[1] + · · · Since the terms of sequence B are getting smaller (because 2[7][−][2][k]
To find SB, notice that it is a geometric series with the first term 2[7] decreases as k increases), this is a geometric series.
and common ratioThe sum of an infinite geometric series can be found using the 2[−][2] = 4[1] [.] The sum of an infinite geometric series can be found using theformula: _a_
formula: _S =_
_a_ 1 − _r_
_S =_ Where:
1 − _r_ - S is the sum of the series,
where a is the first term and r is the common ratio. - a is the first term of the series,
Substitute the first term 2[7] and the common ratio 4[1] [into the formula:] Let’s find the first term and common ratio of the sequence- r is the common ratio of the series. _B and then_
_SB =_ 1 −2[7] 4[1] = 234[7] = 2[7] 1· [4]3 = 23[9] use them to find the sum of sequence<|endofblock|> _B._
Thus, the sum of all the terms of sequence B is [29]3 [.][✗] <|code|>
# Given values
first_term_A = 5
common_difference_A = -2
(b) PoT solution for MATH/test/algebra/2477 base_B = 2
# Given first term and common difference of # Calculating the first term of sequence B
sequence A first_term_B = base_B ** first_term_A
first_term_A = 5
common_diff_A = -2 # Calculating the common ratio of sequence B
# For k=1, A_k = 5, B_1 = 2^5
# Initialize sum of sequence B # For k=2, A_k = 3, B_2 = 2^3 => B_2 = B_1 /
sum_B = 0 2^2
common_ratio_B = base_B ** (-2 * 1) # k=1, so
# Loop over the terms of sequence A and 2*1=2
calculate corresponding terms of
sequence B, # Calculating the sum of sequence B
# until the term of sequence A is non- sum_B = first_term_B / (1 - common_ratio_B)
positive. first_term_B, common_ratio_B, sum_B
kth_term_A = first_term_A
while kth_term_A > 0: <|endofblock|>
kth_term_B = 2 ** kth_term_A
sum_B += kth_term_Bkth_term_A += common_diff_A <|execution|> (32, 0.25, 42.666666666666667) <|endofblock|>
sum_B <|text|>
**Step 3: Results Thus, the sum of all of the terms of sequence B is**
> > > 42 ✗ approximately 42.67.✓ <|endofblock|><|endofmessage|>
**Figure 20: Example of CoT, PoT and LCE solution with special token. The problem is from the test set of**
MATH in algebra with id 2477. In contrast to CoT, which consists solely of natural language, and PoT, which
consists solely of code, our LCE solution intertwines natural language, code, and execution results.
-----
Problem: MATH/test/geometry/500
<|user|><|text|> Points A(0, 0), B(9, 6) and C(6, 12) are vertices of triangle ABC. Point D is on segment AB such that 2(AD) = DB,
point E is on segment BC such that 2(BE) = EC and point F is on segment CA such that 2(CF ) = F A. What is the ratio of the area of
triangle DEF to the area of triangle ABC? Express your answer as a common fraction. <|endofblock|><|endofmessage|>
CoT solution for MATH/test/geometry/500
To find the coordinates of points D, E, and F, we can use the section formula. The section formula states that if a line divides two points P (x1, y1)
and Q(x2, y2) in the ratio m : n, then the coordinates (x, y) of the point dividing the line segment are given by:
_mx2 + nx1_
_x =_
_m + n_
_my2 + ny1_
_y =_
_m + n_
**Find coordinates of D**
_D divides AB in the ratio 2 : 1. Here, A(0, 0) and B(9, 6). So,_
_xD =_ 2 · 9 + 1 · 0 = 6
2 + 1
_yD =_ 2 · 6 + 1 · 0 = 4
2 + 1
So, D(6, 4).
**Find coordinates of E**
_E divides BC in the ratio 2 : 1. Here, B(9, 6) and C(6, 12). So,_
_xE =_ 2 · 6 + 1 · 9 = 7
2 + 1
_yE =_ 2 · 12 + 1 · 6 = 10
2 + 1
So, E(7, 10).
**Find coordinates of F**
_F divides CA in the ratio 2 : 1. Here, C(6, 12) and A(0, 0). So,_
_xF =_ 2 · 0 + 1 · 6 = 2
2 + 1
_yF =_ 2 · 0 + 1 · 12 = 4
2 + 1
So, F (2, 4).
Find the area ratio of △DEF to △ABC The area of a triangle formed by three vertices P (x1, y1), Q(x2, y2), and R(x3, y3) is given by:
1
Area = 2 _|x1(y2 −_ _y3) + x2(y3 −_ _y1) + x3(y1 −_ _y2)|_
**1. Area of △ABC:** 1
AreaABC = 2 _|0 · (6 −_ 12) + 9 · (12 − 0) + 6 · (0 − 6)|
1
AreaABC = 2 _|0 + 108 + 36|_
AreaABC = 72 square units
**2. Area of △DEF :** 1
AreaDEF = 2 _|6 · (10 −_ 4) + 7 · (4 − 4) + 2 · (4 − 10)|
1
AreaDEF = 2 _|36 + 0 −_ 12|
AreaDEF = 12 square units
**Find the Ratio**
Ratio = AreaDEF
AreaABC
12 1
Ratio = =
72 6
So, the ratio of the area of triangle DEF to the area of triangle ABC is [1]6 [.][ ✗]
**Figure 21: Example of CoT solution. The problem is from the test set of MATH in geometry with id 500.**
-----
Problem: MATH/test/geometry/500
<|user|><|text|> Points A(0, 0), B(9, 6) and C(6, 12) are vertices of triangle ABC. Point D is on segment AB such that 2(AD) = DB,
point E is on segment BC such that 2(BE) = EC and point F is on segment CA such that 2(CF ) = F A. What is the ratio of the area of
triangle DEF to the area of triangle ABC? Express your answer as a common fraction. <|endofblock|><|endofmessage|>
PoT solution for MATH/test/geometry/500
from sympy import symbols, solve, Point, Polygon, Rational
# Given points of the triangle ABC
A = Point(0, 0)
B = Point(9, 6)
C = Point(6, 12)
# Given that D is on segment AB such that 2(AD) = DB
# Therefore, AD:DB = 1:2
# Using section formula to find the coordinates of D
m1, m2 = 2, 1
D = Point((m1*B.x + m2*A.x)/(m1+m2), (m1*B.y + m2*A.y)/(m1+m2))
# Given that E is on segment BC such that 2(BE) = EC
# Therefore, BE:EC = 1:2
# Using section formula to find the coordinates of E
m1, m2 = 1, 2
E = Point((m1*C.x + m2*B.x)/(m1+m2), (m1*C.y + m2*B.y)/(m1+m2))
# Given that F is on segment CA such that 2(CF) = FA
# Therefore, CF:FA = 1:2
# Using section formula to find the coordinates of F
m1, m2 = 1, 2
F = Point((m1*A.x + m2*C.x)/(m1+m2), (m1*A.y + m2*C.y)/(m1+m2))
# Calculating the area of the triangles ABC and DEF
triangle_ABC = Polygon(A, B, C)
triangle_DEF = Polygon(D, E, F)
area_ABC = abs(triangle_ABC.area)
area_DEF = abs(triangle_DEF.area)
# Finding the ratio of the areas of triangle DEF to triangle ABC
ratio = Rational(area_DEF, area_ABC)
ratio
> > > 2/9 ✗
**Figure 22: Example of PoT solution. The problem is from the test set of MATH in geometry with id 500.**
-----
Problem: MATH/test/geometry/500
<|user|><|text|> Points A(0, 0), B(9, 6) and C(6, 12) are vertices of triangle ABC. Point D is on segment AB such that 2(AD) = DB,
point E is on segment BC such that 2(BE) = EC and point F is on segment CA such that 2(CF ) = F A. What is the ratio of the area of
triangle DEF to the area of triangle ABC? Express your answer as a common fraction. <|endofblock|><|endofmessage|>
LCE solution for MATH/test/geometry/500
<|assistant|> <|text|> Let’s solve the problem step by step:
**1. Find Points D, E, F:**
- Use the section formula to find the coordinates of the points D, E, and F .
- The section formula is: (x, y) = _m · x2 + n · x1_ _,_ _m · y2 + n · y1_
_m + n_ _m + n_
- For point D, m = 2, n = 1, x1 = 0, y1 = 0, x2 = 9, and y2 = 6.
- For point E, m = 2, n = 1, x1 = 9, y1 = 6, x2 = 6, and y2 = 12.
- For point F, m = 2, n = 1, x1 = 6, y1 = 12, x2 = 0, and y2 = 0.
**2. Find the Area of the Triangles ABC and DEF:**
- The area of a triangle with vertices (x1, y1), (x2, y2), and (x3, y3) is given by:
Area = _|x1(y2 −_ _y3) + x2(y3 −_ _y1) + x3(y1 −_ _y2)|_
2
**3. Find the Ratio of the Areas:**
- The ratio of the areas of triangles DEF and ABC is given by:
Area of DEF
Ratio =
Area of ABC
Let’s start by finding the coordinates of points D, E, and F using the section formula. <|endofblock|>
<|code|>
# Given points and ratios
A = (0, 0)
B = (9, 6)
C = (6, 12)
m1, n1 = 2, 1 # ratio for point D
m2, n2 = 2, 1 # ratio for point E
m3, n3 = 2, 1 # ratio for point F
# Using section formula to find coordinates of D, E, F
D = ((m1 * B[0] + n1 * A[0]) / (m1 + n1), (m1 * B[1] + n1 * A[1]) / (m1 + n1))
E = ((m2 * C[0] + n2 * B[0]) / (m2 + n2), (m2 * C[1] + n2 * B[1]) / (m2 + n2))
F = ((m3 * A[0] + n3 * C[0]) / (m3 + n3), (m3 * A[1] + n3 * C[1]) / (m3 + n3))
D, E, F
<|endofblock|>
<|execution|> ((6.0, 4.0), (7.0, 10.0), (2.0, 4.0)) <|endofblock|>
<- Point|text| D> The coordinates of the points are as follows: is located at (6, 4)
- Point E is located at (7, 10)
- Point F is located at (2, 4)
Now, let’s proceed to find the areas of triangles ABC and DEF using the formula for the area of a triangle, and then find the ratio of these areas.
<|endofblock|>
<|code|>
# Function to find the area of a triangle given its vertices
def triangle_area(v1, v2, v3):
return 0.5 * abs(v1[0] * (v2[1] - v3[1]) + v2[0] * (v3[1] - v1[1]) + v3[0] * (v1[1] - v2[1]))
# Areas of triangles ABC and DEF
area_ABC = triangle_area(A, B, C)
area_DEF = triangle_area(D, E, F)
# Ratio of areas
ratio_areas = area_DEF / area_ABC
area_ABC, area_DEF, ratio_areas
<|endofblock|>
<|execution|> (36.0, 12.0, 0.3333333333333333) <|endofblock|>
<|text|> So, the ratio of the area of triangle DEF to the area of triangle ABC is 3[1] [✓] [or][ 0][.][3333][.][ <][|][endofblock][|][><][|][endofmessage][|][>]
**Figure 23: Example of LCE solution with special token. The problem is from the test set of MATH in geometry**
with id 500. In contrast to CoT, which consists solely of natural language, and PoT, which consists solely of
code, our LCE solution intertwines natural language, code, and execution results.
-----
| [
"Aojun, Zhou",
"Ke, Wang",
"Zimu, Lu",
"Weikang, Shi",
"Houxing, Ren",
"Hongsheng, Li",
"Sichun, Luo",
"Renrui, Zhang",
"Linqi, Song",
"Mingjie, Zhan"
] | 2023-10-13T00:00:00 | ICLR 2024 Poster | true | 44 | 4 | null | https://openreview.net/forum?id=z8TW0ttBPp | https://arxiv.org/abs/2310.03731 | https://www.semanticscholar.org/paper/cddb552f6c3464a54a02b0b64b2d1af56c086606 |
First Experiments with Neural Translation of Informal to Formal Mathematics | We report on our experiments to train deep neural networks that automatically translate informalized LaTeX-written Mizar texts into the formal Mizar language. To the best of our knowledge, this is the first time when neural networks have been adopted in the formalization of mathematics. Using Luong et al.'s neural machine translation model (NMT), we tested our aligned informal-formal corpora against various hyperparameters and evaluated their results. Our experiments show that our best performing model configurations are able to generate correct Mizar statements on 65.73\% of the inference data, with the union of all models covering 79.17\%. These results indicate that formalization through artificial neural network is a promising approach for automated formalization of mathematics. We present several case studies to illustrate our results. | The experiments to train deep neural networks that automatically translate informalized LaTeX-written Mizar texts into the formal Mizar language indicate that formalization through artificial neural network is a promising approach for automated formalization of mathematics. | null | [
"Cezary, Kaliszyk",
"Qingxiang, Wang",
"Josef, Urban",
"Florian, Rabe",
"William M., Farmer",
"Grant O., Passmore",
"Abdou, Youssef"
] | 2018-01-01T00:00:00 | null | false | 43 | 7 | null | http://link.springer.com/10.1007/978-3-319-96812-4_22 | null | https://www.semanticscholar.org/paper/d92ee995a2ba3de900ff6ffeb1949220be45c480 |
Linear algebra with transformers | Transformers can learn to perform numerical computations from examples only. I study nine problems of linear algebra, from basic matrix operations to eigenvalue decomposition and inversion, and introduce and discuss four encoding schemes to represent real numbers. On all problems, transformers trained on sets of random matrices achieve high accuracies (over 90%). The models are robust to noise, and can generalize out of their training distribution. In particular, models trained to predict Laplace-distributed eigenvalues generalize to different classes of matrices: Wigner matrices or matrices with positive eigenvalues. The reverse is not true. | Nine problems of linear algebra are studied, from basic matrix operations to eigenvalue decomposition and inversion, and four encoding schemes to represent real numbers are introduced. | ## Linear algebra with transformers
**François Charton Meta AI**
```
[email protected]
```
**Abstract**
Transformers can learn to perform numerical computations from examples only. I study nine
problems of linear algebra, from basic matrix operations to eigenvalue decomposition and
inversion, and introduce and discuss four encoding schemes to represent real numbers. On
all problems, transformers trained on sets of random matrices achieve high accuracies (over
90%). The models are robust to noise, and can generalize out of their training distribution.
In particular, models trained to predict Laplace-distributed eigenvalues generalize to different
classes of matrices: Wigner matrices or matrices with positive eigenvalues. The reverse is
not true.
**1** **Introduction**
Since their introduction for machine translation by Vaswani et al. (2017), transformers were applied to a wide
range of problems, from text generation (Radford et al., 2018; 2019) to image processing (Carion et al., 2020)
and speech recognition (Dong et al., 2018), where they now achieve state-of-the-art performance (Dosovitskiy
et al., 2021; Wang et al., 2020b). Transformers have also been proposed for problems of symbolic mathematics,
like integration (Lample & Charton, 2019), theorem proving (Polu & Sutskever, 2020), formal logic (Hahn
et al., 2021), SAT solving (Shi et al., 2021), symbolic regression (Biggio et al., 2021) and dynamical systems
Charton et al. (2020). In these works, transformers perform symbolic computations, i.e. manipulate abstract
mathematical symbols.
Beyond symbol manipulation, mathematics also involve numerical calculations (e.g. arithmetic, numerical
solutions of equations). On these tasks, experiments with transformers and other sequence models have been
disappointing. Basic arithmetic operations, like multiplication or modulus, prove very difficult to learn (Kaiser
& Sutskever, 2015; Palamas, 2017), and models struggle with generalization out of their training distribution
(Nogueira et al., 2021). It could even be shown (Shalev-Shwartz et al., 2017) that some arithmetic tasks cannot
be solved using gradient descent. Such results might severely restrict the applicability of transformers in
science. Most practical problems of mathematics mix symbolic and numerical computations. If transformers
“cannot compute”, their use in science is very limited.
In this paper, I investigate the capability of transformers to learn to perform numerical computations with
high accuracy. I focus on nine problems of linear algebra, from basic operations on dense matrices to inversion,
eigen and singular value decomposition. I show that small transformers can be trained, from examples only,
to compute approximate solutions (up to a few percents of the L[1] norm) with more than 90% accuracy (over
99% in most cases). I propose and discuss four encodings to represent real numbers, and train small sequence
to sequence transformers (up to 6 layers, 10 to 50 million trainable parameters) from generated datasets of
random matrices. I investigate different architectures, in particular asymmetric configurations where the
encoder or decoder has only one layer. Finally, I show that the models are robust to noisy data, and that
they can generalize out of their training distribution if special attention is paid to training data generation.
**Caveat. This paper does not advocate replacing existing linear algebra algorithms with transformer-based**
implementations. Numerical packages are faster, more accurate, and scale better. My motivation is twofold:
better understand the capabilities and limitations of transformers in mathematics, and investigate their
potential use as tools for the emerging field of AI for science.
-----
In applications to mathematics, while previous research has shown that transformers struggle with basic
arithmetic, I demonstrate that they can learn complex computations, like eigenvalue decomposition. I also
show that leveraging the theory of random matrices can help understand the mechanisms of out-of-domain
generalization, a known limitation of transformers in mathematics (Welleck et al., 2021), and a difficult
problem because of the lack of metrics over problem space.
Beyond mathematics, transformers are fast becoming the “default model” for many deep learning applications.
Their potential use as end to end tools for AI for Science has received a lot of attention recently. I believe
that demonstrating that transformers can handle some of the computational building blocks of many scientific
problems, like the problems of linear algebra discussed here, is a pre-requisite to their wider generalization.
The source code for the model and experiments is available at github.com/facebookresearch/LAWT.
**2** **Problems and datasets**
Let M and N be m × n real matrices and V ∈ R[m] . This paper considers nine problems of linear algebra:
- matrix transposition: find M _[T]_, a n _m matrix,_
_×_
- matrix addition: find M + N, a m _n matrix,_
_×_
- matrix-vector multiplication: find M _[T]_ _V, in R[n],_
- matrix multiplication: find M _[T]_ _N_, a n _n matrix,_
_×_
- eigenvalues: M symmetric, find its n (real) eigenvalues, sorted in descending order,
- eigenvectors: M symmetric, find D diagonal and Q orthogonal such that QMQ[T] = D, set as a
(n + 1) _n matrix, with (sorted) eigenvalues in its first row,_
_×_
- singular values: find the n eigenvalues of M _[T]_ _M_, sorted in descending order,
- singular value decomposition: find orthogonal U, V and diagonal S such that S = UMV, set as a
(m + n + 1) _min(m, n) matrix,_
_×_
- inversion: M square and invertible, find its inverse P, such that MP = PM = Id.
These problems range from operations on single coefficients of the matrices (transposition and addition), to
computations over rows and columns, involving several arithmetic operations (multiplication), and complex
nonlinear transformations involving the whole matrix (decompositions and inversion).
For each problem, the training data is generated by sampling random input matrices I (see section 2.2), and
computing the output O with a linear algebra package (NumPy linalg). All coefficients in I and O are set in
base ten floating-point representation, and rounded to three significant digits in the mantissa. If a problem
has several input or output matrices, they are concatenated into one (for instance, the two m _n operands of_
_×_
the addition task are concatenated into one m 2n matrix I).
_×_
**2.1** **Encoding matrices as sequences**
The input and output to all problems studied here are matrices. Transformers process sequences of tokens.
To encode a m × n matrix as a sequence, its dimensions are encoded as two symbolic tokens (Vm and Vn), and
its mn coefficients are then enumerated and encoded. I propose four encoding schemes for matrix coefficients
(set in scientific notation with three significant digits): P10, P1000, B1999, and FP15.
**Base 10 positional encoding (P10) represents numbers as sequences of five tokens : one sign token (+ or**
```
-), 3 digits (from 0 to 9) for the mantissa, and a symbolic token (from E-100 to E+100) for the exponent. For
```
instance, 3.14 is represented as 314.10[−][2], and encoded as [+, 3, 1, 4, E-2].
**Base 1000 positional encoding (P1000) provides a more compact representation. The mantissa is**
encoded as a single token (from 0 to 999) and a number is represented as the triplet (sign, mantissa,
exponent).
**Balanced base 1999 (B1999) encodes the sign and mantissa as a single token (from -999 to 999).**
**15 bit floating point (FP15) encodes a floating point number x = m10[b]** as a single token FPm/b.
Table 1 provides examples for the four encodings. More information can be found in Appendix A.
-----
|Encoding|3.14|−6.02.1023|Tokens / coefficient|Size of vocabulary|
|---|---|---|---|---|
|P10 P1000 B1999 FP15|[+, 3, 1, 4, E-2] [+, 314, E-2] [314, E-2] [FP314/-2]|[-, 6, 0, 2, E21] [-, 602, E21] [-602, E21] [FP-602/21]|5 3 2 1|210 1100 2000 30000|
|---|---|---|---|---|
Table 1: Four encodings for matrix coefficients.
Choosing an encoding is a trade-off. Long encodings (P10, P1000) use a small vocabulary, and embed
knowledge about numbers that the model can use (e.g. that numbers can be crudely compared from their
signs and exponents only, that addition and multiplication can be learned by memorizing small tables).
Compact encodings use a larger vocabulary (harder to learn) but result in shorter sequences that facilitate
training with transformers. In P10, a 20 20 matrix is a sequence of 2002 tokens, close to the practical limit
_×_
of transformers with quadratic attention. In FP15, it is only 402 tokens long.
The decision to round matrix coefficients to three significant digits is mainly motivated by the need to keep
FP15 vocabulary at sizes that small transformers can learn without pre-training. Experiments about the
impact of number precision can be found in appendix C.
**2.2** **Random matrix generation**
In most experiments, training and test data are random matrices with coefficients uniformly distributed in
[ _A, A] (with A = 10). When symmetric, these matrices are known as Wigner matrices. Their eigenvalues_
_−_
have a centered distribution with standard deviation σ = A _n/3 (see Mehta (2004) and Appendix H) that_
converges as n grows to the semi-circle law p(λ) = _√4σ[2]_ _λ[2]/(2πσ[2]). If the coefficients follow a gaussian_
_−p_
distribution, the associated eigenvectors are uniformly distributed over the unit sphere.
In section 4.4, while investigating out-of-distribution generalization, I will need to generate random symmetric
matrices with specific eigenvalue distributions (i.e. classes of random matrices with non-independent
coefficients). To this effect, I sample random symmetric matrices M with gaussian coefficients, and compute
their eigenvalue decomposition M = PDP _[T]_, with P an orthogonal matrix of eigenvectors (uniformly
distributed over the unit sphere because the coefficients are gaussian). Replacing D, the diagonal matrix of
eigenvalues of M, with a diagonal D[′] sampled from a different distribution, and recomputing M _[′]_ = PD[′]P _[T]_,
yields a symmetric matrix (since P is orthogonal) with eigenvalues following the desired distribution, and
eigenvectors uniformly distributed over the unit sphere.
**3** **Models and experimental settings**
**Models and training. All models use the transformer architecture from Vaswani et al. (2017): an encoder**
and a decoder connected by cross-attention. Models have 512 dimensions, 8 attention heads and up to 6
layers (experiments with larger models can be found in Appendix D.3). Training is supervised, minimizes the
cross-entropy between model predictions and correct solutions, and uses the Adam optimiser (Kingma & Ba,
2014) with a learning rate of 10[−][4], a linear warm-up phase of 10,000 steps and cosine scheduling (Loshchilov
& Hutter, 2016). Training data is generated on the fly in batches of 64. All models are trained on an internal
cluster, using NVIDIA Volta GPU with 32GB memory. Basic operations on matrices and eigenvalues train
on 1 GPU in less than a day (from a few hours for transposition and addition, to a day for multiplication
and eigenvalues). Eigenvectors, SVD and inversion train on 4 GPU, and take from 3 days to a week.
**Evaluation. At the end of every epoch (300,000 examples), a random test set (10,000 examples) is generated**
and model accuracy is evaluated. A predicted sequence is a correct solution to the problem (I, O) (I and O the
input and output matrices) if it can be decoded as a valid matrix P and approximates the correct solution to a
given tolerance τ . In most problems, I check that P verifies _P_ _O_ _< τ_ _O_ . When computing eigenvectors, I
_∥_ _−_ _∥_ _∥_ _∥_
verify that the predicted solution (Q, D) can reconstruct the input matrix, _QIQ[T]_ _D_ _< τ_ _D_ . For singular
_∥_ _−_ _∥_ _∥_ _∥_
value decomposition, I check that _UIV_ _S_ _< τ_ _S_, and for matrix inversion, that _PI_ _Id_ _< τ_ _Id_ = τ .
_∥_ _−_ _∥_ _∥_ _∥_ _∥_ _−_ _∥_ _∥_ _∥_
-----
The L[1] norm, _A_ = _i,j_
the error is weighted in favor of correct predictions of the largest coefficients in the solution. For eigenvalue ∥ _∥_ _[|][a][i,j][|][, for][ A][ = (][a][i,j][), is used in all experiments. With other norms, like][ L][2][ or][ L][∞][,]_
and singular value prediction, this amounts to finding the largest values, a different and easier problem. More[P]
discussion and comparisons between norms can be found in Appendix B.
**Numerical tolerance. All results are provided with tolerance τ between 0.5 and 5%. Since coefficients are**
rounded to three significant digits, 0.5% is the best we can achieve when computations are subject to rounding
error. As computations become more complex, error accumulates, and larger values of τ should be considered.
I consider τ = 0% for transposition, τ = 1% for basic matrix operations (addition and multiplication), and
_τ = 2 or 5% for non linear operations (decomposition, inversion)._
**Problem size. All experiments are performed on dense matrices. In most cases, I focus on 5** 5 matrices
_×_
(or rectangular matrices with as many coefficients: e.g. 6 4, 2 13), and scale to larger dimensions, from
_×_ _×_
8 8 to 15 15, and datasets of matrices with variable dimensions (e.g. 5 5 to 15 15). In this paper,
_×_ _×_ _×_ _×_
the emphasis is on problems that can be solved by small transformers (up to 6 layers). I discuss scaling and
larger models in Appendix D.3.
**4** **Experiments and results**
This section presents experimental results for the nine problems considered. I compare encodings for different
matrix sizes and tolerance levels, using the best choice of hyperparameters for each problem (i.e. the smallest
architecture that can achieve high accuracy). I also show that our models are robust to noise in the training
data. Learning curves and experiments with model size can be found in Appendix D, alternative architectures
in Appendix E.1 (LSTM and GRU) and E.2 (universal transformers), and additional tasks (re-training, joint
training) in Appendix F.
**4.1** **Transposition**
Learning to transpose a matrix amounts to learning a permutation of its elements. For a square matrix, all
cycles in the permutation have length 1 or 2. Longer cycles may appear in rectangular matrices. This task
involves no arithmetic operations: tokens in the input sequence are merely copied to different positions in the
output. This paper investigates two cases. In the fixed-dimension case, all matrices in the dataset have the
same dimensions and only one permutation must be learned. In the variable-dimension case, the dataset
includes matrices of different formats, and several permutations must be learned (one per matrix format). In
these experiments, models have one layer, 256 dimensions and 8 attention heads, and use the four encodings.
After training, all models achieve 99% exact accuracy (0% tolerance) for fixed-size matrices with dimensions
up to 30 30. This holds for all encodings and input and output sequence lengths up to 2000 tokens. The
_×_
variable-size case proves more difficult, because the model must learn many different permutations. Still,
the model achieve 99% accuracy on matrices with 5 to 15 dimensions, and 96% for matrices with 5 to 20
dimensions. Table 2 summarizes the results.
|Col1|Fixed dimensions 5x5 10x10 20x20 30x30 5x6 7x8 9x11|Variable dimensions Square Rectangular 5-15 5-20 5-15 5-20|
|---|---|---|
|P10 P1000 B1999 FP15|100 100 100 - 100 100 100 100 100 99.9 - 100 100 100 100 100 99.9 100 100 100 100 99.8 99.5 99.4 99.8 99.8 99.5 99.3|100 - 97.0 - 99.9 - 98.4 - 100 96.6 99.6 91.4 99.8 99.6 99.4 96.1|
|---|---|---|
Table 2: Exact prediction of matrix transposition for different matrix dimensions. Transformers with one
layer, 256 dimensions and 8 attention heads.
-----
**4.2** **Addition**
To add two m _n matrices, the model must learn the correspondence between input and output positions and_
_×_
the algorithm for adding two numbers in scientific notation. Then, it must apply the algorithm to mn pairs
of coefficients. In these experiments, models have one or two layers, 8 attention heads and 512 dimensions.
All models achieve 99% accuracy at 1% tolerance (98% at 0.5%) on sums of fixed-size matrices with dimensions
up to 10 10, for all four encodings. B1999 models achieve 99.5% accuracy at 0.5% tolerance for 15 15
_×_ _×_
matrices and 87.9% accuracy at 1% tolerance on 20 20 matrices. As dimensions increase, models using long
_×_
encodings (P1000 and P10) become more difficult to train as their input sequences grow longer. For instance,
adding two 15 15 matrices involves 450 coefficients, an input of 1352 tokens in P1000 and 2252 in P10.
_×_
On variable-size matrices, models achieve 99.5% accuracy at 1% tolerance for dimensions up to 10, with 2-layer
transformers using the B1999 encoding. Their accuracy drops to 48 and 37% for square and rectangular
matrices with 5 to 15 dimensions. This can be mitigated by increasing the depth of the decoder: models with
one layer in the encoder and 6 in the decoder achieve 77 and 87% accuracy. Table 3 summarizes these results.
|Size Layers|Fixed dimensions Variable dimensions Square Rectangular 5x5 6x4 3x8 10x10 15x15 20x20 5-10 5-15 5-15 5-10 5-15 5-15 2/2 2/2 2/2 2/2 2/2 1/1 2/2 1/1 1/6 2/2 2/2 1/6|
|---|---|
|5% 2% 1% 0.5%|100 99.9 99.9 100 100 98.8 100 99.5 99.8 100 100 98.4 100 99.3 99.7 100 99.9 87.9 100 98.1 98.9 100 99.5 48.8|100 63.1 99.3 100 72.4 99.4 99.8 53.3 88.1 99.8 50.8 94.9 99.5 47.9 77.2 99.6 36.9 86.8 98.9 42.6 72.7 99.1 29.7 80.1|
|---|---|---|
Table 3: Accuracies of matrix sums, for different tolerances. B1999 encoding, 512 dimension and 8 attention
heads.
**4.3** **Multiplication**
Multiplication of a matrix M of dimension m × n by a vector V ∈ R[n] amounts to computing m dot products
between V and the lines of M . Each calculation features n multiplications and n 1 additions, and involves
_−_
one row in the matrix and all coefficients in the vector. The model must now learn two operations: add and
multiply. Experiments with models with 1 and 2 layers show that high accuracy can only be achieved with
the P10 or P1000 encoding, with P1000 performing better on average. The number of layers, on the other
hand, makes little difference.
|Col1|P10 P1000 5x5 5x5 10x10|P1000 Variable 5-10 (P1000) 14x2 9x3 4x6 2x10 Square Rectangular|
|---|---|---|
|Tolerance|2/2 layers 2/2 2/2|1/1 1/1 2/2 2/2|4/4 2/2|
|---|---|---|---|
|5% 2% 1% 0.5%|100 100 100 99.9 100 100 98.5 99.9 99.9 81.6 99.5 98.4|99.3 99.9 100 100 99.0 99.7 100 99.8 98.7 99.5 99.9 99.2 98.1 99.0 98.6 94.5|72.4 41.7 68.4 35.0 60.1 20.1 30.8 4.4|
|---|---|---|---|
Table 4: Accuracies of matrix-vector products, for different tolerances. All model have 512 dimensions and
8 heads.
On this task, models achieve 99.9% accuracy at 1% tolerance for 5 5 and 10 10 square matrices, and
_×_ _×_
99% for rectangular matrices with about 30 coefficients. The variable-size case proves much harder. Models
achieve non-trivial results: 60% accuracy with 1% tolerance for square matrices, but larger models are needed
for high accuracy. Table 4 summarizes the results.
Multiplication of matrices M and P is a scaled-up version of matrix-vector multiplication, now performed
for every column in matrix P . As above, high accuracy is only achieved with the P10 and P1000 encoding.
-----
|Col1|Square matrices 5x5 5x5|Rectangular matrices 2x13 2x12 3x8 4x6 6x4 8x3 12x2 13x2|
|---|---|---|
|Tolerance|P10 2/2 layers 1/4|4/4 4/4 2/6 1/4 1/6 1/6 1/6 1/4|
|---|---|---|
|5% 2% 1% 0.5%|100 100 100 100 99.8 100 64.5 99.9|100 100 100 100 100 100 100 99.9 100 100 100 100 100 100 99.7 99.8 99.9 100 100 99.9 100 99.9 99.3 99.8 97.1 98.5 99.6 99.7 99.5 99.5 99.0 99.8|
|---|---|---|
Table 5: Accuracy of matrix multiplication, for different tolerances. Fixed-size matrices with 24-26 coefficients. All encodings are P1000 unless specified. Models have 512 dimensions and 8 attention heads.
Models achieve 99% accuracy at 1% tolerance for 5 5 square matrices and rectangular matrices of comparable
_×_
dimensions (see Table 5). Performance is the same as matrix-vector multiplication, a simpler task. However,
matrix multiplication needs deeper models (especially decoders), and more training time.
**4.4** **Eigenvalues**
Compared to basic operations on matrices, computing the eigenvalues of symmetric matrices is a much harder
problem, non-linear and typically solved by iterative algorithms. Deeper models, with 4 or 6 layers, are used
in this task. They achieve 100% accuracy at 5% tolerance, and 99% at 2%, for 5 5 and 8 8 matrices.
_×_ _×_
High accuracy is achieved with all four encodings, but P1000 proves more efficient with 8 8 matrices.
_×_
On fixed-size datasets, scaling to larger problems proves difficult. It takes 360 million examples for our best
models to reach 25% accuracy on 10 10 matrices. As a comparison, 40 million examples are required to
_×_
train 5 5 models to 99% accuracy, and 60 million for 8 8 models. This limitation can be overcome by
_×_ _×_
training on variable-size datasets, achieving 100% accuracy at 5% tolerance, and 100, 100 and 76% at 2%, for
sets of 5-10, 5-15 and 5-20 matrices. Table 6 summarizes the results.
|Col1|Fixed dimensions Variable dimensions 5x5 5x5 5x5 5x5 8x8 8x8 10x10 5-10 5-15 5-20|
|---|---|
|Encoding Layers|P10 P1000 B1999 FP15 P1000 FP15 FP15 6/6 4/1 6/6 6/1 6/1 1/6 1/6|FP15 FP15 FP15 4/4 6/6 4/4|
|---|---|---|
|5% 2% 1% 0.5%|100 100 100 100 100 100 25.3 100 99.9 100 100 99.2 97.7 0.4 99.8 98.5 98.6 99.7 84.7 77.9 0 93.7 88.5 73.0 91.8 31.1 23.9 0|100 100 100 99.8 100 75.5 87.5 94.3 45.3 37.2 40.6 22.5|
|---|---|---|
Table 6: Accuracy of eigenvalues for different tolerances and dimensions. All models have 512 dimensions
and 8 attention heads, except the 10x10 model, which has 510 and 12.
**Larger models.** In the fixed-dimension case, 6-layer models are limited to 8 8 matrices. Experiments
_×_
with deeper models show that they can solve larger problems. For instance, 12-layer transformers can compute
the eigenvalues of 12 12 matrices. Large models also need less examples to train to high accuracy. Table 7
_×_
summarizes these results. Detailed results are in Appendix D.3
|Layers|Accuracy Sample size (millions) 1/8 1/12 1/24 1/8 1/12 1/24|
|---|---|
|8 8 matrices × 10 10 matrices × 12 12 matrices ×|100 100 100 100 100 100 3 97 100|28.8 11.4 11.1 85.2 36.6 32.1 - - 99.3|
|---|---|---|
Table 7: Eigenvalues, larger models. Accuracy to 5% tolerance, and sample size to reach 99% accuracy.
-----
**4.5** **Eigenvectors**
In this task, the model predicts both the eigenvalues and an associated orthogonal matrix of eigenvectors.
Models using the P10 and P1000 encoding achieve 97 and 94% accuracy at 5% tolerance for 5 5 matrices.
_×_
P1000 models also reach 82% accuracy on 6 6 matrices. Whereas FP15 models only reach 52% accuracy,
_×_
an asymmetric model, coupling a 6-layer FP15 encoder and a 1-layer P1000 decoder, achieves 94% accuracy
at 5% and 87 at 2%, the best result on this task. Table 8 summarizes these results.
|Col1|5x5|6x6|
|---|---|---|
|Col1|P10 P1000 FP15 FP15/P1000 4/4 layers 6/6 1/6 6/1|P1000 6/1|
|---|---|---|
|5% 2% 1% 0.5%|97.0 94.0 51.6 93.5 83.4 77.9 12.6 87.4 31.2 41.5 0.6 67.5 0.6 2.9 0 11.8|81.5 67.2 11.0 0.1|
|---|---|---|
Table 8: Accuracies of eigenvectors, for different tolerances and depths (512 dimensions, 8 heads).
**Analysis of failure cases.** On this task, models achieve significantly less than 100% accuracy. This makes
it possible to investigate failure cases. For this analysis, I use the trained FP15/P1000 6/1 layer model from
table 8 (93% accuracy), generate a new test sample of 10 000 problems, predict solutions, and evaluate
performance on various metrics. On this new test set, the model achieves 91.1% accuracy with 5% tolerance,
and 82.1, 45.2 and 1.4% at 2, 1 and 0.5 tolerance.
First, one notes that almost all model predictions (9999 out of 10000) are well-formed matrices: the model
produces no meaningless output, such as incorrect encoding of numbers, or matrices with the wrong number
of elements. This is consistently observed in all experiments: output syntax is learned to near-perfection
at the beginning of training. Also, accuracy increases with tolerance: 95.6% accuracy at 25% tolerance.
This suggests that, even when they fail to predict the correct solution, models do not hallucinate irrelevant
predictions (as was reported on natural language tasks). Instead, they predict syntactically correct solutions
that often turn out to be “rough approximations” of the correct answer.
When computing eigenvectors, the model predicts two matrices, a diagonal matrix of eigenvalues D and an
orthogonal matrix of eigenvectors H. Accuracy is measured as the L[1] distance between H _[T]_ _IH (I the input_
matrix) and D, i.e. how well H diagonalizes the input into D. A different metric, the distance between HDH _[T]_
and I, could be used instead, which is associated to a related, but weaker, problem: finding approximations
to I of the form HDH _[T]_ (D diagonal). With this metric, the model achieves slightly higher accuracy: 95.7%
at 5% tolerance (92.9, 88.4 and 73.0 at 2, 1 and 0.5%). This confirms our previous observation that, in
many failure cases, the model predicts solutions that are “somehow relevant” to eigen decomposition (here,
solutions to the weak problem).
We also know from theory that D should contain the eigenvalues of the input matrices, and that H should be
orthogonal (all lines and columns orthogonal, with unit norm). Translating these properties into metrics can
help us understand failure cases, and the relevance of incorrect model predictions.
First, eigenvalues are always correctly predicted: the corresponding accuracy is 100% at 5% tolerance, and
99.4% at 0.5%. The (easier) sub-task of eigenvalue prediction has been learned by the model. Second, all the
norms of the columns of H are within 5% of 1 in 99.9% of the test examples (within 1% in 99.2%). All predicted
eigenvectors have unit norm. This indicates that failed model predictions actually succeed in computing the
eigenvalues, and a set of unit vectors. In other words, those two properties of eigen decomposition have been
learned by the model, in the sense that they are respected even in incorrect predictions.
To measure the orthogonality of the eigenvectors, I compute the dot products of successive eigenvectors, which
should be all be 0. On the test set, all dot products are within 0.05 and 0.05 in 93.6 of the cases (and within
_−_
0.01 and 0.01 in 85.1). This suggests that when the model fails, it proveds the correct eigenvalues, and unit
_−_
eigenvectors, but fails to make the “eigenvectors” strictly orthogonal. This observation suggests a criterion for
-----
predicting model accuracy: we can test the orthogonality of predicted H by measuring its condition number
(the ratio of its largest and smallest singular values), which should be one if H is orthogonal, and larger if it
is not. In fact, in 98% of correct predictions, the predicted H has a condition number smaller than 1.035.
For 98% of failures, the condition number of H is larger than 1.04.
To summarize, when trained on eigen decomposition, the model learns the easier sub-task of predicting
eigenvalues, with 100% accuracy. It also learns to preserve theoretical properties of the result, like the unit
norm of eigenvectors, and their (approximate) orthogonality. All failures concentrate on one specific sub-task:
orthogonalizing the eigenvectors. This allows us to derive an accurate predictor of model failure: the condition
number of the predicted matrix H.
**4.6** **Inversion**
Computing the inverses of 5 5 matrices proves the hardest task so far. Models using the P10 and P1000
_×_
encodings, with 6-layer encoders and 1-layer decoders and 8 attention heads, achieve 74 and 80% accuracy at
5% tolerance. Adding more heads in the encoder bring no gain in accuracy, but makes training faster: 8-head
models need 250 millions examples to train to 75% accuracy, 10 and 12-head models only 120. As in the
previous task, asymmetric models achieve the best results. 90% accuracy at 5% tolerance can be reached
with a 6-layer FP15 encoder with 12 attention heads, and a 1-layer P1000 decoder with 8 heads.
|Tolerance|P10 8/8 heads|P1000 8/8 heads 10/8 heads 12/8 heads|FP15/P1000 10/4 heads 12/8 heads|
|---|---|---|---|
|5% 2% 1% 0.5%|73.6 46.9 15.0 0.2|80.4 78.8 76.9 61.0 61.7 52.5 30.1 34.2 16.2 3.1 5.9 0.1|88.5 90.0 78.4 81.8 55.5 60.0 20.9 24.7|
|---|---|---|---|
Table 9: 5x5 matrix inversion. All models have 512 dimension and 6/1 layers, except P1000 10 heads, which has
6/6.
**Analysis of failure cases.** I proceed as in section 4.5, and use the 6/1 layer, 12/8 heads, FP15/P1000
model from table 9 (90% accuracy). Model accuracy on the new test set is 89.6% at 5% tolerance, and
81.7, 59.1 and 23.7 at 2, 1 and 0.5% tolerance. As previously, all 10000 model predictions are well-formed
matrices, and accuracy increases with tolerance: 96.6% at 25%. Again, even when the model fails, it provides
a “relevant” bad approximation of the solution, instead of hallucinating an unrelated guess.
For this task, the accuracy metric is the distance between PI (P the predicted matrix, I the input) and
identity (in L[1] norm). This measures that the inverse has indeed been found. However, since the inverse
is unique, we might as well use the distance between P and I _[−][1]_ (i.e. the distance the model is minimizing
during training). On this alternative metric, the model achieves 98.2% accuracy at 5% tolerance, and 96.0,
92.3 and 84.5% at 2, 1 and 0.5 tolerance. In this metric, all failure cases are bad approximations: the model
achieves 99.5% accuracy at 25% tolerance.
This suggests that most model failures happen because the approximation of the I _[−][1]_ predicted by the model
is not a “good inverse” of I, in the sense that PI is not close to identity. Theory tells us this happens when
the condition number of the input matrix (the ratio of largest to smallest singular values) is large. Indeed,
98% of correct predictions correspond to matrices with condition number below 51.5. On the other hand,
98% of failures are matrices with condition numbers larger than 51.5. The condition number of the input
matrix proves to be a very accurate predictor of model success.
These results provide a complete explanation of model failures for this task. They indicate that failures are
not due to the architecture or learning technique, but to the mathematical limitations of the computation of
matrix inverses, which apply to every numerical algorithm. They also indicate that failures are concentrated
on a small class of problems, and can be predicted in advance (without running the model, in this case).
-----
They also suggest two directions for improvement. First, we could oversample ill-conditioned matrices in
the training set, in a manner of curriculum learning. Second, since ill-conditioning amplifies the effect of
rounding and approximate computations, training with increased precision should improve accuracy.
**4.7** **Singular value decomposition (SVD)**
For symmetric matrices, singular value and eigenvalue decompositions are related: the singular values of a
symmetric matrix are the square roots of the absolute values of its eigenvalues, and the vectors are the same.
Yet, this task proves more difficult than computing the eigenvectors. Models achieve 100 accuracy at 5%
tolerance, and 86.7% at 1% when predicting the singular values of 4 4 symmetric matrices. For the full
_×_
decomposition, models achieve 98.9 and 75.3% accuracy. However, the SVD of 5 5 matrices could not be
_×_
predicted using transformers with up to 6 layers, and using the P10 or P1000 encoding. Table 10 summarizes
these results, on models with 512 dimensions and 8 attention heads.
|Col1|Singular values Singular vectors P10 2/2 layers P1000 4/4 layers P10 1/6 layers P1000 6/6 layers|
|---|---|
|5% 2% 1% 0.5%|100 100 98.5 99.8 84.5 86.7 41.1 39.8|71.5 98.9 15.6 95.7 0.4 75.3 0 6.3|
|---|---|---|
Table 10: Accuracies of SVD for 4x4 matrices.
**4.8** **Experiments with noisy data**
Because experimental data is often noisy, robustness to noise is a key feature of efficient models. In this
section, I investigate model behavior in the presence of random error when computing the sum and eigenvalues
of 5 5 matrices. A random gaussian error is added to all coefficients of the input matrices in the train and
_×_
test sets, for three levels of noise: standard deviation equal to 1, 2 and 5% of the standard deviation of the
random matrix coefficients (σ = 5.77 for uniform coefficients in [ 10, 10]). For a linear operation like addition,
_−_
one expects the model to predict correct results so long tolerance τ is larger than error. For non-linear
computations like eigenvalues, expected outcomes are unclear, as errors may be amplified by non-linearities
or reduced by concentration laws.
|Encoding Dimension|Addition Eigenvalues B1999 FP15 P1000 256 512 512 1024 512 1024|
|---|---|
|5% tolerance 0.01σ error 0.02σ 0.05σ|100 100 6.1 100 100 100 100 100 100 100 100 100 41.5 41.2 99.1 99.3 99.3 99.0|
|---|---|
|2% tolerance 0.01σ error 0.02σ 0.05σ|99.8 99.9 0.7 99.8 99.3 99.6 43.7 44.2 97.0 97.1 97.3 97.9 0 0 37.9 38.4 40.1 37.3|
|---|---|
|1% tolerance 0.01σ error 0.02σ 0.05σ|39.8 41.7 0.1 82.1 79.7 83.8 0.1 0.1 47.8 51.3 46.2 47.5 0 0 3.8 4.2 4.1 3.8|
|---|---|
Table 11: Accuracy with noisy data, for different error levels and tolerances (5 5 matrices).
_×_
**Addition. Training on noisy data causes no loss in accuracy in the models, so long the ratio between the**
standard deviation of noise and that of the coefficients is lower than tolerance. Within 5% tolerance, models
-----
trained with 0.01σ and 0.02σ noise reach 100% accuracy, as do models trained with trained with 0.01σ noise
at 2% tolerance. Accuracy drops to about 40% when error levels are approximately equal to tolerance, and
to zero once error exceeds tolerance. Model size and encoding have no impact on robustness (see Table 11,
2-layer, 8-head models and Table 27 in Appendix F.3).
**Eigenvalues.** Models trained with the P1000 encoding prove more robust to noise when computing
eigenvalues than when calculating sums. For instance, they achieve 99% accuracy at 5% tolerance with noise
equal to 0.05σ, vs only 41% for addition. As before, model size has no impact on robustness. However, FP15
models prove more difficult to train on noisy data than P1000 (see Table 11 and Table 28 in Appendix F.3
for additional results, models have 4 layers and 8 heads).
**5** **Out-of-domain generalization**
So far, model accuracy was measured on test sets of matrices generated with the same procedure as the
training set. In this section, I investigate accuracies on test sets with different distributions. I focus on one
task: predicting the eigenvalues of symmetric matrices (with tolerance 2%).
**Wigner matrices. All models are trained on datasets of random symmetric real matrices, with independent**
and identically distributed (iid) coefficients sampled from a uniform distribution over [ _A, A]. These are_
_−_
known as Wigner matrices (see 2.2), and constitute a very common class of random matrices. Yet, matrices
with different eigenvalue distributions (and non iid coefficients) appear in important problems. For instance,
statistical covariance matrices have all their eigenvalues positive, and the adjacency matrices of scale-free
and other non-Erdos-Renyi graphs have centered but non semi-circle distributions of eigenvalues (Preciado
& Rahimian, 2017). We now investigate how models trained on Wigner matrices perform on test sets of
matrices with different distributions.
**Testing on different distributions. Matrix coefficients in the training set are sampled from** [ 10, 10],
_U_ _−_
with standard deviation σtr = 5.77. First, I consider test sets of Wigner matrices with different standard
deviation σtst. Models achieve high accuracy (96% at 2% tolerance) so long 0.6σtr < σtst < σtr. Out of
this range, model accuracy drops: to 54% for 0.4σtr, to 26% for 1.1σtr, to 2% for 1.3σtr and to 0% for
0.2σtr. Then, the model is tested on sets of matrices with different eigenvalue distributions: positive, uniform,
Gaussian and Laplace (generated as per section 2.2), with standard deviation σtr and 0.6σtr. With σtst = σtr,
the model achieves 26% accuracy for Laplace, 25 for Gaussian, 19 for uniform, and 0 for positive. With
_σtst = 0.6σtr, model accuracy is slightly higher, 28, 44, 60 and 0% respectively, but remains low overall._
Matrices with positive eigenvalues cannot be predicted at all. These results are summarized in line 1 of
Table 12. These results confirm previous observations (Welleck et al., 2021): transformers only generalize to
a narrow neighborhood around their training distribution.
**Training on different distributions. A common approach to improving out-of-distribution accuracy is to**
make the training set more diverse. Models trained from a mixture of Wigner matrices with different standard
deviation (A [1, 100], line 2 of Table 12) generalize to Wigner matrices of all standard deviation (which
_∈_
are no longer out-of-distribution), and achieve better performances on the uniform, Gaussian and Laplace
test set. But they do not generalize to positive matrices. A model trained on a mixture of Wigner and
positive eigenvalues (line 3 of Table 12) can predict positive eigenvalues (now in-domain), but its performance
degrades on all other test sets.
Training on mixtures of Wigner and Gaussian eigenvalues, or Wigner and Laplace eigenvalues (lines 4 and 5
of Table 12), achieves high accuracies over all test sets, including the out-of-distribution sets: uniform and
positive eigenvalues, and Wigner with low or high standard deviations.
Finally, models trained on matrices with Laplace eigenvalues only, or a mixture of uniform, Gaussian and
Laplace eigenvalues (all non-Wigner matrices) achieve 95% accuracy over all test sets (lines 6 and 7 of
Table 12). These results confirm that out-of-distribution generalization is possible, if attention is paid to the
training data distribution. They also suggest that Wigner matrices, the default model for random matrices,
is not the best choice for training transformers: models trained on Wigner matrices do not generalize out
of distribution, whereas models trained on non-Wigner matrices, with non-iid coefficients, do generalize to
Wigner matrices.
-----
**Train set distribution** **Test set eigenvalue distribution**
|σ /σ tst tr|Wigner Positive Uniform Gaussian Laplace 0.3 1.0 1.2 0.6 1 0.6 1 0.6 1 0.6 1|
|---|---|
|Wigner, A=10 (baseline)|12 100 7|0 0|60 19|44 25|28 26|
|---|---|---|---|---|---|
|Wigner, A ∈[1, 100] Wigner - Positive Wigner - Gaussian Wigner - Laplace Laplace Gaussian-Uniform-Laplace|99 98 97 1 99 14 88 100 100 98 100 100 95 99 99 99 100 100|0 0 88 99 99 99 100 100 100 100 100 100|68 60 45 23 96 98 100 100 98 98 100 100|65 59 31 23 93 97 99 100 97 98 99 100|57 53 17 20 84 90 96 99 94 96 97 99|
|---|---|---|---|---|---|
Table 12: Out-of-distribution eigenvalue accuracy (tolerance 2%) for different training distributions.
All models have 512 dimensions and 8 attention heads, and use the P1000 encoding.
**6** **Related work**
**Neural networks for linear algebra. Neural networks that can compute eigenvalues and eigenvectors**
have been proposed since the early 1990s (Samardzija & Waterland, 1991; Cichocki & Unbehauen, 1992; Oja,
1992; Yi et al., 2004), and are still an active field of research (Tang & Li, 2010; Finol et al., 2019). They
leverage the Universal Approximation Theorem (Cybenko, 1989; Hornik, 1991), which states that, under weak
conditions on their activation functions, neural networks can approximate any continuous mapping – in this
case, the mapping between a matrix and its eigenvalues or vectors. In these works, the network represents a
differential equations involving matrix coefficients, which features the eigenvalues in its solution (Brockett,
1991). The matrix to decompose is encoded in the input, and prediction errors are back-propagated until
a solution to the differential equation is found, from which eigenvalues can be recovered. Note that these
models compute their solutions during training, and must be retrained every time a new matrix is to be
processed. Similar techniques have been proposed for other problems of linear algebra (Wang, 1993a;b; Zhang
et al., 2008).
**Arithmetic with neural networks.** Neural networks for binary addition and multiplication have been
proposed since the 1990s (Siu & Roychowdhury, 1992). Since 2015, recurrent architectures have been used,
from LSTM (Kalchbrenner et al., 2015) to RNN (Zaremba et al., 2015), Neural Turing Machines (Castellini,
2019) and Neural GPU (Kaiser & Sutskever, 2015). All authors note that sequential models struggle to
generalize out of their training distribution (i.e. to larger numbers), and that their architectures only perform
satisfactorily on binary numbers. Neural Arithmetic Logic Units (NALU (Trask et al., 2018) were introduced
as a solution to the generalization problem. They can perform exact additions, substractions, multiplications
and divisions by constraining the weights of a linear network to remain close to 0, 1 or -1. NALU (and Neural
GPU) can extrapolate to numbers far larger than those they were trained on, and could serve as building
blocks for larger models. The use of language models for arithmetic and problem solving has been studied by
Saxton et al. (2019). Palamas (2017) experiments with modular arithmetic.Nogueira et al. (2021) investigates
the limitations of transformers.
**Transformers for mathematics.** Early applications of transformers to mathematics focused on symbolic
computation. Lample & Charton (2019) used transformers to compute symbolic integrals and solve differential
equations. Davis (2019) and Welleck et al. (2021) discuss the limits of their approach, especially with respect to
out-of-distribution generalization. Transformers have also been applied to theorem proving (Polu & Sutskever,
2020; Han et al., 2021), temporal logic (Hahn et al., 2021), and have been proposed as a replacement for the
genetic algorithms used in symbolic regression (Biggio et al., 2021; d’Ascoli et al., 2022). In more numerical
applications, Charton et al. (2020) use them to predict the numerical properties of differential systems, and
Dersy et al. (2022) to simplify formulas involving polylogarithms.
With the advent of large language models (Bommasani et al., 2021), a new line of research focuses on informal
mathematics: solving problems of mathematics written in natural language, as a language task (Griffith &
Kalita, 2021; Meng & Rumshisky, 2019; Cobbe et al., 2021). Lewkowycz et al. (2022) show that a very large
-----
(540 billion parameters) pre-trained transformer can be retrained on a large math corpus to solve grade and
high school problems of mathematics. Welleck et al. (2022) apply similar techniques to theorem proving.
**Other architectures for mathematics.** Graph Neural Networks (Scarselli et al., 2009) have been widely
used in scientific applications of AI, because of their capacity to integrate problem or domain-specific inductive
biases into the network structure. They have been applied to a wide range of mathematical problems, from
dynamical systems (Iakovlev et al., 2020) to combinatorial optimization (Cappart et al., 2021) and knot theory
(Davies et al., 2021). Vinyals et al. (2015) proposed pointer networks to solve combinatorial problems. Blalock
& Guttag (2021) use machine learning techniques to improve existing algorithms for matrix multiplication, in
the specific case where one fixed matrix should be multiplied by many others.
**7** **Discussion**
**Encodings and architecture. Our best results are achieved using the P1000 and FP15 encodings. For**
most problems, P10 is dominated by the more economical P1000, and B1999 never finds its use, between the
more compact FP15 and the more efficient P1000. P1000 emerges as a good choice for problems of moderate
size, and FP15 when sequences grow long. For the hardest problems, eigenvectors and inversion, asymmetric
encodings, FP15 in the encoder and P1000 in the decoder, achieve the best results. I believe that the longer
and meaningful P1000 output representation provide better error feedback to the model, facilitating learning,
while the FP15 encoding provides a compact representation of the input, which is easier to train.
Experiments also showcase the efficacy of asymmetric architectures, with one layer in either the encoder or
decoder. Whether the encoder or the decoder should be shallow is unclear: on the eigenvalue and eigenvector
tasks, the 6/1 and 1/6 architecture seem equally efficient. Finally, increasing the number of attention heads
seems to help. Most transformers architectures (from Vaswani to BERT) maintain a dimension/head ratio of
64, increased to 96 or more in very large models like GPT-3. For eigenvalue and inversion, using 10 or 12
heads with dimension 512, i.e. a dimension/head ratio between 40 and 50, improves model accuracy.
**Model limitations, scaling to large dimensions. Most experiments feature dense matrices with 5 to**
10 dimensions. Experiments with eigenvalues suggest that larger problems can be solved by training from
samples of matrices of variable size, or by using larger models. However, scaling to larger dense matrices
will be limited by the length of the sequences a transformer can handle. For quadratic attention models
(i.e. most current transformer architectures), sequence length can hardly exceed a few thousand tokens, and
the methods proposed in this paper could probably not scale beyond 50 50 matrices. Experimenting with
_×_
transformers with linear or log-linear attention (Zaheer et al., 2021; Wang et al., 2020a; Vyas et al., 2020;
Child et al., 2019) is a natural extension of this work. Problems of larger dimension usually feature sparse
matrices, and therefore are out of the scope of this work. Extension to sparse matrices constitutes a future
research direction.
**Out-of-distribution experiments. These are our most significant results. They prove that transformers**
trained on random data can generalize to a wide range of test distributions, provided their training data
distribution is chosen with care. Selecting a training distribution can be counter-intuitive. In our experiments,
Wigner matrices are the “obvious” random model, but “special” matrices (with non-iid coefficients and
Laplace eigenvalues) produce models that better generalize, notably on Wigner matrices. This matches the
intuitive idea that we learn more from edge cases than averages.
**Result verification. One common criticism of deep learning models is that they provide no guarantee on**
the correctness of their output. This limitation does not apply here, as the model achieves 100% accuracy on
basic matrix operations and eigenvalue calculations, and our analysis of failure cases propose a mitigation for
the harder problems of eigenvectors and matrix inversion.
**Do the models memoize? Transformers are often accused of using the large capacity of their feed-forward**
networks to memorize training examples, and interpolating between them at inference. Three observations
lead me to believe that this is not the case here. First, in section 4.4, the eigenvalues of matrices with more
than 9 dimensions cannot be learned from a training set where all matrices have the same size. However,
training on a mixture of matrices from 5 5 to 20 20 allows all dimensions to be learned. If memoization
_×_ _×_
happened, a training set with just one dimension would be easier to train than a mixture. Second, in
-----
appendix F.1, retraining a model on matrices of a different dimension takes significantly less examples than
training it from scratch. In a memoization setting, there would be little benefit to retraining. Finally, the
results on out-of-domain generalization seem to rule out interpolation. A model trained on matrices with
Laplace distributed eigenvalues (which can be positive or negative) will generalize to positive definite matrices,
a different ensemble, with very little overlap (almost no Laplace matrix is positive).
**Comparison with numerical packages. Given the practical importance of linear algebra, optimized**
numerical libraries exist for most programming languages and environment. Since my models run in Python,
I compare them with Numpy.
Calculating the eigenvalues of a 5 5 matrix takes 0.5 millisecond on a trained 4/1 layer transformer running
_×_
pyTorch on a single GPU machine. Matrix inversion takes 1 ms with a 6/1 layer transformer, running pyTorch
on a single GPU machine. On the same machine, the optimized algorithms in Numpy (linalg.eigval, and
linalg.inv) are faster : 0.07 millisecond for eigenvalues and 0.04 ms for inversion. However, the current code
was not designed for speed. An optimized version of the models might achieve inference speeds comparable to
those of numerical packages. Note, however, that the memory footprint of a transformer would be considerably
larger).
For these two tasks, the algorithms implemented in Numpy and other packages have asymptotic complexity
_O(n[3]) (O(n[2][.][37]) for the best known bounds) for n_ _n matrices. The attention mechanism of the transformers_
_×_
used in this paper is quadratic in the length of the sequence (O(n[2])), which makes it O(n[4]). Linear attention
models could reduce complexity to O(n[2]), lower than known algorithms, but the memory requirement of
transformers would offset this advantage for large n. As stated in the introduction, there is no clear advantage
to replace existing algorithms with transformers.
**8** **Conclusion.**
I have shown that transformers can learn to perform numerical computations from examples only. I also
proved that they can generalize out of domain when their training distribution is carefully selected. This
suggests that applications of transformers to mathematics are not limited to symbolic computation, and
can cover a broader range of scientific problems. I believe that these results pave the way for wider use of
transformers in science.
-----
**References**
Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Parascandolo. Neural
symbolic regression that scales. arXiv preprint arXiv:2106.06427, 2021.
Davis Blalock and John Guttag. Multiplying matrices without multiplying. arXiv preprint arXiv:2106.10860,
2021.
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S.
Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas
Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora
Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin
Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby
Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong,
Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti,
Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith
Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa
Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele
Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles,
Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris
Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong,
Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam,
Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E.
Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan
You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn
Zhou, and Percy Liang. On the opportunities and risks of foundation models, 2021.
Roger W Brockett. Dynamical systems that sort lists, diagonalize matrices, and solve linear programming
problems. Linear Algebra and its applications, 146:79–91, 1991.
Quentin Cappart, Didier Chételat, Elias Khalil, Andrea Lodi, Christopher Morris, and Petar Veličković.
Combinatorial optimization and reasoning with graph neural networks, 2021.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko.
End-to-end object detection with transformers. arXiv preprint arXiv:2005.12872, 2020.
Jacopo Castellini. Learning numeracy: Binary arithmetic with neural turing machines. arXiv preprint
arXiv:1904.02478, 2019.
François Charton, Amaury Hayat, and Guillaume Lample. Learning advanced mathematical computations
from examples. arXiv preprint arXiv:2006.06462, 2020.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse
transformers. arXiv preprint arXiv:1904.10509, 2019.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical
machine translation. arXiv preprint arXiv:1406.1078, 2014.
Andrzej Cichocki and Rolf Unbehauen. Neural networks for computing eigenvalues and eigenvectors. Biological
Cybernetics, 68(2):155–164, 1992.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and
John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. The neural data router: Adaptive control flow in
transformers improves systematic generalization. arXiv preprint arXiv:2110.07732, 2021.
George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals
and systems, 2(4):303–314, 1989.
-----
Stéphane d’Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton. Deep symbolic
regression for recurrent sequences, 2022.
A Davies, P Velickovic, L Buesing, S Blackwell, D Zheng, N Tomasev, R Tanburn, P Battaglia, C Blundell,
A Juhasz, et al. Advancing mathematics by guiding human intuition with ai. Nature, 2021.
Ernest Davis. The use of deep learning for symbolic integration: A review of (lample and charton, 2019).
arXiv preprint arxiv:1912.05752, 2019.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers.
arXiv preprint arXiv:1807.03819, 2018.
Aurélien Dersy, Matthew D. Schwartz, and Xiaoyuan Zhang. Simplifying polylogarithms with machine
learning, 2022.
Linhao Dong, Shuang Xu, and Bo Xu. Speech-transformer: A no-recurrence sequence-to-sequence model for
speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pp. 5884–5888, 2018.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,
Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An
image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929,
2021.
David Finol, Yan Lu, Vijay Mahadevan, and Ankit Srivastava. Deep convolutional neural networks for
eigenvalue problems in mechanics. International Journal for Numerical Methods in Engineering, 118(5):
258–275, 2019.
Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983,
2016.
Kaden Griffith and Jugal Kalita. Solving arithmetic word problems with transformers and preprocessing of
problem text. arXiv preprint arXiv:2106.00893, 2021.
Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, and Bernd Finkbeiner. Teaching
temporal logics to neural networks. arXiv preprint arXiv:2003.04218, 2021.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas Polu. Proof artifact co-training
for theorem proving with language models. arXiv preprint arxiv:2102.06203, 2021.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780,
1997.
Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257,
1991.
Valerii Iakovlev, Markus Heinonen, and Harri Lähdesmäki. Learning continuous-time pdes from sparse data
with graph neural networks, 2020.
Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv preprint
arxiv:1507.01526, 2015.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Donald E. Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. AddisonWesley, third edition, 1997.
-----
Guillaume Lample and François Charton. Deep learning for symbolic mathematics. arXiv preprint
arXiv:1912.01412, 2019.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy
Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint
arXiv:1608.03983, 2016.
Madan Lal Mehta. Random Matrices. Academic Press, 3rd edition, 2004.
Yuanliang Meng and Anna Rumshisky. Solving math word problems with double-decoder transformer. arXiv
preprint arXiv:1908.10924, 2019.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple
arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
Erkki Oja. Principal components, minor components, and linear neural networks. Neural networks, 5(6):
927–935, 1992.
Theodoros Palamas. Investigating the ability of neural networks to learn simple modular arithmetic. 2017.
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv
preprint arXiv:2009.03393, 2020.
Victor M. Preciado and M. Amin Rahimian. Moment-based spectral analysis of random graphs with given
expected degrees. arXiv preprint arXiv:1512.03489, 2017.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by
generative pre-training. 2018.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models
are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Nikola Samardzija and RL Waterland. A neural network for computing eigenvectors and eigenvalues. Biological
Cybernetics, 65(4):211–214, 1991.
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning
abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph
neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of gradient-based deep learning. In Doina
Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,
volume 70 of Proceedings of Machine Learning Research, pp. 3067–3075. PMLR, 06–11 Aug 2017.
Feng Shi, Chonghan Lee, Mohammad Khairul Bashar, Nikhil Shukla, Song-Chun Zhu, and Vijaykrishnan
Narayanan. Transformer-based machine learning for fast sat solvers and logic synthesis. arXiv preprint
arXiv:2107.07116, 2021.
Kai-Yeung Siu and Vwani Roychowdhury. Optimal depth neural networks for multiplication and related
problems. In S. Hanson, J. Cowan, and C. Giles (eds.), Advances in Neural Information Processing Systems,
volume 5. Morgan-Kaufmann, 1992.
Ying Tang and Jianping Li. Another neural network based approach for computing eigenvalues and eigenvectors
of real skew-symmetric matrices. Computers & Mathematics with Applications, 60(5):1385–1392, 2010.
Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, and Phil Blunsom. Neural arithmetic logic units.
arXiv preprint arXiv:1808.00508, 2018.
-----
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser,
and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems,
pp. 6000–6010, 2017.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks, 2015.
Apoorv Vyas, Angelos Katharopoulos, and François Fleuret. Fast transformers with clustered attention.
arXiv preprint arXiv:2007.04825, 2020.
Jun Wang. A recurrent neural network for real-time matrix inversion. Applied Mathematics and Computation,
55(1):89–100, 1993a.
Jun Wang. Recurrent neural networks for solving linear matrix equations. Computers & Mathematics with
Applications, 26(9):23–34, 1993b.
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear
complexity. arXiv preprint arXiv:2006.04768, 2020a.
Yongqiang Wang, Abdelrahman Mohamed, Due Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao
Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, and et al. Transformer-based acoustic modeling for
hybrid speech recognition. 2020 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), May 2020b.
Sean Welleck, Peter West, Jize Cao, and Yejin Choi. Symbolic brittleness in sequence models: on systematic
generalization in symbolic mathematics. arXiv preprint arXiv:2109.13986, 2021.
Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. Naturalprover: Grounded
mathematical proof generation with language models, 2022.
Zhang Yi, Yan Fu, and Hua Jin Tang. Neural networks based approach for computing eigenvectors and
eigenvalues of symmetric matrix. Computers & Mathematics with Applications, 47(8-9):1155–1164, 2004.
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip
Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. Big bird: Transformers for longer
sequences. arXiv preprint arXiv:2007.14062, 2021.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from
examples, 2015.
Yunong Zhang, Weimu Ma, and Binghuang Cai. From zhang neural network to newton iteration for matrix
inversion. IEEE Transactions on Circuits and Systems I: Regular Papers, 56(7):1405–1415, 2008.
-----
**A** **Number encodings**
Let x be a non-zero real number, it can be represented uniquely as x = s.m.10[e], with s 1, 1,
_∈{−_ _}_
_m ∈_ [100, 1000[ and e ∈ Z. Rounding m to the nearest integer n (and potentially adjusting for round-up to
1000), we get the base ten, floating-point representation of x, with three significant digits:
_x ≈_ _s.m.10[e], (s, m, e) ∈{−1, 1} × {100, . . ., 999} × Z_
By convention, 0 is encoded as +0.10[0]. All encodings are possible representations of the triplets (s, m, e). In
this paper, e is restricted to [ 100, 100], and m to [100, 999].
_−_
In base N positional encoding, s (the sign) and e (the exponent) are encoded as unique tokens: + or - for s,
and one token from E-100 to E100 for e. The mantissa, m, is encoded as the representation of m in base N
(e.g. binary representation if N = 2, decimal representation if N = 10), a sequence of _logN_ (1000) tokens
_⌈_ _⌉_
from 0 to N-1. Overall, a number will be encoded as a sequence of _logN_ (1000) +2 tokens, from a vocabulary
_⌈_ _⌉_
of 202 + N tokens.
For instance, x = e[π] 23.14069, will be represented by +231.10[−][1], and encoded in P10 (base 10 positional)
_≈_
as the sequence [+,2,3,1,E-1], and in P1000 (base 1000 positional) as [+,231,E-1]. x = −0.5 will be
represented as −500.10[−][3], and encoded in P10 as [-,5,0,0,E-3], and in P1000 as [-,500,E-3]. Other bases
N could be considered, as well as different bases for the exponent, and different lengths for the mantissa.
This paper uses two positional encodings: P10, which encodes numbers rounded to three significant digits,
with absolute value in [10[−][100], 10[101]], as sequences of 5 tokens, using a vocabulary of 213 tokens (10 digits, 2
signs, and 202 values of the exponent), and P1000 which encodes numbers as sequences of 3 tokens, with a
vocabulary of 1104.
Balanced base 2a + 1 uses digits between _a and a (Knuth, 1997). For instance, in balanced base 11, digits_
_−_
range from 5 to 5. An every day example of a balanced base can be found in the way we state the hour as
_−_
“twenty to two”, or “twenty past two”. Setting a to 999 defines B1999, which encodes the sign and mantissa
as a single token between 999 and 999, and the exponent as in P10 and P1000. Numbers are encoded on
_−_
two tokens, with a vocabulary of 2004.
For an even more compact representation, floating point numbers can be encoded as unique tokens, by
rewriting any number as x = m10[b], with m [ 999, 999], b [ (p + 2)/2, (p + 2)/2] and p + 2 = 0, [2], and
_∈_ _−_ _∈_ _−_
representing it as the unique token FPm,b. This allows to represent numbers with 3 significant digits and a
dynamic range of 10[p][+2], using a vocabulary of 1800(p + 3) tokens. Setting p = 14, this paper introduces
FP15, which encodes numbers as unique tokens with a vocabulary of 30, 000.
**B** _L[1], L[2]_ **and L[∞]** **norms for evaluation**
The accuracy of trained models is evaluated by decoding their predictions and verifying that they approximate
the correct solution up to a fixed tolerance τ . In the general case, if the model predicts a sequence SP, and
the solution of the problem is O, the prediction is considered to be correct if SP can be decoded into a matrix
_P and_
_P_ _O_ _< τ_ _O_ (1)
_∥_ _−_ _∥_ _∥_ _∥_
For eigenvalue decomposition, the solution is correct if it can be decomposed as a pair (Q, D) that verifies:
_Q[T]_ _IQ_ _D_ _< τ_ _D_ (I the input matrix), i.e. that Q diagonalizes I into D. For singular value decomposition,
_∥_ _−_ _∥_ _∥_ _∥_
the solution must verify _UIV_ _S_ _< τ_ _S_, and for matrix inversion _PI_ _Id_ _< τ_ _Id_ = τ . The matrix
_∥_ _−_ _∥_ _∥_ _∥_ _∥_ _−_ _∥_ _∥_ _∥_
norm L[1]: _A_ = _i,j_
_∥_ _∥_ _[|][a][i,j][|][, for][ A][ = (][a][i,j][) is used throughout this paper. This section discusses its advantage]_
over two other possible norms: L[2] ( _A_ = _i,j_ _[a]i,j[2]_ [), and][ L][∞] [(][∥][A][∥] [= max][i,j][ |][a][i,j][|][).]
_∥_ _∥_
[P]
Using L[1] norm in equation 1 amounts to comparing the average absolute error on the predicted coefficients
[P]
(P _O) to the average absolute value of coefficients of O. L[2]_ compares the squared values and errors, and
_−_
_L[∞]_ the largest absolute error to the largest coefficient in _O_ . Compared to L[1], L[2] and L[∞] (Max) emphasize
_|_ _|_
large absolute errors, and large absolute coefficients of O. The impact of the norm varies from one problem to
another. Figure 1 presents learning curves using the three norms for our best models, on different problems.
-----
Transposition
5-15 rectangular matrices
20 40 60 80 100 120 140
L1
L2
Max
Eigenvalues
5x5 matrices
Addition
10x10 matrices
Matrix vector multiplication
5x5 matrices
5 10 15 20 25
Eigenvalues
10x10 matrices
100
80
60
40
20
0
100
80
60
40
20
0
100
80
60
40
20
100
80
60
40
20
100
80
60
40
20
10 20 30 40 50
Eigenvalues
5-15 matrices
|Col1|100 80 60 40 20 0|Col3|100 80 60 40 20 0|
|---|---|---|---|
100 200 300 400 500 600 700 800
Eigenvectors
6x6 matrices
10 20 30 40 50 60
Singular Values
4x4 matrices
|Col1|Col2|Col3|100 80 60 40 20 0|Col5|
|---|---|---|---|---|
||100 80 60 40 20 0||||
50 100 150 200 250 300
200 400 600 800 1000 1200
Matrix Inversion
5x5 matrices
50 100 150 200 250 300 350 400
25 50 75 100 125 150 175 200
Figure 1: Learning accuracies for different problems measured with norms L[1], L[2]and L[∞] **(Max).**
For basic arithmetic operations (transposition, addition, multiplication), there is little difference between L[1]
and L[2] accuracies, and no reason to prefer one over the other for model evaluation. L[∞] provides a stricter
criterion for accuracy, but it has little practical impact.
For eigenvalue and singular value problems, L[2] accuracies reach a high value early during training, long
before the model begins to learn according to the other norms. This is due to the fact that the eigenvalues
of Wigner matrices tend to be regularly spaced over the interval [ 2σ, 2σ] (σ = _ns with s the standard_
_−_ _[√]_
deviation of coefficients and n the dimension of the matrix). This means that the model can predict the
largest absolute eigenvalues from the distribution of the coefficients, which can be computed from the dataset.
For this reason, L[2] accuracy is not a good evaluation metric for the eigenvalue or singular value problem.
This is particularly clear in the 10 10 case: transformers struggle with such matrices, and L[1] and L[∞]
_×_
-----
accuracies remain very low even after a thousand epochs (300 million examples), but L[2] accuracy is close to
100% since the beginning of training.
A similar phenomenon takes place for eigenvector calculations: L[2] and L[∞] accuracy rise steeply, long before
the model begins to learn according to the L[1] norm. In this task, the model predicts both the eigenvalues
and the coefficients of the matrix of eigenvectors Q. Because Q is orthogonal, its coefficients will usually have
small absolute values, compared to those of eigenvalues. As training goes on, the largest eigenvalue is first
predicted, which causes the rise in the L[2] curve, then other eigenvalues are, which cause the rise in the L[∞]
curve, and finally the eigenvectors are correctly predicted, which is depicted in the (much slower) rise of the
_L[1]_ curve. Again, using L[2] or L[∞] amounts to evaluating an easier problem (computing eigenvalues) than
the one we are currently solving (eigen decomposition). These observations motivate the choice of L[1] as our
evaluation norm.
**C** **Impact of number precision**
In all experiments, matrix coefficients are rounded to three significant digits. Three-digit precision was
selected in order to keep the size of the FP15 vocabulary manageable. With three-digit precision, FP15 uses
a vocabulary of 30 000 words, with four digits, it would use 300 000 words, and would be difficult to train on
the small transformers this paper focuses on.
In this section, I investigate the impact of number precision on 10 10 matrix addition and 5 5 eigenvalue
_×_ _×_
computation. Random matrices are rounded to two, three and four significant digits, using the P100, P1000
and P10000 encoding (i.e. numbers encoded on three tokens, mantissa in base 100, 1000 and 10000). I train
transformers with 512 dimensions, 8 attention heads, and 2/2 layers (addition) or 4/1 layers (eigenvalues).
Results for 10, 5, 2, 1, 0.5 and 0.1% tolerance are presented in tables 13 and 14.
On the addition task, rounding precision has no impact for tolerances larger than 1%: all models achieve
close to 100% accuracy. At 0.5 tolerance, models trained with 2-digit precision are penalised by rounding
error, but there is no significant difference between 3 and 4-digit precision. However, 4-digit models need
significantly more examples to train: whereas 2 and 3-digit models achieve 99% accuracy at 5% tolerance
after 5 million examples, a 4-digit model need 21 millions to reach 99% accuracy.
|Tolerance|10 5 2 1 0.5 0.1|
|---|---|
|2-digit precision, P100 3-digit precision, P1000 4-digit precision, P10000|100 100 100 99.4 60.8 0 100 100 99.3 98.1 97.2 66.4 99.5 99.4 99.4 99.0 98.8 17.3|
|---|---|
Table 13: Accuracy of 10 10 matrix addition, for different precision and tolerance, after training on 30
_×_
million examples.
For eigenvalues, all models achieve 100% accuracy at 5% tolerance. At lower tolerance (1 or 0.5%), accuracy
increases with precision. On this task, learning speed is comparable for all models: 2, 3 and 4-digit models
achieve 99% accuracy (with 5% tolerance) after 9, 8 and 7 million examples respectively. Overall, number
precision has a marginal effect on accuracy.
|Tolerance|10 5 2 1 0.5 0.1|
|---|---|
|2-digit precision, P100 3-digit precision, P1000 4-digit precision, P10000|100 100 94.3 87.4 80.1 24.1 100 100 99.9 98.2 78.9 9.8 100 100 99.9 99.0 85.6 1.4|
|---|---|
Table 14: Accuracy of 5 5 eigenvalue calculation, for different precision and tolerance, after training on
_×_
60 million examples.
-----
250
200
150
100
50
0 20 40 60 80 100 120 140
Addition (10x10): test loss
P10 2/2 layers
P1000 2/2 layers
B1999 2/2 layers
B1999 1/1 layers
FP15 2/2 layers
FP15 1/1 layers
20
15
10
80
60
40
80
60
40
0 100 200 300 400 500 600 700
Addition (10x10): 5% accuracy
0 20 40 60 80 100 120 140
Eigenvalues: 5% accuracy
0 25 50 75 100 125 150 175 200
Eigenvectors: 5% accuracy
0 100 200 300 400 500
Inversion: 5% accuracy
0 100 200 300 400 500 600 700
Addition (10x10): 1% accuracy
0 20 40 60 80 100 120 140
Eigenvalues: 1% accuracy
0 25 50 75 100 125 150 175 200
Eigenvectors: 1% accuracy
0 100 200 300 400 500
Inversion: 1% accuracy
0 100 200 300 400 500 600 700
Epochs
100
80
60
40
20
0
100
80
60
40
20
0
100
80
60
40
20
0
100
80
60
40
20
100
80
60
40
20
0
100
80
60
40
20
0
100
80
60
40
20
0
100
80
60
40
20
Eigenvalues: test loss
P10 6/6 layers
P10 4/1 layers
P1000 4/1 layers
P1000 2/2 layers
B1999 6/6 layers
FP15 6/1 layers
0 25 50 75 100 125 150 175 200
Eigenvectors: test loss
P10 4/4 layers
P10 6/6 layers
P1000 4/4 layers
P1000 6/1 layers
FP15/P1000 6/1 layers
FP15 6/1 layers
0 100 200 300 400 500
Inversion: test loss
P10 8/8 heads
P1000 8/8 heads
P1000 10/8 heads
P1000 12/8 heads
FP15/P1000 10/4 heads
FP15/P1000 12/8 heads
Figure 2: Learning curves for different problems. All problems except addition use 5 5 matrices. All models
_×_
have 512 dimensions and 8/8 heads (except when mentioned in the legend). Inversion models have 6/1 layers. Epochs
correspond to 300 000 training examples Test loss is cross-entropy
-----
**D** **Additional experimental results**
**D.1** **Learning curves for different encodings and architectures**
Figure 2 presents learning curves (test loss, 5 and 1% accuracy) for (10 10) addition, and (5 5eigenvalues,
_×_ _×_
eigenvectors and inversion. The best models learn to perform addition (1% tolerance) in less than 10 epochs
(3 million examples). For other tasks, training size increases with operation complexity: from 20 million (70
epochs) for eigenvalues, to 50 million for eigenvectors, and over 120 million for matrix inversion. Some of
the learning curves exhibit the “step shapes” often observed in arithmetic tasks: losses are subject to brutal
drops, corresponding to steep increases in accuracy.
The learning curves for the harder problems (eigenvalues, eigenvectors and inversion) are noisy. This is caused
by the learning rates: our models usually need small learning rates (5 10[−][4] before scheduling is typical) and
_×_
there is a trade-off between low rates that will stabilize the learning curve, and larger rates that accelerate
training.
**D.2** **Impact of model size on accuracy and learning speed**
Two main factors influence model size: the number of layers and the number of dimensions (see Appendix G
for precise calculations). This section discusses their impact on accuracy and learning speed, when adding
10 10 matrices, multiplying a 5 5 matrix by a vector, and computing the eigenvalues of a 5 5 matrix. All
_×_ _×_ _×_
the models in this section are symmetric (same dimension and number of layers in the encoder and decoder)
and have 8 attention heads.
For the addition task, tables 15 and 16 present the accuracy after 60 epochs (18 million examples) and the
number of epochs (of 300,000 examples) needed to reach 95% accuracy, for models using the P1000 and B1999
encoding. Shallow architectures (i.e. 1 or 2 layers) learn addition with high accuracy for both encodings,
but the more compact B1999 supports smaller models (256 dimensions). In terms of speed, shallow (2-layer)
B1999 and deep (6-layer) P100 prove fastest.
|dimension|B1999 P1000 64 128 256 512 64 128 256 512|
|---|---|
|1/1 layers 2/2 layers 4/4 layers 6/6 layers|31 7 82 100 0 0 100 100 0 0 0 14 0 0 0 0|0 0 1 40 0 0 0 99 0 0 0 98 0 0 0 99|
|---|---|---|
Table 15: Accuracy of matrix addition for different model sizes. 10 10 matrices, 60 epochs (18 millions
_×_
examples), 5% tolerance.
|dimension|B1999 P1000 64 128 256 512 64 128 256 512|
|---|---|
|1/1 layers 2/2 layers 4/4 layers 6/6 layers|- - 76 15 - - 26 6 - - 70 63 - - - -|- - - 96 - - - 37 - - - 53 - - - 23|
|---|---|---|
Table 16: Learning speed of matrix addition for different model sizes. Number of epochs needed to reach
95% accuracy (5% tolerance). 1 epoch = 300,000 examples.
Table 17 presents the learning speed of models of different sizes for the matrix/vector product and eigenvalue
computation tasks (5 5 matrices, and P1000 encoding). For each problem, there exists a minimal dimension
_×_
and depth under which models struggle to learn: one layer and 128 dimensions for products, one layer or 128
dimensions for eigenvalues. Over that limit, increasing the dimension accelerates learning. Increasing the
depth, on the other hand, bring no clear improvement in speed or accuracy.
-----
|Col1|Matrix product Eigenvalues 128 256 512 128 256 512 1024|
|---|---|
|1/1 layers 2/2 layers 4/4 layers 6/6 layers 8/8 layers|- 29 18 24 12 7 28 11 5 24 10 6 18 12 6|- - - - - 102 36 23 244 90 24 13 - - 129 16 - - 34 24|
|---|---|---|
Table 17: Learning speed of matrix and vector products and eigenvalue calculations for different model
**sizes. Number of epochs needed to reach 95% accuracy (with 5% tolerance). 1 epoch = 300,000 examples. 5** 5
_×_
matrices, P1000 encoding.
**D.3** **Scaling to larger models**
This paper mostly focuses on small models, with up to 6 layers and 50 million parameters. In this section,
I investigate the performance of larger models, with up to 24 layers and over 400 million parameters. All
models are asymmetric, featuring either a deep encoder and a shallow (1-layer) decoder, or the reverse.
Shallow models have one layer, 512 dimensions and 8 attention heads. Deep models range from small (6
layers) to medium (8 layers), large (12 layers) and extra-large (24 layers). By default, they have a dimension
of 512, 640, 768 and 1024, and 8, 10, 12 and 16 heads respectively. For each size, I also experiment with models
with 50% more dimensions and attention heads. The basic encoding is FP15/P1000, but I also experiment
with two alternative configurations: FP15/FP15 and P1000/P1000. Table 18 summarizes the configurations
that were tested.
Model size Base configuration Larger dimension More heads
Small: 6 layers 512 dimensions, 8 heads 768 12
Medium: 8 layers 640 dimensions, 10 heads 960 15
Large: 12 layers 768 dimensions, 12 heads 1152 18
Extra-large: 24 layers 1024 dimensions, 16 heads 1536 24
Table 18: Large model configurations. All three configurations are tested for the encoder and decoder, and with
the three encodings: FP15/P1000, P1000/P1000, FP15/FP15.
For the eigenvalue task, models are trained on 8 8, 10 10 and 12 12 matrices. Table 19 presents the best
_×_ _×_ _×_
performing configurations, for different tolerance levels, and the sample size needed to achieve 99% accuracy
(with 5% tolerance). In general, architectures combining a deep decoder with a shallow encoder outperform
deep encoders and shallow decoders. Deeper models tend to learn faster, and solve larger problems, but for a
given number of layers, there is not clear advantage of increasing dimension or the number of attention heads.
In particular, models with 12 or 24 layers can compute the eigenvalues of 12 12 matrices, a task inaccessible
_×_
to small architectures. Learning to predict the eigenvalues of 8 8 matrices takes over 50 million examples
_×_
for 6-layer models, but only 11 million for 24-layer architectures. For 10 10 matrices, 8-layer models need
_×_
about 85 million examples, whereas 12 and 24-layer models need 38 and 36. Finally, it is interesting to note
that larger models achieve better precision: 12-layer models routinely achieve accuracies over 90% with 0.5%
tolerance, while smaller models struggle at such precision levels.
-----
|Model|Dimensions|5% 2% 1% 0.5%|Sample size*|
|---|---|---|---|
|Small (6 layers) E:1/512/8/FP15 D:6/516/12/P1000 E:1/512/8/FP15 D:6/512/8/P1000 E:6/768/8/P1000 D:1/512/8/P1000 E:1/512/8/FP15 D:6/768/8/P1000 E:1/512/8/FP15 D:6/516/12/P1000 E:1/512/8/FP15 D:6/768/8/FP15|8x8 8x8 8x8 8x8 10x10 10x10|100 99.5 88.8 35.9 100 97.7 86.5 50.3 100 98.8 80.8 28.1 100 97.9 69.8 17.7 99.7 72.6 22.3 2.2 83.6 17.0 1.5 0.1|51 64.2 64.8 71.7 194.7 -|
|---|---|---|---|
|Medium (8 layers) E:8/640/10/P1000 D:1/512/8/P1000 E:1/512/8/FP15 D:8/645/15/P1000 E:1/512/8/FP15 D:8/960/10/P1000 E:1/512/8/FP15 D:8/640/10/P1000 E:1/512/8/FP15 D:8/640/10/P1000 E:1/512/8/FP15 D:8/645/15/P1000 E:1/512/8/FP15 D:8/645/15/FP15|8x8 8x8 8x8 8x8 10x10 10x10 12x12|99.4 99.0 97.0 60.1 100 99.9 95.9 56.1 100 99.9 95.4 55.9 100 99.8 92.0 42.0 100 98.4 70.2 11.2 99.1 62.9 14.1 1.0 3.0 0 0 0|120.9 45 34.8 43.8 85.2 108.6 -|
|---|---|---|---|
|Large (12 layers) E:1/512/8/FP15 D:12/1152/12/P1000 E:1/512/8/FP15 D:12/768/12/P1000 E:1/512/8/FP15 D:12/768/12/P1000 E:1/512/8/FP15 D:12/1152/12/P1000 E:1/512/8/FP15 D:12/768/12/P1000 E:1/512/8/FP15 D:12/768/12/FP15|8x8 8x8 10x10 10x10 12x12 12x12|100 100 100 99.1 100 100 99.9 93.5 100 100 99.8 89.3 100 100 99.7 96.8 96.7 24.2 1.5 0.1 85.2 11.8 0.6 0|13.2 42 68.7 37.5 - -|
|---|---|---|---|
|Extra-large (24 layers) E:1/512/8/FP15 D:24/1024/16/P1000 E:1/512/8/FP15 D:24/1032/24/P1000 E:1/512/8/FP15 D:24/1032/24/FP15 E:1/512/8/FP15 D:24/1024/16/P1000 E:1/512/8/FP15 D:24/1032/24/P1000 E:1/512/8/FP15 D:24/1032/24/P1000 E:1/512/8/FP15 D:24/1024/16/P1000|8x8 8x8 8x8 10x10 10x10 12x12 12x12|100 100 100 99.2 100 100 100 93.9 100 100 99.7 82.6 100 100 99.6 75.9 100 100 99.5 69.3 100 99.6 95.5 43.7 100 99.6 88.9 23.0|11.1 14.1 19.5 36.3 42.6 99.6 99.3|
|---|---|---|---|
Table 19: Large models: accuracy of eigenvalue computations. E=Encoder, D=Decoder, 1/512/8 : 1 later,
512 dimensions, 8 heads. Sorted by 1% accuracy. *Millions of examples to reach 99% accuracy
**D.4** **Model performance on different training sets**
The models presented in the main part of this paper are trained on Wigner matrices (matrices with independent
and identically distributed, iid, coefficients) with fixed-range coefficient. In section 5, I argued that different
training sets allowed for better out-of-domain generalization. Table 20 summarizes in-domain performance
(i.e. accuracy when the test set is has the same distribution as the training set) on different training sets.
Wigner matrices with uniform or Gaussian distributed, and fixed or variable-range, coefficients are learned
to high accuracy (more than 99%) by all models. The eigenvalues of non-Wigner matrices with Gaussian
or Laplace distributed eigenvalues, and of mixtures of Wigner and non-Wigner matrices, are also predicted
to high accuracy by all models. Over matrices with positive or uniformly distributed eigenvalues, smaller
models using the FP15 encoding prove difficult to train.
-----
|Col1|FP15 P1000 4/1 layers 6/1 layers 4/1 layers 6/1 layers|
|---|---|
|Wigner matrices (iid coefficients) Uniform iid A=10 Gaussian iid A=10 Uniform iid A=1,100 Uniform iid A=1,1000|99.6 100 99.8 100 99.8 100 99.8 100 99.0 99.2 99.8 100 99.2 99.5 99.7 99.8|
|---|---|
|Non Wigner Positive A=10 Uniform A=10 Gaussian A=10 Laplace A=10 Gaussian+uniform+Laplace A=10|12.7 100 100 100 8.2 10.8 99.9 100 99.6 100 100 100 99.4 99.9 99.9 99.9 3.8 99.8 99.6 99.9|
|---|---|
|Wigner and non-Wigner mixtures iid+gaussian A=10 iid+positive A=10 iid+Laplace A=10 iid+positive+gaussian A=10 iid+positive+Laplace A=10|99.5 99.9 98.0 99.7 99.8 99.9 99.8 99.8 99.6 99.8 99.6 99.5 99.8 99.9 99.7 99.9 99.0 99.8 99.6 99.8|
|---|---|
Table 20: In-distribution eigenvalue accuracy (tolerance 2%) for different training distributions. All
models have 512 dimensions, and 8 attention heads, and are trained on 5x5 matrices.
**E** **Alternative architectures**
**E.1** **Other sequence to sequence models : LSTM and GRU**
I experimented with two popular recurrent architectures: long short-term memories (LSTM Hochreiter &
Schmidhuber (1997)), and gated recurrent units (GRU Cho et al. (2014)), on three tasks : addition of 5 5
_×_
and 10 10 matrices, eigenvalues and matrix inversion of 5 5 matrices. All models are sequence-to-sequence
_×_ _×_
architectures: an encoder and a decoder (LSTM or GRU), with 2 to 8 layers, and 1024 or 2048 hidden
dimensions. The input and output sequences, encoded as in the rest of the paper, are pre-processed (and
decoded) via an embedding layer with 256 or 512 dimensions.
Addition, a very easy task for transformers (see section 4.2) proves difficult for LSTM and GRU. None of
the models can learn addition of 10 10 matrices. Some models can learn addition of 5 5 matrices, but
_×_ _×_
whereas transformers achieve 100% accuracy for all tolerances, the best LSTM and GRU only exceed 90% at
1% tolerance. GRU seem to perform better than LSTM on this task, and 2-layer models perform better than
4-layer models, but transformers have a distinct advantage over LSTM and GRU for addition.
Both LSTM and GRU can be trained to predict eigenvalues of 5 5 matrices with the same accuracy as
_×_
transformers, for the P1000 and FP15 encoding (table 22). Matrix inversion, on the other hand, cannot be
learned. Overall, these experiments show that other sequence to sequence architectures, LSTM and GRU,
can learn tasks like eigenvalues and addition of small matrices. However, they are less efficient on addition
(in terms of precision and scaling to larger matrices) and fail on more complex tasks, like matrix inversion.
**E.2** **Shared-layer transformers: Universal transformers**
In the Universal Transformer (Dehghani et al., 2018), the stacked layers of usual transformer implementations
are replaced by one layer that is looped through a fixed number of times (feeding the output of one iteration
into the input of the next). This amounts to sharing the weights of the different layers, therefore greatly
reducing the number of parameters in the model. This technique can be applied to the encoder, the decoder
or both. The number of iterations is a fixed hyperparameter, but the original paper also proposed a halting
mechanism inspired by Adaptive Computation Time (Graves, 2016), to adaptively control loop length at the
token level. In this version, a stopping probability is learned for every position in the input sequence, and
-----
|Hidden dimension Embedding dimension|2 layers 4 layers 1024 2048 1024 2048 256 512 256 512 256 512 256 512|
|---|---|
|Long short-term memory 5% tolerance 2% tolerance 1% tolerance 0.5% tolerance Gated recurrent Units 5% tolerance 2% tolerance 1% tolerance 0.5% tolerance|100 0 0 100 0 0 0 0 98 0 0 100 0 0 0 0 95 0 0 86 0 0 0 0 34 0 0 1 0 0 0 0 100 100 0 100 0 100 0 0 100 28 0 100 0 99 0 0 44 0 0 91 0 74 0 0 0 0 0 9 0 4 0 0|
|---|---|
Table 21: 5 5 matrix addition using LSTM and GRU.
_×_
|Hidden dimension Layers|FP15 P1000 1024 2048 1024 2048 4 6 8 4 6 8 4 6 8 4 6 8|
|---|---|
|LSTM 5% tolerance 2% tolerance 1% tolerance 0.5% tolerance Gated recurrent Units 5% tolerance 2% tolerance 1% tolerance 0.5% tolerance|100 100 100 100 100 6 100 100 5 100 100 100 95 100 100 99 100 1 100 100 1 100 99 100 78 98 99 91 98 0 97 98 0 100 92 99 46 81 83 62 68 0 78 88 0 89 57 76 100 100 100 100 100 100 100 100 100 100 5 100 98 99 100 100 100 100 99 100 100 100 1 100 86 93 96 98 99 97 94 98 95 97 0 98 53 68 75 78 83 65 65 76 63 75 0 66|
|---|---|
Table 22: Eigenvalue computation with LSTM and GRU, 5 5 matrices.
_×_
once it reaches a certain threshold, the layer merely copies the input onto the output. The iteration stops
when all positions have halted, or a specific value is reached. A recent paper (Csordás et al., 2021) proposes
to use a similar copy-gating mechanism to skip iterations in a fixed-length loop. I experiment with these
three variants (fixed length, adaptive halting, copy gating) on the addition (of 10 10 matrices), eigenvalue
_×_
and matrix inversion tasks (5 5 matrices).
_×_
For the addition task, I train universal transformers with one layer and in the encoder and decoder, 256 or
512 dimensions and 8 attention heads, and use the B1999 encoding for the data. I experiment with looped
encoder, looped decoder, and loop in both, a loop length of 4, copy-gating and ACT (the 4 loops in then
a maximum number of iterations)and copy-gating. Table 23 summarizes my findings. Only models with
encoder loops learn to add, and models with 512 dimensions learn with over 95% accuracy for all tolerances.
Universal Transformers with one layer (looped-encoder only) perform as well as 2/2 transformers.
On the eigenvalue task, I experiment on the P1000 and FP15 encoding, with encoder-loop only 1/1 Universal Transformers with 4 or 8 loops. Universal transformers using the P1000 encoding achieve the same
performances (with only one layer) than transformers from section 4.4. 4 loop transformers seem to perform
best, gating does not seem to improve performance and ACT slightly degrades it. With the FP15 encoding,
universal transformers become very difficult to train: only the 4 loop gated version achieves significant
accuracy (still lower than the 6/1 transformers).
Finally, I experimented with matrix inversion, with FP15/P1000 and P1000/P1000 encodings, and 4 or 8
loops in the encoder. A gated universal transformer using FP15 in the input and P1000 in the output achieved
73% accuracy, a significant result albeit lower than the best result achieved with 6/1 transformers using
the same encodings (90%). With the P1000 encoding, the best universal transformers reach 55% accuracy,
-----
|Col1|5% 2% 1% 0.5%|
|---|---|
|Looped encoder 256 dimensions, 4 loops 512 dimensions, 4 loops 256 dimensions, 4 loops, gated 512 dimensions, 4 loops, gated 256 dimensions, 4 loops, ACT 512 dimensions, 4 loops, ACT|15 1 0 0 100 100 100 100 97 66 41 29 100 100 100 100 100 92 76 66 100 100 98 96|
|---|---|
|Looped decoder Looped encoder and decoder|0 0 0 0 0 0 0 0|
|---|---|
|2/2 transformer (baseline)|100 100 100 100|
|---|---|
Table 23: Accuracy of Universal transformers, 10 10 matrix addition for different tolerances.
_×_
|Col1|5% 2% 1% 0.5%|
|---|---|
|P1000 4 loops 8 loops 4 loops, gated 8 loops, gated 4 loops, ACT 8 loops, ACT|100 100 97 87 100 99 93 69 100 100 98 91 100 100 99 90 100 97 89 62 100 95 77 42|
|---|---|
|FP15 4 loops 8 loops 4 loops, gated 8 loops, gated 4 loops, ACT 8 loops, ACT|4 0 0 0 0 0 0 0 94 84 57 23 6 1 0 0 4 0 0 0 4 0 0 0|
|---|---|
|4/1 transformer (P1000 baseline) 6/1 transformer (FP15 baseline)|100 100 99 89 100 100 100 92|
|---|---|
Table 24: Accuracy of Universal transformers, 5 5 matrices eigenvalue computation for different tolerances.
_×_
compared to 80% for their 6/1 transformer counterparts. Overall, Universal Transformers seem to achieve
comparable performances with deep transformers (except on the inversion tasks), using less parameters. This
makes shared layer transformers an interesting direction for future work.
**F** **Additional experiments**
**F.1** **Retraining**
Models trained on matrices of a given size do not generalize to different dimensions, but they can be retrained
over samples of matrices of different size. This takes comparatively few examples: a 5 5 model, that takes 40
_×_
million examples to be trained, can learn to predict with high accuracy eigenvalues of matrices of dimension
6 and 7 with about 25 million additional examples. Table 25 presents those results. The possibility to retrain
large transformers (such as GPT-3) on different tasks is well documented, it is interesting to observe the
same phenomenon in smaller models.
-----
Encoding Retrain dimensions Accuracy (5%) Accuracy (2%) Retrain examples
P10 5-6 100 99.9 10M
P10 5-7 100 93.6 25M
P1000 5-6 100 97.7 25M
Table 25: Model accuracy after retraining. Models trained over 5x5 matrices, retrained over 5-6 and 5-7. Overall
performance after retraining (tolerance 5 and 2%), and number of examples needed for retraining. All models have
512 dimensions and 8 attention heads
**F.2** **Joint training: learning to perform several operations**
All models so far are trained on just one task. In this section, I investigate joint learning: training one model
to perform several operations. To this effect, a token is added at the beginning of the input and output
sequence, to indicate the task (e.g. Transpose or Add), and generate training data by randomly mixing
examples of the different operations to be performed.
Transformers with 4 or 6 layers, 512 dimensions and 8 attention heads are trained on eight datasets
corresponding to the following joint training goals (all operations in equal proportions):
- Transpose and add (TA)
- Transpose, add and dot product (vector matrix multiplication) (TAD)
- Transpose, add, dot product and matrix multiplication (TADM)
- Transpose, add, dot product, matrix multiplication and eigenvalues (TADME)
- Transpose, add, dot product, matrix multiplication, eigenvalues and eigenvectors (TADMEF)
- Transpose, add, dot product, matrix multiplication, eigenvalues, eigenvectors and matrix inversion
(TADMEFI)
- Eigenvalues, eigenvectors and matrix inversion (EFI)
Table 26 summarizes results.
|Col1|T A D M E F I|
|---|---|
|TA TAD TADM TADME TADMEF TADMEFI EFI|100 100 100 100 100 100 100 100 100 100 100 26 100 80 100 100 100 100 3 0 100 100 100 100 3 0 0 100 22 0|
|---|---|
Table 26: Accuracy of joint training, 5 5 matrices, 5% tolerance.
_×_
Over mixtures of the four basic operations (transposition, addition, dot products and multiplication: goals
TA, TAD and TADM), models predict all operations with almost perfect accuracy. Joint training on the basic
operations and eigenvalue computations (the TADME task) allows the model to predict eigenvalues with 80%
accuracy, in exchange for a loss of performances on the dot product task. As the number of non-basic tasks
increases, the model keeps learning basic operations to 100% accuracy (as in the TADM setting), but the
more advanced operations are not learned. Joint training on the advanced tasks only (eigenvalues, vectors
and inversion) results in 100% accuracy on eigenvalue computation, 22% on eigenvectors, and 0 on inversion.
These results demonstrate the feasibility of joint training on basic matrix operations, but also suggest that
further research is needed if one wants to extend joint training to all the tasks considered in this paper.
**F.3** **Additional results with noisy data**
See tables 27 and 28.
-----
|Col1|B1999 P1000 2/2 layers 4/4 layers 2/2 layers 4/4 layers 256 512 256 512 256 512 256 512|
|---|---|
|5% tolerance 0.01σ error 0.02σ 0.05σ|100 100 100 100 100 100 99.4 100 100 100 99.8 100 100 100 100 100 41.5 41.2 41.7 41.6 39.3 41.2 39.4 40.7|
|---|---|
|2% tolerance 0.01σ error 0.02σ 0.05σ|99.8 99.9 99.8 99.9 99.4 100 98.2 99.9 43.7 44.2 42.1 44.7 39.0 44.9 42.6 45.3 0 0 0 0 0 0 0 0|
|---|---|
|1% tolerance 0.01σ error 0.02σ 0.05σ|39.8 41.7 39.6 44.0 36.6 44.0 28.9 44.6 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0 0 0 0 0 0 0 0|
|---|---|
Table 27: Accuracy of noisy 5 5 matrix addition for different error levels and tolerances.
_×_
|Col1|FP15 P1000 4/4 layers 6/6 layers 4/4 layers 6/6 layers 512 1024 512 1024 512 1024 512 1024|
|---|---|
|5% tolerance 0.01σ error 0.02σ 0.05σ|6.1 100 5.1 6.0 100 100 100 100 100 100 6.7 100 100 100 100 100 99.1 99.3 99.3 6.4 99.3 99.0 99.0 98.8|
|---|---|
|2% tolerance 0.01σ error 0.02σ 0.05σ|0.7 99.8 0.5 0.8 99.3 99.6 99.9 99.8 97.0 97.1 0.8 88.4 97.3 97.9 93.1 95.4 37.9 38.4 40.6 0.5 40.1 37.3 37.5 35.3|
|---|---|
|1% tolerance 0.01σ error 0.02σ 0.05σ|0.1 82.1 0.1 0.2 79.7 83.8 87.9 83.8 47.8 51.3 0.1 26.1 46.2 47.5 36.4 41.3 3.8 4.2 4.1 0.1 4.1 3.8 3.9 3.4|
|---|---|
Table 28: Accuracy of noisy eigenvalue computations, for different error levels and tolerances, 5 5
_×_
matrices.
**G** **Number of parameters**
The number of parameters in the sequence-to-sequence transformers used in this paper can be calculated as
follows.
- A self-attention mechanism with dimension d has 4d(d + 1) parameters: it is composed of four linear
layers (K, Q, V and the output layer), with d input, d output and a bias.
- A cross-attention mechanism with de dimensions in the encoder, and d in the decoder has 2d(d+de +2)
parameters (K and V are de _d layers)._
_×_
- A FFN with one hidden layer, d input and output, and h hidden units has d(h + 1) + h(d + 1)
parameters.
- A layer normalization with d dimensions has 2d parameters.
- An encoder layer with dimension d has a self-attention mechanism, a FFN with 4d hidden units (in
this implementation) and two layer normalizations, for a total number of parameters of 12d[2] + 13d.
- A decoder layer has a cross-attention layer (encoding dimension de) and a layer normalization on top
of an encoder, for a total of 14d[2] + 19d + 2ded parameters.
-----
- An embedding of dimension d for a vocabulary of w words will use dw parameters, and 2d more if it
is coupled to a layer normalization.
- The final prediction layer with an output dimension of d and a decoded vocabulary of w words will
use (d + 1)w parameters (but in this case, dw will be shared with the decoder embedding).
Overall, the number of parameters for a transformer with ne encoding layers with dimension de, n _d decoding_
_∗_
layers with dimension dd, an input vocabulary of wi words, an output vocabulary of wo words and a positional
embedding of wp words (corresponding to the maximum sequence length) can be computed by the formula:
_P = de(wi + wp + 2) + ((wo + wp + 2)dd + wo) + nede(12de + 13) + nddd(14dd + 2de + 19)_
the four terms in the sum corresponding to the input embedding, the output embedding, the encoder and the
decoder.
Table 29 provides the number of parameters for some of the models used in this paper. For the positional
embedding, the number of words is the longest input and output sequence studied with that model.
|Experiment|Model|Parameters|
|---|---|---|
|Transposition|1/1 layers 256 dimensions P10 1/1 layers 256 dimensions P1000 1/1 layers 256 dimensions B1999 1/1 layers 256 dimensions FP15|2,276,171 2,737,871 3,297,554 17,045,441|
|---|---|---|
|Addition|2/2 layers, 512 dimensions, B1999|17,619,218|
|---|---|---|
|Matrix vector multiplication|2/2 layers 512 dimensions P10 2/2 layers 512 dimensions P1000 4/4 layers 512 dimensions P1000|15,578,443 16,500,943 31,213,775|
|---|---|---|
|Matrix multiplication|1/4 layers 512 dimensions P1000 1/6 layers 512 dimensions P1000|21,756,623 30,164,687|
|---|---|---|
|Eigen decomposition|1/6 layers 512 dimensions FP15 6/1 layers 512 dimensions FP15 6/1 layers 512 dimensions P1000 6/6 layers 512 dimensions P1000|58,751,937 53,493,697 24,906,447 45,926,607|
|---|---|---|
Matrix inversion 6/1 layers 512 dimensions FP15/P1000 39,186,127
Table 29: Number of parameters of transformers used in the paper.
**H** **Eigenvalue distribution of Wigner matrices, an empirical justification**
Figure 3 provides an empirical confirmation of the property of Wigner matrices mentioned in sections 2.2
and 5: the standard deviation of their eigenvalues is a function of their dimension and standard deviation of
their coefficients only, and does not depend on the actual distribution of the coefficients. In particular, for
coefficients with standard deviation σ = 10/√3 = 5.77, we expect the standard deviation of their eigenvalue
distribution to be σ = 12.91, 18.26, 22.36 and 25.81 for square matrices of dimension 5, 10, 15 and 20.
For three distributions, uniform, Laplace and gaussian, and four dimensions (5, 10, 15, and 20), 100 000
random matrices with the same standard deviation of coefficients were generated, and their eigenvalues were
computed. Standard deviations are within 0.01 of theoretical values for all distributions and dimensions. It is
interesting to note how the distributions (which correspond to the original coefficient distribution for n = 1)
are similar to the semi-circle as dimension increases.
-----
5x5
10x10
15x15
20x20
1000
800
600
400
200
0
1000
800
600
400
200
= 12.92
= 18.25
= 22.36
= 25.82
= 12.91 = 18.25 = 22.37 = 25.82
= 12.91
= 18.25
= 22.37
= 25.82
2000
1750
1500
1250
1000
750
500
250
= 12.90
60 40 20 0 20 40 60
= 18.25
75 50 25 0 25 50 75
= 22.37
75 50 25 0 25 50 75
= 25.83
75 50 25 0 25 50 75
Figure 3: Empirical distributions of eigenvalues for Wigner matrices, dimension 5x5 (left) to 20x20 (right),
with uniform (top), gaussian (middle) and Laplace (bottom) coefficients. All distributions computed from 100 000
random matrices.
-----
| [
"François, Charton"
] | 2022-11-08T00:00:00 | TMLR 2022 | false | 43 | 2 | null | http://arxiv.org/abs/2112.01898 | https://arxiv.org/abs/2112.01898 | https://www.semanticscholar.org/paper/45ece6f3b0a319dba60c20b3013b5161dd49c58b |
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Most existing Large Language Model (LLM) benchmarks on scientific problem reasoning focus on problems grounded in high-school subjects and are confined to elementary algebraic operations. To systematically examine the reasoning capabilities required for solving complex scientific problems, we introduce an expansive benchmark suite SciBench for LLMs. SciBench contains a carefully curated dataset featuring a range of collegiate-level scientific problems from mathematics, chemistry, and physics domains. Based on the dataset, we conduct an in-depth benchmarking study of representative open-source and proprietary LLMs with various prompting strategies. The results reveal that current LLMs fall short of delivering satisfactory performance, with the best overall score of merely 43.22%. Furthermore, through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities. Our analysis indicates that no single prompting strategy significantly outperforms the others and some strategies that demonstrate improvements in certain problem-solving skills could result in declines in other skills. We envision that SciBench will catalyze further developments in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and discovery. | An in-depth benchmarking study of representative open-source and proprietary LLMs with various prompting strategies indicates that no single prompting strategy significantly outperforms the others and some strategies that demonstrate improvements in certain problem-solving skills could result in declines in other skills. | # SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
**Xiaoxuan Wang[†][∗]** **Ziniu Hu[†∗]** **Pan Lu[†∗]** **Yanqiao Zhu[†∗]** **Jieyu Zhang[‡]**
**Satyen Subramaniam[†]** **Arjun R. Loomba[†]** **Shichang Zhang[†]** **Yizhou Sun[†]** **Wei Wang[†]**
_†University of California, Los Angeles_ _‡University of Washington_
```
https://github.com/mandyyyyii/scibench
```
**Abstract**
Recent advances in Large Language Models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only contain problems grounded in junior and senior high school subjects, contain
only multiple-choice questions, and are confined to a limited scope of elementary
arithmetic operations. To address these issues, this paper introduces an expansive
benchmark suite SCIBENCH that aims to systematically examine the reasoning
capabilities required for solving complex scientific problems. SCIBENCH contains
two datasets: an open set featuring a range of collegiate-level scientific problems,
and a closed set comprising problems from undergraduate-level exams. Based on
the two datasets, we conduct an in-depth benchmarking study of five representative
LLMs with various prompting strategies. Furthermore, through a detailed user
study, we show that no single prompting strategy significantly outperforms the
others and some strategies that demonstrate improvements in certain problemsolving skills could result in declines in other skills.
**1** **Introduction**
Recent advancements in Large Language Models (LLMs) have dramatically expanded the boundaries
of artificial intelligence [5, 14, 25, 34, 35, 44, 48, 49]. They have demonstrated outstanding performance in many mathematical reasoning tasks and might suggest that LLMs are capable of performing
mathematical reasoning tasks [7, 8, 13, 22, 47]. However, we argue that this assertion might be
overly optimistic due to the inherent limitations of the current benchmarks. Firstly, many existing
benchmarks such as ScienceQA [28] and GSM8K [10] only contain problems grounded in grade-level
subjects, thereby lacking enough complexity. Although other benchmarks like MATH [18] introduce
high-school level problems, they only involve a restricted range of operations — addition, subtraction, multiplication, and exponentiation — which do not adequately assess the depth of reasoning
abilities of LLMs. Secondly, recent works including MMLU [17], AGIEval [50], and CEval [21],
despite introducing challenging problems that span a wide range of disciplines, mainly focus on
multiple-choice questions without providing detailed solutions. This setup could inadvertently mislead benchmark evaluation, as it allows LLMs to guess the answers from candidate choices and
appear knowledgeable in comprehending the questions. Moreover, the lack of detailed solutions
prevents us from understanding the limitations of LLMs and discerning why they commit certain
errors. Furthermore, these benchmarks often source problems from online material, where questions
are closely followed by answers. As these problems could already be a part of the training data,
the models, trained in an autoregressive manner, may directly predict the answer without genuinely
understanding the problem. This potential data leakage provides a shortcut for LLM evaluation,
further compromising its validity.
_∗Equal contribution._
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
To mitigate the aforementioned deficiencies in existing LLM evaluation, this paper introduces a novel
college-level Scientific problem solving Benchmark, referred to as SCIBENCH. Our SCIBENCH
contains two datasets of college-level scientific problems. The open dataset includes 695 problems
collected from widely-used textbooks in college-level Chemistry, Physics, and Math courses. To
simulate real-world evaluation, we also include a closed dataset that encompasses seven sets of
midterm and final examination questions from three college courses in Computer Science and
Mathematics. Distinct from existing benchmarks, all of the problems in SCIBENCH are open-ended,
free-response questions. They require multiple steps of reasoning and the computation therein involve
complex arithmetic operations such as differentiation and integration. To ensure the integrity of
our evaluation, these datasets have been manually extracted from PDF documents and formatted
into LaTeX documents, thereby minimizing the possibility of their leakage in LLM training data.
Importantly, SCIBENCH also includes detailed solution steps, facilitating detailed error analysis.
Our evaluation includes five representative LLMs: LLaMA-2-7B, LLaMA-2-70B, Claude2, GPT-3.5,
and GPT-4, with various prompting strategies, including CoT, zero-shot learning, and few-shot
learning and external tools such as Python and Wolfram languages. The experimental results indicate
that the complexity and difficulty of our dataset are sufficient to differentiate the performance levels of
different LLMs. With the strongest configuration, which combines both CoT prompting and external
tools, GPT-4 achieves an average score of 35.80% on the open dataset and 51.57% on the closed
exam dataset.
In order to gain a comprehensive understanding of the limitations of LLMs in scientific problem
solving, we propose a novel self-refinement method to uncover the deficient skills in the solutions
made by LLMs. Our analysis finds that individual settings only bolster certain skills, and in some
instances, can even undermine capabilities inherent in the original GPT models..
**2** **The SCIBENCH Dataset**
To evaluate the capabilities and analyze the limitations of Large Language Models (LLMs) to solve
scientific computing problems, we collect a new dataset consisting of college-level textbooks and
course exams in a variety of domains. Our dataset aims to improve the previous benchmarks by
including more challenging problems, which require more reasoning steps, and more advanced
types of computations. Specifically, the selected dataset should fulfill the following requirements:
1) Inclusion of college-level problems. The chosen problems demand a solid understanding of
domain-specific knowledge, proficiency in reasoning capability, adept calculation skills, and the
ability to comprehend complex concepts. 2) Inclusion of detailed solutions. To facilitate a thorough
analysis of the limitations of LLMs, detailed solutions should be provided as well, which could
facilitate a finer-grained examination of the capacity of LLMs to handle complex problem-solving
tasks. 3) Inaccessibility in text formats. To ensure an unbiased evaluation, questions should not be
readily accessible online and cannot be easily extracted or transformed into text. This aims to mitigate
any potential information leakage from the exposure of LLMs to pre-existing online question banks,
such as those found in standardized tests like the SAT exams. 4) Enabling of assessing advanced
**problem solving ability. The problems to benchmark should not be confined to basic arithmetic**
operations like addition and multiplication. Rather, they should enable evaluating the capability of
LLMs in performing advanced computations such as integration and differentiation, particularly
when dealing with exceptionally small or large floating numbers.
Accordingly, we select ten textbooks that have been extensively used in college courses as the open
textbook dataset from three scientific fields Physics, Chemistry, and Math. We report the number of
problems and the ratio of problems with detailed solutions of each title in Table S1. For brevity, we
will be using their acronyms when referring to specific textbooks throughout the paper. Furthermore,
in order to simulate real-world evaluation, we collect a closed set of exam questions from college
courses from Computer Science and Math departments, including Data Mining, Machine Learning,
and Differential Equations. The statistics of the problems in each exam is detailed in Table S2. We
refer readers of interest to Appendix A for details on these textbooks and exams.
To reduce the likelihood of correct answers being merely guessed from candidates, we choose to
mainly include questions with more challenging, free-response answers, rather than multiple-choice
questions in previous works [9, 26, 28]. In order to facilitate standardized and automated evaluation,
-----
Table 1: Experimental results in terms of accuracy (%) on the textbook dataset. The best performing
score is highlighted in bold and second-best is underlined.
Zero−S 0.00 0.00 0.00 0.00 1.37 0.00 0.00 2.00 2.67 4.76 1.03
Chemistry Physics Math
Model Setting Avg.
```
atkins chemmc quan matter fund class thermo diff stat calc
```
Zero 0.00 0.00 0.00 0.00 1.37 0.00 0.00 2.00 5.33 0.00 1.03
Zero+CoT 0.00 2.56 0.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.67
LLaMA-2-7B Few 3.74 5.13 5.99 2.04 4.11 0.00 1.49 6.00 8.00 0.00 3.78
Few+CoT 1.87 5.13 2.94 0.00 5.48 0.00 0.00 0.00 12.00 7.14 3.60
Few+Py 0.93 2.56 0.00 0.00 0.00 0.00 0.00 0.00 6.67 0.00 1.20
Few+Wol 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Zero−S 1.87 2.56 0.00 0.00 0.00 0.00 0.00 0.00 2.67 0.00 0.86
Zero 1.87 2.56 0.00 0.00 1.40 0.00 0.00 0.00 10.70 4.76 2.41
Zero+CoT 0.93 2.56 0.00 0.00 0.00 0.00 1.49 0.00 10.70 0.00 1.89
LLaMA-2-70B Few 9.30 12.83 14.71 2.04 15.07 6.38 2.94 8.00 21.33 9.52 10.45
Few+CoT 13.10 12.83 14.71 4.08 12.33 0.00 0.00 0.00 13.30 9.52 8.40
Few+Py 0.93 7.69 2.94 0.00 9.59 0.00 1.49 0.00 17.30 9.52 5.14
Few+Wol 1.87 0.00 0.00 0.00 1.39 0.00 0.00 2.00 5.33 11.90 2.23
Zero−S 16.82 17.95 8.82 8.16 6.85 12.77 7.46 4.00 37.33 9.52 14.06
Zero 15.00 12.83 14.71 10.20 12.33 6.40 9.00 4.00 38.70 16.70 14.94
Zero+CoT 20.56 15.38 8.82 4.08 8.23 4.26 5.97 6.00 36.00 14.29 13.89
Claude2 Few 15.87 20.51 8.82 8.16 6.85 10.64 8.51 4.00 32.00 11.90 13.50
Few+CoT 15.89 25.64 14.65 6.12 9.59 6.38 10.45 8.00 33.33 19.05 15.26
Few+Py 6.54 12.82 14.71 4.08 17.81 8.51 5.97 20.00 40.00 16.67 14.92
Few+Wol 9.35 0.00 2.94 0.00 1.39 0.00 0.00 2.00 5.33 11.90 3.77
Zero−S 8.41 28.21 5.88 4.08 12.33 2.13 5.97 4.00 21.33 13.95 10.62
Zero 4.67 20.51 8.82 2.04 10.96 2.13 2.94 6.00 28.00 9.30 9.59
Zero+CoT 6.54 23.08 2.94 10.20 12.33 2.12 5.97 12.00 33.33 9.30 12.17
GPT-3.5 Few 5.61 15.38 11.76 4.08 8.22 0.00 1.49 10.00 26.67 13.95 9.60
Few+CoT 8.41 20.51 8.82 6.12 10.96 2.12 1.49 10.00 38.67 6.98 11.99
Few+Py 13.08 33.33 8.82 16.33 26.01 4.26 7.46 16.00 44.00 26.19 19.91
Few+Wol 3.74 7.69 2.94 18.37 17.81 6.38 2.99 12.00 5.33 2.38 7.87
Zero−S 14.95 25.64 8.82 18.37 21.92 12.77 7.46 8.00 28.00 19.05 16.81
Zero 27.10 23.08 14.71 22.45 15.07 8.51 11.94 18.00 56.00 **42.86** 25.09
Zero+CoT **28.04** 43.59 14.71 20.41 21.92 **19.15** 17.91 22.00 50.67 **42.86** 28.52
GPT-4 Few 15.87 30.77 17.65 12.24 26.03 12.77 5.97 8.00 49.33 33.33 21.46
Few+CoT 21.05 **46.15** 17.65 26.53 27.40 14.00 13.43 18.00 61.33 35.71 28.35
Few+Py 21.05 41.03 **38.24** **28.57** **38.36** 17.02 **29.85** **34.00** **69.33** **42.86** **35.80**
Few+Wol 3.74 0.00 17.65 26.53 27.30 17.02 17.91 32.00 7.69 14.29 15.56
we focus on answers that only contain single numerical numbers to avoid ambiguity for the textbook
dataset. The details of data processing are provided in Appendix A.3.
**3** **Experiments**
**3.1** **Experiment Setup**
We evaluate three close-source LLMs: Claude2 (claude2) [1], GPT-3.5 (gpt-3.5-turbo) [34],
GPT-4 (gpt-4) [35], along with two open LLMs: LLaMA-2-7B (llama-2-7b-chat) and LLaMA2-70B (llama-2-70b-chat) [45] on two benchmark datasets. We consider seven combinations of
prompting strategies and learning paradigms: zero-shot learning without the system prompt (Zero−S),
zero-shot learning with the system prompt (Zero), few-shot learning (Few), CoT prompting under
zero-shot (Zero+CoT) and few-shot learning (Few+CoT) scenarios, few-shot learning that prompts
to use Python (Few+Py), and Wolfram Language (Few+Wol) as external tools. Regarding the exam
dataset, to replicate a real-world exam environment, we only consider two specific settings: zeroshot learning (Zero) and zero-shot learning supplemented with CoT prompting (Zero+CoT). The
information about settings description and implementation details are provided in Appendix C
**3.2** **Results and Analysis**
We report the model performance in terms of accuracy score for each textbook and an average score
over all problems. The results of all LLMs in various settings on the textbook and the exam dataset
are summarized in Table 1 and Appendix C.3 respectively. We have the following observations.
-----
**Observation 1. SCIBENCH is complex enough to differentiate among LLMs. Our findings show**
that open-source models LLaMA-2-7B and LLaMA-2-70B do not yet rival closed-source counterparts
on both textbook and exam datasets, where the best performance is obtained with GPT-4 with Python
as the external tool in the few-shot learning setting. Within both the LLaMA and GPT series, we
also observe a clear correlation between increased model capacity (i.e., larger parameter sizes) and
improved performance. This observation demonstrates that the complexity of SCIBENCH is able to
differentiate the capacities of different LLMs.
**Observation 2. The zero-shot learning setting exhibits comparable performance to the few-shot**
**learning setting. For example, with CoT prompting, GPT-3.5 achieves average scores of 12.17%**
and 11.99%, and GPT-4 achieves 28.52% and 28.35% in zero- and few-shot settings respectively.
Moreover, in many textbooks such as Quantum Chemistry (quan and chemmc), which focus on a
specialized subdomain within each field, few-shot learning outperforms zero-shot learning, with
improvements of 2.94% and 2.56% in GPT-4 and 14.71% and 10.20% in LLaMA-2-70B under
the CoT setting, for instance. This could be attributed to the selected prompt examples being
representative and informative to the domain.
**Observation 3. Utilizing advanced prompting strategies like CoT brings advantages over**
**vanilla LLMs. For the textbook dataset, the CoT prompting yields average improvements of 2.58%**
and 2.39% under zero-shot and few-shot learning for GPT-3.5, and 3.43% and 6.89% for GPT-4,
respectively. This improvement suggests that encouraging LLMs to generate detailed solution steps
helps obtain correct final answers, though its effectiveness varies across different models and settings.
However, this trend is less obvious in LLaMA models with 10.45% and 8.40% in LLaMA-2-70B
under the few-shot setting, possibly due to their inherent inadequacy.
**Observation 4. Prompts that utilize Python yield improvements in certain models while those**
**using Wolfram diminish performance. Under few-shot learning scenarios, utilizing Python as an**
external tool results in an improvement of 7.92% compared to the CoT prompting for GPT-3.5, and
an improvement of 7.45% for GPT-4. However, in Claude2, this trend is less evident with average
scores of 15.26% and 14.92% with and without utilizing Python. Similarly, LLaMA models exhibit
a decrease in performance from 8.40% to 5.14% in LLaMA-2-70B. Utilizing Wolfram Language
does not help few-shot learning and even results in a deteriorated performance, with a decrease of
11.49% compared to the CoT prompting for Claude2, and a decrease of 12.79% for GPT-4. We note
that converting the solution steps to Wolfram Language often introduces syntax issues and thus fails
to produce satisfactory results, particularly in textbooks like Quantum Chemistry (chemmc), which
involve numerous variables.
**3.3** **Error Analysis of Various Prompting Strategies**
We present an evaluation protocol that automates the classification of error reasons into deficient
skills. The evaluation protocol involves analyzing both LLMs and reference (correct) solutions with
the assistance of human annotators to identify error reasons. These reasons are then summarized
into ten essential scientific problem-solving skills in which LLM may face challenges. Subsequently,
a LLM verifier is employed to automatically attribute each incorrectly answered problem to a lack
of a specific skill. The resulting error profiles enable the interpretation of the improved skills by
certain prompting strategies and the direct comparison of various strategies. More details about
the evaluation protocol are provided in Appendix D. Our finding indicates that there is a lack of a
universally effective setting: each configuration only enhances some specific abilities and occasionally
even hurts other skills that the original GPT models possess.
**4** **Conclusion**
In conclusion, this paper presents SCIBENCH, a college-level dataset that includes scientific problems
from Mathematics, Physics, and Chemistry, as well as exam questions in Computer Science and
Mathematics. We also conduct extensive experiments on five representative models, LLaMA-27B, LLaMA-2-70B, Claude2, GPT-3.5, and GPT4. The evaluation protocol we employ serves as a
framework for analyzing advanced problem-solving skills of LLMs in scientific domains. The findings
of experiment and analysis underscore the limitations of current LLMs in achieving satisfactory
performance, even with the assistance of various tools.
-----
**References**
[[1] Anthropic. Claude2. https://www.anthropic.com/index/claude-2, 2023. 3](https://www.anthropic.com/index/claude-2)
[2] Peter Atkins, Peter William Atkins, and Julio de Paula. Atkins’ physical chemistry. Oxford university
press, 2014. 8, 11
[3] Peter Atkins, Julio De Paula, and Ronald Friedman. Physical chemistry: quanta, matter, and change.
Oxford University Press, USA, 2014. 8, 11
[4] William E Boyce, Richard C DiPrima, and Douglas B Meade. Elementary differential equations and
_boundary value problems. John Wiley & Sons, 2021. 8, 11_
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
_Advances in neural information processing systems, 33:1877–1901, 2020. 1_
[6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan,
Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models
trained on code. arXiv preprint arXiv:2107.03374, 2021. 16
[7] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan,
Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models
trained on code. arXiv preprint arXiv:2107.03374, 2021. 1
[8] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588,
2022. 1
[9] Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, and Xinyi
Wang. Theoremqa: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023.
2, 16, 17
[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word
problems. arXiv preprint arXiv:2110.14168, 2021. 1, 16, 17
[11] Thomas Engel and Philip J Reid. Thermodynamics, statistical thermodynamics, and kinetics. Prentice Hall
Upper saddle River, 2010. 8, 11
[12] Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Chain-of-thought hub: A continuous effort to measure large language models’ reasoning performance. arXiv preprint arXiv:2305.17306,
2023. 16
[13] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham
Neubig. PAL: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022. 1
[14] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui
He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction
model. arXiv preprint arXiv:2304.15010, 2023. 1
[15] Taicheng Guo, Kehan Guo, Zhengwen Liang, Zhichun Guo, Nitesh V Chawla, Olaf Wiest, Xiangliang
Zhang, et al. What indeed can gpt models do in chemistry? a comprehensive benchmark on eight tasks.
_arXiv preprint arXiv:2305.18365, 2023. 16_
[16] David Halliday, Robert Resnick, and Jearl Walker. Fundamentals of physics. John Wiley & Sons, 2013. 8,
11
[17] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob
Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
1, 16, 17
[18] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874, 2021. 1, 17_
[19] Robert V Hogg, Elliot A Tanis, and Dale L Zimmerman. Probability and statistical inference, volume 993.
Macmillan New York, 1977. 8, 11
-----
[20] Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos qa: Machine reading
comprehension with contextual commonsense reasoning. arXiv preprint arXiv:1909.00277, 2019. 16
[21] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu,
Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023. 1, 17
[22] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language
models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022. 1
[23] Ira N Levine, Daryle H Busch, and Harrison Shull. Quantum chemistry, volume 6. Pearson Prentice Hall
Upper Saddle River, NJ, 2009. 8, 11
[24] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian
Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models.
_arXiv preprint arXiv:2211.09110, 2022. 16_
[25] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint
_arXiv:2304.08485, 2023. 1_
[26] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Intergps: Interpretable geometry problem solving with formal language and symbolic reasoning. In The Joint
_Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th_
_International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), 2021. 2, 9_
[27] Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and
Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language
reasoning. arXiv preprint arXiv:2110.13214, 2021. 17
[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question
answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022. 1, 2, 17
[29] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and
Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv
_preprint arXiv:2304.09842, 2023. 12_
[30] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and
Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning.
In International Conference on Learning Representations (ICLR), 2023. 16, 17
[31] Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for
mathematical reasoning. In The 61st Annual Meeting of the Association for Computational Linguistics
_(ACL), 2023. 16_
[32] Donald A McQuarrie. Quantum chemistry. University Science Books, 2008. 8, 11
[33] Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, et al. Lila: A unified benchmark for mathematical
reasoning. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP),
2022. 16, 17
[[34] OpenAI. Chatgpt: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/.,](https: //openai.com/blog/chatgpt/.)
2022. 1, 3
[35] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 1, 3
[36] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable questions for
squad. arXiv preprint arXiv:1806.03822, 2018. 16
[37] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv
_preprint arXiv:2302.04761, 2023. 12_
[38] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game:
Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
16
-----
[39] James Stewart, Saleem Watson, and Daniel Clegg. Calculus: Early transcendentals, 8th. Edition, Brooks/_Cole, Cengae learning, 2012. 8, 11_
[40] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu.
Scieval: A multi-level large language model evaluation benchmark for scientific research. arXiv preprint
_arXiv:2308.13149, 2023. 17_
[41] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and
whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. 16
[42] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia,
Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv
_preprint arXiv:2211.09085, 2022. 17_
[43] Stephen T Thornton and Jerry B Marion. Classical dynamics of particles and systems. Cengage Learning,
2021. 8, 11
[44] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation
language models. arXiv preprint arXiv:2302.13971, 2023. 1
[45] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and
fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 3
[46] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue:
A multi-task benchmark and analysis platform for natural language understanding. _arXiv preprint_
_arXiv:1804.07461, 2018. 16_
[47] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain
of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
1, 16
[48] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and
Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint
_arXiv:2303.16199, 2023. 1_
[49] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal
chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023. 1
[50] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu
Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv
_preprint arXiv:2304.06364, 2023. 1, 14, 16, 17_
-----
## Supplementary Material for SCIBENCH
**A** **SciBench: Textbook Sources**
**A.1** **Textbook**
- PHYSICAL CHEMISTRY, ATKINS ET AL. [2] (atkins) provides an exploration of equilibrium,
structure, and reactions, integrating contemporary techniques like nanoscience, spectroscopy, and
computational chemistry.
- QUANTUM CHEMISTRY, MCQUARRIE [32] (chemmc) meticulously covers Quantum Mechanics,
from foundational principles like blackbody radiation and Heisenberg’s Uncertainty Principle to
complex topics such as Schrödinger’s equation, quantum mechanical operators, and the application
of quantum mechanics in chemical bonding.
- QUANTUM CHEMISTRY, LEVINE ET AL. [23] (quan) explores quantum chemistry, providing a
detailed understanding of the Schrödinger equation, particle behavior in various scenarios, quantum
mechanics operators, and other foundational quantum principles. It delves into specific applications
like the electronic structure of diatomic and polyatomic molecules, variation methods, perturbation
theory, electron spin and its implications in quantum mechanics, as well as various computational
methods for molecular quantum mechanics.
- PHYSICAL CHEMISTRY, QUANTA, MATTER, AND CHANGE, ATKINS ET AL. [3] (matter) combines physics and mathematics, beginning with basics like differentiation and integration, advancing
through quantum mechanics and atomic structure, then exploring thermodynamics, molecular motion, and chemical kinetics. Each section is supplemented with mathematical concepts such as
differential equations, vectors, and probability theory.
- CLASSICAL DYNAMICS OF PARTICAL AND SYSTEMS, THORNTON AND MARION [43] (class)
initiates with an exploration of fundamental mathematical concepts, discussing scalars, vectors,
matrix operations, coordinate transformations, differentiation, and integration of vectors, using
these constructs to illustrate concepts like velocity, acceleration, and angular velocity. It then
transitions into the realm of Newtonian mechanics, detailing Newton’s laws, frames of reference,
and the equation of motion for a single particle.
- THERMODYNAMICS, STATISTICAL THERMODYNAMICS, AND KINETICS, [11] (thermo) navigates through thermodynamics’ principles, from fundamental concepts to complex laws, further
discussing real and ideal gases, solutions, electrochemical cells, and statistical thermodynamics. It
concludes with an examination of the kinetic theory of gases, transport phenomena, and chemical
kinetics.
- FUNDAMENTALS OF PHYSICS, HALLIDAY ET AL. [16] (fund) covers undergraduate physics
topics, ranging from fundamental concepts like motion and energy to more advanced areas such as
quantum physics and nuclear physics.
- ELEMENTARY DIFFERENTIAL EQUATIONS AND BOUNDARY VALUE PROBLEMS, [4] (diff)
provides a detailed exploration of differential equations, progressing from basic mathematical
models to advanced topics like the Laplace Transform, linear systems, numerical methods, and
Fourier series. It culminates with a deep dive into nonlinear equations, partial differential equations,
and boundary value problems.
- PROBABILITY AND STATISTICAL INFERENCE, [19] (stat) covers probability and statistics, including fundamental concepts, discrete and continuous distributions, bivariate distributions, functions
of random variables, and estimation techniques.
- CALCULUS: EARLY TRANSCENDENTALS, [39] (calculus) begins with diagnostic tests in foundational topics, and explores functions from multiple perspectives. It comprehensively covers
calculus concepts from limits to three-dimensional analytic geometry, incorporating applications in
various fields.
**A.2** **Examination**
- INTRODUCTION TO DATA MINING provides an introductory survey of data mining, which involves
the automatic discovery of patterns, associations, changes, and anomalies in large databases. It
-----
explores various application areas of data mining, including bioinformatics, e-commerce, environmental studies, financial markets, multimedia data processing, network monitoring, and social
service analysis.
- FUNDAMENTALS ARTIFICIAL INTELLIGENCE provides an introduction to the core problemsolving and knowledge representation paradigms in artificial intelligence. It covers Lisp programming with regular assignments, as well as topics such as search methods, planning techniques,
knowledge structures, natural language processing, expert systems, vision, and parallel architectures.
- DIFFERENTIAL EQUATIONS covers various topics in differential equations, including first-order
and second-order linear equations with constant coefficients, power series solutions, and linear
systems. Students will explore the principles and applications of these mathematical concepts.
**A.3** **Data Preprocessing**
We collect each problem from the original textbooks in PDF documents and manually process them
into LaTeX documents using an OCR tool Mathpix. The data is manually collected by human
annotators using a web-based annotation tool [26], whose user interface is shown in Appendix B.
All problems are carefully verified by human annotators to ensure that LaTeX documents can be
compiled without any syntax errors. For reference, we also provide the original numbers in textbooks.
For every problem, we provide the answer in two forms: the numerical value and the corresponding
LaTeX expression with mathematical notations retained (e.g., 0.450 and _√π2_ [). Further, we convert the]
answer to floating-point numbers rounded to three decimal places. For example, the answer _√π2_ [will]
be converted to the decimal representation of 0.450. We also treat scientific notation as a unit to avoid
overflow issues. For example, if the answer is 2.2 × 10[−][31] m, we take 2.2 as the final answer and
10[−][31] m as the unit. The unit of each answer is saved as a separate attribute.The detailed step-by-step
solutions are also provided in LaTeX. For problems having multiple answers, we either keep only the
first subproblem and discard the remaining subproblems or convert each subproblem into a separate
problem.
**A.4** **Textbook Examples**
-----
**Problem (fund)**
Two charged particles are fixed to an x axis: Particle 1 of charge q1 = 2.1 × 10[−][8]C is at position x = 20 cm and particle 2 of charge
_q2 = −4.00q1 is at position x = 70 cm. At what coordinate on the axis (other than at infinity) is the net electric field produced by the two_
particles equal to zero?
**Answer: −30 cm**
**Problem (thermo)**
N2O3 dissociates according to the equilibrium N2O3( g) ⇌ NO2( g) + NO(g). At 298 K and one bar pressure, the degree of
dissociation defined as the ratio of moles of NO2(g) or NO(g) to the moles of the reactant assuming no dissociation occurs is 3.5 × 10[−][3].
Calculate ∆G[◦]R [for this reaction.]
**Answer: 28 kJ mol[−][1]**
**Problem (class)**
Halley’s comet, which passed around the sun early in 1986, moves in a highly elliptical orbit with an eccentricity of 0.967 and a period of 76
years. Calculate its minimum distances from the Sun.
**Answer: 8.8 ×10[10]m**
**Problem (quan)**
A one-particle, one-dimensional system has Ψ = a[−][1][/][2]e[−|][x][|][/a] at t = 0, where a = 1.0000 nm. At t = 0, the particle’s position is
measured. Find the probability that the measured value is between x = 0 and x = 2 nm.
**Answer: 0.4908**
**Problem (chemmc)**
One of the most powerful modern techniques for studying structure is neutron diffraction. This technique involves generating a collimated
beam of neutrons at a particular temperature from a high-energy neutron source and is accomplished at several accelerator facilities around
the world. If the speed of a neutron is given by vn = (3kBT/m)[1][/][2], where m is the mass of a neutron, then what temperature is needed
so that the neutrons have a de Broglie wavelength of 50pm ?
**Answer: 2500 K**
**Problem (atkins)**
The change in molar internal energy when CaCO3( s) as calcite converts to another form, aragonite, is +0.21 kJ mol[−][1]. Calculate the
difference between the molar enthalpy and internal energy changes when the pressure is 1.0 bar given that the densities of the polymorphs are
2.71 g cm[−][3] and 2.93 g cm[−][3], respectively.
**Answer: -0.28 Pa m[3]** mol[−][1]
**Problem (matter)**
In an industrial process, nitrogen is heated to 500 K at a constant volume of 1.000 m[3]. The gas enters the container at 300 K and 100 atm.
The mass of the gas is 92.4 kg. Use the van der Waals equation to determine the approximate pressure of the gas at its working temperature
of 500 K. For nitrogen, a = 1.39dm[6] atm mol[−][2], b = 0.0391dm[3] mol[−][1].
**Answer: 140 atm**
**Problem (calc)**
A planning engineer for a new alum plant must present some estimates to his company regarding the capacity of a silo designed to contain
bauxite ore until it is processed into alum. The ore resembles pink talcum powder and is poured from a conveyor at the top of the silo. The silo
is a cylinder 100ft high with a radius of 200ft. The conveyor carries ore at a rate of 60, 000π ft[3]/h and the ore maintains a conical shape
whose radius is 1.5 times its height. If, at a certain time t, the pile is 60ft high, how long will it take for the pile to reach the top of the silo?
**Answer: 9.8 h**
**Problem (stat)**
In a study concerning a new treatment of a certain disease, two groups of 25 participants in each were followed for five years. Those in one
group took the old treatment and those in the other took the new treatment. The theoretical dropout rate for an individual was 50% in both
groups over that 5 -year period. Let X be the number that dropped out in the first group and Y the number in the second group. Assuming
independence where needed, give the sum that equals the probability that Y ≥ _X + 2. HINT: What is the distribution of Y −_ _X + 25 ?_
**Answer: 0.3359**
**Problem (diff)**
Newton’s law of cooling states that the temperature of an object changes at a rate proportional to the difference between its temperature and
that of its surroundings. Suppose that the temperature of a cup of coffee obeys Newton’s law of cooling. If the coffee has a temperature of
200[◦]F when freshly poured, and 1 min later has cooled to 190[◦]F in a room at 70[◦]F, determine when the coffee reaches a temperature of
150[◦]F
**Answer: 6.07 min**
Figure S1: Textbook examples with acronym highlighted in brown.
**B** **SciBench: Statistics**
**B.1** **Dataset Statistics**
-----
Table S1: Summary of the open textbook dataset. We report the number of problems and the ratio of
problems with detailed solutions in the fourth and fifth columns respectively.
_Fundamentals of Physics [16]_ `fund` 83 12.0%
Subject Title Acronym # Problems % Solutions
Physics _Statistical Thermodynamics [11]_ `thermo` 84 20.2%
_Classical Dynamics of Particles and Systems [43]_ `class` 54 13.0%
_Quantum Chemistry [23]_ `quan` 42 19.0%
_Quantum Chemistry [32]_ `chemmc` 48 18.8%
Chemistry
_Physical Chemistry [2]_ `atkins` 123 13.0%
_Physical Chemistry, Quanta, Matter, and Change [3]_ `matter` 59 16.9%
_Calculus: Early Transcendentals [39]_ `calc` 52 19.2%
Math _Probability and Statistical Inference [19]_ `stat` 95 21.1%
_Elementary Differential Equations and Boundary Value Problems [4]_ `diff` 55 9.1%
Table S2: Statistics of the close exam dataset. We report the number of problem instances in each
exam and the ratio of problems in the exam that include detailed solutions. We further report the
ratio of problems in different formats, including free-response, multiple-choice, and true-false. For
reference, the number in parentheses denotes the grading points assigned to the problems.
# Problems 25 (90) 24 (75) 12 (56) 16 (75) 8 (100) 8 (100) 11 (95)
_Data Mining_ _Machine Learning_ _Differential Equations_
Midterm Final Midterm Final Exam 1 Exam 2 Final
% Solutions 56.0% (58) 16.7% (19) 100.0% (56) 31.2% (26) 100.0% (100) 100.0% (100) 90.9% (90)
% Free-response 40.0% (46) 33.3% (29) 66.7% (38) 81.3% (62) 100.0% (100) 100.0% (100) 90.9% (90)
% Multiple-choice 28.0% (28) 29.2% (28) 33.3% (18) 18.7% (13) 0.0% (0) 0.0% (0) 9.1% (5)
% True-false 32.0% (16) 37.5% (18) 0.0% (0) 0.0% (0) 0.0% (0) 0.0% (0) 0.0% (0)
**B.2** **UI Design**
We employed a team of seven individuals to gather data from textbooks using an annotation tool.
Each individual was responsible for 1-2 books, encompassing approximately 100 examples. The user
interface of the annotation tool is depicted in Figure S2. For subsequent verification, we preserved
images of problems and their corresponding answers. To ensure clarity in future references, we have
maintained the original sequence of problems as they appear in the textbooks.
Figure S2: The UI design of data annotation.
-----
**C** **Experimental Details**
**C.1** **Settings**
- Zero-shot and few-shot learning. In the zero-shot learning setting, models are not provided with
any prior examples, which evaluates their inherent problem-solving capabilities with background
knowledge and reasoning abilities. In the few-shot setting, a few of examples are given to the
models before the test example. This aims to assess their capability to learn new information from
the demonstrations and incorporate it into their problem-solving processes.
- Prompting-based approaches. In the zero-shot setting, we evaluate both with and without the
system prompt, which describes the types and categories of questions, along with instructions;
all other settings incorporate the system prompt. Additionally, we utilize CoT as our prompting
strategy in the zero-shot setting. Besides, we further explore an answer-only strategy in the few-shot
setting, where the prompt solely provides questions and answers without any intermediate solutions.
- Tool-augmented approaches. Given that LLMs are limited to acquiring exact knowledge and performing precise calculations, some recent approaches, such as Toolformer [37] and Chameleon [29],
explored the use of external tools to enhance the capabilities of solving complex reasoning tasks.
In line with this approach and acknowledging the limitations of LLMs in performing precise
calculations, we also include a setting that prompts the model to convert its solution steps in natural
language into either Wolfram Language[∗] or Python code, aiming to achieve more accurate results
for certain computation steps. This prompt is only tested in the few-shot learning setting. We
manually construct Python and Wolfram Language code that produces the correct answer.
**C.2** **Implementation details.**
We set temperature to zero for all models to reduce the randomness of the predictions. Few-shot
examples, including solutions, are randomly selected from problems within each textbook. When
external tools are used, we add a code snippet that translates the solution into specific programming
languages in all few-shot examples. The code snippets are verified by human annotators that will
produce the correct output. In terms of evaluation metrics, we compare the model outputs with the
correct answers, allowing a relative tolerance of 0.05. In particular to the exam dataset, the model
solutions are graded using the rubrics provided by the instructors. Readers may refer to Appendix C
for all prompts and the implementation details for utilizing external tools.
**C.3** **Exam Result**
Table S3: Experimental results in terms of total scores under zero-shot learning on the exam dataset.
The best performing score is highlighted in bold.
_Data Mining_ _Machine Learning_ _Differential Equations_
Model Setting
Midterm Final Midterm Final Exam 1 Exam 2 Final
Zero 24 / 90 14 / 75 6 / 56 6/ 75 5 / 100 0 / 100 0 / 95
LLaMA-2-7B
Zero+CoT 18 / 90 14 / 75 2 / 56 10 / 75 10 / 100 0 / 100 10 / 95
Zero 23 / 90 18 / 75 18 / 56 12 / 75 20 / 100 5 / 100 0 / 95
LLaMA-2-70B
Zero+CoT 31 / 90 18 / 75 10 / 56 11/ 75 35 / 100 10 / 100 0 / 95
Zero 37 / 90 26 / 75 28 / 56 35 / 75 35 / 100 30 / 100 20 / 95
Claude2
Zero+CoT 33 / 90 38 / 75 22 / 56 **41 / 75** 25 / 100 15 / 100 20 / 95
Zero 44 / 90 39 / 75 16 / 56 32 / 75 0 / 100 45 / 100 15 / 95
GPT-3.5
Zero+CoT 38 / 90 33 / 75 32 / 56 37 / 75 28 / 100 30 / 100 10 / 95
Zero 56 / 90 **44 / 75** 30 / 56 37 / 75 25 / 100 **80 / 100** **25 / 95**
GPT-4
Zero+CoT **58 / 90** 32 / 75 **40 / 56** 35 / 75 **50 / 100** 70 / 100 15 / 95
**C.4** **Prompting**
ChatGPT and GPT-4’s API have three message parameters: SYSTEM, USER, and ASSISTANT.
The SYSTEM parameter represents the system prompt, which provides context and instructions
_[∗https://www.wolfram.com/language/](https://www.wolfram.com/language/)_
-----
to the model. The USER parameter is the training prompt or input provided by the user, and the
ASSISTANT parameter contains the model’s output or response. We provide all system prompts and
training prompts used in our experiments as below.
**System Prompt for Zero-Shot, Few-Shot, and Chain-of-Thought setting:**
Please provide a clear and step-by-step solution for a scientific problem in the categories of
Chemistry, Physics, or Mathematics. The problem will specify the unit of measurement, which
should not be included in the answer. Express the final answer as a decimal number with three digits
after the decimal point. Conclude the answer by stating "The answer is therefore \boxed[ANSWER]."
**System Prompt for Python setting:**
Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry,
Physics, or Mathematics. The problem will specify the unit of measurement. Please translate the
solution steps into Python code and encase the Python code within triple backticks for clarity.
**System Prompt for Wolfram setting:**
Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry,
Physics, or Mathematics. The problem will specify the unit of measurement. Please translate the solution steps into Wolfram code and encase the Wolfram Language code within triple backticks for clarity.
**System Prompt for Evaluation Protocol:**
Examine the given problem, the correct solution, and the model’s solution. Identify the reason for the
error in the model’s solution based on the following 10 categories:
1. Logical Decomposition and Analysis Skills: This ability involves decomposing the problem into
smaller, manageable parts, and understanding the relationships between these parts.
2. Identification of Assumptions: This skill involves the AI’s ability to recognize relevant and
necessary assumptions in the problem.
3. Spatial Perception: This is important for understanding problems in areas such as physics and
chemistry, where you need to visualize molecules, forces, fields, etc.
4. Causal Reasoning: This is the ability to understand cause and effect relationships.
5. Problem Deduction Skills: This pertains to the ability to infer and deduce potential solutions or
underlying principles from the given information in a problem.
6. Abstract Reasoning: This skill involves the ability to understand complex concepts that can’t be
perceived physically, and to recognize patterns or relationships beyond concrete examples.
7. Scientific Literacy: This skill involves a comprehensive understanding of key scientific principles,
terminology, and methodologies across a range of disciplines.
8. Code Conversion Skills: This denotes the ability to accurately translate solution steps into different
programming languages, like Python or Wolfram, without syntax errors.
9. Logical Reasoning: This is the ability to make a reasoned argument and to identify fallacies or
inconsistencies in an argument or set of data.
10. Calculation Skills: This involves the ability to accurately carry out mathematical operations and
computations.
Conclude your final error reason category number within \boxed.
**Training Prompt for Zero-Shot Chain-of-Thought:**
_Stage 1:_
Input: [input-question] Let’s think step by step.
Output: <explanation>
_Stage 2:_
Input: [input-question] Let’s think step by step. [explanation] + Therefore, the answer is:
Output: <answer>
**Training Prompt for Few-Shot:**
Input:
Problem 1: [Question 1] The answer is \boxed{[Answer 1]}.
Problem 2: [Question 2] The answer is \boxed{[Answer 2]}.
...
-----
Problem n: [Question n] The answer is \boxed{[Answer n]}.
Problem n+1: [Question n+1]
Output: The answer is \boxed{<answer>}.
**Training Prompt for Few-Shot Chain-of-Thought:**
Input:
Problem 1: [Question 1] Explanation for Problem 1: [Explanation 1]. The answer is \boxed{[Answer
1]}.
Problem 2: [Question 2] Explanation for Problem 2: [Explanation 2]. The answer is \boxed{[Answer
2]}.
...
Problem n: [Question n] Explanation for Problem n: [Explanation n]. The answer is \boxed{[Answer
n]}.
Problem n+1: [Question n+1]
Output: Explanaiton for Problem n+1: <explanation>. The answer is \boxed{<answer>}.
**Training Prompt for Few-Shot Python/Wolfram:**
Input:
Problem 1: [Question 1] Explanation for Problem 1: [Explanation 1]. Python/Wolfram language for
Problem 1: ```[Python/Wolfram code 1]```.
Problem 2: [Question 2] Explanation for Problem 2: [Explanation 2]. Python/Wolfram language for
Problem 2: ```[Python/Wolfram code 2]```.
...
Problem n: [Question n] Explanation for Problem n: [Explanation n]. Python/Wolfram language for
Problem n: ```[Python/Wolfram code n]```.
Problem n+1: [Question n+1]
Output: Explanaiton for Problem n+1: <explanation>. Python/Wolfram language for Problem n+1:
```[Python/Wolfram code n+1]```.
**Training Prompt for Evaluation Protocol:**
Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model
solution is [Model-Solution].
Output: <Error Type>
**Training Prompt for Evaluation Protocol in Python/Wolfram:**
Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model
solution is [Model-Solution]. The translated program generates the answer as [Program Generated
Answer], which is treated as model’s output answer.
Output: <Error Type>
**C.5** **Experiment Process**
All model output is extracted using \boxed{} notation. To prevent any missed extractions, we
supplement this process with a manual check. For both Python and Wolfram settings, we extract
the programming language with the triple backtick ```method, subsequently executing it within
the corresponding language. The entirety of our code can be accessed via the following URL:
```
https://anonymous.4open.science/r/anonymous-4FFB.
```
**D** **Error Analysis of Various Prompting Strategies**
Considering the substantial advancements of current LLMs, an in-depth analysis of the particular
skills that are either enhanced or limited under certain settings becomes imperative. Previous works
have relied on human labor to annotate error reasons into different categories, which is both expensive
and time-consuming [50]. In this section, we present an evaluation protocol that automates the
classification of error reasons into deficient skills. This time-efficient approach enables large-scale
analyses in future research.
-----
_Calculus, Statistics, Probability, …_
LLM / Reference Human Error Summary Essential LLM Error
Solutions Annotator Reason Skills Verifier Profiles
_Data Mining, Differential Equations, …_
**Datasets** **Evaluation**
Figure S3: Pipeline of the evaluation protocol.
In order to quantify the impact of each setting on scientific problem-solving, we first define an
essential skill set that is required by solving scientific problems. Then, an LLM verifier is employed
to automatically classify each incorrectly solved problem based on the absence of a specific skill
from the essential skill set. This approach generates error profiles, showcasing a direct comparison of
different strategies. This evaluation protocol is summarized in Figure S3.
Firstly, we analyze the incorrect solutions made by GPT-3.5 for problems that provide detailed
solutions. We hire two college students, who are highly familiar with the problems in our datasets, to
annotate the source of the error for each problem, indicating the specific line where the model makes
a mistake and why. From 112 such error annotations and with the assistance of GPT-4, we distill
these errors into ten essential skills that GPT-3.5 might lack:
- Logical decomposition and analysis skills. This ability involves decomposing the problem into
smaller, manageable parts, and understanding the relationships between these parts.
- Identification of assumptions. This skill involves the ability to recognize relevant and necessary
assumptions in the problem.
- Spatial perception. This is important for understanding problems in areas such as Physics and
Chemistry, where models need to visualize molecules, forces, fields, etc.
- Causal reasoning. This is the ability to understand cause and effect relationships.
- Problem deduction skills. This pertains to the ability to infer and deduce potential solutions or
underlying principles from the given information in a problem.
- Abstract reasoning. This skill involves the ability to understand complex concepts that cannot be
perceived physically, and to recognize patterns or relationships beyond concrete examples.
- Scientific literacy. This skill involves a comprehensive understanding of key scientific principles,
terminology, and methodologies across a range of disciplines.
- Code conversion skills. This involves the ability to accurately translate solution steps into different
programming languages, like Python or Wolfram Language.
- Logical reasoning. This is the ability to make a reasoned argument and to identify fallacies or
inconsistencies in an argument or set of data.
- Calculation skills. This involves the ability to accurately carry out mathematical operations and
computations.
After identifying this essential skill set, we assess the performance of the LLMs under different
settings to discern the specific problem-solving skills they lack. Given the high cost of human
annotations required to attribute the cause of incorrect solutions to specific skill deficiencies, we
propose a novel self-critique protocol: we design a specific prompt that outlines these abilities,
and employ another LLM to serve as a classifier and determine whether a specific error results
from the lack of a particular problem-solving skill. Finally, we ask human annotators to scrutinize
the classification results, which results in approximately 20% of incorrectly classified skills being
discarded. To be specific, we utilize a GPT-3.5 model as the verifier to determine the reason behind
each error and pinpoint the missing skill. The details regarding the specific prompts used are provided
in Appendix C.4. This verification process is conducted for six settings, with results represented in
bar charts (Figure S4). Additional examples of the evaluation protocol are elaborated in Appendix F.
Overall, our findings suggest that there is a lack of a universally effective setting: each config**uration only enhances some specific abilities and occasionally even hurts other skills that the**
**original GPT models possess. First, CoT prompting significantly improves calculation skills in**
both zero- and few-shot scenarios, with 7.1% and 8.0% error rates caused by calculation ability
respectively, considerably lower than the 24.1% error rate of the vanilla zero-shot baseline. However,
CoT shows limitations in improving other skills, with 15.2% and 15.2% error rates in casual ability
and logical decomposition ability in the zero-shot CoT setting, respectively, compared to 17.0% and
-----
Correct 11.6% 47.3% 42.9%
Calculation 19.6% 7.1% 3.6%
Logical Reasoning 1.8% 1.8% 4.5%
Code Conversion 1.8% 0.9% 12.5%
Scientific Literacy 6.2% 3.6% 2.7%
Abstract Reasoning 2.7% 0.9% 0.9%
Problem Deduction 11.6% 4.5% 6.2%
Causal Reasoning 20.5% 17.0% 8.9%
Spatial Perception 1.8% 0.9% 0.0%
Identification of Assumptions 11.6% 2.7% 3.6%
Logical Decomposition 10.7% 13.4% 14.3%
0 5 10 15 20 0 10 20 30 40 0 10 20 30 40
(a) Zero−S (b) Zero+CoT (c) Few+Py
Correct 17.0% 44.6% 12.5%
Calculation 24.1% 8.0% 5.4%
Logical Reasoning 2.7% 2.7% 7.1%
Code Conversion 0.0% 0.9% 41.1%
Scientific Literacy 8.0% 3.6% 3.6%
Abstract Reasoning 0.0% 0.9% 0.0%
Problem Deduction 9.8% 5.4% 6.2%
Causal Reasoning 15.2% 10.7% 8.9%
Spatial Perception 2.7% 2.7% 0.0%
Identification of Assumptions 5.4% 5.4% 4.5%
Logical Decomposition 15.2% 15.2% 10.7%
0 5 10 15 20 25 0 10 20 30 40 0 10 20 30 40
(d) Zero (e) Few+CoT (f) Few+Wol
Figure S4: Error profiles of GPT-3.5 on the text dataset under six settings, which reveal the distribution
of their deficiencies in ten essential problem-solving abilities.
13.4% in the zero-shot setting. This contradicts previous claims about universal skill enhancement
through zero-shot CoT and carefully-designed few-shot CoT prompts [47]. In Appendix, we show an
example in Figure S6, where the zero-shot learning setting without CoT has generated the correct
formula but fails in the calculation steps. In this case, CoT prompting is even unable to use the correct
formula as it misinterprets the specific conditions (non-necessity) in the problem. Second, while the
use of external tools significantly reduces calculation errors, they can weaken other skills, particularly
the code conversion skills, i.e., generating the correct programs for the solution. This issue becomes
particularly prominent when using the Wolfram Language, with 41.1% error rate in code conversion
skill comparing 0.9% in the few-shot CoT setting. Despite providing grammar specifications in
system prompts and a few examples as demonstrations, most attempts of code conversion result in
syntax errors. In Wolfram Language, the error mainly comes from the violation of variable rules (for
instance, Wolfram Language reserves certain letters such as E as protected symbols and disallows
underscores in variable names) or incorrect usage of certain functions.
Additionally, few-shot learning does not universally improve scientific problem-solving skills, as
indicated in the comparison between zero-shot and few-shot CoT settings. The improvement in one
skill is offset by the shortcomings in others: although the few-shot CoT setting results in a reduction
of 6.3% in errors related to causal reasoning, it also leads to an increase in errors associated with
other skills, such as logical decomposition and calculation.
Moreover, the skill of identifying assumptions appears to be most lacking in the zero-shot setting
**without a system prompt. In this scenario, the LLM does not have any predefined direction to**
follow. However, when a system prompt with instructions about which scientific domain the model is
tackling, this issue can be significantly mitigated, decreasing this error from 11.6% to 5.4%.
**E** **Related Work**
Traditional benchmarks primarily focus on evaluating the general abilities of models. For instance,
SQuAD [36] offers a dataset designed for evaluating the reading comprehension ability of models.
GLUE [46] is a model-agnostic tool for evaluating and analyzing performance across diverse natural
language understanding tasks. Cosmos QA [20] offers questions in natural language contexts to assess
common sense reasoning abilities of models. HumanEval [6] is a handwritten dataset evaluating
the coding ability of models, featuring 164 Python programming problems. BIG-Bench [38] is a
large-scale general-purpose test suite comprising 204 multiple-choice or exact-match tasks, while
BIG-Bench Hard [41] poses particularly challenging chain-of-thought prompts. HELM [24] presents
a systematic, multi-metric evaluation of LLMs, highlighting their capabilities, limitations, and risks.
Recent benchmarks focus on assessing problem-solving skills of LLMs, particularly in scientific and
mathematical domains [9, 12, 15, 17, 30, 31, 33, 50]. GSM8K [10] is a widely used math dataset con
-----
Table S4: Comparison of SCIBENCH with other benchmarks. “Level” represents the grade level of
problems. “Computation” represents the level of computational type that problems use. “Solution”
represents whether datasets contain detailed solutions. “Type” represents the type of most problems
provided in the dataset: “MT” denotes multiple-choice questions and “Free” denotes free-response
questions. “Human” indicates whether the analysis process employs a human annotation process.
“Auto” represents whether the analysis process uses an automatic annotation process.
ScienceQA [28] Grade 1–12 Algebra Yes MT Yes Yes Yes No No No
Dataset Evaluation Analysis
Benchmark
Level Computation Solution Type Zero-Shot Few-Shot CoT Tool Human Auto
IconQA [27] Grade 1-12 Algebra No MT No Yes No No No No
TabMWP [30] Grade 1–12 Algebra Yes Free No Yes No No No No
GSM8K [10] Grade 1–12 Algebra Yes Free No Yes No No No No
MATH [18] High School Exponentiation Yes Free No Yes No No No No
LILA [33] High School Exponentiation Yes Free Yes Yes No No No No
SciEval [40] High School Exponentiation No MT Yes Yes Yes No No No
MMLU [17] High School + College Exponentiation No MT No Yes No No No No
CEval [21] High School + College Differentiation No MT No Yes Yes No No No
AGIEval [50] High School + College Exponentiation No MT Yes Yes Yes No Yes No
TheroemQA [9] College Differentiation No Free No Yes Yes Yes No No
SCIBENCH College Differentiation Yes Free Yes Yes Yes Yes Yes Yes
taining 8.5K grade school math word problems. ScienceQA [28] is a multimodal question-answering
dataset with accompanying lecture and explanation annotations. The MATH dataset [18] presents a
challenging collection of 12.5K math problems gathered from math competitions. LILA [33] extends
20 datasets by including task instructions and Python program solutions. However, the majority of
those benchmarks concentrates on the grade or high school level tasks involving basic arithmetic
operations such as addition, multiplication, and exponentiation, rather than more sophisticated operations like differentiation. TheroemQA [9] is a theorem-oriented dataset comprising 800 high-quality
questions that aim to evaluate the ability of LLMs to apply theorems to solve problems. However, it
does not offer an in-depth qualitative analysis of their benchmark. Galactica [42] provides a set of
scientific tasks, including latex equation conversions, domain knowledge probes, citation prediction
and chemical QA. C-EVAL [21] focuses on evaluating LLMs in a Chinese context, offering questions
from humanities to science and engineering. AGIEval [50] evaluates the performance of LLMs
in human-centric standardized exams, such as college entrance exams, law school admission tests,
math competitions, and lawyer qualification tests. It utilizes human annotated qualitative analysis to
evaluate the capabilities of the model. However, relying on human labor for direct solution analysis
can be costly. Our evaluation protocol, based on predefined fundamental problem solving skills,
enables automated classification of deficient skills for each incorrectly answered question. This
approach enables a more comprehensive and larger scale of qualitative analysis results. We include
the comparison between different benchmarks in Table S4.
**F** **Problem Solving Abilities of Current LLMs**
**F.1** **Example**
In the context of each specific capability, we present several exemplary errors accompanied by their
corresponding classifications and explanations derived from the GPT model. Referencing Figure
S6, the ChatGPT solution employing the Chain-of-Thought method corresponds to error category
4, "Causal Reasoning". The model explanation posits that "The error reason category is 4. Causal
Reasoning. The model solution uses the relativistic momentum formula to calculate the momentum
of the electron, which is not necessary for this problem since the electron is traveling at only 1.00%
of the speed of light. The relativistic momentum formula is only needed when the velocity of the
particle is a significant fraction of the speed of light. Therefore, using this formula in this problem
leads to an incorrect value for the momentum and consequently an incorrect value for the de Broglie
wavelength." Meanwhile, the ChatGPT solution that doesn’t utilize the Chain-of-Thought method
is categorized under error case 10, "Calculation Skills". This example is further highlighted as an
illustrative showcase for explanations of the Chain-of-Thought method in Section D. More examples
resulting in different error types are shown in Figure S7, S8, S9, S10, S11, S12, S13, S14, S15,S16.
-----
|Problem (Using the Planck distribution) Compare the energy output of a black-body radiator (such as an incandescent lamp) at two different wavelengths by calculating the ratio of the energy output at 450 nm (blue light) to that at 700 nm (red light) at 298 K.|Col2|Col3|
|---|---|---|
|Correct Solution At a temperature T, the ratio of the spectral density of states at a wavelength λ1 to that at λ2 is given by ⇢ ⇢( (λ λ1 2, T T) =✓λ λ2 1◆5 ⇥# #e eh hc c/ /λ λ2 1k kT − −1 1$, ) T $ Insert the data and evaluate this ratio. Answer With λ1 = 450 nm and λ2 = 700 nm, …, and therefore ⇢ ⇢( (4 75 00 n nm m, 2 29 98 K K) = 2.10 ⇥10−16 0, 8 ) Final Answer:2.10 ⇥10−16|GPT Solution with CoT Prompting Step 1: We need to use the Planck distribution formula to calculate the energy output of a black- body radiator. The formula is: 2hc2 1 B(λ, T) = λ5 eλkh BcT −1 Step 2: We calculate the energy output at 450 nm and 700 nm… Let’s use T = 298 K. B(450 nm, 298 K) 2hc2 1 = (450 ⇥10−9 m)5e450⇥10−9 h mc ·kB·298 K −1 ⇡1.23 ⇥10−16 ) 1.67 ⇥10−31 … Step 3: Calculate the ratio of the energy output at 450 nm to that at 700 nm: B(450 nm, 298 K) ⇡4.39 B(700 nm, 298 K) Final Answer: 4.39 ⇥10−16|GPT Solution with Python Prompting as External Tools … The ratio of the energy output at λ1 = 450 nm and λ u2 (= λ 17, 0 T0 ) n =m i 8s ⇡ λh 51c ehc/(λ11 kT )−1= λ5 2ehc/(λ1vW5v7tdkWrZIX5CWpEZfskH1ySs7JWHkC/lKvpHvlR+VX5XflT9T6vLS7M5zUjgry38BlX53XQ=</latexit>) kTλ )2 −1 u (λ2, T) 8⇡ λh 52c ehc/(λ21 λ5 1ehc/(λ2 3rc392myLVskL8pLUiEt2yD45JefkjDyhXwl38j3yo/Kr8rvyp8pdXlpduc5KZyV5b+Rwndc</latexit>)kT ) −1 kT )−1 λ 1 Substituting the given values, we get a Python code for this problem: … ⇒lambda2 ratio = ((lambda2 / lambda1)**5) *((math.exp((h * c) / (lambda1 * k * T)) - 1) / (math.exp((h * c) / (lambda2 * k * T)) - 1)) ⇒lambda1 … Final Answer: 3.95 ⇥1033|
Figure S5: An example problem from Physical Chemistry with solutions generated under two
prompting strategies. GPT-4 with Chain-of-Thought (CoT) prompting shows calculation errors, while
GPT-4 that prompts Python as external tools misunderstands mathematical equations. Errors are
highlighted in red and the corrections are shown in purple.
**F.2** **Assessment of evaluation protocol**
In order to assess the effectiveness of our evaluation protocol’s classification, we enlisted the assistance
of two annotators to determine whether the errors identified by the model verifier were accurate or
not. Through the annotation of 151 samples across different settings, we observed that 123 of them
were correctly classified, resulting in an accuracy rate of 81.45%.
-----
**Problem**
Calculate the de Broglie wavelength of an electron traveling at 1.00% of the speed of light.
**Correct Solution**
The mass of an electron is 9.109 × 10[−][31] kg. One percent of the speed of light is
_v = (0.0100)_ 2.998 × 10[8] m · s[−][1][] = 2.998 × 10[6] m · s[−][1]
The momentum of the electron is given by
_p = mev =_ 9.109 × 10[−][31] kg 2.998 × 10[6] m · s[−][1][] = 2.73 × 10[−][24] kg · m · s[−][1]
The de Broglie wavelength of this electron is
_λ =_ _[h]_ 6.626 × 10[−][34] J · s
_p_ [=] 2.73 10[−][24] kg m s[−][1][ = 2][.][43][ ×][ 10][−][10][ m = 243pm]
_×_ _·_ _·_
This wavelength is of atomic dimensions.
**Final Answer: 243**
**Chatgpt Solution with Chain-of-Thought Method**
Let’s start by recalling the de Broglie wavelength formula:λ = _[h]p_ [where][ λ][ is the de Broglie wavelength,][ h][ is Planck’s constant, and][ p][ is the]
momentum of the particle. To calculate the momentum of the electron, we can use the relativistic momentum formula:
_p =_ _mev_
1 − _[v]c[2][2]_
q
where me is the mass of the electron, v is its velocity, and c is the speed of light. Plugging in the given values, we get:
_p = [(9][.][11][ ×]1 −[ 10](0[−].[31]01[kg)(0]c)[2]/c[.][2][01][c][)]_ = 8.19 × 10[−][23]kg · m/s
Now we can plug this momentum into the de Broglie wavelength formula:p
_λ =_ _[h]_ 6.626 × 10[−][34]J · s
_p_ [=] 8.19 10[−][23]kg m/s [= 8][.][09pm]
_×_ _·_
Therefore, the de Broglie wavelength of the electron is 8.09 .
**Final Answer: 8.09**
**ChatGPT Solution without Chain-of-Thought Method**
The de Broglie wavelength of a particle is given by the equation:λ = _[h]p_ [where][ h][ is the Planck constant and][ p][ is the momentum of the particle.]
The momentum of an electron is given by:
_p = mv_
where m is the mass of the electron and v is its velocity. We are given that the electron is traveling at 1.00% of the speed of light, which
is:v = 0.01cwhere c is the speed of light. The mass of an electron is:m = 9.11 × 10[−][31], kgPutting it all together, we have:
_h_ _h_
_λ =_
_mv_ [=] _m(0.01c)_
Substituting the values of h, m, and c, we get:
_λ =_ 6.626 × 10[−][34], J · s
9.11 10[−][31], kg 0.01 2.998 10[8], m/s [=][ 3][.][31][ ×][ 10][−][12][m = 3][.][31pm]
_×_ _·_ _·_ _×_
Therefore, the de Broglie wavelength of the electron is 3.31pm
**Final Answer: 3.31**
Figure S6: An example problem, inaccurately solved by ChatGPT with and without the Chain of
Thought method, is provided to highlight different types of errors. In the example, the mistaken step
is highlighted in red.
-----
**Problem**
Find the length of the cardioid r = 1 + sin θ.
**Correct Solution**
The cardioid’s full length is given by the parameter interval 0 ≤ _θ ≤_ 2π, so Formula 5 gives
2π _dr_ 2 2π 2π
_L =_ _r[2]_ + _dθ =_ (1 + sin θ)[2] + cos[2] _θdθ_ = 2 + 2 sin θdθ
0 s _dθ_ 0 0
Z Z q Z p
We could evaluate this integral by multiplying and dividing the integrand by 2 2 sin θ, or we could use a computer algebra system. In
_[√]_ _−_
any event, we find that the length of the cardioid is L = 8.
**Final Answer: 8**
**Model Solution**
To find the length of the cardioid, we use the formula for arc length in polar coordinates:
_β_ _dr_ 2
_L =_ _r[2]_ + _, dθ,_
_α_ s _dθ_
Z
where α and β are the angles that correspond to one full loop of the cardioid (i.e. α = 0 and β = 2π). First, we find _dθ[dr]_ [:]
_dr_
_dθ_ [=][ d]dθ [(1 + sin][ θ][) = cos][ θ.]
Then, we plug in r = 1 + sin θ and _dθ[dr]_ [= cos][ θ][ into the formula for arc length:]
2π
_L =_ (1 + sin θ)[2] + cos[2] _θ_
0
Z 2π q
_dθ =_ 1 + 2 sin θ + sin[2] _θ + cos[2]_ _θ_
0
Z 2π p
_dθ =_ 2 + 2 sin θ
0
Z _πp_
_dθ = 4_ 2 + 2 sin θ
0
Z π/p2
_dθ_ (due to symmetry) = 8 2 + 2 sin θ
0
Z π/2 p
_dθ = 8_ 4 cos[2][ θ]
Z0 r 2
_π/2_
_dθ_ (using the half-angle formula) = 16 cos _[θ]_
0 2
Z
_π/2_
_dθ = 32 sin_ _[θ]_
2 0
= 32.
Therefore, the length of the cardioid is 32, and the unit of the answer is whatever unit was used for r (e.g. meters, centimeters, etc.).
**Final Answer: 32**
**Error Reason**
2π
The model solution made an error in the calculation of the integralintegral to 4 _π0_ _√2 + 2 sin θdθ due to the symmetry of the integrand, and then used a substitution to evaluate the integral. The modelR0_ _√2 + 2 sin θdθ. The correct solution correctly simplified this_
solution, on the other hand, incorrectly evaluated this integral asR 8π. This error falls under the category of 10. Calculation Skills .
Figure S7: An example problem is inaccurately solved by error reason 10. Calculation Skills. The
mistaken step is highlighted in red.
-----
**Problem**
2
Use Simpson’s Rule with n = 10 to approximate 1 [(1][/x][)][dx][.]
**Correct Solution** R
Putting f (x) = 1/x, n = 10, and ∆x = 0.1 in Simpson’s Rule, we obtain
2 1
1 _x [dx]_
Z
_≈_ _S10_
= [∆][x]
3 [[][f] [(1) + 4][f] [(1][.][1) + 2][f] [(1][.][2) + 4][f] [(1][.][3) +][ · · ·][ + 2][f] [(1][.][8) + 4][f] [(1][.][9) +][ f] [(2)]]
= [0][.][1] 1 4 2 4 2 4 2 4 2 4
3 1 [+] 1.1 [+] 1.2 [+] 1.3 [+] 1.4 [+] 1.5 [+] 1.6 [+] 1.7 [+] 1.8 [+] 1.9 [+ 1]2
_≈_ 0.693150
**Final Answer: 0.693150**
**Model Solution**
Explanation for Problem 6: With n = 10 we have h = [2]10[−][1] = 101 [and][ x][i][ = 1 +][ ih][ for][ i][ = 0][,][ 1][,][ 2][, . . .,][ 10][. Then Simpson’s Rule]
gives
2 1
1 _x [dx][ ≈]_ _[h]3 [[][f]_ [(][x][0][) + 4][f] [(][x][1][) + 2][f] [(][x][2][) + 4][f] [(][x][3][) +][ ...][ + 2][f] [(][x][8][) + 4][f] [(][x][9][) +][ f] [(][x][10][)]]
Z
where f (x) = _x[1]_ [.]
wolfram language for Problem 6:
(* Constants *)
n = 10;
a = 1;
b = 2;
(* Function to integrate *)
f[x_] := 1/x;
(* Simpson’s Rule *)
h = (b - a)/n;
x = Table[a + i*h, i, 0, n];
approximation = (h/3)*Total[Table[If[EvenQ[i], 2, 4]*f[x[[i]]], i, 1, n - 1] + f[x[[0]]], f[x[[n]]], 4*f[x[[n/2]]]];
approximation
**Final Answer: "Objects of unequal length in {4,20/11,10/3,20/13,20/7,4/3,5/2,20/17,20/9}+{1/List,10/19,20/7} cannot be combined."**
**Error Reason**
The error reason category for the model’s solution is 8. Code Conversion Skills . The model’s solution contains a syntax error in the
Wolfram Language code, which causes the program to exceed the recursion limit and terminate prematurely. This error likely occurred during
the translation of the correct solution’s code into Wolfram Language.
Figure S8: An example problem is inaccurately solved by error reason 8. Code Conversion Skills.
"Error Reason" denotes the output from the LLM Verifier utilized in the classification of error causes.
In the example, the mistaken step is highlighted in red.
**Problem**
2
Use Simpson’s Rule with n = 10 to approximate 1 [(1][/x][)][dx][.]
**Correct Solution** R
The equations for c1 and c2 are
_c1(α −_ _E) + c2β = 0_ and _c1β + c2(α −_ _E) = 0_
For E = α + β, either equation yields c1 = c2. Thus, ψb = c1 (2pz1 + 2pz2). The value of c1 can be found by requiring that the
wave function be normalized. The normalization condition on ψπ gives c[2]1[(1 + 2][S][ + 1) = 1][. Using the Hückel assumption that][ S][ = 0][,]
we find that c1 = 1/√2.Substituting E = α − _β into either of the equations for c1 and c2 yields c1 = −c2, or_
_ψa = c1 (2pz1 −_ 2pz2)
The normalization condition gives c[2](1 − 2S + 1) = 1, or c1 = 1/√2.
**Final Answer: 0.7071067**
**Model Solution**
therefore not provided as this problem requires visual representations and cannot be solved through text alone.
**Error Reason**
The error reason category is 3. Spatial Perception . This problem requires visual representations to find the bonding and antibonding Hückel
molecular orbitals for ethene. The model’s solution is incomplete as it only provides equations and does not provide any visual representation.
Figure S9: An example problem is inaccurately solved by error reason 3. Spatial Perception. "Error
Reason" denotes the output from the LLM Verifier utilized in the classification of error causes.
-----
**Problem**
What is the mean speed, ¯c, of N2 molecules in air at 25[◦]C
**Correct Solution**
The integral required is
_M_ 3/2 _∞_ _M_ 3/2 2RT 2 8RT 1/2
_c¯ = 4π_ 2πRT 0 _v[3]e[−][Mv][2]_ _[/][2][RT]_ dv = 4π 2πRT _×_ [1]2 _M_ = _πM_
Z
where we have used the standard result from tables of integrals (or software) that
_∞_ 1
_x[3]e[−][ax][2]_ dx =
0 2a[2]
Z
Substitution of the data then gives
_c¯ =_ 8 × 8.3141 J K[−][1] mol[−][1][] _× (298 K)_ 1/2 = 475 m s[−][1]
_π ×_ 28.02 × 10[−][3] kg mol[−][1][] !
where we have used 1 J = 1 kg m[2] s[−][2].
**Final Answer: 475**
**Model Solution**
approximately 515 m s[−][1]. To calculate this, we can use the formula for the root-mean-square (rms) speed of gas molecules:
3kT
_crms =_
r _m_
where k is the Boltzmann constant, T is the temperature in Kelvin, and m is the mass of the molecule. For N2, m = 28 amu.Plugging in
the values, we get:
_crms =_ s 3 × 1.2838 × 1 10.66[−][23] 10J/[−]K[27] × (25 + 273) Kkg/amu _≈_ 515 m s[−][1]
_×_ _×_
Note that we converted the temperature from Celsius to Kelvin and the mass from atomic mass units (amu) to kilograms.
**Final Answer: 515**
**Error Reason**
The error reason category is 7. Scientific Literacy . The model solution uses the formula for the root-mean-square (rms) speed of gas
molecules, which is only valid for ideal gases. However, the given problem does not specify that the gas is ideal, and in fact, air is not an ideal
gas. The correct solution uses a more general formula that applies to all gases, including non-ideal gases like air. Therefore, the model’s
solution is not applicable to the given problem.
Figure S10: An example problem is inaccurately solved by error reason 7. Scientific Literacy. "Error
Reason" denotes the output from the LLM Verifier utilized in the classification of error causes. In the
example, the mistaken step is highlighted in red, while the correct step is highlighted in blue.
**Problem**
What is the number of possible four-letter code words, selecting from the 26 letters in the alphabet?
**Correct Solution**
The number of possible four-letter code words, selecting from the 26 letters in the alphabet, in which all four letters are different is
26[P]4 [= (26)(25)(24)(23) = 26!]
22! [= 358][,][ 800][.]
**Final Answer: 358800**
**Model Solution**
Explanation for Problem 6: The number of possible four-letter code words, selecting from the 26 letters in the alphabet is
26[4] = 456, 976.
**Final Answer: 456976**
**Error Reason**
The model’s error reason category is 9. Logical Reasoning . The model did not consider the condition that all four letters should be different.
The correct solution considers this condition and uses the permutation formula to calculate the number of possible four-letter code words.
Figure S11: An example problem is inaccurately solved by error reason 9. Logical reasoning. “Error
Reason” denotes the output from the LLM Verifier utilized in the classification of error causes. In the
example, the mistaken step is highlighted in red.
-----
**Problem**
Find the bonding and antibonding Hückel molecular orbitals for ethene.
**Correct Solution**
The equations for c1 and c2 associated with Equation 11.7 are
_c1(α −_ _E) + c2β = 0_ and _c1β + c2(α −_ _E) = 0_
For E = α + β, either equation yields c1 = c2. Thus,
_ψb = c1 (2pz1 + 2pz2)_
The value of c1 can be found by requiring that the wave function be normalized. The normalization condition on ψπ gives c[2]1[(1+2][S][ +1) =]
1. Using the Hückel assumption that S = 0, we find that c1 = 1/√2.Substituting E = α − _β into either of the equations for c1 and c2_
yields c1 = −c2, or
_ψa = c1 (2pz1 −_ 2pz2)
The normalization condition gives c[2](1 − 2S + 1) = 1, or c1 = 1/√2.
**Final Answer: 0.70710678**
**Model Solution**
two bonding and two antibonding Hückel molecular orbitals for ethene. The unit of the answer is not applicable as it is a qualitative description
of the molecular orbitals.
**Final Answer: None**
**Error Reason**
The error reason category for the model’s solution is 6. Abstract Reasoning . The model’s solution is incorrect because it did not consider
the Hückel molecular orbital theory, which predicts that there are only two pi molecular orbitals for ethene: one bonding and one antibonding.
This is due to the fact that ethene has only two pi electrons, which can occupy the two pi molecular orbitals. Therefore, the model’s solution
of two bonding and two antibonding molecular orbitals is not possible for ethene.
Figure S12: An example problem is inaccurately solved by error reason 6. Abstract Reasoning.
“Error Reason” denotes the output from the LLM Verifier utilized in the classification of error causes.
**Problem**
A living room has floor dimensions of 3.5 m and 4.2 m and a height of 2.4 m.What does the air in the room weigh when the air pressure is
1.0 atm ?
**Correct Solution**
The air’s weight is equal to mg, where m is its mass Mass m is related to the air density ρ and the air volume V (ρ = m/V ).Calculation:
Putting the two ideas together and taking the density of air at 1.0 atm, we find
_mg = (ρV )g_ = 1.21 kg/m[3][] (3.5 m × 4.2 m × 2.4 m) 9.8 m/s[2][] = 418 N
**Final Answer: 418**
**Model Solution**
101, 640 N. To find the weight of the air in the room, we need to first calculate the volume of the room:V = lwh = 3.5 m × 4.2 m ×
2.4 m = 35.28 m[3]Next, we can use the ideal gas law to find the mass of the air in the room:P V = nRT where P is the pressure, V is
the volume, n is the number of moles of gas, R is the gas constant, and T is the temperature. Assuming room temperature of 20[◦]C (293 K),
we can solve for n:
(1.0 atm)(35.28 m[3])
_n =_ _[P V]_
_RT_ [=] (0.0821 L atm/mol K)(293 K) [= 1][,][ 610 mol]
Finally, we can find the weight of the air using the molar mass of air (approximately 28.97 g/mol):
_W = mg = nMg = (1, 610 mol)(0.02897 kg/mol)(9.81 m/s[2]) = 101, 640 N_
**Final Answer: 101640**
**Error Reason**
The error in the model’s solution is that they used the ideal gas law to find the mass of the air in the room, which assumes that the air behaves
like an ideal gas. However, the ideal gas law is not applicable in this case because the problem does not provide information about the
temperature of the air. The correct approach is to use the density of air at 1.0 atm and the volume of the room to find the mass of the air, as
shown in the correct solution. Therefore, the reason for the error in the model’s solution is 2. Identification of Assumptions .
Figure S13: An example problem is inaccurately solved by error reason 2. Identification of Assumptions. "Error Reason" denotes the output from the LLM Verifier utilized in the classification of error
causes. In the example, the mistaken step is highlighted in red.
-----
**Problem**
From an ordinary deck of playing cards, cards are to be drawn successively at random and without replacement. What is the probability that
the third spade appears on the sixth draw?
**Correct Solution**
Let A be the event of two spades in the first five cards drawn, and let B be the event of a spade on the sixth draw. Thus, the probability that
we wish to compute is P (A ∩ _B). It is reasonable to take_
132 39
3
_P (A) =_ = 0.274 and _P (B_ _A) = [11]_
525 _|_ 47 [= 0][.][234]
[]
The desired probability, P (A ∩ _B), is the product of those numbers: _ _P (A ∩_ _B) = (0.274)(0.234) = 0.064_
**Final Answer: 0.064**
**Model Solution**
The probability that the third spade appears on the sixth draw is
13[C]2 _[·]_ 39[C]4 = [78][ ·][ 741][,][ 948]
52[C]6 20, 358, 520 [= 1]5[,],[ 441] 577 _[.]_
**Final Answer: 0.258**
**Error Reason**
The model’s solution is incorrect because it calculates the probability of getting two spades in the first five cards and any spade on the sixth
draw, instead of specifically calculating the probability of the third spade appearing on the sixth draw given that two spades have already
appeared in the first five draws. Therefore, the error reason category is 4. Causal Reasoning
Figure S14: An example problem is inaccurately solved by error reason 4. Causal Reasoning. The
mistaken step is highlighted in red.
**Problem**
Gas cylinders of CO2 are sold in terms of weight of CO2. A cylinder contains 50lb (22.7 kg ) of CO2. How long can this cylinder be
used in an experiment that requires flowing CO2 at 293 K(η = 146µP) through a 1.00-m-long tube (diameter = 0.75 mm) with an
input pressure of 1.05 atm and output pressure of 1.00 atm ? The flow is measured at the tube output
**Correct Solution**
The gas flow rate ∆V/∆t is
∆V _πr[4]_
_P2[2]_ 1 = 2.76 10[−][6] m[3] s[−][1]
∆t [=] 16ηLP0 _[−]_ _[P][ 2]_ _×_
Converting the CO2 contained in the cylinder to the volume occupied at 298 K and 1 atm pressure, we get
1 10[−][3] m[3]
_nCO2 = 22.7 kg_ 0.044 kg mol[−][1] = 516 mol, V = _[nRT]P_ = 1.24 × 10[4] L L ! = 12.4 m[3]
12.4 m[3]
Given the effective volume of CO2 contained in the cylinder, the duration over which the cylinder can be used is 2.76 10[−][6 ]m[3 ]s[−][1][ =]
_×_
4.49 × 10[6] s. This time corresponds to roughly 52 days.
**Final Answer: 52 days**
**Model Solution**
The volume of CO2 in the cylinder is given by
22.7 kg
_V =_ _[m]_
_ρ_ [=] 1.98 kg m[−][3][ = 11][.][5 m][3]
The volume of CO2 that flows through the tube is given by
_V =_ _[π]_ 0.75 10[−][3] m 2 (1.00 m) = 1.77 10−7 m3
4 _[d][2][L][ =][ π]4_ _×_ _×_
The time required to flow V volume of CO2 is given byt = _[V]Q_ [=] _QmPV P21_ [=] (11.05 atm.77×10−[−]1[7 ].00 atm)(146m[3] [](1.00 atm)µP) [= 2][.][08][ ×][ 10][6][ s][. The]
number of seconds in a year is 3.15 × 10[7] s. Therefore, the cylinder can be used for 3.152.×0810×[7 ]10s yr[6 ]s[−][1][ = 0][.][066 yr = 24][.][1 days]
**Final Answer: 24.1 days**
**Error Reason**
The model’s solution is incorrect because it calculates the time required to flow a volume of CO2 through the tube, but it does not take into
account the total volume of CO2 in the cylinder. The correct solution calculates the total volume of CO2 in the cylinder and divides it by the
flow rate to determine the duration over which the cylinder can be used. Therefore, the error reason category is 5. Problem Deduction Skills .
Figure S15: An example problem is inaccurately solved by error reason 5. Problem Deduction Skills.
The mistaken step is highlighted in red.
-----
**Problem**
An astronaut whose height h is 1.70 m floats "feet down" in an orbiting space shuttle at distance r = 6.77 × 10[6] m away from the center
of Earth. What is the difference between the gravitational acceleration at her feet and at her head?
**Correct Solution**
We can approximate Earth as a uniform sphere of mass ME . Then, from Eq. 13-11, the gravitational acceleration at any distance r from the
center of Earth is
_ag =_ _[GM][E]_
_r[2]_
We might simply apply this equation twice, first with r = 6.77 × 10[6] m for the location of the feet and then with r = 6.77 × 10[6] m +
1.70 m for the location of the head. However, a calculator may give us the same value for ag twice, and thus a difference of zero, because h
is so much smaller than r. Here’s a more promising approach: Because we have a differential change dr in r between the astronaut’s feet and
head, we should differentiate Eq. 13-15 with respect to r.Calculations: The differentiation gives us
_dag = −2_ _[GM]r[3]_ _[E]_ _dr_
where dag is the differential change in the gravitational acceleration due to the differential change dr in r. For the astronaut, dr = h and
_r = 6.77 × 10[6]_ m. Substituting data into Eq.13-16, we find
_dag = −2_ 6.67 × 10[−][11](6m.77[3]/ ×kg 10 · s[6][2][ ]m)5[3].98 × 10[24] kg (1.70 m) = −4.37 × 10[−][6] m/s[2]
**Final Answer: −4.37 × 10[−][6]** m/s[2]
**Model Solution**
We can use the equation for the gravitational acceleration due to a point mass, given by Eq. 13-2 (g = GM/r[2]), to find the gravitational
acceleration at the astronaut’s feet and head. Because the astronaut is in free fall, the difference between these two accelerations is the tidal
acceleration, given by Eq. 13-22 (∆g = 2GMh/r[3]).Calculations: The gravitational acceleration at the astronaut’s feet is
_gfeet =_ _[GM]_ = 6.67 × 10[−][11] N · m[2]/kg[2][ ]5.98 × 10[24] kg = 9.78 m/s[2].
_r[2]_ (6.77 × 10[6] m)[2]
The gravitational acceleration at the astronaut’s head is
_ghead =_ _GM_ 6.67 × 10[−][11] N · m[2]/kg[2][ ]5.98 × 10[24] kg = 9.77 m/s[2].
(r + h)[2][ =] (6.77 × 10[6] m + 1.70 m)[2]
The difference between these two accelerations is
∆g = gfeet − _ghead_ = 9.78 m/s[2] _−_ 9.77 m/s[2] = 0.01 m/s[2] = 1.0 × 10[−][5] m/s[2].
**Final Answer: 1.0 × 10[−][5]** m/s[2]
**Error Reason**
The model’s solution is incorrect because it uses the equation for the gravitational acceleration due to a point mass, which is not applicable in
this case since the Earth is not a point mass. The correct equation to use is the one given in the correct solution, which takes into account the
fact that the Earth is a uniform sphere of mass. Therefore, the error reason category is 1. Logical Decomposition and Analysis Skills .
Figure S16: An example problem is inaccurately solved by error reason 1. Logical Decomposition and
Analysis Skills. “Error Reason” denotes the output from the LLM Verifier utilized in the classification
of error causes. In the example, the mistaken step is highlighted in red.
-----
# SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
**Xiaoxuan Wang** [* 1] **Ziniu Hu** [* 2] **Pan Lu** [* 1] **Yanqiao Zhu** [* 1] **Jieyu Zhang** [3] **Satyen Subramaniam** [1]
**Arjun R. Loomba** [1] **Shichang Zhang** [1] **Yizhou Sun** [1] **Wei Wang** [1]
[Project Homepage: https://scibench-ucla.github.io](https://scibench-ucla.github.io)
**Abstract**
Most existing Large Language Model (LLM)
benchmarks on scientific problem reasoning focus on problems grounded in high-school subjects
and are confined to elementary algebraic operations. To systematically examine the reasoning capabilities required for solving complex scientific
problems, we introduce an expansive benchmark
suite SCIBENCH for LLMs. SCIBENCH contains
a carefully curated dataset featuring a range of
collegiate-level scientific problems from mathematics, chemistry, and physics domains. Based on
the dataset, we conduct an in-depth benchmarking
study of representative open-source and proprietary LLMs with various prompting strategies.
The results reveal that current LLMs fall short
of delivering satisfactory performance, with the
best overall score of merely 43.22%. Furthermore,
through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities. Our analysis indicates that no single prompting strategy significantly outperforms the others
and some strategies that demonstrate improvements in certain problem-solving skills could result in declines in other skills. We envision that
SCIBENCH will catalyze further developments
in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and
discovery.
**1. Introduction**
Recent advancements in Large Language Models (LLMs)
have dramatically expanded the boundaries of artificial in
*Equal contribution 1University of California, Los Angeles, Los
Angeles, CA, USA [2]California Institute of Technology, Pasadena,
CA, USA [3]University of Washington, Seattle, WA, USA. Correspondence to: Xiaoxuan Wang <[email protected]>.
_Proceedings of the 41[st]_ _International Conference on Machine_
_Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by_
the author(s).
telligence (Brown et al., 2020; Gao et al., 2023; Liu et al.,
2023b; OpenAI., 2022; Touvron et al., 2023a; Zhang et al.,
2023a;b). They have demonstrated outstanding performance
in many mathematical reasoning tasks that are typically
considered challenging even for well-educated individuals (Chen et al., 2021; 2023a; Gao et al., 2022; Kojima et al.,
2022; Wei et al., 2022). Notably, GPT-4 achieves a remarkable score of 163 out of 170 on the GRE Quantitative Exam,
placing it at the 80th percentile ranking (OpenAI., 2023).
While the remarkable improvements in these benchmark
performances might suggest that LLMs are capable of
performing scientific reasoning tasks, we argue that this
assertion might be overly optimistic due to the inherent limitations of current benchmarks. Firstly, many existing benchmarks such as ScienceQA (Lu et al., 2022)
and GSM8K (Cobbe et al., 2021) only contain problems
grounded in grade-level subjects. Although other benchmarks like MATH (Hendrycks et al., 2021) introduce highschool level questions, they primarily focus on math problems. Secondly, recent works like MMLU (Hendrycks et al.,
2020), AGIEval (Zhong et al., 2023), and JEEBench (Arora
et al., 2023), despite introducing challenging problems
that span a wide range of disciplines, only require basic
computations—addition, subtraction, multiplication, and
exponentiation—which do not adequately assess the depth
of reasoning abilities of LLMs for solving scientific problems. Lastly, most of these benchmarks only include textual
problems, which omit problems that incorporate visual elements such as figures or diagrams.
In parallel to benchmark developments, many studies propose various prompting strategies aimed at enhancing the
reasoning abilities of LLMs in scientific problem solving. A
notable example is the Chain-of-Thought (CoT) approach,
which instructs LLMs to generate detailed, step-by-step solutions that prompt deeper problem thinking (Huang et al.,
2022; Wang et al., 2022; Wei et al., 2022; Zhou et al., 2022).
Additionally, other strategies propose to enable LLMs to
utilize external tools (Lu et al., 2023b; Schick et al., 2023)
that improve the numerical computation capability. However, even these strategic approaches, each with its specific
strengths, struggle to fully address complex scientific prob
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
|Problem (Using the Planck distribution) Compare the energy output of a black-body radiator (such as an incandescent lamp) at two different wavelengths by calculating the ratio of the energy output at 450 nm (blue light) to that at 700 nm (red light) at 298 K.|Col2|Col3|
|---|---|---|
|Correct Solution At a temperature T, the ratio of the spectral density of states at a wavelength λ to that at λ is given by 1 2 ⇢ ⇢( (λ λ1 2, T T) =✓λ λ2 1◆5 ⇥# #e eh hc c/ /λ λ2 1k kT − −1 1$, ) T $ Insert the data and evaluate this ratio. Answer With λ = 450 nm and λ = 700 nm, …, and 1 2 therefore ⇢(450 nm, 298 K) = 2.10 ⇥10−16 ⇢(700 nm, 298 K) Final Answer: 2.10 ⇥10−16|GPT Solution with CoT Prompting Step 1: We need to use the Planck distribution formula to calculate the energy output of a black-body radiator. The formula is: 2hc2 1 B(λ, T) = λ5 eλkh Bc T −1 Step 2: We calculate the energy output at 450 nm and 700 nm… Let’s use T = 298 K. B(450 nm, 298 K) 2hc2 1 = (450 ⇥10−9 m)5 e450⇥10−9 h mc ·kB·298 K −1 ⇡ 1.23 ⇥10−16 ) 1.67 ⇥10−31 … Step 3: Calculate the ratio of the energy output at 450 nm to that at 700 nm: B(450 nm, 298 K) ⇡4.39 B(700 nm, 298 K) Final Answer: 4.39 ⇥10−16|GPT Solution with Python Prompting as External Tools … The ratio of the energy output at λ = 450 nm and λ 1 2 = 700 nm is u (λ1, T) = 8⇡ λh 51c ehc/(λ11 kT )−1 = λ5 2ehc/(λ1s+5u19vW5v7tdkWrZIX5CWpEZfskH1ySs7JWHkC/lKvpHvlR+VX5XflT9T6vLS7M5zUjgry38BlX53XQ=</latexit>) kTλ )2 −1 u (λ2, T) 8⇡ λh 52c ehc/(λ21 λ5 1ehc/(λ2 WXe36+3rc392myLVskL8pLUiEt2yD45JefkjDyhXwl38j3yo/Kr8rvyp8pdXlpduc5KZyV5b+Rwndc</latexit>)kT ) −1 kT )−1 λ 1 Substituting the given values, we get a Python code for this problem: … ⇒lambda2 ratio = ((lambda2 / lambda1)***5) *((math.exp((h * c) / (lambda1 * k * T)) - 1) / (math.exp((h * c) / (lambda2 * k * T)) - 1)) ⇒ lambda1 … Final Answer: 3.95 ⇥1033|
_Figure 1. An example problem from Physical Chemistry with solutions generated under two prompting strategies. GPT-4 with Chain-of-_
Thought (CoT) prompting shows calculation errors, while GPT-4 that prompts Python as external tools misunderstands mathematical
equations. Errors are highlighted in red and the corrections are shown in purple.
lems. Consider an example problem from college-level
_Physical Chemistry (Atkins et al., 2014b) that requires the_
use of the Planck distribution to derive certain quantities.
As shown in Figure 1, LLMs with CoT prompts accurately
generate the correct formula, but fail in the final numerical
calculation. As a remedy, when instructed to simultaneously
generate a Python program for numerical computation and
employ the CoT reasoning, the LLM misplaces λ1 in the
numerator rather than the denominator in the formula, illustrating a misunderstanding of mathematical relationships
when employing external tools. This example highlights a
crucial gap: even advanced LLMs struggle with complex
scientific problem solving, necessitating a fine-grained analysis of the skills required for such complex tasks.
To mitigate these deficiencies, in this paper, we present a
novel college-level Scientific problem solving Benchmark,
referred to as SCIBENCH. SCIBENCH contains a carefully
curated dataset of college-level scientific problems, including 869 problems collected from widely-used textbooks in
college-level Chemistry, Physics, and Mathematics courses.
Distinct from existing benchmarks, all of the problems are
open-ended, free-response questions that demand multi-step
reasoning abilities, the understanding of scientific concepts,
the retrieval of domain-specific knowledge (e.g., equations
and theorems), and complex numeric computation capabilities (e.g., calculus or differential equations). Besides that,
our dataset includes a multimodal subset of 177 problems
that incorporate visual elements (such as graphs and figures)
as additional contexts, which enables of the evaluation of
multimodal LLMs. It is noted that SCIBENCH also includes
step-by-step solutions for example problems, facilitating
detailed error analysis. To align our evaluation with real
world scenarios, we provide a separate, closed dataset that
encompasses 103 problems from seven sets of midterm and
final exams from collegiate Computer Science and Math
courses. To ensure the integrity of our evaluation, these
datasets have been manually extracted from PDF documents
and formatted into LaTeX documents, thereby minimizing
the risk of their leakage in LLM training data.
Our evaluation includes a wide range of representative opensource and proprietary LLMs. For unimodal, textual-based
LLMs, we assess LLaMA-2, Mistral, Claude2, GPT-3.5,
GPT-4, and their variants. For multimodal vision-language
models, we include GPT-4, InternLM-XComposer2, QwenVL, SPHINX-MoE, and LLaVA. These models are tested
using various prompting strategies, including CoT, zero-shot
learning, and few-shot learning. We also prompt LLMs to
utilize external scientific computing libraries in Python and
Wolfram language. The experimental results indicate that
the complexity and difficulty of our dataset are sufficient to
differentiate the performance levels of different LLMs. Even
with the strongest configuration—combining CoT prompting and the use of external tools—the best model achieves
an average score of 43.22% on the textual dataset, 13.8%
on the multimodal dataset, and 51.57% on the closed exam
dataset. These results suggest a considerable potential for
improvement in future LLMs.
In order to gain a comprehensive understanding of the limitations of LLMs in scientific problem solving, we propose
a novel self-refinement method to uncover the deficient
skills in the solutions made by LLMs. Firstly, we compare the correct solutions with the solutions generated by
LLMs and, with the assistance of human annotators, summarize ten essential skills requisite for successful scientific
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
_Table 1. Comparison of SCIBENCH with other benchmarks. “Algebra” refers to high-school level arithmetic computations; “Calculus”_
involves using integrals and differentials; “Statistics” focuses on applying statistical and probability concepts like bivariate distributions.
ScienceQA (Lu et al., 2022) ✓ ✓ ✓ ✓ ✓
Subject Calculation College Visual Detailed Free
Benchmark
Math Chemistry Physics Algebra Calculus Statistics Level Contexts Solutions Response
IconQA (Lu et al., 2021b) ✓ ✓ ✓ ✓ ✓
TabMWP (Lu et al., 2023c) ✓ ✓ ✓ ✓
GSM8K (Cobbe et al., 2021) ✓ ✓ ✓ ✓
MATH (Hendrycks et al., 2021) ✓ ✓ ✓ ✓
LILA (Mishra et al., 2022) ✓ ✓ ✓ ✓
MMLU (Hendrycks et al., 2020) ✓ ✓ ✓ ✓
TheroemQA (Chen et al., 2023b) ✓ ✓ ✓ ✓ ✓ ✓
AGIEval (Zhong et al., 2023) ✓ ✓ ✓ ✓ ✓
SciEval (Sun et al., 2023) ✓ ✓ ✓ ✓
JEEBench (Arora et al., 2023) ✓ ✓ ✓ ✓ ✓ ✓
SCIBENCH ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
problem-solving. These skills include proficiency in domain
knowledge, mathematical reasoning, numerical calculation
abilities, and comprehension of common sense concepts.
Subsequently, we employ an LLM-empowered self-critic
approach to automatically classify the lacking skills in the
solutions made by the benchmarked LLMs under each experiment configuration. Our analysis finds that (1) although
CoT significantly improves the calculation ability, it is less
effective in other aspects; (2) prompts with the use of external tools could potentially compromise other fundamental
skills; (3) few-shot learning does not universally improve
scientific problem-solving skills.
**2. Related Work**
Recently, many benchmarks have been proposed to assess
the scientific problem-solving skills of LLMs, particularly
in mathematical domains (Chen et al., 2023b; Fu et al.,
2023; Guo et al., 2023; Hendrycks et al., 2020; Lu et al.,
2023c;d; Mishra et al., 2022; Welleck et al., 2021; Zhong
et al., 2023). Notable works include GSM8K (Cobbe et al.,
2021) including 8.5K grade school math word problems;
LILA (Mishra et al., 2022) which extends 20 datasets with
task instructions and Python solutions; MATH (Hendrycks
et al., 2021), a challenging collection of 12.5K math problems from math competitions; TheroemQA (Chen et al.,
2023b), focusing on theorem applications on problem solving; and MathVista (Lu et al., 2023a), which evaluates the
mathematical reasoning ability of LLMs in visual contexts.
To provide a more holistic evaluation, recent studies have expanded their scope to multiple disciplines: ScienceQA (Lu
et al., 2022) introduces a multimodal question-answering
dataset with accompanying lecture notes and explanatory
annotations. Taylor et al. (2022) provide a set of scientific tasks, including LaTeX equation conversions, domain
knowledge probes, citation prediction, and chemical ques
tion answering. BIG-Bench (Ghazal et al., 2013) offers
a large-scale general-purpose test suite that requires 204
multiple-choice or exact-match tasks, and its extension BIGBench Hard (Suzgun et al., 2022) poses challenging CoT
prompts. SciEval (Sun et al., 2023) includes a mix of objective and subjective questions across multiple scientific
fields to assess understanding, application, and research
capabilities. JEEBench (Arora et al., 2023) incorporates preengineering-level scientific problems derived from college
entrance exams. AGIEval (Zhong et al., 2023) evaluates
LLMs on human-centric standardized exams, such as college entrance exams and lawyer qualification tests.
Despite their extensive coverage across diverse disciplines,
these datasets exhibit certain limitations. Sourced from
lower educational level subjects, the majority of them focus on basic arithmetic operations rather than advanced
mathematical computations. Furthermore, most of these
benchmarks are confined to textual-only problems, omitting
problems with visual elements such as graphs or diagrams.
These drawbacks result in an incomplete assessment of the
analytical and problem-solving skills required to tackle complex scientific problems. In contrast, SCIBENCH focuses on
college-level scientific problems across a broad spectrum
of disciplines including Mathematics, Physics, and Chemistry. It emphasizes on a deep understanding of diverse
scientific concepts, challenging LLMs to not only grasp
these principles but also to efficiently retrieve and apply
relevant knowledge. Furthermore, it demands sophisticated
numerical computation skills, including the execution of
advanced mathematical operations such as calculus and differential equations, as well as the application of advanced
statistical and probability theories. Additionally, we include
multimodal problems that necessitate the interpretation and
integration of both textual and visual information. A detailed comparison of SCIBENCH with some representative
works is summarized in Table 1.
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
_Table 2. Summary of the textbook dataset. We report the number of total problems, percentage with detailed solutions, and percentage_
with visual elements in columns four to six respectively.
Subject Title Acronym # Problems % Solutions % Visual
_Fundamentals of Physics (Halliday et al., 2013)_ `fund` 142 9.2% 43.0%
Physics _Statistical Thermodynamics (Engel & Reid, 2010)_ `thermo` 83 20.5% 0.0%
_Classical Dynamics of Particles and Systems (Thornton & Marion, 2021)_ `class` 66 12.1% 4.5%
_Quantum Chemistry (Levine et al., 2009)_ `quan` 41 19.5% 0.0%
_Quantum Chemistry (McQuarrie, 2008)_ `chemmc` 47 19.1% 0.0%
Chemistry
_Physical Chemistry (Atkins et al., 2014a)_ `atkins` 122 13.9% 0.8%
_Physical Chemistry, Quanta, Matter, and Change (Atkins et al., 2014b)_ `matter` 59 16.9% 3.4%
_Calculus: Early Transcendentals (Stewart et al., 2012)_ `calc` 161 19.3% 67.7%
Math _Probability and Statistical Inference (Hogg et al., 1977)_ `stat` 93 21.5% 1.1%
_Elementary Differential Equations and Boundary Value Problems (Boyce et al., 2021)_ `diff` 55 9.1% 0.0%
While the aforementioned datasets focus on evaluating
LLMs’ performance on scientific problem solving tasks,
another line of research aims to analyze the diverse capabilities of LLMs more comprehensively. Liu et al. (2023c)
assess the reading abilities of LLMs using multiple-choice
questions. Frieder et al. (2023) focus on evaluating the
mathematical capabilities of LLMs, including those at the
college level, but with topics such as functional analysis or
topology that differ from those in SCIBENCH, such as differential equations and calculus. Bubeck et al. (2023) explore
the comprehensive abilities of GPT-4, but only use up to
high-school level mathematical problems such as those in
GSM8k (Cobbe et al., 2021) and MATH (Hendrycks et al.,
2021). Zhang et al. (2024) develop SciGLM, a scientific
language model for collegiate-level problem reasoning, and
evaluate its performance across multiple scientific datasets.
Kabir et al. (2023) conduct a detailed manual analysis for
LLMs. They also provide human-annotated qualitative analysis to assess the capabilities of the models. However, relying on human labor for direct solution analysis can be costly.
Our evaluation protocol, based on predefined fundamental problem solving skills, enables automated classification
of deficient skills for each incorrectly answered question.
This approach enables an affordable, large-scale qualitative
analysis of model solutions.
**3. The SCIBENCH Dataset**
To evaluate the capabilities and analyze the limitations of
Large Language Models (LLMs) to solve scientific computing problems, we collect a new dataset consisting of
college-level textbooks and course exams in a variety of domains. This section details the dataset construction process.
**Data selection criteria. Our dataset aims to improve the**
previous benchmarks by including more challenging problems. Specifically, the selected dataset should fulfill the
following requirements:
- Inclusion of college-level problems. The chosen problems demand a solid understanding of domain-specific
knowledge, adept calculation skills, and the ability to
perform complex numerical computations.
- Inclusion of detailed solutions. To facilitate a thorough
analysis of the limitations of LLMs, detailed solutions
should be provided as well, which could facilitate a finergrained examination of the capacity of LLMs to handle
complex problem-solving tasks.
- Inclusion of visual elements. In the real world, many
scientific problems require the interpretation and integration of both textual and visual information. The included
problems should thus contain visual elements (such as
figures) in the contexts.
- Inaccessibility in text formats. To ensure an unbiased
evaluation, questions should not be readily accessible
online and cannot be easily extracted or transformed into
text. This aims to mitigate any potential information
leakage from the exposure of LLMs to pre-existing online
question banks, such as those found in standardized tests
like the SAT exams.
- Assessment of advanced problem-solving capabilities.
The problems to benchmark should not be confined to
basic arithmetic operations like addition and multiplication. Rather, they should enable evaluating the capability
of LLMs in performing advanced computations such as
calculus and differential equations.
Accordingly, to construct the dataset, we select ten textbooks from three scientific fields Physics, Chemistry, and
Mathematics that have been extensively used in college
courses. We summarize the statistics of this textbook dataset
in Table 2 and we use acronyms to refer to each textbook
throughout the paper for brevity. Furthermore, in order
to simulate real-world evaluation, we compile a closed set
of exam questions from college courses from Computer
Science and Math departments, including Data Mining, Ma_chine Learning, and Differential Equations. This subset is_
less likely to be in LLM training data, making it an effective
tool for LLM evaluation. Detailed statistics of these exam
problems are summarized in Table S1. We refer readers to
Appendix A for details on these textbooks and exams.
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
To reduce the likelihood of correct answers being merely
guessed from candidates, we choose to mainly include questions with more challenging, free-response answers, rather
than multiple-choice questions in previous works (Chen
et al., 2023b; Lu et al., 2021a; 2022). In order to facilitate standardized and automated evaluation, we focus on
answers that only contain single numerical numbers to avoid
ambiguity for the textbook dataset. Further, we convert the
answer to floating-point numbers rounded to three decimal
places. For example, the answer _√π2_ [will be converted to]
the decimal representation of 0.450. We also treat scientific
notation as a unit to avoid overflow issues. For example, if
the answer is 2.2 _×_ 10[−][31] m, we take 2.2 as the final answer
and 10[−][31] m as the unit.
**Data preprocessing. We collect each problem from the**
original textbooks in PDF documents and manually process
[them into LaTeX documents using an OCR tool Mathpix.](https://mathpix.com/)
The data is manually collected by human annotators using
a web-based annotation tool (Lu et al., 2021a), whose user
interface is shown in Appendix A.3. All problems are carefully verified by human annotators to ensure that LaTeX
documents can be compiled without any syntax errors. For
reference, we also provide the original numbers in textbooks.
For every problem, we provide the answer in two forms: the
numerical value and the corresponding LaTeX expression
with mathematical notations retained (e.g., 0.450 and _√π2_ [);]
the unit of each answer is saved as a separate attribute. The
detailed step-by-step solutions are also provided in LaTeX.
For problems having multiple answers, we either keep only
the first subproblem and discard the remaining subproblems
or convert each subproblem into a separate problem.
**4. Experiments**
This section presents the experiments to assess the capabilities of LLMs in scientific problem-solving. We first
describe our experimental setup. Subsequently, we evaluate
unimodal LLMs on the textbook dataset. Following this,
we include additional experiments on the multimodal subset
and the closed exam subset, as well as comparisons with
other numerical computational tools.
**4.1. Experiment Setup**
We evaluate the textbook dataset on seven unimodal
LLMs, which include four proprietary models: Claude2
(claude2) (Anthropic., 2023), GPT-3.5-Turbo (gpt```
3.5-turbo) (OpenAI., 2022), GPT-4 (gpt-4), GPT-4
```
Turbo (gpt-4-turbo) (OpenAI., 2023), along with three
open-source models: LLaMA-2-7B (llama-2-7b-chat),
LLaMA-2-70B (llama-2-70b-chat) (Touvron et al.,
2023b), and Mistral-7B (mistral-7b-instruct) (Jiang
et al., 2023).
We consider two prompting strategies, including the Chainof-Thought (CoT) prompting and prompting to use external
tools.
- Zero-shot and few-shot learning. In the zero-shot learning setting, models are not provided with any prior examples, which evaluates their inherent problem-solving
capabilities with background knowledge and reasoning
abilities. In the few-shot setting, a few examples are given
to the models before the test example. This aims to assess
their capability to learn new information from the demonstrations and incorporate it into their problem-solving
processes.
- Prompting-based approaches. For our experiments, all
settings begin with a system prompt that describes the
types and categories of questions. Additionally, we utilize
a CoT prompting strategy in zero- and few-shot settings.
- Tool-augmented approaches. Given that LLMs are limited in acquiring exact knowledge and performing precise
calculations, some recent approaches, such as PAL (Gao
et al., 2022) and PoT (Chen et al., 2023a) explore utilizing external tools such as the Python interpreter for
program synthesis to enhance the capabilities of solving
complex reasoning tasks. In line with these approaches
and acknowledging the limitations of LLMs in performing precise calculations, we also include a setting that
prompts the model to convert its solution steps in natural language into Python code, aiming to achieve more
accurate results for certain computation steps. This toolaugmented approach can only be tested in the few-shot
learning setting. We manually construct Python programs
that produce the correct answer.
**Implementation details. We set temperature to zero for all**
models to reduce the randomness of the predictions. Fewshot examples, including solutions, are randomly selected
from problems within each textbook. When external tools
are used, we add a code snippet that translates the solution
into specific programming languages in all few-shot examples. The code snippets are verified by human annotators
that will produce the correct output. In terms of evaluation
metrics, we compare the model outputs with the correct
answers, allowing a relative tolerance of 5%. In particular
to the exam dataset, the model solutions are graded using
the rubrics provided by the instructors. Readers may refer to
Appendix C for all prompts and the implementation details
for utilizing external tools.
**4.2. Results and Analysis**
We report the model performance in terms of accuracy score
for each textbook and an average score over all problems.
The results of all LLMs in various settings on the textbook
and the exam dataset are summarized in Tables 3 and S2
respectively. We have the following observations.
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
_Table 3. Experimental results in terms of accuracy (%) on the textbook dataset. The best performing score is highlighted in bold and_
second-best is underlined. The average score is weighted by the number of problems in each textbook.
Chemistry Physics Math
Model Avg.
```
atkins chemmc quan matter fund class thermo diff stat calc
```
_Zero-Shot Learning_
LLaMA-2-7B 0.00 0.00 0.00 0.00 1.37 0.00 0.00 2.00 5.33 0.00 1.03
LLaMA-2-70B 1.87 2.56 0.00 0.00 1.40 0.00 0.00 0.00 10.70 4.76 2.41
Mistral-7B 9.35 5.13 8.82 4.08 5.48 2.13 0.00 4.00 12.00 2.38 6.23
Claude2 15.00 12.83 14.71 10.20 12.33 6.40 9.00 4.00 38.70 16.70 14.94
GPT-3.5-Turbo 4.67 20.51 8.82 2.04 10.96 2.13 2.94 6.00 28.00 9.30 9.59
GPT-4 45.79 28.21 26.47 22.45 23.29 **25.53** 17.91 32.00 49.33 **54.76** 33.79
GPT-4-Turbo **57.01** **41.03** **35.29** **26.53** **24.66** 21.28 **26.87** **46.00** **61.33** 52.38 **40.99**
_Zero-Shot Learning + CoT Prompting_
LLaMA-2-7B 0.00 2.56 0.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.67
LLaMA-2-70B 0.93 2.56 0.00 0.00 0.00 0.00 1.49 0.00 10.70 0.00 1.89
Mistral-7B 6.54 5.13 2.94 0.00 0.00 2.12 1.49 6.00 10.67 9.52 4.63
Claude2 20.56 15.38 8.82 4.08 8.23 4.26 5.97 6.00 36.00 14.29 13.89
GPT-3.5-Turbo 6.54 23.08 2.94 10.20 12.33 2.12 5.97 12.00 33.33 9.30 12.17
GPT-4 28.04 **43.59** 14.71 20.41 21.92 19.15 17.91 22.00 50.67 42.86 28.52
GPT-4-Turbo **60.75** 35.90 **29.41** **28.57** **30.14** **31.91** **25.37** **38.00** **64.00** **54.76** **42.37**
_Few-Shot Learning + CoT Prompting_
LLaMA-2-7B 1.87 5.13 2.94 0.00 5.48 0.00 0.00 0.00 12.00 7.14 3.60
LLaMA-2-70B 13.10 12.83 14.71 4.08 12.33 0.00 0.00 0.00 13.30 9.52 8.40
Mistral-7B 6.54 10.26 2.94 2.04 2.74 2.13 4.48 4.00 14.67 9.52 6.17
Claude2 15.89 25.64 14.65 6.12 9.59 6.38 10.45 8.00 33.33 19.05 15.26
GPT-3.5-Turbo 8.41 20.51 8.82 6.12 10.96 2.12 1.49 10.00 38.67 6.98 11.99
GPT-4 41.12 33.33 17.65 16.33 17.81 17.02 20.90 30.00 49.33 45.24 30.36
GPT-4-Turbo **59.81** **35.90** **26.47** **18.37** **23.29** **19.15** **32.84** **32.00** **65.33** **50.00** **39.45**
_Few-Shot Learning + Python_
LLaMA-2-7B 0.93 2.56 0.00 0.00 0.00 0.00 0.00 0.00 6.67 0.00 1.20
LLaMA-2-70B 0.93 7.69 2.94 0.00 9.59 0.00 1.49 0.00 17.30 9.52 5.14
Mistral-7B 4.67 0.00 5.88 2.04 2.74 2.13 0.00 4.00 17.33 11.90 5.32
Claude2 6.54 12.82 14.71 4.08 17.81 8.51 5.97 20.00 40.00 16.67 14.92
GPT-3.5-Turbo 13.08 33.33 8.82 16.33 26.01 4.26 7.46 16.00 44.00 26.19 19.91
GPT-4 **57.01** **38.46** **44.12** **34.69** **28.77** **23.40** **34.33** **44.00** **68.00** **38.10** **43.22**
GPT-4-Turbo 32.71 33.33 17.65 26.53 27.40 12.76 16.42 34.00 42.67 30.95 28.47
- Observation 1. SCIBENCH is complex enough to differ**entiate among LLMs. Our results show that open-source**
models such as LLaMA-2 and Mistral are consistently
outperformed by their proprietary counterparts across all
settings within the textbook dataset. Notably, GPT-4 and
GPT-4-Turbo lead in performance by a significant margin. For example, GPT-4-Turbo outperforms Mistral-7B
by 34.76% in the zero-shot setting. Additionally, within
both LLaMA and GPT series, we observe a clear correlation between increased model capacity (i.e., larger
parameter sizes) and improved performance. Therefore,
the complexity of SCIBENCH is able to differentiate the
performance among different LLMs.
- Observation 2. SCIBENCH highlights varied efficacy
**of prompting strategies across LLMs. Our findings**
suggest that the effectiveness of employing prompting
strategies or external computational tools varies signif
icantly among different LLMs. As shown in the table,
LLaMA-2-70B shows a marked improvement in the fewshot setting over the zero-shot setting, increasing from
2.41% to 8.40%. Similarly, the performance of GPT-4 is
significantly improved when incorporating external tools,
with an increase from 30.36% to 43.22%. Meanwhile, the
up-to-date model GPT-4-Turbo exhibits superior performance in zero-shot learning settings. However, despite
its advanced capabilities demonstrated by its outstanding
zero-shot learning performance, it falls short compared
to GPT-4 in few-shot learning when leveraging Python
for numerical computation. This suggests a potential
reduction in its program understanding capabilities. In
summary, such findings illustrate SCIBENCH can reveal
the nuanced differences in the ability of LLMs to utilize
prompting strategies and external tools effectively.
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
that utilizing Wolfram Language does not help few-shot
learning and even results in a deteriorated performance,
with a decrease of 6.70% compared to the CoT prompting
for Claude2, and a decrease of 6.17% for LLaMA-2-70B.
A plausible explanation is the introduction of syntax errors
when translating solution steps into the Wolfram Language,
which could be a potential direction for improvement. For a
detailed error analysis, readers are directed to Appendix C.3.
**5. Error Analysis of Prompting Strategies**
20
15
10
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
||Open-|Source||Propr|ietary|||
|||||||||
|||||||I Q L S G G|nternLM-XComposer2 wen-VL-Plus LaVA (LLaMA-2-13B) PHINX-MoE PT-4 (CoT) PT-4 (PoT)|
|||||||||
Model Size
5
0
7B 13B 45B Other
_Figure 2. Performance of LLMs on the multimodal subset. GPT-4_
models are augmented with image captions and OCR text.
Considering the substantial advancements of current LLMs,
an in-depth analysis of the particular skills that are either enhanced or limited under certain settings becomes imperative.
Previous works have relied on human labor to annotate error
reasons into different categories, which is both expensive
and time-consuming (Zhong et al., 2023). In this section, we
present an evaluation protocol that automates the classification of error reasons into deficient skills. This time-efficient
approach enables large-scale analyses in future research.
In order to quantify the impact of each setting on scientific
problem-solving, we first define an essential skill set that is
required by solving scientific problems. Then, an LLM verifier is employed to automatically classify each incorrectly
solved problem based on the absence of a specific skill from
the essential skill set. This approach generates error profiles,
showcasing a direct comparison of different strategies. This
evaluation protocol is summarized in Figure 3.
Firstly, we analyze the incorrect solutions made by GPT-3.5
for problems that provide detailed solutions. We hire two
college students, who are highly familiar with the problems
in our datasets, to annotate the source of the error for each
problem, indicating the specific line where the model makes
a mistake and why. From 112 such error annotations and
with the assistance of GPT-4, we distill these errors into ten
essential skills that GPT-3.5 might lack:
- Logical decomposition and analysis skills. This ability
involves decomposing the problem into smaller, manageable parts, and understanding the relationships between
these parts.
- Assumption identification. This skill involves the ability
to recognize relevant and necessary assumptions in the
problem.
- Spatial perception. This is important for understanding
problems in areas such as Physics and Chemistry, where
models need to visualize molecules, forces, fields, etc.
- Causal reasoning. This is the ability to understand cause
and effect relationships.
- Problem deduction skills. This pertains to the ability to
infer and deduce potential solutions or underlying principles from the given information in a problem.
**4.3. Additional Experiments**
**Evaluation on the multimodal subset. We evaluate two**
categories of models on problems with visual contexts:
(1) GPT-4 (OpenAI., 2023) augmented with image captions from Multimodal Bard (Google, 2023) and OCR texts
from EasyOCR (JaidedAI, 2022) and (2) open-source Large
Multimodal Models (LMMs): InternLM-XComposer2VL (Dong et al., 2024), Qwen-VL-Plus (Bai et al., 2023),
SPHINX-MoE (Lin et al., 2023), and LLaVA-LLaMA-213B (Liu et al., 2023a). For GPT-4, we explore two prompting strategies: Chain-of-Thought (CoT) (Wei et al., 2022)
and Program-of-Thoughts (PoT) (Chen et al., 2023a). The
results presented in Figure 2 reveal that proprietary models
augmented with image captions and OCR-detected text, significantly outperform their open-source counterparts. GPT-4
(PoT) that combines programming capabilities achieves an
accuracy of 13.8%, markedly higher than 7.4% obtained by
the best open model LLaVA-LLaMA-2-13B. This demonstrates the substantial potential for LLMs to effectively utilize visual contexts in scientific problem solving.
**Evaluation on the exam subset. To mirror real-world test-**
ing conditions with no few-shot examples provided, we evaluate GPT-3.5, GPT-4, Claude, LLaMA-2-7B, and LLaMA2-70B on the closed exam dataset under zero-shot and zeroshot CoT settings. The experiment results summarized in
Table S2 indicate a notable performance advantage of GPT4, which achieves an averaged score of 57.54%. However,
we note that their performance remains significantly lower
than human benchmarking. For instance, in the Data Mining
course, GPT-4 scores 64.44% and 42.67% in the midterm
and final exams, lower than the average student scores of
80.18% and 72.71%, respectively, as reported by the course
instructor. The results once again underline the challenging
nature of our dataset.
**Comparison with other scientific computing tools. We**
further utilize another famous scientific computing library
[Wolfram Language as the external tool and conduct experi-](https://www.wolfram.com/language/)
ments using GPT-3.5, Claude, LLaMA-2-7B, and LLaMA2-70B. The experiment results reported in Figure S7 show
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
_Calculus, Statistics, Probability, …_
LLM / Reference Human Error Summary Essential LLM Error
Solutions Annotator Reason Skills Verifier Profiles
_Data Mining, Differential Equations, …_
**Datasets** **Evaluation**
_Figure 3. Pipeline of the evaluation protocol. The evaluation protocol involves analyzing both LLMs and reference (correct) solutions_
with the assistance of human annotators to identify error reasons. These reasons are then summarized into ten essential scientific
problem-solving skills in which LLM may face challenges. Subsequently, a LLM verifier is employed to automatically attribute each
incorrectly answered problem to a lack of a specific skill. The resulting error profiles enable the interpretation of the improved skills by
certain prompting strategies and the direct comparison of various strategies.
Calculation 29.0% 13.6% 14.5% 6.2%
Logical Reasoning 3.2% 3.4% 4.8% 7.8%
Code Conversion 0.0% 1.7% 1.6% 21.9%
Scientific Literacy 9.7% 6.8% 6.5% 4.7%
Abstract Reasoning 0.0% 1.7% 1.6% 1.6%
Problem Deduction 11.8% 8.5% 9.7% 10.9%
Causal Reasoning 18.3% 32.2% 19.4% 15.6%
Spatial Perception 3.2% 1.7% 4.8% 0.0%
Assumption Identification 6.5% 5.1% 9.7% 6.2%
Logical Decomposition 18.3% 25.4% 27.4% 25.0%
0 10 20 30 0 10 20 30 0 10 20 0 10 20
(a) Zero-Shot Learning (b) Zero-Shot Learning + CoT Prompting (c) Few-Shot Learning + CoT Prompting (d) Few-Shot Learning + Python
_Figure 4. Error profiles of GPT-3.5 on the textbook dataset under four settings, which reveal the distribution of their deficiencies in ten_
essential problem-solving abilities.
- Abstract reasoning. This skill involves the ability to
understand complex concepts that cannot be perceived
physically, and to recognize patterns or relationships beyond concrete examples.
- Scientific literacy. This skill involves a comprehensive
understanding of key scientific principles, terminology,
and methodologies across a range of disciplines.
- Code conversion skills. This involves the ability to accurately translate solution steps into different programming
languages, like Python or Wolfram Language.
- Logical reasoning. This is the ability to make a reasoned
argument and to identify fallacies or inconsistencies in an
argument or set of data.
- Calculation skills. This involves the ability to accurately
carry out mathematical operations and computations.
After identifying this essential skill set, we assess the performance of the LLMs under different settings to discern
the specific problem-solving skills they lack. Given the high
cost of human annotations required to attribute the cause of
incorrect solutions to specific skill deficiencies, we propose
a novel self-critique protocol: we design a specific prompt
that outlines these abilities, and employ another LLM to
serve as a classifier and determine whether a specific error results from the lack of a particular problem-solving
skill. Finally, we ask human annotators to scrutinize the
classification results, which results in approximately 20% of
incorrectly classified skills being discarded. To be specific,
we utilize a GPT-3.5 model as the verifier to determine the
reason behind each error and pinpoint the missing skill. The
details regarding the specific prompts used are provided in
Appendix C.1. This verification process is conducted for
four settings, with results represented in bar charts (Figure 4). Additional examples of the evaluation protocol are
elaborated in Appendix D.
Our findings suggest that there is a lack of a universally
**effective setting: each configuration only enhances some**
**specific abilities and occasionally even hurts other skills**
**that the original model possesses. First, CoT prompting**
significantly improves calculation skills in the zero-shot scenario, with 13.6% error rates caused by calculation ability,
considerably lower than the 29.0% error rate of the vanilla
zero-shot baseline. However, CoT shows limitations in
improving other skills, with 32.2% and 25.4% error rates
in casual ability and logical decomposition ability in the
zero-shot CoT setting, respectively, compared to 18.3% and
18.3% in the zero-shot setting. This contradicts previous
claims about universal skill enhancement through zero-shot
CoT and carefully-designed few-shot CoT prompts (Wei
et al., 2022). An example in Figure S9 shows that the zeroshot learning setting without CoT has generated the correct
formula but fails in the calculation steps. In this case, CoT
prompting is even unable to use the correct formula as it
misinterprets the specific conditions (non-necessity) in the
problem. Second, the use of external tools significantly
reduces calculation errors compared to the few-shot Cot setting, with a notable decrease from 14.5% to 6.2%. However,
the use of external tools can weaken other skills, particularly the code conversion skills, i.e., generating the correct
programs for the solution. Third, few-shot learning does
not universally improve scientific problem-solving skills, as
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Impact Statement**
The introduction of SCIBENCH represents a significant advancement in the evaluation of Large Language Models
(LLMs) for scientific problem-solving tasks. By focusing
on collegiate-level problems in mathematics, chemistry, and
physics, SCIBENCH addresses a critical gap in existing
benchmarks, which have primarily focused on high-school
subjects and basic algebraic operations. This development
underscores the necessity of developing specialized benchmarks that challenge LLMs with higher complexity problems, thereby pushing the boundaries of the capabilities of
LLMs in academic and research settings.
While the current scope of SCIBENCH encompasses a select group of scientific disciplines, the potential for future
extensions is vast. Incorporating additional subjects such as
biology, computer science, and engineering could provide
a more comprehensive understanding of LLM capabilities
across a broader spectrum of scientific knowledge. Moreover, extending the benchmark to social sciences, humanities, and other human-centric domains would be equally
beneficial, as these areas often involve nuanced reasoning
and interpretation of complex social dynamics and ethical
considerations, posing unique challenges that could further
enhance the versatility and applicability of LLMs.
**Acknowledgements**
This work was supported by the National Science Foundation (NSF) under Grant Nos. 1829071, 1937599, 2106859,
2119643, 2202693, 2211557, 2303037, and 2312501;
the National Institutes of Health (NIH) under Grant No.
U54HG012517; the Defense Advanced Research Projects
Agency (DARPA) under Grant No. HR00112490370;
NASA; SRC JUMP 2.0 Center; Amazon Research Awards;
and Snapchat Gifts.
**References**
[Anthropic. Claude2. https://www.anthropic.com/index/](https://www.anthropic.com/index/claude-2)
```
claude-2, 2023. 5
```
Arora, D., Singh, H. G., et al. Have llms advanced enough?
a challenging problem solving benchmark for large language
models. arXiv preprint arXiv:2305.15074, 2023. 1, 3
Atkins, P., Atkins, P. W., and de Paula, J. Atkins’ physical chem_istry. Oxford university press, 2014a. 4, 12_
Atkins, P., De Paula, J., and Friedman, R. Physical chemistry:
_quanta, matter, and change. Oxford University Press, USA,_
2014b. 2, 4, 12
Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou,
C., and Zhou, J. Qwen-vl: A versatile vision-language model
for understanding, localization, text reading, and beyond. arXiv
_preprint arXiv:2308.12966, 2023. 7_
indicated in the comparison between zero-shot and few-shot
CoT settings. The improvement in one skill is offset by the
shortcomings in others: although the few-shot CoT setting
results in a reduction of 12.8% in errors related to causal
reasoning, it also leads to an increase in errors associated
with other skills, such as logical decomposition.
**6. Conclusion**
This paper presents SCIBENCH, a college-level benchmark
that includes scientific problems from Mathematics, Physics,
and Chemistry, as well as exam questions in Computer Science and Mathematics. Our comprehensive evaluation includes a diverse array of Large Language Models (LLMs),
spanning both open-source and proprietary models, including unimodal as well as multimodal settings, and employing
a variety of prompting strategies. The evaluation protocol
we employ serves as a framework for evaluating advanced
problem-solving skills of LLMs in scientific domains. The
findings of this study highlight that while large language
models (LLMs) exhibit impressive performance on introductory mathematical benchmarks, their mastery of problem
solving ability remains weak. These findings underscore
the limitations of current LLMs in achieving satisfactory
performance, even with the assistance of various tools. We
envision that the SCIBENCH benchmark dataset and evaluation protocol presented in this paper could lay a foundation
for future research and enable advancements in understanding and enhancing problem-solving capabilities of LLMs.
**Reproducibility Statement**
To foster reproducible research, we include all dataset processing and experiment details of SCIBENCH. We detail
data processing in Section 3 and provide the UI design of
data collection in Appendix A.3. We include all experiment
details with LLM prompts in Appendix C. Finally, we make
[our dataset and code publicly available at this repository.](https://github.com/mandyyyyii/scibench)
**Ethical Statement**
The questions of SCIBENCH are sourced from science textbooks and exams. We conduct a manual examination of
our dataset to ensure the absence of potential sensitive background or ethical concerns. The inclusion of exam questions has been authorized by the instructors of the respective
courses.
The purpose of the textbook dataset is solely for academic
use. Its collection adheres to the Fair Use Law in the US,
where only a certain number of questions from each textbook are selected, ensuring that only a small portion of the
textbook is utilized.
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
Boyce, W. E., DiPrima, R. C., and Meade, D. B. Elementary
_differential equations and boundary value problems. John Wiley_
& Sons, 2021. 4, 12
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell,
A., et al. Language models are few-shot learners. Advances in
_neural information processing systems, 33:1877–1901, 2020. 1_
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E.,
Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., et al. Sparks
of artificial general intelligence: Early experiments with gpt-4.
_arXiv preprint arXiv:2303.12712, 2023. 4_
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G.,
et al. Evaluating large language models trained on code. arXiv
_preprint arXiv:2107.03374, 2021. 1_
Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program of
thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine
_Learning Research (TMLR), 2023a. 1, 5, 7_
Chen, W., Yin, M., Ku, M., Lu, P., Wan, E., Ma, X., Xu, J.,
Xia, T., and Wang, X. Theoremqa: A theorem-driven question
answering dataset. arXiv preprint arXiv:2305.12524, 2023b. 3,
5
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser,
L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. _arXiv preprint_
_arXiv:2110.14168, 2021. 1, 3, 4_
Dong, X., Zhang, P., Zang, Y., Cao, Y., Wang, B., Ouyang, L., Wei,
X., Zhang, S., Duan, H., Cao, M., Zhang, W., Li, Y., Yan, H.,
Gao, Y., Zhang, X., Li, W., Li, J., Chen, K., He, C., Zhang, X.,
Qiao, Y., Lin, D., and Wang, J. Internlm-xcomposer2: Mastering free-form text-image composition and comprehension in
vision-language large model. arXiv preprint arXiv:2401.16420,
2024. 7
Engel, T. and Reid, P. J. Thermodynamics, statistical thermody_namics, and kinetics. Prentice Hall Upper saddle River, 2010._
4, 12
Frieder, S., Pinchetti, L., Griffiths, R.-R., Salvatori, T.,
Lukasiewicz, T., Petersen, P. C., Chevalier, A., and Berner,
J. Mathematical capabilities of chatgpt. _arXiv preprint_
_arXiv:2301.13867, 2023. 4_
Fu, Y., Ou, L., Chen, M., Wan, Y., Peng, H., and Khot, T.
Chain-of-thought hub: A continuous effort to measure large
language models’ reasoning performance. _arXiv preprint_
_arXiv:2305.17306, 2023. 3_
Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan,
J., and Neubig, G. PAL: Program-aided language models. arXiv
_preprint arXiv:2211.10435, 2022. 1, 5_
Gao, P., Han, J., Zhang, R., Lin, Z., Geng, S., Zhou, A., Zhang,
W., Lu, P., He, C., Yue, X., Li, H., and Qiao, Y. Llama-adapter
v2: Parameter-efficient visual instruction model. arXiv preprint
_arXiv:2304.15010, 2023. 1_
Ghazal, A., Rabl, T., Hu, M., Raab, F., Poess, M., Crolotte, A.,
and Jacobsen, H.-A. Bigbench: Towards an industry standard
benchmark for big data analytics. In Proceedings of the 2013
_ACM SIGMOD international conference on Management of_
_data, pp. 1197–1208, 2013. 3_
[Google. Bard. https://bard.google.com, 2023. 7](https://bard.google.com)
Guo, T., Guo, K., Liang, Z., Guo, Z., Chawla, N. V., Wiest, O.,
Zhang, X., et al. What indeed can gpt models do in chemistry?
a comprehensive benchmark on eight tasks. arXiv preprint
_arXiv:2305.18365, 2023. 3_
Halliday, D., Resnick, R., and Walker, J. Fundamentals of physics.
John Wiley & Sons, 2013. 4, 12
Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song,
D., and Steinhardt, J. Measuring massive multitask language
understanding. arXiv preprint arXiv:2009.03300, 2020. 1, 3
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S.,
Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874, 2021. 1, 3, 4_
Hogg, R. V., Tanis, E. A., and Zimmerman, D. L. Probability and
_statistical inference, volume 993. Macmillan New York, 1977._
4, 13
Huang, J., Gu, S. S., Hou, L., Wu, Y., Wang, X., Yu, H., and Han,
J. Large language models can self-improve. arXiv preprint
_arXiv:2210.11610, 2022. 1_
[JaidedAI. Easyocr: Ready-to-use ocr. https://github.com/J](https://github.com/JaidedAI/EasyOCR)
```
aidedAI/EasyOCR, 2022. 7
```
Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot,
D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G.,
Saulnier, L., et al. Mistral 7b. arXiv preprint arXiv:2310.06825,
2023. 5
Kabir, S., Udo-Imeh, D. N., Kou, B., and Zhang, T. Who answers it better? an in-depth analysis of chatgpt and stack overflow answers to software engineering questions. arXiv preprint
_arXiv:2308.02312, 2023. 4_
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y.
Large language models are zero-shot reasoners. arXiv preprint
_arXiv:2205.11916, 2022. 1_
Levine, I. N., Busch, D. H., and Shull, H. Quantum chemistry,
volume 6. Pearson Prentice Hall Upper Saddle River, NJ, 2009.
4, 12
Lin, Z., Liu, C., Zhang, R., Gao, P., Qiu, L., Xiao, H., Qiu, H.,
Lin, C., Shao, W., Chen, K., et al. Sphinx: The joint mixing
of weights, tasks, and visual embeddings for multi-modal large
language models. arXiv preprint arXiv:2311.07575, 2023. 7
Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. In
_NeurIPS, 2023a. 7_
Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning.
_arXiv preprint arXiv:2304.08485, 2023b. 1_
Liu, H., Ning, R., Teng, Z., Liu, J., Zhou, Q., and Zhang, Y.
Evaluating the logical reasoning ability of chatgpt and gpt-4.
_arXiv preprint arXiv:2304.03439, 2023c. 4_
10
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
Lu, P., Gong, R., Jiang, S., Qiu, L., Huang, S., Liang, X., and
Zhu, S.-C. Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. In The Joint
_Conference of the 59th Annual Meeting of the Association for_
_Computational Linguistics and the 11th International Joint Con-_
_ference on Natural Language Processing (ACL-IJCNLP 2021),_
2021a. 5
Lu, P., Qiu, L., Chen, J., Xia, T., Zhao, Y., Zhang, W., Yu, Z.,
Liang, X., and Zhu, S.-C. Iconqa: A new benchmark for abstract
diagram understanding and visual language reasoning. arXiv
_preprint arXiv:2110.13214, 2021b. 3_
Lu, P., Mishra, S., Xia, T., Qiu, L., Chang, K.-W., Zhu, S.-C.,
Tafjord, O., Clark, P., and Kalyan, A. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems,
35:2507–2521, 2022. 1, 3, 5
Lu, P., Bansal, H., Xia, T., Liu, J., Li, C., Hajishirzi, H., Cheng, H.,
Chang, K.-W., Galley, M., and Gao, J. Mathvista: Evaluating
mathematical reasoning of foundation models in visual contexts.
_arXiv preprint arXiv:2310.02255, 2023a. 3_
Lu, P., Peng, B., Cheng, H., Galley, M., Chang, K.-W., Wu, Y. N.,
Zhu, S.-C., and Gao, J. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint
_arXiv:2304.09842, 2023b. 1_
Lu, P., Qiu, L., Chang, K.-W., Wu, Y. N., Zhu, S.-C., Rajpurohit,
T., Clark, P., and Kalyan, A. Dynamic prompt learning via
policy gradient for semi-structured mathematical reasoning. In
_International Conference on Learning Representations (ICLR),_
2023c. 3
Lu, P., Qiu, L., Yu, W., Welleck, S., and Chang, K.-W. A survey of
deep learning for mathematical reasoning. In The 61st Annual
_Meeting of the Association for Computational Linguistics (ACL),_
2023d. 3
McQuarrie, D. A. Quantum chemistry. University Science Books,
2008. 4, 12
Mishra, S., Finlayson, M., Lu, P., Tang, L., Welleck, S., Baral,
C., Rajpurohit, T., Tafjord, O., Sabharwal, A., Clark, P., et al.
Lila: A unified benchmark for mathematical reasoning. In The
_2022 Conference on Empirical Methods in Natural Language_
_Processing (EMNLP), 2022. 3_
OpenAI. Chatgpt: Optimizing language models for dialogue.
```
https://openai.com/blog/chatgpt/., 2022. 1, 5
```
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774,
2023. 1, 5, 7
Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M.,
Zettlemoyer, L., Cancedda, N., and Scialom, T. Toolformer:
Language models can teach themselves to use tools. arXiv
_preprint arXiv:2302.04761, 2023. 1_
Stewart, J., Watson, S., and Clegg, D. Calculus: Early transcendentals, 8th. Edition, Brooks/Cole, Cengae learning, 2012. 4,
13
Sun, L., Han, Y., Zhao, Z., Ma, D., Shen, Z., Chen, B., Chen,
L., and Yu, K. Scieval: A multi-level large language model
evaluation benchmark for scientific research. arXiv preprint
_arXiv:2308.13149, 2023. 3_
Suzgun, M., Scales, N., Schärli, N., Gehrmann, S., Tay, Y., Chung,
H. W., Chowdhery, A., Le, Q. V., Chi, E. H., Zhou, D., et al.
Challenging big-bench tasks and whether chain-of-thought can
solve them. arXiv preprint arXiv:2210.09261, 2022. 3
Taylor, R., Kardas, M., Cucurull, G., Scialom, T., Hartshorn, A.,
Saravia, E., Poulton, A., Kerkez, V., and Stojnic, R. Galactica: A large language model for science. _arXiv preprint_
_arXiv:2211.09085, 2022. 3_
Thornton, S. T. and Marion, J. B. Classical dynamics of particles
_and systems. Cengage Learning, 2021. 4, 12_
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A.,
Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al.
LLaMA: Open and efficient foundation language models. arXiv
_preprint arXiv:2302.13971, 2023a. 1_
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A.,
Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S.,
et al. Llama 2: Open foundation and fine-tuned chat models.
_arXiv preprint arXiv:2307.09288, 2023b. 5_
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., and Zhou,
D. Self-consistency improves chain of thought reasoning in
language models. arXiv preprint arXiv:2203.11171, 2022. 1
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q.,
and Zhou, D. Chain of thought prompting elicits reasoning in
large language models. arXiv preprint arXiv:2201.11903, 2022.
1, 7, 8
Welleck, S., Liu, J., Bras, R. L., Hajishirzi, H., Choi, Y., and Cho,
K. Naturalproofs: Mathematical theorem proving in natural
language. arXiv preprint arXiv:2104.01112, 2021. 3
Zhang, D., Hu, Z., Zhoubian, S., Du, Z., Yang, K., Wang, Z., Yue,
Y., Dong, Y., and Tang, J. Sciglm: Training scientific language
models with self-reflective instruction annotation and tuning.
_arXiv preprint arXiv:2401.07950, 2024. 4_
Zhang, R., Han, J., Zhou, A., Hu, X., Yan, S., Lu, P., Li, H.,
Gao, P., and Qiao, Y. Llama-adapter: Efficient fine-tuning
of language models with zero-init attention. arXiv preprint
_arXiv:2303.16199, 2023a. 1_
Zhang, Z., Zhang, A., Li, M., Zhao, H., Karypis, G., and Smola,
A. Multimodal chain-of-thought reasoning in language models.
_arXiv preprint arXiv:2302.00923, 2023b. 1_
Zhong, W., Cui, R., Guo, Y., Liang, Y., Lu, S., Wang, Y., Saied,
A., Chen, W., and Duan, N. Agieval: A human-centric
benchmark for evaluating foundation models. arXiv preprint
_arXiv:2304.06364, 2023. 1, 3, 7_
Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X.,
Schuurmans, D., Bousquet, O., Le, Q., and Chi, E. Least-tomost prompting enables complex reasoning in large language
models. arXiv preprint arXiv:2205.10625, 2022. 1
11
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
# Supplementary Material for SCIBENCH
**A The Textbook Dataset** **12**
A.1 Textbook Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
A.2 Textbook Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
A.3 UI Design of the Labeling Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
**B** **The Exam Dataset** **14**
**C Experimental Details** **18**
C.1 Prompts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
C.2 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
C.3 Additional Experiment on Wolfram Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
**D Problem Solving Abilities of Current LLMs** **21**
D.1 Assessment of the Evaluation Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
D.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
**A. The Textbook Dataset**
**A.1. Textbook Sources**
- PHYSICAL CHEMISTRY (ATKINS ET AL., 2014A) (atkins) provides an exploration of equilibrium, structure, and
reactions, integrating contemporary techniques like nanoscience, spectroscopy, and computational chemistry.
- QUANTUM CHEMISTRY (MCQUARRIE, 2008) (chemmc) meticulously covers Quantum Mechanics, from foundational
principles like blackbody radiation and Heisenberg’s Uncertainty Principle to complex topics such as Schrödinger equation,
quantum mechanical operators, and the application of quantum mechanics in chemical bonding.
- QUANTUM CHEMISTRY (LEVINE ET AL., 2009) (quan) explores quantum chemistry, providing a detailed understanding
of the Schrödinger equation, particle behavior in various scenarios, quantum mechanics operators, and other foundational
quantum principles. It delves into specific applications like the electronic structure of diatomic and polyatomic molecules,
variation methods, perturbation theory, electron spin and its implications in quantum mechanics, as well as various
computational methods for molecular quantum mechanics.
- PHYSICAL CHEMISTRY, QUANTA, MATTER, AND CHANGE (ATKINS ET AL., 2014B) (matter) combines physics
and mathematics, beginning with basics like differentiation and integration, advancing through quantum mechanics and
atomic structure, then exploring thermodynamics, molecular motion, and chemical kinetics. Each section is supplemented
with mathematical concepts such as differential equations, vectors, and probability theory.
- CLASSICAL DYNAMICS OF PARTICAL AND SYSTEMS (THORNTON & MARION, 2021) (class) initiates with an exploration of fundamental mathematical concepts, discussing scalars, vectors, matrix operations, coordinate transformations,
differentiation, and integration of vectors, using these constructs to illustrate concepts like velocity, acceleration, and
angular velocity. It then transitions into the realm of Newtonian mechanics, detailing Newton’s laws, frames of reference,
and the equation of motion for a single particle.
- THERMODYNAMICS, STATISTICAL THERMODYNAMICS, AND KINETICS (ENGEL & REID, 2010) (thermo) navigates
through thermodynamics’ principles, from fundamental concepts to complex laws, further discussing real and ideal gases,
solutions, electrochemical cells, and statistical thermodynamics. It concludes with an examination of the kinetic theory of
gases, transport phenomena, and chemical kinetics.
- FUNDAMENTALS OF PHYSICS (HALLIDAY ET AL., 2013) (fund) covers undergraduate physics topics, ranging from
fundamental concepts like motion and energy to more advanced areas such as quantum physics and nuclear physics.
- ELEMENTARY DIFFERENTIAL EQUATIONS AND BOUNDARY VALUE PROBLEMS (BOYCE ET AL., 2021) (diff)
provides a detailed exploration of differential equations, progressing from basic mathematical models to advanced topics
12
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
like the Laplace Transform, linear systems, numerical methods, and Fourier series. It culminates with a deep dive into
nonlinear equations, partial differential equations, and boundary value problems.
- PROBABILITY AND STATISTICAL INFERENCE (HOGG ET AL., 1977) (stat) covers probability and statistics, including
fundamental concepts, discrete and continuous distributions, bivariate distributions, functions of random variables, and
estimation techniques.
- CALCULUS: EARLY TRANSCENDENTALS (STEWART ET AL., 2012) (calculus) begins with diagnostic tests in
foundational topics, and explores functions from multiple perspectives. It comprehensively covers calculus concepts from
limits to three-dimensional analytic geometry, incorporating applications in various fields.
**A.2. Textbook Examples**
The textbook examples are provided in Figure S1. The examples from the multimodal subset are provided in Figures S2
to S5.
**Problem (fund)**
Two charged particles are fixed to an x axis: Particle 1 of charge q1 = 2.1 × 10[−][8]C is at position x = 20 cm and particle 2 of charge q2 = −4.00q1 is at position
_x = 70 cm. At what coordinate on the axis (other than at infinity) is the net electric field produced by the two particles equal to zero?_
**Answer: −30 cm**
**Problem (thermo)**
N2O3 dissociates according to the equilibrium N2O3( g) ⇌ NO2( g) + NO(g). At 298 K and one bar pressure, the degree of dissociation defined as the ratio of
moles of NO2(g) or NO(g) to the moles of the reactant assuming no dissociation occurs is 3.5 × 10[−][3]. Calculate ∆G[◦]R [for this reaction.]
**Answer: 28 kJ mol[−][1]**
**Problem (class)**
Halley’s comet, which passed around the sun early in 1986, moves in a highly elliptical orbit with an eccentricity of 0.967 and a period of 76 years. Calculate its minimum
distances from the Sun.
**Answer: 8.8 ×10[10]m**
**Problem (quan)**
A one-particle, one-dimensional system has Ψ = a[−][1][/][2]e[−|][x][|][/a] at t = 0, where a = 1.0000 nm. At t = 0, the particle’s position is measured. Find the probability
that the measured value is between x = 0 and x = 2 nm.
**Answer: 0.4908**
**Problem (chemmc)**
One of the most powerful modern techniques for studying structure is neutron diffraction. This technique involves generating a collimated beam of neutrons at a
particular temperature from a high-energy neutron source and is accomplished at several accelerator facilities around the world. If the speed of a neutron is given by
_vn = (3kBT/m)[1][/][2], where m is the mass of a neutron, then what temperature is needed so that the neutrons have a de Broglie wavelength of 50pm ?_
**Answer: 2500 K**
**Problem (atkins)**
The change in molar internal energy when CaCO3( s) as calcite converts to another form, aragonite, is +0.21 kJ mol[−][1]. Calculate the difference between the molar
enthalpy and internal energy changes when the pressure is 1.0 bar given that the densities of the polymorphs are 2.71 g cm[−][3] and 2.93 g cm[−][3], respectively.
**Answer: -0.28 Pa m[3]** mol[−][1]
**Problem (matter)**
In an industrial process, nitrogen is heated to 500 K at a constant volume of 1.000 m[3]. The gas enters the container at 300 K and 100 atm. The mass of the gas is 92.4 kg.
Use the van der Waals equation to determine the approximate pressure of the gas at its working temperature of 500 K. For nitrogen, a = 1.39dm[6] atm mol[−][2], b =
0.0391dm[3] mol[−][1].
**Answer: 140 atm**
**Problem (calc)**
A planning engineer for a new alum plant must present some estimates to his company regarding the capacity of a silo designed to contain bauxite ore until it is processed into
alum. The ore resembles pink talcum powder and is poured from a conveyor at the top of the silo. The silo is a cylinder 100ft high with a radius of 200ft. The conveyor
carries ore at a rate of 60, 000π ft[3]/h and the ore maintains a conical shape whose radius is 1.5 times its height. If, at a certain time t, the pile is 60ft high, how long will it
take for the pile to reach the top of the silo?
**Answer: 9.8 h**
**Problem (stat)**
In a study concerning a new treatment of a certain disease, two groups of 25 participants in each were followed for five years. Those in one group took the old treatment and
those in the other took the new treatment. The theoretical dropout rate for an individual was 50% in both groups over that 5 -year period. Let X be the number that dropped
out in the first group and Y the number in the second group. Assuming independence where needed, give the sum that equals the probability that Y ≥ _X + 2. HINT: What_
is the distribution of Y − _X + 25 ?_
**Answer: 0.3359**
**Problem (diff)**
Newton’s law of cooling states that the temperature of an object changes at a rate proportional to the difference between its temperature and that of its surroundings. Suppose
that the temperature of a cup of coffee obeys Newton’s law of cooling. If the coffee has a temperature of 200[◦]F when freshly poured, and 1 min later has cooled to 190[◦]F
in a room at 70[◦]F, determine when the coffee reaches a temperature of 150[◦]F
**Answer: 6.07 min**
_Figure S1. Textbook examples with acronym highlighted in brown._
13
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
The region R enclosed by the curves y = x and y = x[2] is rotated about the x-axis. Find the volume of the resulting solid.
**Image**
**Correct Solution**
The curves y = x and y = x[2] intersect at the points (0, 0) and (1, 1). The region between them, the solid of rotation, and a cross-section perpendicular to the x-axis are
shown in the Figure. A cross-section in the plane Px has the shape of a washer (an annular ring) with inner radius x[2] and outer radius x, so we find the cross-sectional area
by subtracting the area of the inner circle from the area of the outer circle:
_A(x) = πx[2]_ _π_ _x[2][][2]_ = π _x[2]_ _x[4][]_
_−_ _−_
Therefore we have
1 1
_V =_ 0 _A(x)dx =_ 0 _π_ _x[2]_ _−_ _x[4][]_ _dx_
Z 1Z
_x[3]_
= π = [2][π]
" 3 _[−]_ _[x]5[5]_ #0 15
**Final Answer:** [2]15[π]
_Figure S2. The example from the textbook Calculus: Early Transcendentals._
**A.3. UI Design of the Labeling Tool**
We employed a team of seven individuals to gather data from textbooks using an annotation tool. Each individual was
responsible for one to two books, encompassing approximately 100 examples. The user interface of the annotation tool is
depicted in Figure S6. For subsequent verification, we preserved images of problems and their corresponding answers. To
ensure clarity in future references, we have maintained the original sequence of problems as they appear in the textbooks.
**B. The Exam Dataset**
The exam dataset is drawn from the following sources:
- INTRODUCTION TO DATA MINING provides an introductory survey of data mining, which involves the automatic
discovery of patterns, associations, changes, and anomalies in large databases. It explores various application areas of data
mining, including bioinformatics, e-commerce, environmental studies, financial markets, multimedia data processing,
network monitoring, and social service analysis.
- FUNDAMENTALS ARTIFICIAL INTELLIGENCE provides an introduction to the core problem-solving and knowledge
representation paradigms in artificial intelligence. It covers Lisp programming with regular assignments, as well as topics
such as search methods, planning techniques, knowledge structures, natural language processing, expert systems, vision,
and parallel architectures.
- DIFFERENTIAL EQUATIONS covers various topics in differential equations, including first-order and second-order linear
equations with constant coefficients, power series solutions, and linear systems. Students will explore the principles and
applications of these mathematical concepts.
A detailed statistics of the exam dataset is summarized in Table S1. The experiment results of exam dataset are provided in
Table S2.
14
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
A 2.00 kg particle moves along an x axis in one-dimensional motion while a conservative force along that axis acts on it. The potential energy U (x) associated with the
force is plotted in the Figure. That is, if the particle were placed at any position between x = 0 and x = 7.00 m, it would have the plotted value of U . At x = 6.5 m, the
particle has velocity ⃗v0 = (−4.00 m/s)[ˆ]i. From the Figure, determine the particle’s speed at x1 = 4.5 m.
**Image**
**Correct Solution**
The particle’s kinetic energy is given by Eq _K =_ [1]2 _[mv][2][]. Because only a conservative force acts on the particle, the mechanical energy Emec(= K + U_ ) is conserved
as the particle moves. Therefore, on a plot of U (x), the kinetic energy is equal to the difference between Emec and U .
Calculations: At x = 6.5 m, the particle has kinetic energy
_K0 = [1]_ 0 [= 1] (S1)
2 _[mv][2]_ 2 [(2][.][00 kg)(4][.][00 m][/][s)][2]
= 16.0 J. (S2)
Because the potential energy there is U = 0, the mechanical energy is Emec = K0 + U0 = 16.0 J + 0 = 16.0 J. This value for Emec is plotted as a horizontal line in
the Figure. From that figure we see that at x = 4.5 m, the potential energy is U1 = 7.0 J. The kinetic energy K1 is the difference between Emec and U1 :
_K1 = Emec −_ _U1 = 16.0 J −_ 7.0 J = 9.0 J. (S3)
Because K1 = [1]2 _[mv]1[2][, we find][ v][1][ = 3][.][0 m][/][s][.]_
**Final Answer: 3.0 m/s**
_Figure S3. An example problem from the textbook Fundamentals of Physics._
_Table S1. Statistics of the close exam dataset. We report the number of problem instances in each exam and the ratio of problems in the_
exam that include detailed solutions. We further report the ratio of problems in different formats, including free-response, multiple-choice,
and true-false. For reference, the number in parentheses denotes the grading points assigned to the problems.
_Data Mining_ _Machine Learning_ _Differential Equations_
Midterm Final Midterm Final Exam 1 Exam 2 Final
# Problems 25 (90) 24 (75) 12 (56) 16 (75) 8 (100) 8 (100) 11 (95)
% Solutions 56.0% (58) 16.7% (19) 100.0% (56) 31.2% (26) 100.0% (100) 100.0% (100) 90.9% (90)
% Free-response 40.0% (46) 33.3% (29) 66.7% (38) 81.3% (62) 100.0% (100) 100.0% (100) 90.9% (90)
% Multiple-choice 28.0% (28) 29.2% (28) 33.3% (18) 18.7% (13) 0.0% (0) 0.0% (0) 9.1% (5)
% True-false 32.0% (16) 37.5% (18) 0.0% (0) 0.0% (0) 0.0% (0) 0.0% (0) 0.0% (0)
_Table S2. Experimental results in terms of total scores under zero-shot learning on the exam dataset. The best performing score is_
highlighted in bold and second-best is underlined.
_Data Mining_ _Machine Learning_ _Differential Equations_
Model Setting
Midterm Final Midterm Final Exam 1 Exam 2 Final
Zero 24 / 90 14 / 75 6 / 56 6/ 75 5 / 100 0 / 100 0 / 95
LLaMA-2-7B
Zero+CoT 18 / 90 14 / 75 2 / 56 10 / 75 10 / 100 0 / 100 10 / 95
Zero 37 / 90 26 / 75 28 / 56 35 / 75 35 / 100 30 / 100 20 / 95
Zero 23 / 90 18 / 75 18 / 56 12 / 75 20 / 100 5 / 100 0 / 95
LLaMA-2-70B
Zero+CoT 31 / 90 18 / 75 10 / 56 11/ 75 35 / 100 10 / 100 0 / 95
Claude2
Zero+CoT 33 / 90 38 / 75 22 / 56 **41 / 75** 25 / 100 15 / 100 20 / 95
Zero 56 / 90 **44 / 75** 30 / 56 37 / 75 25 / 100 **80 / 100** **25 / 95**
Zero 44 / 90 39 / 75 16 / 56 32 / 75 0 / 100 45 / 100 15 / 95
GPT-3.5
Zero+CoT 38 / 90 33 / 75 32 / 56 37 / 75 28 / 100 30 / 100 10 / 95
GPT-4
Zero+CoT **58 / 90** 32 / 75 **40 / 56** 35 / 75 **50 / 100** 70 / 100 15 / 95
15
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
If the particles in a system all move together, the com moves with them-no trouble there. But what happens when they move in different directions with
different accelerations? Here is an example. The three particles in the Figure are initially at rest. Each experiences an external force due to bodies outside the threeparticle system. The directions are indicated, and the magnitudes are F1 = 6.0 N, F2 = 12 N, and F3 = 14 N. What is the acceleration of the center of mass of the system?
**Image**
**Correct Solution**
The position of the center of mass is marked by a dot in the figure. We can treat the center of mass as if it were a real particle, with a mass equal to the system’s total mass
_M = 16 kg. We can also treat the three external forces as if they act at the center of mass (Figure b). We can now apply Newton’s second law_ _F⃗net = m⃗a_ to the center
of mass, writing
_F⃗net = M⃗acom,_ (S4)
_F⃗1 +_ _F[⃗]2 +_ _F[⃗]3 = M⃗acom,_ (S5)
_F⃗1 +_ _F[⃗]2 +_ _F[⃗]3_
_⃗acom =_ _._ (S6)
_M_
The equation tells us that the acceleration ⃗acom of the center of mass is in the same direction as the net external force _F[⃗]net on the system (Figure b). Because the particles_
are initially at rest, the center of mass must also be at rest. As the center of mass then begins to accelerate, it must move off in the common direction of ⃗acom and _F[⃗]net. We_
can evaluate the right side of Eq. S6 directly on a vector-capable calculator, or we can rewrite Eq. S6 in component form, find the components of ⃗acom, and then find ⃗acom .
Along the x axis, we have
_acom,x =_ _[F][1][x][ +][ F][2][x][ +][ F][3][x]_ = = 1.03 m/s[2]. (S7)
_M_ _[−][6][.][0 N + (12 N) cos 45]16 kg_ _[◦]_ [+ 14 N]
Along the y axis, we have
_acom,y =_ _[F][1][y][ +][ F][2][y][ +][ F][3][y]_ = [0 + (12 N) sin 45][◦] [+ 0] = 0.530 m/s[2]. (S8)
_M_ 16 kg
From these components, we find that ⃗acom has the magnitude
_acom =_ (acom,x)[2] + (acom,y)[2] = 1.16 m/s[2]. (S9)
q
**Final Answer: 1.16 m/s[2]**
_Figure S4. The example from the textbook Fundamentals of Physics._
16
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
At time t = 0 a tank contains Q0lb of salt dissolved in 100 gal of water; see Figure 2.3.1. Assume that water containing [1]4 [lb][ of salt/gal is entering the tank at a rate of]
_rgal/min and that the well-stirred mixture is draining from the tank at the same rate. Set up the initial value problem that describes this flow process. By finding the amount_
of salt Q(t) in the tank at any time, and the limiting amount QL that is present after a very long time, if r = 3 and Q0 = 2QL, find the time T after which the salt level is
within 2% of QL.
**Image**
**Correct Solution**
We assume that salt is neither created nor destroyed in the tank. Therefore variations in the amount of salt are due solely to the flows in and out of the tank. More precisely,
the rate of change of salt in the tank, dQ/dt, is equal to the rate at which salt is flowing in minus the rate at which it is flowing out. In symbols,
_dQ_
_dt_ [=][ rate in][ −] [rate out]
The rate at which salt enters the tank is the concentration [1]4 [lb][/][gal][ times the flow rate][ r][gal][/][min][, or][ (][r/][4)lb][/][min][. To find the rate at which salt leaves the tankl we need]
to multiply the concentration of salt in the tank by the rate of outflow, rgal/min. Since the rates of flow in and out are equal, the volume of water in the tank remains
constant at 100gal, and since the mixture is "well-stirred," the concentration throughout the tank is the same, namely, [Q(t)/100]lb/gal.Therefore the rate at which salt
leaves the tank is [rQ(t)/100]lb/min. Thus the differential equation governing this process is
_dQ_
_dt_ [=][ r]4 100
_[−]_ _[rQ]_
The initial condition is
_Q(0) = Q0_
Upon thinking about the problem physically, we might anticipate that eventually the mixture originally in the tank will be essentially replaced by the mixture flowing in,
whose concentration is [1]4 [lb][/][gal][. Consequently, we might expect that ultimately the amount of salt in the tank would be very close to][ 25lb][. We can also find the limiting]
amount QL = 25 by setting dQ/dt equal to zero and solving the resulting algebraic equation for Q. Rewriting it in the standard form for a linear equation, we have
_dQ_
_dt_ [+][ rQ]100 [=][ r]4
Thus the integrating factor is e[rt/][100] and the general solution is
_Q(t) = 25 + ce[−][rt/][100]_
where c is an arbitrary constant. To satisfy the initial condition, we must choose c = Q0 − 25. Therefore the solution of the initial value problem is
_Q(t) = 25 + (Q0 −_ 25)e[−][rt/][100]
_Q(t) = 25(1 −_ _e[−][rt/][100]) + Q0e[−][rt/][100]_
From above Equations, you can see that Q(t) → 25 (lb) as t →∞, so the limiting value QL is 25, confirming our physical intuition. Further, Q(t) approaches the limit
more rapidly as r increases. In interpreting the solution, note that the second term on the right side is the portion of the original salt that remains at time t, while the first term
gives the amount of salt in the tank due to the action of the flow processes. Now suppose that r = 3 and Q0 = 2QL = 50; then
_Q(t) = 25 + 25e[−][0][.][03][t]_
Since 2% of 25 is 0.5, we wish to find the time T at which Q(t) has the value 25.5. Substituting t = T and Q = 25.5 and solving for T, we obtain
_T = (ln 50)/0.03_ _[∼]= 130.400766848( min)._
**Final Answer: (ln 50)/0.03**
_Figure S5. The example from the textbook Elementary Differential Equations and Boundary Value Problems._
17
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
_Figure S6. The UI design of data annotation._
**C. Experimental Details**
**C.1. Prompts**
The APIs of ChatGPT and GPT-4 have three message parameters: SYSTEM, USER, and ASSISTANT. The SYSTEM parameter
represents the system prompt, which provides context and instructions to the model. The USER parameter is the training
prompt or input provided by the user, and the ASSISTANT parameter contains the output of the model or the response. All
system prompts and training prompts used in our experiments are provided below.
**System Prompt for Zero-Shot, Few-Shot, and Chain-of-Thought settings.**
Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry, Physics,
or Mathematics. The problem will specify the unit of measurement, which should not be included in the answer.
Express the final answer as a decimal number with three digits after the decimal point. Conclude the answer by
stating "The answer is therefore \boxed{[ANSWER]}."
**System Prompt for Few-Shot Learning + Python.**
Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry, Physics, or
Mathematics. The problem will specify the unit of measurement. Please translate the solution steps into Python
code and encase the Python code within triple backticks for clarity.
**System Prompt for Few-Show Learning + Wolfram Language.**
Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry, Physics, or
Mathematics. The problem will specify the unit of measurement. Please translate the solution steps into Wolfram
code and encase the Wolfram Language code within triple backticks for clarity.
18
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**System Prompt for Evaluation Protocol.**
Examine the given problem, the correct solution, and the model’s solution. Identify the reason for the error in the
model’s solution based on the following 10 categories:
1. Logical Decomposition and Analysis Skills: This ability involves decomposing the problem into smaller,
manageable parts, and understanding the relationships between these parts.
2. Identification of Assumptions: This skill involves the AI’s ability to recognize relevant and necessary assumptions
in the problem.
3. Spatial Perception: This is important for understanding problems in areas such as physics and chemistry, where
you need to visualize molecules, forces, fields, etc.
4. Causal Reasoning: This is the ability to understand cause and effect relationships.
5. Problem Deduction Skills: This pertains to the ability to infer and deduce potential solutions or underlying
principles from the given information in a problem.
6. Abstract Reasoning: This skill involves the ability to understand complex concepts that can’t be perceived
physically, and to recognize patterns or relationships beyond concrete examples.
7. Scientific Literacy: This skill involves a comprehensive understanding of key scientific principles, terminology,
and methodologies across a range of disciplines.
8. Code Conversion Skills: This denotes the ability to accurately translate solution steps into different programming
languages, like Python or Wolfram, without syntax errors.
9. Logical Reasoning: This is the ability to make a reasoned argument and to identify fallacies or inconsistencies in
an argument or set of data.
10. Calculation Skills: This involves the ability to accurately carry out mathematical operations and computations.
Conclude your final error reason category number within \boxed{}.
**Training Prompt for Zero-Shot Chain-of-Thought.**
_Stage 1:_
Input: [Input-Question] Let’s think step by step.
Output: <explanation>
_Stage 2:_
Input: [Input-Question] Let’s think step by step. [Explanation]. Therefore, the answer is:
Output: <answer>
**Training Prompt for Few-Shot Chain-of-Thought.**
Input:
Problem 1: [Question 1] Explanation for Problem 1: [Explanation 1]. The answer is \boxed{[Answer 1]}.
Problem 2: [Question 2] Explanation for Problem 2: [Explanation 2]. The answer is \boxed{[Answer 2]}.
...
Problem n: [Question n] Explanation for Problem n: [Explanation n]. The answer is \boxed{[Answer n]}.
Problem n+1: [Question n+1]
Output: Explanation for Problem n+1: <explanation>. The answer is \boxed{<answer>}.
19
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Training Prompt for Few-Shot Python or Wolfram Language.**
Input:
Problem 1: [Question 1] Explanation for Problem 1: [Explanation 1]. Python/Wolfram language for Problem 1:
```
```[Python/Wolfram code 1]```.
```
Problem 2: [Question 2] Explanation for Problem 2: [Explanation 2]. Python/Wolfram language for Problem 2:
```
```[Python/Wolfram code 2]```.
```
...
Problem n: [Question n] Explanation for Problem n: [Explanation n]. Python/Wolfram language for Problem n:
```
```[Python/Wolfram code n]```.
```
Problem n+1: [Question n+1]
Output: Explanation for Problem n+1: `<explanation>.` Python/Wolfram language for Problem n+1:
```
```[Python/Wolfram code n+1]```.
```
**Training Prompt for Evaluation Protocol.**
Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model solution is [ModelSolution].
Output: <Error Type>
**Training Prompt for Evaluation Protocol in Python or Wolfram Language.**
Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model solution is [ModelSolution]. The translated program generates the answer as [Program Generated Answer], which is treated as model’s
output answer.
Output: <Error Type>
**C.2. Implementation Details**
All model output is extracted using \boxed{} notation. To prevent any missed extractions, we supplement this process with
a manual check. For both Python and Wolfram settings, we extract the programming language with the triple backtick ```,
[subsequently executing it within the corresponding language. The entirety of our code can be accessed via this repository.](https://github.com/mandyyyyii/scibench)
**C.3. Additional Experiment on Wolfram Language**
The experiment results and error analysis for using Wolfram Language as external tools are presented in Figure S7 and
Figure S8, compared with using CoT and Python Language. We observe that the use of external tools can weaken other
LLaMA-2-7B LLaMA-2-70B Claude2 GPT-3.5
19.9
20 20 20
15.3 14.9
15 15 15
12.0
10 10 10
8.4 7.9
Average Score (%) 5.1
5 3.6 5 5 3.8
2.2
1.2
0.0
0 0 0
(a) CoT Prompting (b) Python (c) Wolfram Language
_Figure S7. Comparison between few-shot learning with external tools._
20
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
skills, particularly the code conversion skills. This issue becomes particularly prominent when using the Wolfram Language,
with 46.9% error rate in code conversion skill. Despite providing grammar specifications in system prompts and a few
examples as demonstrations, most attempts of code conversion result in syntax errors. In Wolfram Language, the error
mainly comes from the violation of variable rules (for instance, Wolfram Language reserves certain letters such as E as
protected symbols and disallows underscores in variable names) or incorrect usage of certain functions. This observation
suggests a potential improvement for LLM when using Wolfram Language.
**D. Problem Solving Abilities of Current LLMs**
**D.1. Assessment of the Evaluation Protocol**
In order to assess the effectiveness of our evaluation protocol’s classification, we enlisted the assistance of two annotators to
determine whether the errors identified by the model verifier were accurate or not. Through the annotation of 151 samples
across different settings, we observed that 123 of them were correctly classified, resulting in an accuracy rate of 81.45%.
Two human annotators participate in the process. Decisions on the final abilities are determined by annotators, aided by
assistants. By going through errors, these two annotators develop ten abilities and then employ a Language Learning
Model (LLM) as a third evaluator to suggest additional abilities. They then compare and refine their findings based on
this input. Ultimately, the final outcomes are determined by the annotators. After LLM annotate the error reasons, we
conduct human-check by sampling 151 examples across all settings to make sure the annotations make sense. We make
this human-AI cooperated analysis pipeline to reduce the cost of human post-analysis, while incorporate human checking
to make sure the correctness of LLM decision and try to reduce the risk that reviewer mentioned. Though not perfect, we
believe it can be another type of analyzing framework for future study of LLM problem-solving.
**D.2. Examples**
In the context of each specific capability, we present several exemplary errors accompanied by their corresponding
classifications and explanations derived from the GPT model. Referencing Figure S9, the ChatGPT solution employing the
Chain-of-Thought method corresponds to error category 4, "Causal Reasoning". The model explanation posits that "The
error reason category is 4. Causal Reasoning. The model solution uses the relativistic momentum formula to calculate
the momentum of the electron, which is not necessary for this problem since the electron is traveling at only 1.00% of the
speed of light. The relativistic momentum formula is only needed when the velocity of the particle is a significant fraction
of the speed of light. Therefore, using this formula in this problem leads to an incorrect value for the momentum and
consequently an incorrect value for the de Broglie wavelength." Meanwhile, the ChatGPT solution that doesn’t utilize the
Chain-of-Thought method is categorized under error case 10, "Calculation Skills". This example is further highlighted as an
illustrative showcase for explanations of the Chain-of-Thought method in Section 5. More examples resulting in different
error types are shown in Figures S10 to S19.
Calculation 14.5% 6.2% 6.1%
Logical Reasoning 4.8% 7.8% 8.2%
Code Conversion 1.6% 21.9% 46.9%
Scientific Literacy 6.5% 4.7% 4.1%
Abstract Reasoning 1.6% 1.6% 0.0%
Problem Deduction 9.7% 10.9% 7.1%
Causal Reasoning 19.4% 15.6% 10.2%
Spatial Perception 4.8% 0.0% 0.0%
Assumption Indentification 9.7% 6.2% 5.1%
Logical Decomposition 27.4% 25.0% 12.2%
0 10 20 0 10 20 0 10 20 30 40
(a) CoT Prompting (b) Python (c) Wolfram Language
_Figure S8. Error Analysis between few-shot learning with external tools._
21
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
Calculate the de Broglie wavelength of an electron traveling at 1.00% of the speed of light.
**Correct Solution**
The mass of an electron is 9.109 × 10[−][31] kg. One percent of the speed of light is
_v = (0.0100)_ 2.998 × 10[8] m · s[−][1][] = 2.998 × 10[6] m · s[−][1]
The momentum of the electron is given by
_p = mev =_ 9.109 × 10[−][31] kg 2.998 × 10[6] m · s[−][1][] = 2.73 × 10[−][24] kg · m · s[−][1]
The de Broglie wavelength of this electron is
_λ =_ _[h]_ 6.626 × 10[−][34] J · s
_p_ [=] 2.73 10[−][24] kg m s[−][1][ = 2][.][43][ ×][ 10][−][10][ m = 243pm]
_×_ _·_ _·_
This wavelength is of atomic dimensions.
**Final Answer: 243pm**
**Chatgpt Solution with Chain-of-Thought Method**
Let’s start by recalling the de Broglie wavelength formula:λ = _[h]p_ [where][ λ][ is the de Broglie wavelength,][ h][ is Planck’s constant, and][ p][ is the momentum of the particle. To]
calculate the momentum of the electron, we can use the relativistic momentum formula:
_p =_ _mev_
1 − _[v]c[2][2]_
q
where me is the mass of the electron, v is its velocity, and c is the speed of light. Plugging in the given values, we get:
_p = [(9][.][11][ ×]1 −[ 10](0[−].[31]01[kg)(0]c)[2]/c[.][2][01][c][)]_ = 8.19 × 10[−][23]kg · m/s
p
Now we can plug this momentum into the de Broglie wavelength formula:
_λ =_ _[h]_ 6.626 × 10[−][34]J · s
_p_ [=] 8.19 10[−][23]kg m/s [= 8][.][09pm]
_×_ _·_
Therefore, the de Broglie wavelength of the electron is 8.09 .
**Final Answer: 8.09pm**
**ChatGPT Solution without Chain-of-Thought Method**
The de Broglie wavelength of a particle is given by the equation:λ = _[h]p_ [where][ h][ is the Planck constant and][ p][ is the momentum of the particle. The momentum of an electron]
is given by:
_p = mv_
where m is the mass of the electron and v is its velocity. We are given that the electron is traveling at 1.00% of the speed of light, which is:v = 0.01cwhere c is the speed
of light. The mass of an electron is:m = 9.11 × 10[−][31], kgPutting it all together, we have:
_h_ _h_
_λ =_
_mv_ [=] _m(0.01c)_
Substituting the values of h, m, and c, we get:
_λ =_ 6.626 × 10[−][34], J · s
9.11 10[−][31], kg 0.01 2.998 10[8], m/s [=][ 3][.][31][ ×][ 10][−][12][m = 3][.][31pm]
_×_ _·_ _·_ _×_
Therefore, the de Broglie wavelength of the electron is 3.31pm
**Final Answer: 3.31pm**
_Figure S9. An example problem, inaccurately solved by ChatGPT with and without the Chain of Thought method, is provided to highlight_
different types of errors. In the example, the mistaken step is highlighted in red.
22
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
Find the length of the cardioid r = 1 + sin θ.
**Correct Solution**
The cardioid’s full length is given by the parameter interval 0 ≤ _θ ≤_ 2π, so
2π _dr_ 2 2π 2π
_L =_ _r[2]_ + _dθ =_ (1 + sin θ)[2] + cos[2] _θdθ_ = 2 + 2 sin θdθ
0 s _dθ_ 0 0
Z Z q Z p
We could evaluate this integral by multiplying and dividing the integrand by 2 2 sin θ, or we could use a computer algebra system. In any event, we find that the length
_[√]_ _−_
of the cardioid is L = 8.
**Final Answer: 8**
**Model Solution**
To find the length of the cardioid, we use the formula for arc length in polar coordinates:
_β_ _dr_ 2
_L =_ _r[2]_ + _, dθ,_
_α_ s _dθ_
Z
where α and β are the angles that correspond to one full loop of the cardioid (i.e. α = 0 and β = 2π). First, we find _dθ[dr]_ [:]
_dr_
_dθ_ [=][ d]dθ [(1 + sin][ θ][) = cos][ θ.]
Then, we plug in r = 1 + sin θ and _dθ[dr]_ [= cos][ θ][ into the formula for arc length:]
2π
_L =_ (1 + sin θ)[2] + cos[2] _θ_
0
Z 2π q
_dθ =_ 1 + 2 sin θ + sin[2] _θ + cos[2]_ _θ_
0
Z 2π p
_dθ =_ 2 + 2 sin θ
0
Z _πp_
_dθ = 4_ 2 + 2 sin θ
0
Z π/p2
_dθ_ (due to symmetry) = 8 2 + 2 sin θ
0
Z π/2 p
_dθ = 8_ 4 cos[2][ θ]
Z0 r 2
_π/2_
_dθ_ (using the half-angle formula) = 16 cos _[θ]_
0 2
Z
_π/2_
_dθ = 32 sin_ _[θ]_
2 0
= 32.
Therefore, the length of the cardioid is 32, and the unit of the answer is whatever unit was used for r (e.g. meters, centimeters, etc.).
**Final Answer: 32**
**Error Reason**
The model solution made an error in the calculation of the integral 20 _π_ _√2 + 2 sin θdθ. The correct solution correctly simplified this integral to 4_ _π0_ _√2 + 2 sin θdθ_
due to the symmetry of the integrand, and then used a substitution to evaluate the integral. The model solution, on the other hand, incorrectly evaluated this integral as 8π.
R R
This error falls under the category of 10. Calculation Skills .
_Figure S10. An example problem is inaccurately solved by error reason 10. Calculation Skills. The mistaken step is highlighted in red._
23
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
2
Use Simpson’s Rule with n = 10 to approximate 1 [(1][/x][)][dx][.]
**Correct Solution** R
Putting f (x) = 1/x, n = 10, and ∆x = 0.1 in Simpson’s Rule, we obtain
2 1
1 _x [dx]_
Z
_≈_ _S10_
= [∆][x]
3 [[][f] [(1) + 4][f] [(1][.][1) + 2][f] [(1][.][2) + 4][f] [(1][.][3) +][ · · ·][ + 2][f] [(1][.][8) + 4][f] [(1][.][9) +][ f] [(2)]]
= [0][.][1] 1 4 2 4 2 4 2 4 2 4
3 1 [+] 1.1 [+] 1.2 [+] 1.3 [+] 1.4 [+] 1.5 [+] 1.6 [+] 1.7 [+] 1.8 [+] 1.9 [+ 1]2
_≈_ 0.693150
**Final Answer: 0.693150**
**Model Solution**
Explanation for Problem 6: With n = 10 we have h = [2]10[−][1] = 101 [and][ x][i][ = 1 +][ ih][ for][ i][ = 0][,][ 1][,][ 2][, . . .,][ 10][. Then Simpson’s Rule gives]
2 1
1 _x [dx][ ≈]_ _[h]3 [[][f]_ [(][x][0][) + 4][f] [(][x][1][) + 2][f] [(][x][2][) + 4][f] [(][x][3][) +][ ...][ + 2][f] [(][x][8][) + 4][f] [(][x][9][) +][ f] [(][x][10][)]]
Z
where f (x) = _x[1]_ [.]
wolfram language for Problem 6:
(* Constants *)
n = 10;
a = 1;
b = 2;
(* Function to integrate *)
f[x_] := 1/x;
(* Simpson’s Rule *)
h = (b - a)/n;
x = Table[a + i*h, i, 0, n];
approximation = (h/3)*Total[Table[If[EvenQ[i], 2, 4]*f[x[[i]]], i, 1, n - 1] + f[x[[0]]], f[x[[n]]], 4*f[x[[n/2]]]];
approximation
**Final Answer: "Objects of unequal length in {4,20/11,10/3,20/13,20/7,4/3,5/2,20/17,20/9}+{1/List,10/19,20/7} cannot be combined."**
**Error Reason**
The error reason category for the model’s solution is 8. Code Conversion Skills . The model’s solution contains a syntax error in the Wolfram Language code, which causes
the program to exceed the recursion limit and terminate prematurely. This error likely occurred during the translation of the correct solution’s code into Wolfram Language.
_Figure S11. An example problem is inaccurately solved by error reason 8. Code Conversion Skills. "Error Reason" denotes the output_
from the LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in red.
**Problem**
2
Use Simpson’s Rule with n = 10 to approximate 1 [(1][/x][)][dx][.]
**Correct Solution** R
The equations for c1 and c2 are
_c1(α −_ _E) + c2β = 0_ and _c1β + c2(α −_ _E) = 0_
For E = α + β, either equation yields c1 = c2. Thus, ψb = c1 (2pz1 + 2pz2). The value of c1 can be found by requiring that the wave function be normalized. The
normalization condition on ψπ gives c[2]1[(1 + 2][S][ + 1) = 1][. Using the Hückel assumption that][ S][ = 0][, we find that][ c][1][ = 1][/]√2.Substituting E = α − _β into either of_
the equations for c1 and c2 yields c1 = −c2, or
_ψa = c1 (2pz1 −_ 2pz2)
The normalization condition gives c[2](1 − 2S + 1) = 1, or c1 = 1/√2.
**Final Answer: 0.7071067**
**Model Solution**
therefore not provided as this problem requires visual representations and cannot be solved through text alone.
**Error Reason**
The error reason category is 3. Spatial Perception . This problem requires visual representations to find the bonding and antibonding Hückel molecular orbitals for ethene.
The model’s solution is incomplete as it only provides equations and does not provide any visual representation.
_Figure S12. An example problem is inaccurately solved by error reason 3. Spatial Perception. "Error Reason" denotes the output from the_
LLM Verifier utilized in the classification of error causes.
24
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
What is the mean speed, ¯c, of N2 molecules in air at 25[◦]C
**Correct Solution**
The integral required is
_M_ 3/2 _∞_ _M_ 3/2 2RT 2 8RT 1/2
_c¯ = 4π_ 2πRT 0 _v[3]e[−][Mv][2]_ _[/][2][RT]_ dv = 4π 2πRT _×_ 2[1] _M_ = _πM_
Z
where we have used the standard result from tables of integrals (or software) that
_∞_ 1
_x[3]e[−][ax][2]_ dx =
0 2a[2]
Z
Substitution of the data then gives
_c¯ =_ 8 × 8.3141 J K[−][1] mol[−][1][] _× (298 K)_ 1/2 = 475 m s[−][1]
_π ×_ 28.02 × 10[−][3] kg mol[−][1][] !
where we have used 1 J = 1 kg m[2] s[−][2].
**Final Answer: 475 m s[−][1]**
**Model Solution**
To calculate this, we can use the formula for the root-mean-square (rms) speed of gas molecules:
3kT
_crms =_
r _m_
where k is the Boltzmann constant, T is the temperature in Kelvin, and m is the mass of the molecule. For N2, m = 28 amu. Plugging in the values, we get:
_crms =_ s 3 × 1.2838 × 1 10.66[−][23] 10J/[−]K[27] × (25 + 273) Kkg/amu _≈_ 515 m s[−][1]
_×_ _×_
Note that we converted the temperature from Celsius to Kelvin and the mass from atomic mass units (amu) to kilograms.
**Final Answer: 515 m s[−][1]**
**Error Reason**
The error reason category is 7. Scientific Literacy . The model solution uses the formula for the root-mean-square (rms) speed of gas molecules, which is only valid for
ideal gases. However, the given problem does not specify that the gas is ideal, and in fact, air is not an ideal gas. The correct solution uses a more general formula that applies
to all gases, including non-ideal gases like air. Therefore, the model’s solution is not applicable to the given problem.
_Figure S13. An example problem is inaccurately solved by error reason 7. Scientific Literacy. "Error Reason" denotes the output from the_
LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in red, while the correct step is
highlighted in blue.
**Problem**
What is the number of possible four-letter code words, selecting from the 26 letters in the alphabet?
**Correct Solution**
The number of possible four-letter code words, selecting from the 26 letters in the alphabet, in which all four letters are different is
26[P]4 [= (26)(25)(24)(23) = 26!]
22! [= 358][,][ 800][.]
**Final Answer: 358800**
**Model Solution**
The number of possible four-letter code words, selecting from the 26 letters in the alphabet is
26[4] = 456, 976.
**Final Answer: 456976**
**Error Reason**
The model’s error reason category is 9. Logical Reasoning . The model did not consider the condition that all four letters should be different. The correct solution considers
this condition and uses the permutation formula to calculate the number of possible four-letter code words.
_Figure S14. An example problem is inaccurately solved by error reason 9. Logical reasoning. “Error Reason” denotes the output from the_
LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in red.
25
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
Find the bonding and antibonding Hückel molecular orbitals for ethene.
**Correct Solution**
The equations for c1 and c2 associated are
_c1(α −_ _E) + c2β = 0_ and _c1β + c2(α −_ _E) = 0_
For E = α + β, either equation yields c1 = c2. Thus,
_ψb = c1 (2pz1 + 2pz2)_
The value of c1 can be found by requiring that the wave function be normalized. The normalization condition on ψπ gives c[2]1[(1 + 2][S][ + 1) = 1][. Using the Hückel]
assumption that S = 0, we find that c1 = 1/√2.Substituting E = α − _β into either of the equations for c1 and c2 yields c1 = −c2, or_
_ψa = c1 (2pz1 −_ 2pz2)
The normalization condition gives c[2](1 − 2S + 1) = 1, or c1 = 1/√2.
**Final Answer: 0.70710678**
**Model Solution**
Two bonding and two antibonding Hückel molecular orbitals for ethene. The unit of the answer is not applicable as it is a qualitative description of the molecular orbitals.
**Final Answer: None**
**Error Reason**
The error reason category for the model’s solution is 6. Abstract Reasoning . The model’s solution is incorrect because it did not consider the Hückel molecular orbital
theory, which predicts that there are only two pi molecular orbitals for ethene: one bonding and one antibonding. This is due to the fact that ethene has only two pi electrons,
which can occupy the two pi molecular orbitals. Therefore, the model’s solution of two bonding and two antibonding molecular orbitals is not possible for ethene.
_Figure S15. An example problem is inaccurately solved by error reason 6. Abstract Reasoning. “Error Reason” denotes the output from_
the LLM Verifier utilized in the classification of error causes.
**Problem**
A living room has floor dimensions of 3.5 m and 4.2 m and a height of 2.4 m.What does the air in the room weigh when the air pressure is 1.0 atm ?
**Correct Solution**
The air’s weight is equal to mg, where m is its mass Mass m is related to the air density ρ and the air volume V (ρ = m/V ).
Calculation: Putting the two ideas together and taking the density of air at 1.0 atm, we find
_mg = (ρV )g_ = 1.21 kg/m[3][] (3.5 m × 4.2 m × 2.4 m) 9.8 m/s[2][] = 418 N
**Final Answer: 418 N**
**Model Solution**
To find the weight of the air in the room, we need to first calculate the volume of the room: V = lwh = 3.5 m × 4.2 m × 2.4 m = 35.28 m[3]. Next, we can use the
ideal gas law to find the mass of the air in the room:P V = nRT where P is the pressure, V is the volume, n is the number of moles of gas, R is the gas constant, and T is
the temperature. Assuming room temperature of 20[◦]C (293 K), we can solve for n:
(1.0 atm)(35.28 m[3])
_n =_ _[P V]_
_RT_ [=] (0.0821 L atm/mol K)(293 K) [= 1][,][ 610 mol]
Finally, we can find the weight of the air using the molar mass of air (approximately 28.97 g/mol):
_W = mg = nMg = (1, 610 mol)(0.02897 kg/mol)(9.81 m/s[2]) = 101, 640 N_
**Final Answer: 101640 N**
**Error Reason**
The error in the model’s solution is that they used the ideal gas law to find the mass of the air in the room, which assumes that the air behaves like an ideal gas. However, the
ideal gas law is not applicable in this case because the problem does not provide information about the temperature of the air. The correct approach is to use the density
of air at 1.0 atm and the volume of the room to find the mass of the air, as shown in the correct solution. Therefore, the reason for the error in the model’s solution is
2. Identification of Assumptions .
_Figure S16. An example problem is inaccurately solved by error reason 2. Identification of Assumptions. "Error Reason" denotes the_
output from the LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in red.
26
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
From an ordinary deck of playing cards, cards are to be drawn successively at random and without replacement. What is the probability that the third spade appears on the
sixth draw?
**Correct Solution**
Let A be the event of two spades in the first five cards drawn, and let B be the event of a spade on the sixth draw. Thus, the probability that we wish to compute is P (A ∩ _B)._
It is reasonable to take
132 39
3
_P (A) =_ = 0.274 and _P (B_ _A) = [11]_
525 _|_ 47 [= 0][.][234]
[]
The desired probability, P (A _B), is the product of those numbers: _ _P (A_ _B) = (0.274)(0.234) = 0.064_
_∩_ _∩_
**Final Answer: 0.064**
**Model Solution**
The probability that the third spade appears on the sixth draw is
13[C]2 _[·]_ 39[C]4 = [78][ ·][ 741][,][ 948]
52[C]6 20, 358, 520 [= 1]5[,],[ 441] 577 _[.]_
**Final Answer: 0.258**
**Error Reason**
The model’s solution is incorrect because it calculates the probability of getting two spades in the first five cards and any spade on the sixth draw, instead of specifically
calculating the probability of the third spade appearing on the sixth draw given that two spades have already appeared in the first five draws. Therefore, the error reason
category is 4. Causal Reasoning
_Figure S17. An example problem is inaccurately solved by error reason 4. Causal Reasoning. The mistaken step is highlighted in red._
**Problem**
Gas cylinders of CO2 are sold in terms of weight of CO2. A cylinder contains 50lb (22.7 kg ) of CO2. How long can this cylinder be used in an experiment that requires
flowing CO2 at 293 K(η = 146µP) through a 1.00-m-long tube (diameter = 0.75 mm) with an input pressure of 1.05 atm and output pressure of 1.00 atm ? The
flow is measured at the tube output
**Correct Solution**
The gas flow rate ∆V/∆t is
∆V _πr[4]_
_P2[2]_ 1 = 2.76 10[−][6] m[3] s[−][1]
∆t [=] 16ηLP0 _[−]_ _[P][ 2]_ _×_
Converting the CO2 contained in the cylinder to the volume occupied at 298 K and 1 atm pressure, we get
1 10[−][3] m[3]
_nCO2 = 22.7 kg_ 0.044 kg mol[−][1] = 516 mol, V = _[nRT]P_ = 1.24 × 10[4] L L ! = 12.4 m[3]
12.4 m[3]
Given the effective volume of CO2 contained in the cylinder, the duration over which the cylinder can be used is 2.76 10[−][6 ]m[3 ]s[−][1][ = 4][.][49][ ×][ 10][6][ s][. This time]
_×_
corresponds to roughly 52 days.
**Final Answer: 52 days**
**Model Solution**
The volume of CO2 in the cylinder is given by
22.7 kg
_V =_ _[m]_
_ρ_ [=] 1.98 kg m[−][3][ = 11][.][5 m][3]
The volume of CO2 that flows through the tube is given by
_V =_ _[π]_ 0.75 10[−][3] m 2 (1.00 m) = 1.77 10−7 m3
4 _[d][2][L][ =][ π]4_ _×_ _×_
The time required to flow V volume of CO2 is given byt = _[V]Q_ [=] _QmPV P21_ [=] (11.05 atm.77×10−[−]1[7 ].00 atm)(146m[3] [](1.00 atm)µP) [= 2][.][08][ ×][ 10][6][ s][. The number of seconds in a year is]
3.15 × 10[7] s. Therefore, the cylinder can be used for 3.152.×0810×[7 ]10s yr[6 ]s[−][1][ = 0][.][066 yr = 24][.][1 days]
**Final Answer: 24.1 days**
**Error Reason**
The model’s solution is incorrect because it calculates the time required to flow a volume of CO2 through the tube, but it does not take into account the total volume of CO2
in the cylinder. The correct solution calculates the total volume of CO2 in the cylinder and divides it by the flow rate to determine the duration over which the cylinder can be
used. Therefore, the error reason category is 5. Problem Deduction Skills .
_Figure S18. An example problem is inaccurately solved by error reason 5. Problem Deduction Skills. The mistaken step is highlighted in_
red.
27
-----
**SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models**
**Problem**
An astronaut whose height h is 1.70 m floats "feet down" in an orbiting space shuttle at distance r = 6.77 × 10[6] m away from the center of Earth. What is the difference
between the gravitational acceleration at her feet and at her head?
**Correct Solution**
We can approximate Earth as a uniform sphere of mass ME . Then, the gravitational acceleration at any distance r from the center of Earth is
_ag =_ _[GM][E]_
_r[2]_
We might simply apply this equation twice, first with r = 6.77 × 10[6] m for the location of the feet and then with r = 6.77 × 10[6] m + 1.70 m for the location of the
head. However, a calculator may give us the same value for ag twice, and thus a difference of zero, because h is so much smaller than r. Here’s a more promising approach:
Because we have a differential change dr in r between the astronaut’s feet and head, we should differentiate the equation with respect to r.
Calculations: The differentiation gives us
_dag = −2_ _[GM]r[3]_ _[E]_ _dr_
where dag is the differential change in the gravitational acceleration due to the differential change dr in r. For the astronaut, dr = h and r = 6.77 × 10[6] m. Substituting
data into the equation, we find
6.67 10[−][11] m[3]/kg s[2][ ]5.98 10[24] kg
_dag = −2_ _×_ (6.77 × 10 · [6] m)[3] _×_ (1.70 m) = −4.37 × 10[−][6] m/s[2]
**Final Answer: −4.37 × 10[−][6]** m/s[2]
**Model Solution**
We can use the equation for the gravitational acceleration due to a point mass, given by Eq (g = GM/r[2]), to find the gravitational acceleration at the astronaut’s feet and
head. Because the astronaut is in free fall, the difference between these two accelerations is the tidal acceleration, given by Eq (∆g = 2GMh/r[3]).
Calculations: The gravitational acceleration at the astronaut’s feet is
_gfeet =_ _[GM]_ = 6.67 × 10[−][11] N · m[2]/kg[2][ ]5.98 × 10[24] kg = 9.78 m/s[2].
_r[2]_ (6.77 × 10[6] m)[2]
The gravitational acceleration at the astronaut’s head is
_ghead =_ _GM_ 6.67 × 10[−][11] N · m[2]/kg[2][ ]5.98 × 10[24] kg = 9.77 m/s[2].
(r + h)[2][ =] (6.77 × 10[6] m + 1.70 m)[2]
The difference between these two accelerations is
∆g = gfeet − _ghead_ = 9.78 m/s[2] _−_ 9.77 m/s[2] = 0.01 m/s[2] = 1.0 × 10[−][5] m/s[2].
**Final Answer: 1.0 × 10[−][5]** m/s[2]
**Error Reason**
The model’s solution is incorrect because it uses the equation for the gravitational acceleration due to a point mass, which is not applicable in this case since the Earth is not a
point mass. The correct equation to use is the one given in the correct solution, which takes into account the fact that the Earth is a uniform sphere of mass. Therefore, the
error reason category is 1. Logical Decomposition and Analysis Skills .
_Figure S19. An example problem is inaccurately solved by error reason 1. Logical Decomposition and Analysis Skills. “Error Reason”_
denotes the output from the LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in
red.
28
-----
| [
"Pan, Lu",
"Xiaoxuan, Wang",
"Ziniu, Hu",
"Yanqiao, Zhu",
"Jieyu, Zhang",
"Satyen, Subramaniam",
"Arjun, Loomba",
"Shichang, Zhang",
"Yizhou, Sun",
"Wei, Wang"
] | 2023-10-28T00:00:00 | ICML 2024 Poster | true | 43 | 3 | null | https://openreview.net/forum?id=A3W864NIW2 | https://arxiv.org/abs/2307.10635 | https://www.semanticscholar.org/paper/4993258852711c4e3d0011325ac3db680eae84f4 |
A Promising Path Towards Autoformalization and General Artificial Intelligence | N/A | This paper argues that autoformalization is a promising path for systems to learn sophisticated, general purpose reasoning in all domains of mathematics and computer science and provides the outline for a realistic path towards those goals. | null | [
"Szegedy, Christian"
] | 2020-01-01T00:00:00 | null | false | 42 | 6 | null | https://scholar.google.co.uk/scholar?hl=en&as_sdt=0%2C5&as_vis=1&q=A+promising+path+towards+autoformalization+and+general+artificial+intelligence.&btnG= | null | https://www.semanticscholar.org/paper/da97bd6d2d0a2f11bb011b9925585e086010cff0 |
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Recent work has shown the immense potential of synthetically generated datasets for training large language models (LLMs), especially for acquiring targeted skills. Current large-scale math instruction tuning datasets such as MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed using outputs from closed-source LLMs with commercially restrictive licenses. A key reason limiting the use of open-source LLMs in these data generation pipelines has been the wide gap between the mathematical skills of the best closed-source LLMs, such as GPT-4, and the best open-source LLMs. Building on the recent progress in open-source LLMs, our proposed prompting novelty, and some brute-force scaling, we construct OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution pairs. The dataset is constructed by synthesizing code-interpreter solutions for GSM8K and MATH, two popular math reasoning benchmarks, using the recently released and permissively licensed Mixtral model. Our best model, OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves a score of 84.6% on GSM8K and 50.7% on MATH, which is competitive with the best gpt-distilled models. We will release our code, models, and the OpenMathInstruct-1 dataset under a commercially permissive license. | The dataset is constructed by synthesizing code-interpreter solutions for GSM8K and MATH, two popular math reasoning benchmarks, using the recently released and permissively licensed Mixtral model and achieves a score competitive with the best gpt-distilled models. | ## OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
**Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman,**
**Fei Jia, Igor Gitman**
NVIDIA
Figure 1: Training set coverage of Mixtral model generated solutions as a function of number of solutions sampled per problem (using temperature of 1.0 and top_p
= 0.95). The statistics for the training set coverage of
GPT-4 are from Gou et al. (2024).
**Abstract**
Recent work has shown the immense potential of synthetically generated datasets for training large language models (LLMs), especially
for acquiring targeted skills. Current largescale math instruction tuning datasets such as
MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed using
outputs from closed-source LLMs with commercially restrictive licenses. A key reason limiting the use of open-source LLMs in these data
generation pipelines has been the wide gap between the mathematical skills of the best closedsource LLMs, such as GPT-4, and the best opensource LLMs. Building on the recent progress
in open-source LLMs, our proposed prompting novelty, and some brute-force scaling, we
construct OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution
pairs. The dataset is constructed by synthesizing code-interpreter solutions for GSM8K
and MATH, two popular math reasoning benchmarks, using the recently released and permissively licensed Mixtral model. Our best model,
OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves a score
of 84.6% on GSM8K and 50.7% on MATH,
which is competitive with the best gpt-distilled
models. We release our code, models, and the
OpenMathInstruct-1 dataset under a commercially permissive license.[1]
2023) and training smaller models on the generated
_distillation data (Eldan and Li, 2023; Gunasekar_
et al., 2023; Li et al., 2023). For mathematical reasoning, our task of interest, all the current state-ofthe-art open-source models are gpt-distilled (Wang
et al., 2024; Yue et al., 2024; Gou et al., 2024; Liao
et al., 2024). However, model development recipes
relying on proprietary models like GPT-4 can have
serious limitations: (a) legal restraints on how the
finetuned models can be used,[2] (b) generating data
with closed-source models is typically costlier than
state-of-the-art open-source models, and (c) these
recipes lack reproducibility as closed-source model
behaviors can vary significantly over time (Chen
et al., 2023a).
_For developing mathematical reasoning models,_
_why are open-source models not used in place of_
_closed-source models? To answer this, we compare_
GPT-4 with Mixtral 8x7B model (Jiang et al., 2024),
currently one of the best open-source LLMs at mathematical reasoning, by generating code-interpreter
[2https://openai.com/policies/terms-of-use](https://openai.com/policies/terms-of-use)
**1** **Introduction**
The huge development and inference costs associated with general-purpose large language models (LLMs) have led to the rise of smaller, taskspecific LLMs. Recent work has proposed creating these domain/task-specific LLMs by generating
_high-quality synthetic data using powerful closed-_
source models such as GPT-3.5/4 (OpenAI et al.,
1Data and models are available at [https:](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014)
[//huggingface.co/collections/nvidia/](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014)
[openmath-65c5619de2ba059be0775014](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014)
Code is available at [https://github.com/Kipok/](https://github.com/Kipok/NeMo-Skills)
[NeMo-Skills](https://github.com/Kipok/NeMo-Skills)
-----
Table 1: Comparison of OpenMathInstruct-1 with mathematical reasoning fine-tuning datasets used by current
state-of-the-art open-source models. OpenMathInstruct1 is 4x bigger than the current largest dataset, MetaMathQA, and is the only one, except Lila, with a permissive license. Datasets marked with * have not been
publicly released.
Dataset Size Generating LM
(Permissive License)
Lila (Mishra et al., 2022) 272K - (✓)
MathInstruct (Yue et al., 2024) 262K GPT-4 (✗)
MetaMathQA (Yu et al., 2024) 395K GPT-3.5 (✗)
MathCodeInstruct (Wang et al., 2024) 80K GPT-4 + Self (✗)
WizardMath* (Luo et al., 2023) 96K GPT-3.5 (✗)
ToRA* (Gou et al., 2024) 16K GPT-4 (✗)
OpenMathInstruct-1 (Ours) 1.8M Mixtral (✓)
style solutions for two popular mathematical reasoning benchmarks, namely GSM8K (Cobbe et al.,
2021) and MATH (Hendrycks et al., 2021). We use
the metric training set coverage (TSC) to compare
the models, where TSC measures the number of
training problems for which any of the generated
solutions leads to the ground truth answer (pass@k).
Figure 1 shows the training set coverage (TSC) of
the Mixtral model as a function of the number of
sampled solutions. For the relatively easier GSM8K
benchmark, the Mixtral model’s coverage catches
up to GPT-4’s with almost 8x the number of solution
samples. For the challenging MATH benchmark,
even with 12x the number of solutions, the Mixtral
model still has a lower TSC than GPT-4. This gap in
the training set coverage reflects the distillation data
quality and, hence, the quality of the final fine-tuned
model. This explains the preference for GPT-4 in
the current distillation pipelines for mathematical
reasoning.
_Bridging the coverage gap between GPT-4 and_
_Open-source LLMs: We limit our investigation of_
open-source LLMs for synthesizing solutions to
the Mixtral-base model due to (a) its strong performance on mathematical reasoning tasks compared
to other open-source LLMs and (b) its permissive
license.[3] As a first attempt, we use a brute-force
approach of sampling several solutions per problem. However, this approach only scales logarithmically, limiting its effectiveness (Figure 1). Next,
we explore the approach of targeted solution generation, where we write few-shot prompts focused
on specific sections of the training data. Concretely,
we write few-shot prompts for each mathematics
subject in the MATH dataset and merge the syn
[3https://mistral.ai/news/mixtral-of-experts/](https://mistral.ai/news/mixtral-of-experts/)
thesized solutions. The motivation is that these
subject-specific few-shot prompts could better target the latent mathematical capabilities of these
general-purpose LLMs. Unfortunately, we only
find a marginal gain in TSC with this approach (Section 2.2.2). Finally, we utilize the fact that text solutions accompany mathematical benchmarks such
as MATH and GSM8K. These text solutions can
aid the synthesis of code-interpreter style solutions.
We show that using the text solution in our fewshot prompt with a slight modification substantially
increases the coverage and, consequently, the performance of the fine-tuned model (Section 2.2.3).
Our solution synthesis experiments result in
OpenMathInstruct-1, a collection of 1.8M problemsolution pairs. OpenMathInstruct-1 has a training
set coverage of 93% for MATH and 99.9% for
GSM8K. Table 1 shows that compared to previous mathematical reasoning fine-tuning datasets,
OpenMathInstruct-1 is at least four times bigger
and, even more importantly, it is permissively licensed, allowing unrestricted usage by future work.
To illustrate the quality of OpenMathInstruct-1, we
train and release a range of models based on Mistral7B (Jiang et al., 2023), Llama 2 (Touvron et al.,
2023), and CodeLlama (Rozière et al., 2023). In
particular, the CodeLlama-70B model fine-tuned
on a subset of OpenMathInstruct-1, referred to as
OpenMath-CodeLlama-70B, achieves a score of
84.6% on GSM8K and 50.7% on MATH. These
scores are competitive with the current best gpt_distilled models. Finally, to support the open-source_
efforts in this direction, we publicly release all our
fine-tuned models, code, and the OpenMathInstruct1 dataset along with a further 6.6M incorrect sampled solutions.[4]
**2** **Training Data Synthesis**
**2.1** **Overview**
**Setup.** Let X = {(q1, a1), · · ·, (qN _, aN_ )} be
a typical mathematical reasoning training dataset,
where qi and ai denote the i[th] question and answer
respectively. Optionally, the training data may include text solution ti, which illustrates a trajectory
from qi to ai using mathematical principles.[5] Besides the data, we assume access to a foundation
LLM like Mixtral-base. The goal is to generate
4The incorrect solution trajectories can be used to train
verifier models (Cobbe et al., 2021; Yu et al., 2023; Lightman
et al., 2023).
5Both GSM8K and MATH have these text solutions.
-----
MATH. Formally, the prompt has the form:
(q1, c1), _, (qK, cK) q[′]_
_I_ _· · ·_
where I represents a text-based instruction for the
task, _q1,_ _, qK_ represent K problems represen_{_ _· · ·_ _}_
tative of the dataset, _c1,_ _, cK_ represent their
_{_ _· · ·_ _}_
respective solutions in the code-interpreter format,
and q[′] represents a question from the training set.
Given this prompt, the base LLM generates a candidate solution c[′] for the question q[′]. If c[′] leads
to the correct answer for the question q[′], we add
the pair (q[′], c[′]) to our fine-tuning set. For all our
experiments, we choose K = 5, and the representative problems are chosen from the training set of
the corresponding benchmark. In the instruction
_I, we instruct the model to output the answer in-_
side the \boxed{} block. Complete instruction is
in Table 12 in Appendix.
**Sampling Details.** We sample solutions with temperature=1.0 and top_p=0.95. We use the following
constraints in our generation pipeline: (a) the total
number of input-output tokens is limited to 4096,
(b) a maximum of 512 new tokens after each code
block, (c) a maximum of 3 code blocks, and (d) the
generation halts after any code execution error. We
use the TensorRT-LLM toolkit.[7]
**2.2** **Prompting**
In the previous section, we described our solution generation pipeline. A key ingredient of this
pipeline is the few-shot prompt examples. We next
describe the different prompting strategies explored
in this work.
**2.2.1** **Default**
We choose five representative examples of GSM8K
and MATH to create the few-shot prompt for the
respective datasets. For GSM8K, we use a mix
of problems that require vanilla Python code and
problems that are best solved using Python’s sympy
library. For MATH, we compose a 5-shot prompt
with examples from different subjects. To reflect
this diversity of reasoning paths required for MATH,
we choose a mix of problems that require codebased solutions, text-based solutions, and a combination of both. The prompts used for the two
datasets are shown in Appendix B.6.
For GSM8K, we sample 128 solutions per training problem, which gets a training set coverage of
99.1%. For MATH, we sample 224 solutions per
[7https://github.com/NVIDIA/TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM)
Question
A department store displays a 20% discount on all fixtures.
What will be the new price of a 25 cm high bedside lamp
that was worth $120?
Code-Interpreter Style Solution
Let’s solve this problem using Python code.
<llm-code>
discount_percent = 20
price_before_discount = 120
discount = discount_percent / 100
discount_amount = price_before_discount * discount
price = price_before_discount - discount_amount
price
</llm-code>
<llm-code-output>
96.0
</llm-code-output>
So the new price of the lamp is 96 dollars.
Figure 2: Code-Interpreter style solution for a training
set problem from GSM8K.
diverse, high-quality solutions for the training set
problems using the LLM: a popular recipe for reasoning tasks (Zelikman et al., 2022; Huang et al.,
2023). Recent work has also attempted augmenting
training set problems (Yue et al., 2024; Yu et al.,
2024), but we limit our exploration to solution synthesis for existing problems in the benchmark.
**Solution Format.** We use the code-interpreter
format for the synthesized solutions (Figure 2).
The code-interpreter format interweaves natural
language reasoning with Python code blocks. It
thus combines the computation precision of coding environments with the expressiveness of natural language reasoning, which is particularly suitable for mathematical tasks (Zhou et al., 2024;
Gou et al., 2024). To demarcate the start and end
of a code block, we use the strings ⟨llm-code⟩
and ⟨/llm-code⟩. A code block is followed
by its execution block, which is demarcated by
_⟨llm-code-output⟩_ and ⟨/llm-code-output⟩.
During inference, the model invokes the Python
interpreter to run the preceding code block after
generating ⟨/llm-code⟩, appends the execution result in between the ⟨llm-code-output⟩ separators,
and resumes the autoregressive model inference.[6]
**Approach.** We use few-shot prompting to synthesize solutions for the training sets of GSM8K and
6During training, we don’t mask the code execution output
surrounded by ⟨llm-code-output⟩ separators.
-----
Table 2: Statistics of unique solutions generated by prompts described in Section 2.2. Default prompt refers to the
single prompt used for the two benchmarks, Mask-Text refers to prompting the model with masked text solution,
and Subj refers to prompting with subject-specific prompts (applicable only to MATH). Coverage % refers to the
percentage of problems in the training set for which there’s at least one solution among the generated solutions.
|Prompt|MATH GSM8K # Samples # Unique Solns. Coverage (in %) # Samples # Unique Solns. Coverage (in %)|Col3|
|---|---|---|
|Default + Subj|224 177K 80.1 128 434K 99.1 224 191K 80.1 - - -||
|Mask-Text + Subj|224 192K 85.9 224 227K 87.5|128 602K 99.9 - - -|
|Total|896 787K 93.0|256 1036K 99.9|
training problem, which only achieves a training
set coverage of 80.1%. This difference in coverage
reflects the difficulty of the MATH benchmark compared to GSM8K, which has been noted in previous
work as well (Gou et al., 2024; Liao et al., 2024).
**2.2.2** **Subject-specific Prompting (Subj)**
_Could the diversity of mathematical topics in MATH_
_be a reason for the low training set coverage_
_with a single 5-shot prompt? To answer this ques-_
tion, we create subject-specific prompts for the
seven subjects in the MATH benchmark, namely
algebra, geometry, intermediate algebra,
number theory, prealgebra, precalculus,
and probability (See Table 10 in the appendix for
the subject-wise split of MATH training data). The
MATH benchmark also labels problems by their
hardness level, with levels ranging from 1 to 5,
where level 5 is the hardest. For creating subjectspecific 5-shot prompts, we choose one example
from each level for a given subject. For each of the
seven prompts, we sample 32 solutions per problem and combine the data generated with all the
prompts, which is equivalent to 32 x 7 = 224 solutions per problem. However, even with this finegrained prompting, we only find a negligible gain in
the training set coverage, though the total number
of correct solutions increases by 14K (Table 2).
Combining this fine-tuning dataset with the earlier single default prompt dataset yields a training
coverage of 85.1% for MATH, a boost of 5% absolute. But achieving this coverage required sampling
almost 450 solutions per problem (224 + 224 = 448).
_Can we make the solution generation pipeline more_
_efficient?_
**2.2.3** **Masked Text Solution Prompting**
**(Mask-Text)**
GSM8K and MATH benchmarks come with groundtruth text solutions. Using these text solutions can,
**Question**
Lynne bought 7 books about cats and 2 books about the
solar system. She also bought 3 magazines. Each book
cost $7 and each magazine cost $4. How much did Lynne
spend in all?
**Ground-Truth Text Solution**
Lynne bought a total of 7 + 2 = 9 books. The books cost
Lynne 9 x 7 = $63. For 3 magazines, Lynne spent 3 x 4 =
$12. In total, Lynne spent 63 + 12 = $75
**Masked Text Solution**
Lynne bought a total of 7 + 2 = M books. The books cost
Lynne M x 7 = N. For 3 magazines, Lynne spent 3 x 4 =
P. In total, Lynne spent N + P = Q
Figure 3: A sample masked solution from GSM8K training set. The masked text solution only masks the intermediate computations, such as 9 → M and 63 → N, and
doesn’t mask the amounts introduced in the question,
such as 7, 2, and $4.
in theory, reduce the problem of code-interpreter solution generation to a translation problem from text
to code. We initially experimented by prompting
the LLM with:
(q1, t1, c1), _, (qK, tK, cK) q[′], t[′]_
_I_ _· · ·_
where ti’s represent the text solution of representative problem qi’s and t[′] represents the text solution
of the problem q[′]. Using the text solution in the
prompt leads to a considerable increase in training
set coverage. However, our manual analysis revealed that many solutions were shortcuts. E.g.,
trivial solutions such as print(ANSWER) or The
answer is ANSWER where the ANSWER is copied
from the text solution t[′] in the prompt. Our attempts
to filter out these trivial solutions proved challenging as we ran into many creative ways in which the
generated solution was cheating (see Figure 9 in
Appendix).
To deter the possibility of such shortcut solutions
-----
where the results of intermediate computations or
the final answer from the text solution are copied,
we propose prompting with a masked text solution.
Such solutions have all numbers in intermediate
computations replaced with symbols. A sample
masked text solution is shown in Figure 3. These
masked text solutions are generated using few-shot
prompting as follows:
mask (q1, t1, t[mask]1 ), _, (qK, tK, t[mask]K_ ) q[′], t[′]
_I_ _· · ·_
where Imask represents the instruction for the solution masking task, and _t[mask]1_ _,_ _, t[mask]K_ rep_{_ _· · ·_ _}_
resent masked text solutions corresponding to
_t1,_ _, tK_ . For a detailed overview of the
_{_ _· · ·_ _}_
masked text solution generation pipeline, we refer the reader to Appendix B.5. Using these masked
text solutions in the prompts significantly boosts the
training set coverage for MATH, increasing from
80.1% → 85.9% for the single default prompt, and
80.1% → 87.5% for the subject-specific prompts.
For GSM8K, it leads to the coverage increasing
from 99.1% to 99.9%.
Table 2 summarizes the statistics of the solutions dataset generated via different prompts. The
OpenMathInstruct-1 dataset is obtained by merging and deduplicating the problem-solution pairs
resulting from the above-described prompt strategies. OpenMathInstruct-1 consists of 787K unique
solutions for 6978 problems (out of 7500) in MATH
and 1.04M unique solutions for 7469 problems (out
of 7473) in GSM8K. To get to this final dataset, we
also perform a few post-processing steps, which are
described next.
**2.3** **Post-processing**
The generated solutions can sometimes be syntacti_cally noisy even if they lead to the right answer. We_
fix or remove the following solutions:
- The solution has multiple answers as it has
multiple \boxed{} blocks. We remove such
solutions.
- The solution has the ⟨llm-code⟩ string but
not the ⟨/llm-code⟩ string. We remove such
solutions.
- The solution continues even after generating
the answer, i.e., the \boxed{} block. While
in some cases, this continuation merely concludes the answer, we noticed that continuations that went beyond two lines were almost
always gibberish generated by the LLM. We
removed the text in the lines beyond the solution line with the answer. See Figure 10 in the
Appendix for an example solution where we
perform trimming.
While these post-processing steps can fix some of
the syntactic errors, filtering semantically noisy, i.e.,
solutions that get to the right answer with flawed
reasoning (Cobbe et al., 2021), is a much harder
problem and beyond the scope of this work. Anecdotally, we find such solutions to be rare in our
corpus. See Figure 11 in the Appendix for a sample
_semantically noisy solution._
**2.4** **Data Selection**
OpenMathInstruct-1 on average has hundreds of solutions per problem. These solutions can have different formats (code vs. text), and problems can have
very different numbers of solutions in the dataset.
Careful data selection allows for reduced training
times and can also benefit performance. We detail
the data selection strategies explored in this work.
**2.4.1** **Fair vs Naive Downsampling**
For a dataset like MATH, where problems can have
very different difficulty levels, our solution generation strategy leads to a corpus where easier problems have a lot of solutions and harder problems
have very few solutions (see Appendix A.3 for a detailed discussion on solution count). A naive strategy for downsampling treats every instance, i.e.,
problem-solution pair, as an equal. This problemagnostic sampling perpetuates the imbalance of the
original corpus, as seen in Figure 4(a). We propose a fair sampling alternate in which we iterate
over all the problems round-robin and sample from
unpicked solutions for each problem. This problemdependent sampling ensures a more balanced representation of each problem in the downsampled
dataset (see Figure 4(b)). Experimental results show
that fair downsampling outperforms naive downsampling (Section 4.1.1).
**2.4.2** **Code-Preferred Solutions**
The code-interpreter format allows for mixing code
and text, and also text-based solutions without any
code blocks. For GSM8K, the proportion of textbased solutions is 2%, but for MATH, their representation is 35.1%.[8] While natural language reasoning is more expressive, it lacks the precision of
8We detect the presence of code by searching for
_⟨llm-code⟩_ in the solution string.
-----
(a) Naive Sampling (b) Fair Sampling
Figure 4: Histogram of the number of solutions for problems in a 64K downsampled subset of MATH instances in
OpenMathInstruct-1.
code-based solutions (Gao et al., 2023). Suppose
for a problem q there are a total of Ntotal correct
solutions in the corpus, out of which Ncode represents the number of code-based solutions, and Ntext
represents the text-based solutions. We propose
the following two code-preferential data selection
strategies:
- Majority-Code: If Ncode > Ntext, remove all
the text-based solutions.
- Any-Code: If Ncode > 0, remove all the textbased solutions.
Ablation experiments over the MATH subset
of OpenMathInstruct-1 show the benefit of code_preferential data selection (Section 4.1.3)._
**3** **Experimental Setup**
**3.1** **Training Details**
For all our experiments, including ablations, models
of size 34B or smaller are trained for four epochs.
A global batch size of 128 is used along with
the AdamW optimizer with a weight decay of 1e2 (Loshchilov and Hutter, 2019) and dropout (Hinton et al., 2012) of 0.1. We save one checkpoint
per epoch for ablation experiments and two checkpoints per epoch for final model runs. The final
checkpoint is created by averaging all the saved
checkpoints. All experiments are performed using
the NeMo toolkit[9] (Kuchaiev et al., 2019). For
the full set of training hyperparameters, see Appendix B.1.
[9https://github.com/NVIDIA/NeMo](https://github.com/NVIDIA/NeMo)
**3.2** **Evaluation Setup**
We evaluate our models on the GSM8K and MATH
benchmarks, which are also used to create the finetuning dataset. For ablation studies and hyperparameter selection, we create a validation set of 1K
examples from the training set of GSM8K and
MATH since both datasets lack an actual validation set. All the fine-tuned models are evaluated
in the zero-shot setting. We use greedy decoding
and self-consistency/majority voting (Wang et al.,
2023) for evaluation. For majority voting, we found
that using the lower temperature of 0.7 is beneficial
compared to the data generation setup. We also deviate from the data generation setup by allowing the
model to continue answering questions after code
execution errors.
**4** **Results**
We finetune all the models on a mixture of (a)
512K fair downsampled GSM8K instances and (b)
512K MATH instances with any-code filtering (Section 2.4).[10] Thus, the total finetuning corpus size
is roughly 1.2M. We will justify the data selection
choice later in the ablation experiments.
Table 3 compares the performance of OpenMath_finetuned models against their gpt-distilled coun-_
terparts. Among the 7B models, our OpenMathMistral-7B is competitive with all the gpt-distilled
models. It is second-best to WizardMath on
GSM8K, and bested by ToRA by 0.1% on MATH.[11]
10The actual number of MATH instances is 511,677.
11Our grading script scores the publicly released ToRA outputs about 2-3% lower than the reported numbers. We believe
that ToRA uses some heuristics to extract answers when the
-----
Table 3: Comparison of our OpenMath-finetuned models with their gpt-distilled counterparts. We present results
on popular mathematical reasoning tasks, namely, GSM8K, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, and
MAWPS. For ToRA and MAmmoTH, we report the results of their "-Code(r)" versions whenever available since they
are always better than their non-code counterparts. SC (k=50) denotes self-consistency decoding with 50 samples.
We highlight the following results for a parameter range: best with SC, best and second best with greedy decoding.
**Size Base Model Model** **GSM8K MATH GSM-Hard SVAMP TabMWP ASDiv MAWPS**
- GPT-4 (Code Interpreter) 97.0 69.7 77.6 94.8 95.9 92.6 97.7
WizardMath 54.9 10.7 - 36.1 - - -
Llama-2
MetaMath 66.4 19.4 -
MAmmoTH 59.4 33.4 - 71.4 - - -
ToRA 72.6 **44.6** 56.0 70.4 51.6 78.7 91.3
CodeLlama + SC (k=50) 76.8 52.5 - - - - -
OpenMath-CodeLlama 75.9 43.6 60.1 79.6 56.0 77.7 93.5
+ SC (k=50) 84.8 55.6 - - - - -
MetaMath-Mistral-7B 77.7 28.2 - - - - -
MAmmoTH-7B-Mistral 75.0 40.0 - - - - -
Mistral WizardMath **83.2** 33.0 - - - - -
OpenMath-Mistral-7B 80.2 44.5 **63.7** **82.4** **70.0** **82.7** **95.4**
+ SC (k=50) 86.9 57.2 - - - -
WizardMath 63.9 14.0 - 51.9 - - -
Llama-2
MetaMath 72.3 22.4 - - - - -
MAmmoTH 64.7 36.3 - 73.7 - - -
ToRA 75.8 **48.1** 60.5 75.7 **65.4** **81.4** 92.5
CodeLlama + SC (k=50) 80.4 55.1 - - - - -
OpenMath-CodeLlama **78.8** 45.5 **61.9** **78.8** 59.7 81.2 **93.6**
7B
13B
+ SC (k=50) 86.8 57.6 - - - - -
MAmmoTH 72.7 43.6 - **84.3** - - -
ToRA **80.7** **51.0** 63.7 80.5 **70.5** **84.2** 93.3
+ SC (k=50) 85.1 60.0 - - - - -
OpenMath-CodeLlama **80.7** 48.3 **64.0** 83.6 66.0 82.7 **94.9**
34B CodeLlama
+ SC (k=50) 88.0 60.2 - - - -
Llama-2
70B
WizardMath 81.6 22.7 - 71.8 - - -
MetaMath 82.3 26.6 - - - - -
MAmmoTH 76.9 41.8 - 82.4 - - -
Llama-2 ToRA 84.3 49.7 **67.2** 82.7 74.0 **86.8** 93.8
+ SC (k=50) 88.3 56.9 - - - - -
OpenMath-Llama2 **84.7** 46.3 65.7 85.0 70.8 84.3 95.6
+ SC (k=50) 90.1 58.3 - - - - -
OpenMath-CodeLlama 84.6 **50.7** 66.6 **87.8** **74.2** 84.7 **95.7**
CodeLlama
+ SC (k=50) 90.8 60.4 - - - - -
Our models easily outperform both MAmmoTH
and MetaMath, even when controlling for the base
fine-tuned model. Since WizardMath and ToRA
finetuning datasets are not publicly available yet,
OpenMathInstruct-1 presents a superior alternative
to the publicly available MetaMathQA and MathInstruct datasets, which are used to fine-tune MetaMath and MAmmoTH, respectively.
With the increase in model parameters, our models continue to outperform MAmmoTH and MetaMath substantially. Compared to ToRA, with
greedy decoding, we see a meaningful drop in performance on MATH, though our models are equal
or better on GSM8K. With self-consistency (SC)
decoding, however, our models outperform ToRA
model doesn’t generate answers in the correct format.
on both MATH and GSM8K. The substantial gains
with SC can be attributed to the diversity of our
fine-tuning data.
**4.1** **Ablations**
We perform ablation experiments with the Mistral7B as the base model. We report results on the
1K-sized validation subsets for MATH and GSM8K
created by us.
**4.1.1** **Fair vs Naive Downsampling**
We finetune the base model on a dataset of 128K
instances created by combining 64K naive or fair
downsampled instances from the GSM8K and
MATH portion of the data. Table 4 shows that
the model fine-tuned on the data downsampled with
fair sampling outperforms the one created by naive
-----
Table 4: Comparison of performance of fair vs naive sampling on our validation subset of GSM8K and MATH.
Prompt GSM8K MATH
Random 74.3 35.0
Fair 75.3 37.0
downsampling. The performance gap is particularly
substantial for MATH, which suffers from a graver
data imbalance than GSM8K in our corpus.
**4.1.2** **Impact of Fine-Tuning Dataset Size**
Table 5: Effect of fine-tuning dataset size on performance on our validation subset of GSM8K and MATH.
Dataset Size GSM8K MATH
128K 75.3 37.0
256K 79.0 38.6
512K 81.0 41.6
To determine the impact of the size of the
fine-tuning dataset, we create datasets of size
128K/256K/512K by combining 64K/128K/256K
fair downsampled subsets of GSM8K and MATH.
Table 5 shows that the performance increases on
both GSM8K and MATH with the increase in the
fine-tuning dataset size. We didn’t find benefit from
training the models for more steps, so the performance gain is attributable to the increased data size.
**4.1.3** **MATH-only Ablations**
This section presents the ablation results for only
the MATH portion of OpenMathInstruct-1. In all
experiments, we finetune the base model on a 128K
fair downsampled subset to control for the impact
of data size.
Table 6: Comparison of default vs subject-wise prompt
performance on our MATH validation subset.
Prompt Pass@1 SC (k=4)
Default 39.1 41.7
Subject 38.3 44.5
**Default vs Subject-Specific Prompting.** In section 2.2.2, we motivated using subject-specific
prompts, which ultimately didn’t result in much
training set coverage difference. But how are the
_solutions generated by the combination of subject-_
_wise prompts different from a single default prompt?_
To answer this, we create a subset of 128K instances
generated with the default prompt/subject-specific
prompts.
Table 6 compares the finetuning performance
on these two splits on our MATH validation subset. While the model trained on the subject-specific
subset trails the model trained on the default subset for greedy decoding; the trend is decisively reversed for self-consistent decoding with four samples. This suggests that the subset collected with
subject-specific prompts has a higher diversity of
solutions than the ones collected using a single
prompt.
Table 7: Impact of code-preferential data selection on
our MATH validation subset performance.
Prompt Pass@1 SC (k=4)
Default 37.4 45.2
Majority-Code 39.8 42.6
Any-Code 39.4 42.6
**Code-Preferential Subsets.** In this ablation, we
determine the impact of code-preferential solution selection strategies proposed in Section 2.4.2.
Table 7 shows that code-preferential solution
strategies aid the greedy decoding performance.
However, the reduction in solution diversity arguably results in decreased performance with selfconsistency decoding (text-based solutions are only
1/3rd of the original corpus to begin with). Based
on these results and because any-code results in a
smaller finetuning dataset (512K compared to 664K
with majority-code), we chose to use the any-code
subset in our finetuning data blend.
**5** **Analysis**
Table 8: Performance split based on solution format.
Solutions without ⟨llm-code-output⟩ string are considered text-based.
**Solution Type** **Accuracy (in %)** **Count**
Text-based 32.0 278
Code + Text 45.3 722
Total 41.6 1000
We analyze the performance of the ablation
model trained on 512K instances from Section 4.1.2.
We focus our discussion on the MATH benchmark
-----
(a) Subject-wise performance (b) Level-wise performance
Figure 5: Performance split by subjects and levels on our MATH validation subset.
Table 9: Types of errors and their counts.
**Error Type** **Count**
Text Reasoning Error 189
Code Reasoning Error 292
Code Execution Error 78
Code timeout 15
Max code executions reached 10
Total 584
where this model scores 41.6% on our MATH validation subset.
**5.1** **Performance-split by Subjects and Levels**
Figure 5 presents the performance split by subjects
and levels on the MATH validation subset. Among
subjects, we see that the model’s worst performance
is on geometry, which can be attributed to the lack
of multi-modality in our base models (Zhou et al.,
2024). We see a monotonic decrease in performance with the increase in hardness level which
is to be expected (Zhou et al., 2024). The model
scores 72.4% on Level 1 problems and only 16.3%
on the hardest problems, i.e., Level 5.
**5.2** **Error Analysis**
Table 8 shows that the model performs an absolute
13.3% better when using code for answering questions in comparison to when not using it. We find
that some of the errors made by text-based solution could have been avoided by preferring codebased solution; see Figure 15 for a sample solution
where the model makes an arithmetic calculation
error. This analysis provides another support for our
proposal and use of code-preferred solutions from
Section 2.4.2.
Table 9 presents the count of different error categories. For code-based solutions, we find that almost 74% of the errors in such solutions are due to
reasoning error and the remaining 26% attributable
to execution-related issues. We present sample solutions from these error categories in Appendix B.3.
**6** **Related Work**
**Mathematical Reasoning and LLMs.** Recently,
a plethora of work has been done on enhancing the
mathematical reasoning capabilities of LLMs. Inference techniques such as Chain-of-Thought (Wei
et al., 2022), its programmatic counterpart, Program
of Thought (Gao et al., 2023; Chen et al., 2023b),
Self-Consistency (Wang et al., 2023), and SelfVerification (Zhou et al., 2024) have been shown to
significantly improve the reasoning capabilities of
LLMs without any further training.
Pretraining language models on math-heavy content has resulted in foundation LLMs such as Minerva (Lewkowycz et al., 2022), Galactica (Taylor
et al., 2022), and Llemma (Azerbayev et al., 2023)
with stronger mathematical skills out-of-the-box. A
more direct approach of dataset-specific training
does instruction fine-tuning on problem-solution
pairs derived from math reasoning datasets. Our
work falls in this latter category and bears similarity with recent work such as RFT (Yuan et al.,
2023), ToRA (Gou et al., 2024), MAmmoTH (Yue
et al., 2024), MetaMath (Yu et al., 2024) and MathCoder (Wang et al., 2024). We differ from the pre
-----
vious work along one factor or a combination of
the following factors: (a) reliance on GPT-3.5/4,
(b) solution format, and (c) use of ground truth text
solution in synthesizing code-based solutions.
**Knowledge Distillation via Synthetic Data.** Recent work exploring the use of targeted synthetic
data generated by large foundation models for pretraining/instruction tuning smaller LLMs has led
to tremendous progress in reasoning skills of these
smaller LLMs (Gunasekar et al., 2023; Li et al.,
2023; Eldan and Li, 2023; Mukherjee et al., 2023;
Xu et al., 2023; Liu et al., 2023).
**7** **Conclusion**
We introduce OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution
pairs with a commercially permissive license. Compared to previous work, OpenMathInstruct-1 is at
least four times bigger. The problems are taken
from the training set of GSM8K and MATH benchmarks, and the solutions are synthesized by fewshot prompting the Mixtral model. With our proposed prompting novelty of using masked text solu_tions and some brute-force scaling, we achieve train-_
ing set coverage of 99.9% for the GSM8K benchmark and 93% for the challenging MATH benchmark. The quality of these synthesized solutions is
illustrated by finetuning experiments, which show
models achieving performance comparable to or
outperforming their gpt-distilled counterparts. To
support the open-source efforts in this direction, we
publicly release all our fine-tuned models, code, and
the OpenMathInstruct-1 along with a further 6.6M
incorrect sampled solutions.
**Acknowledgement**
We want to thank the NeMo team at NVIDIA for
their support.
**References**
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q. Jiang,
Jia Deng, Stella Biderman, and Sean Welleck. 2023.
[Llemma: An Open Language Model For Mathemat-](http://arxiv.org/abs/2310.10631)
[ics.](http://arxiv.org/abs/2310.10631)
Lingjiao Chen, Matei Zaharia, and James Zou. 2023a.
[How is ChatGPT’s behavior changing over time?](http://arxiv.org/abs/2307.09009)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023b. Program of Thoughts
Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. TMLR.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training Verifiers to Solve Math Word Prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
[Ronen Eldan and Yuanzhi Li. 2023. TinyStories: How](http://arxiv.org/abs/2305.07759)
[Small Can Language Models Be and Still Speak Co-](http://arxiv.org/abs/2305.07759)
[herent English? arXiv.](http://arxiv.org/abs/2305.07759)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. PAL: Program-aided Language
Models. In ICML, pages 10764–10799. PMLR.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
[Chen. 2024. ToRA: A Tool-Integrated Reasoning](https://openreview.net/forum?id=Ep0TtjVoap)
[Agent for Mathematical Problem Solving. In ICLR.](https://openreview.net/forum?id=Ep0TtjVoap)
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio
César Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo
de Rosa, Olli Saarikivi, Adil Salim, Shital Shah,
Harkirat Singh Behl, Xin Wang, Sébastien Bubeck,
Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and
Yuanzhi Li. 2023. [Textbooks Are All You Need.](http://arxiv.org/abs/2306.11644)
_arXiv._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring Mathematical
Problem Solving With the MATH Dataset. NeurIPS.
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky,
Ilya Sutskever, and Ruslan R Salakhutdinov. 2012.
Improving neural networks by preventing coadaptation of feature detectors. _arXiv preprint_
_arXiv:1207.0580._
Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi
Wang, Hongkun Yu, and Jiawei Han. 2023. Large
Language Models Can Self-Improve. In EMNLP.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix, and
[William El Sayed. 2023. Mistral 7B. arXiv.](http://arxiv.org/abs/2310.06825)
Albert Q. Jiang, Alexandre Sablayrolles, Antoine
Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas,
Emma Bou Hanna, Florian Bressand, Gianna Lengyel,
Guillaume Bour, Guillaume Lample, Lélio Renard
Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre
Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet,
Thibaut Lavril, Thomas Wang, Timothée Lacroix, and
[William El Sayed. 2024. Mixtral of Experts.](http://arxiv.org/abs/2401.04088)
O. Kuchaiev, J. Li, H. Nguyen, O. Hrinchuk, R. Leary,
B. Ginsburg, S. Kriman, S. Beliaev, V. Lavrukhin,
-----
J. Cook, et al. 2019. NeMo: a toolkit for building
AI applications using neural modules. In Systems for
_ML Workshop, NeurIPS._
Aitor Lewkowycz, Anders Johan Andreassen,
David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Venkatesh Ramasesh, Ambrose Slone, Cem
Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu,
Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra.
2022. Solving Quantitative Reasoning Problems with
Language Models. In NeurIPS.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del
Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023.
Textbooks Are All You Need II: phi-1.5 technical
report. arXiv.
Minpeng Liao, Wei Luo, Chengxi Li, Jing Wu, and Kai
Fan. 2024. [MARIO: MAth Reasoning with code](http://arxiv.org/abs/2401.08190)
[Interpreter Output – A Reproducible Pipeline.](http://arxiv.org/abs/2401.08190)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
Schulman, Ilya Sutskever, and Karl Cobbe. 2023.
[Let’s Verify Step by Step. arXiv.](http://arxiv.org/abs/2305.20050)
Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan
Kulkarni, Yuanzhi Li, Anh Nguyen, Rachel Ward,
and Yi Zhang. 2023. TinyGSM: achieving> 80% on
GSM8k with small language models. arXiv preprint
_arXiv:2312.09241._
[Ilya Loshchilov and Frank Hutter. 2019. Decoupled](http://arxiv.org/abs/1711.05101)
[Weight Decay Regularization. arXiv.](http://arxiv.org/abs/1711.05101)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. WizardMath: Empowering Mathematical Reasoning for
Large Language Models via Reinforced Evol-Instruct.
_arXiv preprint arXiv:2308.09583._
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard
Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit,
Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and
Ashwin Kalyan. 2022. Lila: A Unified Benchmark
for Mathematical Reasoning. In EMNLP.
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar,
Sahaj Agarwal, Hamid Palangi, and Ahmed Awadal[lah. 2023. Orca: Progressive Learning from Complex](http://arxiv.org/abs/2306.02707)
[Explanation Traces of GPT-4.](http://arxiv.org/abs/2306.02707)
OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin,
Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello,
Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman,
Tim Brooks, Miles Brundage, Kevin Button, Trevor
Cai, Rosie Campbell, Andrew Cann, Brittany Carey,
Chelsea Carlson, Rory Carmichael, Brooke Chan,
Che Chang, Fotis Chantzis, Derek Chen, Sully Chen,
Ruby Chen, Jason Chen, Mark Chen, Ben Chess,
Chester Cho, Casey Chu, Hyung Won Chung, Dave
Cummings, Jeremiah Currier, Yunxing Dai, Cory
Decareaux, Thomas Degry, Noah Deutsch, Damien
Deville, Arka Dhar, David Dohan, Steve Dowling,
Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna
Eloundou, David Farhi, Liam Fedus, Niko Felix,
Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik
Goel, Tarun Gogineni, Gabriel Goh, Rapha GontijoLopes, Jonathan Gordon, Morgan Grafstein, Scott
Gray, Ryan Greene, Joshua Gross, Shixiang Shane
Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris,
Yuchen He, Mike Heaton, Johannes Heidecke, Chris
Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele,
Brandon Houghton, Kenny Hsu, Shengli Hu, Xin
Hu, Joost Huizinga, Shantanu Jain, Shawn Jain,
Joanne Jang, Angela Jiang, Roger Jiang, Haozhun
Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo
Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak
Khan, Logan Kilpatrick, Jong Wook Kim, Christina
Kim, Yongjik Kim, Hendrik Kirchner, Jamie Kiros,
Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk,
Andrew Kondrich, Aris Konstantinidis, Kyle Kosic,
Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai
Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy,
Chak Ming Li, Rachel Lim, Molly Lin, Stephanie
Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe,
Patricia Lue, Anna Makanju, Kim Malfacini, Sam
Manning, Todor Markov, Yaniv Markovski, Bianca
Martin, Katie Mayer, Andrew Mayne, Bob McGrew,
Scott Mayer McKinney, Christine McLeavey, Paul
McMillan, Jake McNeil, David Medina, Aalok Mehta,
Jacob Menick, Luke Metz, Andrey Mishchenko,
Pamela Mishkin, Vinnie Monaco, Evan Morikawa,
Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk,
David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev
Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub
Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy
Parparita, Alex Passos, Mikhail Pavlov, Andrew
Peng, Adam Perelman, Filipe de Avila Belbute Peres,
Michael Petrov, Henrique Ponde de Oliveira Pinto,
Michael, Pokorny, Michelle Pokrass, Vitchyr Pong,
Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae,
Aditya Ramesh, Cameron Raymond, Francis Real,
Kendra Rimbach, Carl Ross, Bob Rotsted, Henri
Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders,
Shibani Santurkar, Girish Sastry, Heather Schmidt,
David Schnurr, John Schulman, Daniel Selsam, Kyla
Sheppard, Toki Sherbakov, Jessica Shieh, Sarah
Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler,
Maddie Simens, Jordan Sitkin, Katarina Slama, Ian
Sohl, Benjamin Sokolowsky, Yang Song, Natalie
Staudacher, Felipe Petroski Such, Natalie Summers,
Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine
Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry
Tworek, Juan Felipe Cerón Uribe, Andrea Vallone,
Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright,
Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan
-----
Ward, Jason Wei, CJ Weinmann, Akila Welihinda,
Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu,
Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo,
Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan
Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao,
Tianhao Zheng, Juntang Zhuang, William Zhuk, and
[Barret Zoph. 2023. GPT-4 Technical Report.](http://arxiv.org/abs/2303.08774)
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle,
Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish
Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal
Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
[Thomas Scialom, and Gabriel Synnaeve. 2023. Code](http://arxiv.org/abs/2308.12950)
[Llama: Open Foundation Models for Code. arXiv.](http://arxiv.org/abs/2308.12950)
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas
Scialom, Anthony Hartshorn, Elvis Saravia, Andrew
Poulton, Viktor Kerkez, and Robert Stojnic. 2022.
[Galactica: A Large Language Model for Science.](http://arxiv.org/abs/2211.09085)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David
Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu,
Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini,
Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez,
Madian Khabsa, Isabel Kloumann, Artem Korenev,
Punit Singh Koura, Marie-Anne Lachaux, Thibaut
Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar
Mishra, Igor Molybog, Yixin Nie, Andrew Poulton,
Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang,
Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin
Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and
[Thomas Scialom. 2023. Llama 2: Open Foundation](http://arxiv.org/abs/2307.09288)
[and Fine-Tuned Chat Models.](http://arxiv.org/abs/2307.09288)
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
Mingjie Zhan, and Hongsheng Li. 2024. MathCoder:
Seamless Code Integration in LLMs for Enhanced
Mathematical Reasoning. In ICLR.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2023. Self-consistency improves chain
of thought reasoning in language models. In ICLR.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
Jiang. 2023. [WizardLM: Empowering Large](http://arxiv.org/abs/2304.12244)
[Language Models to Follow Complex Instructions.](http://arxiv.org/abs/2304.12244)
_arXiv._
Fei Yu, Anningzhe Gao, and Benyou Wang. 2023.
[Outcome-supervised Verifiers for Planning in Mathe-](http://arxiv.org/abs/2311.09724)
[matical Reasoning. arXiv.](http://arxiv.org/abs/2311.09724)
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. 2024. MetaMath: Bootstrap Your Own Mathematical Questions
for Large Language Models. In ICLR.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling Relationship on Learn-](http://arxiv.org/abs/2308.01825)
[ing Mathematical Reasoning with Large Language](http://arxiv.org/abs/2308.01825)
[Models.](http://arxiv.org/abs/2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
[2024. MAmmoTH: Building math generalist models](https://openreview.net/forum?id=yLClGs770I)
[through hybrid instruction tuning. In ICLR.](https://openreview.net/forum?id=yLClGs770I)
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. STaR: Bootstrapping Reasoning With
Reasoning. In NeurIPS.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun
Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song,
Mingjie Zhan, and Hongsheng Li. 2024. Solving
Challenging Math Word Problems Using GPT-4 Code
Interpreter with Code-based Self-Verification. In
_ICLR._
-----
**A** **Data Statistics**
**A.1** **MATH Training Data Split by Subjects**
Table 10: MATH training set decomposition by subjects.
|Subject|# of Training Instances|
|---|---|
|Algebra Geometry Intermediate Algebra Number Theory Prealgebra Precalculus Probability|1744 870 1295 869 1205 746 771|
|Total|7500|
**A.2** **Code-Block Count Frequencies**
Figure 6: Distribution of solutions with different codeblock counts in OpenMathInstruct-1.
The code-interpreter allows solving a problem
by breaking it into multiple code blocks. Most of
the solutions in OpenMathInstruct-1 have 0 or 1
code blocks, 16.4% and 81.7% of the solutions,
respectively. The remaining 2% of the solutions
have two or more code blocks. Figure 7 shows an
example of a solution using two code blocks.
**A.3** **Frequency of solutions per problem in**
**OpenMathInstruct-1**
Figure 12 shows the histogram of the frequency
of the number of solutions in OpenMathInstruct-1
for problems in GSM8K and MATH. For GSM8K,
57.4% problems (4292 out of 7473) have more than
half the sampled solutions, i.e., > 128 solutions out
of 256 sampled solutions, that are valid. For MATH,
the solution frequency histogram mimics a power
Solution with Two Code Blocks
Question
Let d and e denote the solutions of 2x[2] + 3x − 5 = 0.
What is the value of (d − 1)(e − 1)?
Generated Solution
Let’s solve the quadratic equation using Sympy:
<llm−code>
from sympy import Symbol, solve, Eq, simplify
# Define the variable x
x = Symbol('x')
# Define the equation
eq = 2*x**2 + 3*x −5
# Solve the equation
roots = solve(eq, x)
# Print the solutions
print("The solutions are:")
for root in roots:
print("x = ", simplify(root))
</llm−code>
<llm−code−output>
The solutions are:
x = −5/2
x = 1
</llm−code−output>
So d = −5/2 and e = 1.
Let’s calculate the answer:
<llm−code>
d = −5/2
e = 1
d_minus_1 = d −1
e_minus_1 = e −1
result = d_minus_1 * e_minus_1
print("The value of (d−1)(e−1) is:", result)
</llm−code>
<llm−code−output>
The value of (d−1)(e−1) is: −0.0
</llm−code−output>
So the answer is −0.0 .
Figure 7: Sample solution with multiple code blocks.
The first code block computes the roots of the given
quadratic equation and the second block computes the
expression involving them.
law curve, with most problems having very few solutions and very few problems with many solutions.
Concretely, 19% problems (1324 out of 6978) with
any solutions in OpenMathInstruct-1 have ≤ 10
valid solutions out of the 896 sampled solutions
per problem. To further highlight the hardness of
the MATH benchmark in comparison to GSM8K,
just 3.1% problems (233 out of 7500) have more
than half the sampled solutions ( > 448 out of 896
-----
Code + Text Reasoning
Question
Let p(x) = _x, and q(x) = 8x[2]_ + 10x 3. The
_[√]−_ _−_
domain of p(q(x)) can be written in the form a ≤ _x ≤_ _b._
Find b − _a._
Generated Solution
Let's use sympy to solve for the domain of $p(q(x))$.
<llm−code>
import sympy as sp
# define the symbols
x = sp.symbols('x')
# define the functions
p = sp.sqrt(−x)
q = 8*x**2 + 10*x −3
# solve for the domain of p(q(x))
domain = sp.solve(q >= 0, x)
# print the domain
print(domain)
</llm−code>
<llm−code−output>
((1/4 <= x) & (x < oo)) | ((−oo < x) & (x <= −3/2))
</llm−code−output>
So the domain is $x \in [1/4, \infty) \cup (−\infty, −3/2)
$.
The difference between the upper and lower bounds is
_b −_ _a = 1/4 −_ (−3/2) .
Figure 8: Sample solution where the model performs the
initial reasoning in code and performs the last reasoning
steps to answer the question in natural language. This
shows the strength of the code-interpreter solution format
for mathematical reasoning.
sampled solutions), that are valid.
**B** **Miscellaneous**
**B.1** **Training Hyperparameters**
Table 11: Details of training hyperparameters for finetuning the different base models. LR=Learning rate,
TP=Tensor Parallel, PP=Pipeline Parallel.
**Epochs** **LR** **# of GPUs** **TP** **PP**
**Mistral-7B** 4 1e-6 64 4 1
**CodeLlama-7B** 4 2e-5 64 4 1
**CodeLlama-13B** 4 2e-5 64 4 1
**CodeLlama-34B** 4 1e-5 128 8 1
**Llama 2-70B** 2 1e-5 256 8 2
**CodeLlama-70B** 3 1e-5 256 8 2
Table 11 details the hyperparameters used for
finetuning the different base models. Due to com
Shortcut Solution
Question
Jolene and Phil have four children, each with the same
birthday. They gave birth to their first child exactly 15
years ago. They gave birth to their second child exactly
one year after the birth of their first child. They gave birth
to their third child on the fourth birthday of their second
child. Two years after the birth of their third child, they
gave birth to their fourth child. How old, in years, is their
fourth child?
Reference Solution
Their 1st child was born 15 years ago, and therefore is 15
years old.
Their 2nd child was born 1 year after their 15-year-old
child, and therefore is 15-1=14 years old.
Their 3rd child was born 4 years after their 14-year-old
child, and therefore is 14-4=10 years old.
Their 4th child was born 2 years after their 10-year-old
child, and therefore is 10-2=8 years old.
Generated Solution
Let’s write down a python script to answer this problem.
<llm−code>
children = ['first', 'second', 'third', 'fourth']
child_age = [15, 14, 10, 8]
number_of_children = len(children)
children = children[:−1] # let's get rid of the youngest
one since we already know that one
dictionary = dict(zip(children, child_age))
dictionary
</llm−code>
<llm−code−output>
{'first': 15, 'second': 14, 'third': 10}
</llm−code−output>
The answer is 8 years old.
Figure 9: Sample shortcut solution generated from
prompting the LLM with the reference solution in context. The code initializes the variable child_age with
the children’s ages copied from the reference solution.
The generated code then performs ad-hoc computations
before answering the question in the text.
pute limitations, for 70B models, we limit the training to 2 epochs for Llama2 and 3 epochs for CodeLlama. For CodeLlama models, we use the learning
rate of 2e-5 used by ToRA models (Gou et al., 2024).
For the Mistral-7B model, we use a lower learning
rate of 1e-6 based on our hyperparameter search.
**B.2** **Sample Solutions**
In this section, we illustrate sample solutions representative of different phenomena encountered during the creation of OpenMathInstruct-1.
- Figure 8 shows a sample solution that utilizes
the strength of the code-interpreter solution
-----
Solution Requiring Trimming
Question
Caroline can make eleven lassis out of two mangoes.
How many lassis can she make out of twelve mangoes?
Generated Solution
Let f (x) be the number of lassis she can make out of x
mangoes. From the question, we can see that f (2) = 11.
Using basic algebra, we can see that f (12) = 66 .
Let's verify this with sympy.
<llm−code>
import sympy as sp
# define the unknown function
x, y = sp.symbols('x y')
# let's define the parabola
parabola = sp.Eq(y, x**2 + b*x + c)
# substitute points into parabola equation and solve for
b, c
point_1 = parabola.subs({x: −1, y: −11})
point_2 = parabola.subs({x: 3, y: 17})
solutions = sp.solve((point_1,point_2), (b, c))
solutions[b]
</llm−code>
Trimmed Solution
Let f (x) be the number of lassis she can make out of x
mangoes. From the question, we can see that f (2) = 11.
Using basic algebra, we can see that f (12) = 66 .
Figure 10: Sample solution where the model generates
unrelated text after answering the question. During postprocessing, we trim the solution by removing text beyond the solution line with the answer.
format with reasoning in both code and natural
language.
- Figure 9 demonstrates a sample solution generated when the reference solution is used in the
few-shot prompt. The model copies the children’s ages from the reference solution and
initializes the child_age variable. Such solutions are the reason why we propose the use
of masked text solutions in the prompt.
- Figure 11 shows a sample solution where the
generated solution gets the right answer but
through flawed reasoning. These semantically
_noisy solutions are much harder to detect with_
simple syntactic filters. One solution might
be to use models like GPT-4 to grade the generated solutions as done in recent work (Gu
Flawed Reasoning
Question
The areas of two squares are in the ratio 25 : 36. What is
the ratio of their perimeters? Express your answer in the
form a : b.
Reference Solution
In general, if the ratio of the perimeters of two similar
figures is a/b, then the ratio of their areas is (a/b)[2]. So
in this case, (a/b)[2] = 25/36, or a/b = 5/6.
So the answer is 5 : 6 .
Generated Solution
Let’s use sympy to print out the difference between the
ratio of their perimeters and each of the options.
<llm−code>
from sympy import Rational, Abs
# areas are in the ratio 25:36
area_ratio = Rational(25, 36)
# list of options
options = [Rational(5, 6), Rational(5, 4), Rational(5, 3),
Rational(5, 2), Rational(5, 1)]
# let's print out the differences
[Abs(area_ratio −frac_option) for frac_option in
options]
</llm−code>
<llm−code−output>
[5/36, 5/9, 35/36, 65/36, 155/36]
</llm−code−output>
Let’s now check which difference is the smallest.
<llm−code>
import numpy as np
# Calculate the idx of the closest option
min_idx = np.argmin([5/36, 5/9, 35/36, 65/36, 155/36])
# Print the closest option
print(options[min_idx])
</llm−code>
<llm−code−output>
5/6
</llm−code−output>
So the answer is 5 : 6 .
Figure 11: Sample solution where the model uses completely flawed reasoning to arrive at the correct answer.
Such flawed reasoning is hard to detect, but fortunately,
these solutions are rare.
nasekar et al., 2023). We leave the work of developing such semantic filters for future work.
- Figure 10 illustrates a sample solution where
the solution goes beyond answering the question, with the model generating coherent but
unrelated text for the input problem.
-----
(a) GSM8K (b) MATH
Figure 12: Histogram of the number of solutions for problems in GSM8K and MATH.
Table 12: Instructions for prompting the model.
**Task** **Instruction**
**B.3** **Error Analysis of Solutions Generated by**
**Fine-tuned Model**
In this section, we illustrate instances of the different kind of errors made by the ablation model
analyzed in Section 5.
- Figure 13 shows a sample solution where the
code generated in the solution runs into an
execution error. Nevertheless, the model still
generates an incorrect answer to the question.
- Figure 14 demonstrates a sample where the
model performs correct reasoning while generating the code. However, the model falters
at copying the code output and ends up generating a new answer.
- Figure 15 illustrates a sample where the model
performs correct reasoning but falters in arithmetic calculation (multiplication). Failure at
arithmetic computation has been a known issue with LLMs and justifies our choice of preferring code-based solutions.
**B.4** **Instructions for Few-shot Data Generation**
Table 12 details the instructions used for the different generation tasks.
**B.5** **Masked Text Solution Generation**
We generate masked text solutions using a pipeline
very similar to the solution generation pipeline. We
use the following procedure:
- Generate eight candidate masked solutions.
- Filter out solutions that have very different
lengths.
Few-shot
prompt
(I)
Few-shot
prompt
text
masking
(Imask)
Zero-shot
prompt for
fine-tuned
model
Here are some examples of questions
and solutions followed by a new question that you need to solve. Make sure
to put the answer (and only answer)
inside \boxed{}.
Here are some examples of questions,
solutions, and their masked solutions
followed by a new question and solution that you need to mask. The goal
is to ensure that the masked solution
doesn’t have any of the numerical values not mentioned in the question. So
intermediate values calculated in the
solution are to be masked by single
letter capital variables, such as M, N.
System: You’re an expert Python programmer and mathematician. Help
the user to solve this problem using
code when necessary. Make sure to
put the answer (and only answer) inside \boxed{}.
- Filter out solutions that have the final answer.
This ensures that our masked solutions are at
least masking the final answer.
- Rank solutions where higher-ranked solutions
have fewer numbers. This is essentially a
heuristic to ensure we pick solutions where
the intermediate computation is masked out
the most.
-----
Last Step Mishap
Question
What is the 100th digit to the right of the decimal point in
the decimal representation of [13]90 [?]
Generated Solution
We can use sympy to calculate the decimal representation
of [13]90 [and then extract the 100th digit.]
<llm−code>
from sympy import Rational, N
# Calculate the decimal representation of 13/90
decimal_rep = N(Rational(13, 90), 100)
# Extract the 100th digit
digit = int(str(decimal_rep)[−1])
print(digit)
</llm−code>
<llm−code−output>
4
</llm−code−output>
So the 100th digit to the right of the decimal point in the
decimal representation of [13]90 [is][ 7][ .]
Figure 14: The model performs correct reasoning till
code generation. But rather than copying the code output
(4) it concludes with a different answer (7).
Code Execution Error
Question
The area of a triangle is 600 square feet. Find the altitude,
in feet, of the triangle if the length of the corresponding
base is 30 feet.
Generated Solution
Let’s use sympy to solve this problem.
<llm−code>
from sympy import symbols, Eq, solve
# define the variables
base, altitude = symbols('base altitude')
# area of the triangle
area_eq = Eq(base * altitude / 2, 600)
# length of the corresponding base
base_eq = Eq(base, 30)
# solve the equations
solutions = solve((area_eq, base_eq), (base, altitude))
# print the altitude
print(solutions[altitude])
</llm−code>
<llm−code−output>
[0;31m−−−−−−−−−−−−−−−−−−−−−−−−−−[0m
[0;31mTypeError[0m Traceback (most recent call last)
File [0;32m<ipython−input−1−f95732badac7>:16[0m
[1;32m 13[0m solutions [38;5;241m=[39m solve((
area_eq, base_eq), (base, altitude))
[1;32m 15[0m [38;5;66;03m# print the altitude[39;00m
[0;32m−−−> 16[0m [38;5;28mprint[39m([43
msolutions[49m[43m[[49m[43maltitude[49m[43m][49
m)
[0;31mTypeError[0m: list indices must be integers or
slices, not Symbol
</llm−code−output>
So the altitude is 20 .
Figure 13: Sample solution where the model generated
code runs into execution error. Nevertheless, the model
still generates an incorrect answer to the question.
**B.6** **Few-shot prompts used for GSM8K and**
**MATH**
Calculation Error
Question
Our basketball team has 12 members, each of whom can
play any position. In how many ways can we choose a
starting lineup consisting of a center, a power forward, a
shooting forward, a point guard, and a shooting guard?
Generated Solution
We can choose a starting lineup in 12×11×10×9×8 =
11880 ways.
Figure 15: Sample solution where the model performs
the right text reasoning but makes calculation mistake.
-----
Listing 1: Few-shot prompt for GSM8K.
Here are some examples of questions and solutions followed by a new question that you need to
solve.
Make sure to put the answer (and only answer) inside \boxed{}.
Question:
Missy had a giant piggy bank in her bedroom. Every day she would search the house for change
to put in her bank. After 4 years, the bank was opened and it contained $450 in change. If
the second, third, and fourth-year she doubled the amount of money she put in the bank from
the amount she had put in the previous year, how much money, in dollars, did she put in the
bank the first year?
My solution:
Missy started with some amount of money in the first year.
She was then doubling the amount of money every year for 4 years and ended up with $450.
Let's write down an equation for this problem and solve it using sympy.
<llm-code>
from sympy import solve, symbols
first_year_money = symbols('first_year_money')
second_year_money = 2 * first_year_money
third_year_money = 2 * second_year_money
fourth_year_money = 2 * third_year_money
# Solve for first year money given that the combined saving is 450
result = solve(first_year_money + second_year_money + third_year_money + fourth_year_money 450, first_year_money)[0]
result
</llm-code>
<llm-code-output>
30
</llm-code-output>
Thus Missy put \boxed{30} dollars in the bank the first year.
Question:
Pete has to take a 10-minute walk down to the train station and then board a 1hr 20-minute
train to LA. When should he leave if he cannot get to LA later than 0900 hours? (24-hr time)
My solution:
Since Pete needs to take a 10 minutes walk and then a 1 hour 20 minutes train ride, he will
spend a total of 1 hour and 30 minutes.
This means that he needs to leave 1 hour and 30 minutes earlier than 09:00 hours.
Subtracting 1 hour and 30 minutes from 09:00 hours we get \boxed{07:30} hours.
Question:
Mark deposited $88 in a bank. Bryan deposited $40 less than five times as much as Mark. How
much did Bryan deposit in the bank?
My solution:
Let's solve this problem using Python code.
<llm-code>
mark_deposit = 88
five_times_mark_deposit = 5 * mark_deposit
bryan_deposit = five_times_mark_deposit - 40
bryan_deposit
</llm-code>
<llm-code-output>
400
</llm-code-output>
So Bryan deposited \boxed{400} dollars in the bank.
-----
Question:
A department store displays a 20% discount on all fixtures. What will be the new price of a
25 cm high bedside lamp that was worth $120?
My solution:
Let's solve this problem using Python code.
<llm-code>
discount_percent = 20
price_before_discount = 120
discount_portion = discount_percent / 100
discount_amount = price_before_discount * discount_portion
price_after_discount = price_before_discount - discount_amount
price_after_discount
</llm-code>
<llm-code-output>
96.0
</llm-code-output>
So the new price of the lamp is \boxed{96} dollars.
Question:
James opens up a flower shop. He needs to pay rent of $1200 a week with an additional 20% of
rent to pay for utilities and he has 2 employees per shift with the store open 16 hours a day
for 5 days a week. If he pays each employee $12.50 an hour, what are his weekly expenses to
run the store?
My solution:
The cost consists of rent, utilities, and employee salaries. Let's compute each of them
separately and then add them up.
<llm-code>
# rent cost
rent_per_week = 1200
# utility cost
utility_per_week = rent_per_week * 20 / 100
# employee cost
employee_work_hours = 16
work_days_per_week = 5
employee_work_hours_per_week = work_days_per_week * employee_work_hours
number_of_employees = 2
employee_cost_per_hour = 12.5
employees_cost_per_week = number_of_employees * employee_work_hours_per_week *
employee_cost_per_hour
# add the three to get total cost
cost_per_week = rent_per_week + utility_per_week + employees_cost_per_week
cost_per_week
</llm-code>
<llm-code-output>
3440.0
</llm-code-output>
Thus James's weekly expences add up to \boxed{3440} dollars.
Question:
Tim has 30 less apples than Martha, and Harry has half as many apples as Tim. If Martha has
68 apples, how many apples does Harry have?
My solution:
-----
Listing 2: Few-shot prompt for MATH.
Here are some examples of questions and solutions followed by a new question that you need to
solve.
Make sure to put the answer (and only answer) inside \boxed{}.
Question:
A parabola with equation $y=x^2+bx+c$ passes through the points $(-1,-11)$ and $(3,17)$. What
is $c$?
My solution:
Let's write down an equation for the parabola and solve for $c$ using sympy.
<llm-code>
import sympy as sp
# define the symbols
x, y, b, c = sp.symbols('x y b c')
# define the parabola equation
parabola_eq = sp.Eq(y, x**2 + b*x + c)
# the parabola passes through the points (-1,-11) and (3,17)
# so we substitute these points into the parabola equation
point_1 = parabola_eq.subs({x: -1, y: -11})
point_2 = parabola_eq.subs({x: 3, y: 17})
# we now have two equations and two unknowns (b and c)
# we will solve for b and c
solutions = sp.solve((point_1,point_2), (b, c))
solutions[c]
</llm-code>
<llm-code-output>
-7
</llm-code-output>
So c is \boxed{-7}
Question:
Let $f(x)$ be an odd function. Is $f(f(x))$ even, odd, or neither?
Enter "odd", "even", or "neither".
My solution:
The definition of an odd function is that $f(-x) = -f(x)$.
Applying this to $f(f(-x))$ we get $f(f(-x)) = f(-f(x)) = -f(f(x))$.
Thus, $f(f(x))$ is an \boxed{odd} function.
Question:
At the 2007 Math Olympics, Team Canada won $17$ out of a possible $100$ medals. Which one of
the following is closest to the fraction of medals that they won? $$
\frac{1}{4} \qquad \frac{1}{5} \qquad \frac{1}{6} \qquad \frac{1}{7} \qquad \frac{1}{8}
$$
My solution:
Let's use sympy to print out the differences between the fraction of medals that Canada won
and each of the options.
<llm-code>
from sympy import Rational, Abs
# team Canada won 17 out of 100 medals
medal_frac = Rational(17, 100)
# list of options
-----
options = [Rational(1, 4), Rational(1, 5), Rational(1, 6), Rational(1, 7), Rational(1, 8)]
# let's print out the differences
[Abs(medal_frac - frac_option) for frac_option in options]
</llm-code>
<llm-code-output>
[2/25, 3/100, 1/300, 19/700, 9/200]
</llm-code-output>
Let's now check which difference is the smallest.
<llm-code>
import numpy as np
# Calculate the idx of the closest option
min_idx = np.argmin([2/25, 3/100, 1/300, 19/700, 9/200])
# Print the closest option
print(options[min_idx])
</llm-code>
<llm-code-output>
1/6
</llm-code-output>
So the answer is \boxed{1/6}.
Question:
A rectangular box $P$ is inscribed in a sphere of radius $r$. The surface area of $P$ is 384,
and the sum of the lengths of its 12 edges is 112. What is $r$?
My solution:
Let the dimensions of $P$ be $x$, $y$, and $z$.
The sum of the box's edges is $4(x + y + z)$ and the surface area is $2xy + 2yz + 2xz$.
The diameter of the sphere is $2r$ and it's equal to the diagonal of the box.
Let's now write down the equations based on the above information and solve them using sympy.
<llm-code>
from sympy import symbols, Eq, solve
# define the variables
x, y, z, r = symbols('x y z r')
# equations based on the given information
eq1 = Eq(2 * (x*y + x*z + y*z), 384)
eq2 = Eq(4 * (x + y + z), 112)
# the diagonal of the box is the diameter of the sphere
diagonal_eq = Eq(x**2 + y**2 + z**2, (2*r)**2)
# solve the equations
solutions = solve((eq1, eq2, diagonal_eq), (x, y, z, r))
# let's see which values of r we get
[solution[-1] for solution in solutions]
</llm-code>
<llm-code-output>
[-10, -10, 10, 10]
</llm-code-output>
Since the radius of the sphere is positive, we get $r = \boxed{10}$.
Question:
A bee starts flying from point $P_0$. She flies $1$ inch due east to point $P_1$. For $j \ge
1$, once the bee reaches point $P_j$, she turns $30^{\circ}$ counterclockwise and then flies
$j+1$ inches straight to point $P_{j+1}$. When the bee reaches $P_{2015},$ how far from $P_0$
is she, in inches?
-----
My solution:
We can represent the rotation action via $\omega = e^{\pi i/6}$.
Let's assume the bee starts at the origin, so $P_{2015}$ is at the point \[z = 1 + 2 \omega +
3 \omega^2 + 4 \omega^3 + \dots + 2015 \omega^{2014}.\]
This is an arithmetic-geometric series which we can solve by simplifying the expression.
Alternatively, we can solve for |z| using sympy using the following code.
<llm-code>
from sympy import I, pi, exp, sqrt
# rotation action of 30 degrees
omega = exp(I * pi/6)
position = 0
for i in range(2015):
delta = (i + 1) * omega**(i)
position += delta
real, img = (position.as_real_imag())
# Distance from origin i.e. starting point
dist = sqrt(real**2 + img**2)
print(dist)
</llm-code>
<llm-code-output>
sqrt(2)*(1008 + 1008*sqrt(3))
</llm-code-output>
So the bee is $\boxed{1008\sqrt{2} + 1008\sqrt{6}}$ far from the starting point.
Question:
If $f (x) = x^2 - 1$, what is the value of $f (-1)$?
My solution:
-----
| [
"Shubham, Toshniwal",
"Ivan, Moshkov",
"Sean, Narenthiran",
"Daria, Gitman",
"Fei, Jia",
"Igor, Gitman"
] | 2024-02-15T00:00:00 | NeurIPS 2024 | true | 42 | 5 | null | http://arxiv.org/abs/2402.10176 | https://arxiv.org/abs/2402.10176 | https://www.semanticscholar.org/paper/a3d749bc119f5c8425779e4e72e650720db2fe4b |
Advancing LLM Reasoning Generalists with Preference Trees | We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning through a comprehensive benchmarking across 12 tests covering five tasks, and achieves a 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins more than 13.3%. The strong performance of Eurus can be primarily attributed to UltraInteract, our newly-curated large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. UltraInteract can be used in both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning. UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks compared to their effectiveness in general conversations. Inspired by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model. | This work introduces Eurus, a suite of large language models (LLMs) optimized for reasoning, and derives a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model. | ## Advancing LLM Reasoning Generalists with Preference Trees
**Lifan Yuan[1,2], Ganqu Cui[∗]** [1][∗], Hanbin Wang[†] [3,4][∗], Ning Ding[1†], Xingyao Wang[2], Jia Deng[5],
**Boji Shan[6], Huimin Chen[1], Ruobing Xie[7], Yankai Lin[5], Zhenghao Liu[3], Bowen Zhou[1],**
**Hao Peng[2], Zhiyuan Liu[1†], Maosong Sun[1]**
1Tsinghua University 2University of Illinois Urbana-Champaign 3Northeastern University
4 ModelBest.Inc 5 Renmin University of China 6 BUPT 7 Tencent
[email protected] [email protected] [email protected]
**Abstract**
We introduce EURUS, a suite of large language models (LLMs) optimized
for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, EURUS
models achieve state-of-the-art results among open-source models on a
diverse set of benchmarks covering mathematics, code generation, and
logical reasoning problems. Notably, EURUS-70B beats GPT-3.5 Turbo in
reasoning through a comprehensive benchmarking across 12 tests covering
five tasks, and achieves a 33.3% pass@1 accuracy on LeetCode and 32.6%
on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins more than 13.3%. The strong
performance of EURUS can be primarily attributed to ULTRAINTERACT,
our newly-curated large-scale, high-quality alignment dataset specifically
designed for complex reasoning tasks. ULTRAINTERACT can be used in
both supervised fine-tuning and preference learning. For each instruction,
it includes a preference tree consisting of (1) reasoning chains with diverse
planning strategies in a unified format, (2) multi-turn interaction trajectories
with the environment and the critique, and (3) pairwise data to facilitate
preference learning. ULTRAINTERACT allows us to conduct an in-depth
exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less
suitable for reasoning tasks compared to their effectiveness in general conversations. Inspired by this, we derive a novel reward modeling objective
which, together with ULTRAINTERACT, leads to a strong reward model.[1]
45
40
35
30
25
20
15
10
|~7B ~40|B|Col3|Col4|Col5|Col6|G|PT-4|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|~70 GPT|B -Series|||||||||
|Our|s|||Eu|rus-70B-NC|A||||
|||DeepSeek-|Coder-33B|-Instruct||||||
|DeepSee|Ma k-LLM-67B|giCoder-S-D -Chat Op|S-6.7B enCI-CL-70|B||GPT-3.5-|Turbo|||
|CL-70B-Ins|Open truct|CI-DS-6.7B QW|Eurus-7 en1.5-72B|B-KTO -Chat||||||
|Op|OpenM enMath-M|ath-CL-70B istral-7B||||||||
|Mistral-|7B-Instruct|WizardM -v0.2|ath-7B -v|1.1||||||
|Zep|hyr-7B-𝛽||Mixtral-8|x7B-Instruc|t|||||
10 15 20 25 30 35 40 5545 50 55
TheoremQA
Figure 1: Evaluation results on LeetCode and TheoremQA, two challenging OOD coding
5545
and math benchmarks with only test sets. Our EURUS-7B is comparable with baselines that
are 10x larger and EURUS-70B is the only one on par with GPT-3.5 Turbo.
_∗Equal Contribution._
†Corresponding Authors.
[1Models and data are available at: https://github.com/OpenBMB/Eurus.](https://github.com/OpenBMB/Eurus)
-----
**1** **Introduction**
Current alignment techniques have significantly advanced the development of open-source
large language models (LLMs) that effectively meet user expectations and align with human
values (Touvron et al., 2023; Tunstall et al., 2023). On complex reasoning, success has been
achieved by specializing models for specific capabilities, such as coding (Wei et al., 2023;
Guo et al., 2024a; Zheng et al., 2024) and solving math problems (Fu et al., 2023; Yue et al.,
2023; Luo et al., 2023a; Toshniwal et al., 2024). However, these models still fall short, by
large margins, of the most advanced proprietary models in their all-around capabilities
to tackle a diverse range of challenging problems. We conjecture that this performance
gap can be primarily attributed to (1) the lack of high-quality alignment data and (2)
the underexploration of preference learning techniques for improving models’ complex
reasoning capabilities. In this paper, we take strides towards bridging this gap by addressing
both factors and developing EURUS.
EURUS consists of a suite of LLMs finetuned from Mistral-7B (Jiang et al., 2023a) and
CodeLLaMA-70B (Roziere et al., 2023). Across a diverse set of complex reasoning benchmarks that are mostly out-of-distribution (OOD), EURUS achieves state-of-the-art overall
performance among all open-source models. In particular, EURUS excels in solving challenging problems that often require sophisticated planning, reasoning, tool integration, and
the ability to interact with and learn from the environment and users. As shown in Figure 1,
on university-level STEM questions TheoremQA (Chen et al., 2023) and competition-level
coding problems LeetCode Contest (Guo et al., 2024a), EURUS-70B significantly outperforms
all open-source models, achieving comparable performance to GPT-3.5 Turbo.
EURUS models are trained on ULTRAINTERACT, our newly-curated, large-scale, and
high-quality alignment data specifically designed to improve LLMs’ reasoning capabilities.
ULTRAINTERACT consists of a diverse set of instructions spanning math, coding, and logical
reasoning problems from 12 established datasets. For each instruction, ULTRAINTERACT
collects a preference tree that includes: (1) Diverse planning strategies in a unified pattern,
such as sequential processing (Wei et al., 2022) and tool creation (Qian et al., 2023), followed
by executing step-by-step actions formatted in either text or code, to provide divserse
reasoning trajectories. (2) Multi-turn interaction trajectories with the environment
**and the critique, to improve models’ capabilities to learn from feedback and correct**
previous errors (Wang et al., 2023b). (3) Paired correct and incorrect actions organized
**in tree structures, to facilitate preference learning. In total, ULTRAINTERACT contains 86K**
instructions and 220K action pairs, where each pair consists of an instruction, a correct
response, and an incorrect one. Conceptually, ULTRAINTERACT’s data resemble imbalanced
binary trees as shown in Figure 2.
ULTRAINTERACT can be used in both supervised fine-tuning and preference learning.
Our experiments show that, using ULTRAINTERACT along with established datasets in
instruction fine-tuning already achieves strong performance. ULTRAINTERACT further
facilitates preference learning for reasoning tasks, improving the performance even further
with KTO (Ethayarajh et al., 2024) and NCA (Chen et al., 2024a). Surprisingly, applied to an
instruction finetuned EURUS model, DPO (Rafailov et al., 2023) hurts the performance.
Through careful analysis, we provide evidence that the performance in reasoning correlates
with the value of rewards of chosen data—a higher final reward often indicates a better
reasoning capability. Besides, our investigation suggests that DPO may be less suitable
for reasoning tasks than KTO and NCA. Inspired by this fresh finding, we devise a new
objective for reward modeling to augment the Bradley-Terry objective (Bradley & Terry,
1952), explicitly encouraging training to increase the absolute rewards of chosen solution
and decrease those of rejected data. Furthermore, ULTRAINTERACT leads to our reward
model EURUS-RM-7B, which achieves a better correlation with human annotators than
all existing models on AutoJ (Li et al., 2023a) and MT-Bench (Zheng et al., 2023), including
GPT-4 (OpenAI, 2023). EURUS-RM-7B demonstrates especially strong preference modeling
performance on reasoning tasks.
Checkpoints of our EURUS models, accompanying ULTRAINTERACT alignment data to
reproduce this research, will be publicly available.
-----
**2** **ULTRAINTERACT: Tree-structured Alignment Data for Reasoning**
Solving complex problems
capability in planning and
This is reflected in ULTRAIN
scale (§2.1); (2) It provides
**U** **U** **U** **U** **User**
**Instruction**
**A** **A** **R** **C** **A** **Unpaired**
**Action**
**O** **U** **O&C** **C** **Chosen**
**Action**
**A** **A** **R** **C** **R** **Rejected**
**Action**
**O** **U** **O&C** **O** **Observation**
**A** **R** **C** **R** **C** **O&C** **Observation**
**& Critique**
multi-turn trajectories that
solve the input instruction Figure 2: Left: CodeActInstruct (Wang et al., 2024) and
through multiple turns of Code-Feedback (Zheng et al., 2024); Middle: HH-RLHF (Bai
interaction with and learning et al., 2022); Right: ULTRAINTERACT. Each instruction in
from the environment and ULTRAINTERACT is constructed as a preference tree.
critique. At each turn, it
breaks down the problem into smaller ones (§2.2). (3) ULTRAINTERACT includes pairwise
data to facilitate preference learning (§2.3).
Conceptually, ULTRAINTERACT collects a preference tree for each instruction, with the
instruction being the root and each action a node (Figure 2). A trajectory is a root-to-leaf
path consisting of a sequence of actions. In each preference tree, all nodes of correct actions
and all trajectories ending with correct actions can be used for SFT. Paired correct and
incorrect nodes or trajectories can be used for preference learning.
**2.1** **Instruction Selection Emphasizing Complexity, Quality, and Diversity**
We target three representative reasoning tasks: math problem-solving, code generation,
and logical reasoning. The complexity, quality, and diversity of the alignment data are
crucial to the model’s performance (Liu et al., 2023). Following Wang et al. (2023b), we
select challenging problems that GPT-3.5-Turbo fails to solve. We intentionally restrict
the selection of the datasets to those with ground-truth solutions, aiming to ensure highquality oversight signals rather than relying on LLM-as-a-judge annotation (Weyssow et al.,
2024). Besides, the gold solutions also serve as references for the critique model to generate
feedback. To promote ULTRAINTERACT’s diversity, we pick datasets of different categories.
For each dataset, we include distinct reasoning patterns based on question categories or
formulations necessary to solve the problems. Table 6 summarizes the datasets selected by
ULTRAINTERACT. Except for MATH, none of the training datasets is used in our evaluation.
**2.2** **Decomposition and Interaction at Each Turn**
Figure 3 provides an illustrative example. In what follows, we connect the actor model
with a Python interpreter as the “environment”. Unless otherwise specified, we use GPT-3.5
Turbo as the actor model.
Following Wang et al. (2024), the actor model first decomposes the input problem into several
sub-problems and then solves each by generating Python code pieces as actions and using
the environment to execute them. To promote solution diversity, the actor model randomly
samples one reasoning schema in the form of either CoT (Wei et al., 2022) or modularization
programming (Qian et al., 2023; Yuan et al., 2023). The actor then generates actions in text
or code to solve each sub-problem, with each step being marked by explicit notations.
Multi-turn interactions with the environment are often necessary to solve challenging
problems (Wang et al., 2023b). To improve such capabilities of the models, ULTRAINTERACT
collects trajectories in which the actor model interacts with the environment and a critique
model (a proxy for user) and refines its action based on their feedback.
-----
Figure 3: An illustrative example of an ULTRAINTERACT trajectory over two turns. In each
In a classroom, x children have 7 books each. […] what is the value of unknown variable x ?
**1** **Generation** **2** **Observation** **4** **Generation** **5** **Observation**
```
<thought> […] <thought> […]
```
Step 1: First, let's calculate the total […]. `NameError: name 'x' is` Step 1: First, create an equation […]. `Out: 10.0`
Step 2: Then, we need to add the […]. `not defined.` Step 2: Each child has 7 books, so […].
`</thought>Step 3: The sum of the books should […].` `Your answer is wrong.` Step 4: Now we can solve for x by […].Step 3: The teacher brings 8 […]. `Your answer is correct.`
```
<execute> </thought>
```
`# Step 4: First, let's calculate […].total_books_child = x * 7` **3** **Critique** `<execute># Step 5: Setting up equation:` **6** **Critique**
```
# Step 5: Next, we need to add […]. Expert feedback: equation = "7x + 8 = 78" Expert feedback:
```
`total_books = total_books_child + # Step 6: We can now solve for […].x = (78 - 8) / 7` `8` Your thought process is accurate and your error lies `</execute># Step 6: Solving for x:x = (78 - 8) / 7` Good job! You have fixed the error in last turn.
`</execute>` in your code. You need to `<solution>` Now your answer is
`<solution>` define `x` first […] `x` correct. […]
```
x </solution>
</solution>
```
turn, the actor model generates step-by-step reasoning chains, and the environment and the
critique model provide observations and textual critique respectively.
The environment receives an action from the actor model along with the interaction history,
and then the code interpreter returns two kinds of “Observation”: (1) Python execution
results, either program outputs or error traceback messages; (2) binary feedback, indicating
whether the solution is correct or not. Then, the observations along with the history
will be passed to a critique model, which locates the errors and provides suggestions for
improvements. To avoid potential bias introduced by self-correction (Wang et al., 2023b; Xu
et al., 2024), we adopt a stronger model, GPT-4, as the critique and ensure critique quality
by providing GPT-4 with ground truth answers as references.
This procedure resembles Wang et al. (2024). However, we adopt more diverse reasoning
patterns to teach LLMs to learn rationales rather than simply memorizing answers (Mitra
et al., 2023), and learn to create and use tools (Qian et al., 2023; Yuan et al., 2023; Qin et al.,
2023). Besides, we believe that it is important for LLMs to learn from the feedback provided
by the critique rather than solely from observations of the environment.
**2.3** **Preference Trees Facilitates Preference Learning Across Multiple Turns**
Unlike open-ended conversations, where human preference is ambiguous and challenging
to specify, many reasoning tasks have clear and objective preferences for correct actions. The
preference annotation is threfore an evaluation of the correctness of the solutions conditioning
_ground truth ones, which come with the datasets in ULTRAINTERACT. This eliminates the_
need for human or LLM-based preference annotation and ensures high data quality. To
facilitate preference learning, ULTRAINTERACT pairs correct and incorrect actions.
**Sampling Paired Correct and Incorrect Actions at Each Turn. For each instruction in**
ULTRAINTERACT, we sample, from the actor model, a pair of correct and incorrect actions
following §2.2. We follow Cui et al. (2023) to sample the pair from different actor models
to ensure response diversity. To prevent models from exploiting shortcuts based on surface
features, we exclude instances that fail to pass the Python syntax check.
Certain challenging problems in ULTRAINTERACT pose difficulties in obtaining correct
actions, even using strong actors such as GPT-4, with nearly zero pass@100 accuracies. To
improve the pass rates of the actor models while keeping the expense under control, we
sequentially take the following steps. (1) Directly sampling 20 actions and randomly keeping
a correct one, if any. (2) If no correct action is obtained, we repeat the above process up
to three times, progressively switching from more cost-effective models to the strong yet
expensive GPT-4 Turbo. (3) For the remaining difficult problems where no correct action is
acquired after the previous two steps, we provide the actor with ground-truth rationales and
answers, and then apply various techniques to elicit correct actions. The specific information
provided and the techniques applied vary depending on the tasks (Appendix A.2).
-----
**Tree-structured Action Pairs Across Multiple Turns. After each turn, the correct action**
concludes its trajectory. We expand the incorrect action into the next turn, and have the actor
interact with the environment and the critique to refine its solution (§2.2). We then repeat
the procedures introduced earlier in this section to collect an additional action pair. By
expanding the incorrect action, ULTRAINTERACT can provide data to help models learn from
feedback, and collect multiple action pairs for preference learning across multiple turns.
Conceptually, for every instruction, ULTRAINTERACT constructs a binary preference tree
with each action being a node (Figure 2). We cap the tree at a maximum of five turns.
**Additional Instruction-action Pairs for Challenging Problems. We believe the challenging**
instructions that make it to step (3) above can provide valuable training signals. Therefore,
for a subset of these problems with multiple ground truth solutions, we further sample
additional correct actions to cover all ground truths. Accordingly, we further sample
incorrect actions to pair with these additional correct actions, so that they can be used in
both supervised fine-tuning and preference learning.
With the tree-structured data, ULTRAINTERACT enables comparisons at every turn, in
contrast to comparing only at the last turn (Bai et al., 2022), and thus can improve the
models’ interaction ability. Closing this section, Table 1 summarizes some statistics of
ULTRAINTERACT, and more details are in Appendix A.4.
Table 1: Some statistics of ULTRAINTERACT.
**Type** **# Turns per Traj.** **# Tokens** **Avg. # Traj** **Total** **# Correct**
**Task** **# Instructions**
**w/ Interaction?** **w/ Tool?** **T1** **T2** **T3** **T4** **T5** **per Traj.** **per Ins.** **# Pairs** **Answers**
22,928 10,440 4,122 1,898 904 5,564 1,750.0 1.0 42,780 68,033
2,757 16,154 - - - - 439.1 5.9 13,217 16,154
22,639 10,708 3,521 1,459 723 6,228 1,521.9 1.0 44,750 62,182
2,083 16,348 - - - - 538.1 7.8 12,624 16,348
**Math**
! - 20,463 13,265 2,584 987 379 3,248 1,728.5 1.0 18,106 22,215
**Coding**
% - 8,495 92,618 - - - - 1,070.4 5.5 78,634 92,618
! ! 2,086 1,685 298 72 8 23 1,299.8 1.0 1,750 2,198
**Logic**
! % 4,467 2,453 1,674 340 0 0 1,266.7 1.0 7,958 7,231
**Total** - - 85,918 163,671 12,199 4,756 2,014 15,063 1,201.8 2.3 219,819 286,979
**3** **EURUS: State-of-the-art Open LLMs in Reasoning**
ULTRAINTERACT helps us develop EURUS, a suite of LLMs and a reward model (RM).
**Supervised Fine-Tuning. EURUS-7B-SFT is fine-tuned from Mistral-7B (Jiang et al., 2023a)**
and EURUS-70B-SFT from CodeLLaMA-70B (Roziere et al., 2023). First, we perform SFT
using all correct actions (287K) in ULTRAINTERACT. We find it yields better performance to
discard interaction history and train only on correct leaf nodes in each tree. To improve general instruction-following ability, we include into our SFT data mixture UltraChat (Ding et al.,
2023), ShareGPT[2], and OpenOrca (Lian et al., 2023). Please find mixture ratios in Appendix B.
**Perference Learning. Based on EURUS-SFT models, we explore three preference learning**
algorithms, DPO (Rafailov et al., 2023), KTO (Ethayarajh et al., 2024), and NCA (Chen
et al., 2024a). Differently from SFT, here we include all multi-turn trajectory pairs in our
ULTRAINTERACT (220K) and include all UltraFeedback (Cui et al., 2023) pairs (340K).
**Reward Modeling. Similarly to the preference learning, we use all 220K multi-turn trajec-**
tory pairs from ULTRAINTERACT; it is further augmented with the 240K single-turn action
pairs from ULTRAINTERACT. More details are in the Appendix B. We include all 340K pairs
from UltraFeedback and one pair for each instruction from UltraSafety (Guo et al., 2024b),
totaling 3K. EURUS-RM-7B is initialized from EURUS-7B-SFT with a new linear layer.
Our findings in §6 indicate that the absolute values of rewards make a big difference in the
models’ reasoning performance. We therefore augment the established Bradley-Terry (BT)
objective LBT with an additional term LDR to directly increase the reward of the chosen
actions for instances from ULTRAINTERACT, and decrease those of the rejected ones:
[2https://huggingface.co/datasets/openchat/openchat sharegpt4 dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset)
-----
Table 2: Open-source LLM baselines that we compare to.
**Type** **Models**
Mistral-7B-Instruct-v0.2 (Jiang et al., 2023a), Zephyr-7B-β (Tunstall et al., 2023), OpenChat-3.5-1210 (Wang
**General Purpose** et al., 2023a), Starling-LM-7B-α (Zhu et al., 2023), Mixtral-8x7B-Instruct (Jiang et al., 2023a), DeepSeekLLM-67B-Chat (DeepSeek-AI, 2024), QWen1.5-72B-Chat (Bai et al., 2023)
Magicoder-S-DS-6.7B (Wei et al., 2023), OpenCodeInterpreter (OpenCI for short, DS-6.7B/CL-70B) (Zheng
**Coding**
et al., 2024), DeepSeek-Coder-33B-Instruct (Guo et al., 2024a), and CodeLLaMA-70B-Instruct(Roziere et al.,
2023).
MAmmoTH-7B-Mistral (Yue et al., 2023), WizardMath-7B-v1.1 (Luo et al., 2023a), OpenMath (Mistral**Math**
7B/CodeLLaMA-70B) (Toshniwal et al., 2024).
_LULTRAINTERACT = −_ log _σ_ _rθ(x, yc) −_ _rθ(x, yr)_ _−_ log _σ_ _rθ(x, yc)_ _−_ log _σ_ _−rθ(x, yr)_
BT: optimize relative rewards [] DR : increase _rθ_ (x,[] yc) and decrease _rθ_ (x, yr) []
_L_ _L_
For instances from other datasets, we train with| {z } | BT. θ denotes the reward model’s parame-{z }
_L_
ters, rθ (·) and rθ (x, yr) the rewards on the chosen and rejected actions respectively. Our
ablation study demonstrates the importance of both LBT and LDR.
**4** **Evaluation of EURUS-7B and EURUS-70B**
**Evaluation Setup. We consider both single-turn and multi-turn reasoning. For single-turn**
evaluation, we consider HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and
LeetCode (Guo et al., 2024a) for coding, GSM-Plus (Li et al., 2024), MATH, TheoremQA
(Chen et al., 2023), SVAMP (Patel et al., 2021), and ASDiv (Miao et al., 2020) for math, and
BBH-Hard (Suzgun et al., 2022) for reasoning. We evaluate with pass@1 accuracy. We
also use IFEval (Zhou et al., 2023) to assess the instruction-following ability and report the
prompt-level loose score. For multi-turn evaluation, we adopt MINT (Wang et al., 2023b) and
only consider the coding and math problems. We report the success rate at Turn 5. Please
find further details on evaluation setups and evaluations beyond reasoning in Appendix C.
As shown in Table 2, we compare our EURUS with general-purpose models, and those
specialized in coding and math of various sizes. We also summarize the results of GPT-3.5
Turbo and GPT-4 reported in previous works.
Table 3: Overall performance. All test sets except MATH are out-of-distribution to our
models and most baselines. MAmmoTH, OpenChat, and Starling-LM have been trained on
TheoremQA test sets. We strikethrough the contaminated numbers.
|Model|Coding|Math|Reasoning|Ins-Following|Multi-Turn|Avg.|
|---|---|---|---|---|---|---|
||HumanE. MBPP LeetC.|GSM-Plus MATH Theo.QA SVAMP ASDiv|BBH|IFEval|Code Math||
_∼7B_
|Mistral-7B-Instruct-v0.2 Zephyr-7B-β OpenChat-3.5-1210 Starling-LM-7B-α Magicoder-S-DS-6.7B OpenCI-DS-6.7B MAmmoTH-7B-Mistral WizardMath-7B-v1.1 OpenMath-Mistral-7B EURUS-7B-SFT + DPO + KTO + NCA|39.0 30.8 6.1 29.3 35.8 2.2 64.0 61.7 11.7 46.3 51.1 8.9 75.6 70.4 23.9 76.8 66.2 16.1 24.4 42.4 7.2 50.0 53.9 6.7 33.5 46.6 11.7 55.5 59.1 20.0 50.6 52.1 8.3 56.1 58.6 18.9 55.5 60.2 14.4|15.7 9.5 8.5 42.9 49.5 23.3 5.0 7.8 19.1 28.0 46.7 28.1 19.1 75.4 77.0 23.7 21.5 12.0 26.3 39.8 16.4 19.9 13.1 61.6 62.8 41.5 31.6 16.1 74.5 79.8 40.1 36.0 26.3 60.7 72.3 54.6 30.0 16.5 57.8 73.5 59.4 39.1 13.1 83.4 79.8 52.1 32.6 20.0 82.2 84.1 51.0 28.3 20.9 78.7 83.8 55.0 33.2 20.6 84.4 85.0 54.9 34.2 20.9 84.6 85.4|62.4 61.8 67.0 67.1 57.0 53.9 57.7 64.4 58.6 64.6 65.0 67.6 64.3|44.4 39.7 50.3 26.1 21.1 22.6 34.9 22.6 15.0 44.0 42.5 43.1 42.7|7.4 26.2 5.2 16.9 21.3 32.4 18.4 28.9 27.9 8.0 5.9 1.3 3.7 6.7 16.2 8.9 2.9 5.3 15.4 28.4 20.6 32.4 19.1 43.6 21.3 38.7|28.5 22.8 46.2 30.8 38.1 40.5 34.4 37.9 37.4 46.5 44.5 48.8 48.1|
|---|---|---|---|---|---|---|
_∼40B_
|Mixtral-8x7B-Instruct DeepSeek-Coder-33B-Ins|50.6 50.1 5.6 82.3 73.9 27.8|49.6 25.9 20.4 66.4 68.8 29.5 20.2 21.9 75.2 85.0|73.5 61.5|48.8 26.1|12.5 37.3 35.3 21.8|42.5 46.7|
|---|---|---|---|---|---|---|
_∼70B_
|CodeLLaMA-70B-Instruct DeepSeek-LM-67B-Chat QWen1.5-72B-Chat OpenCI-CL-70B OpenMath-CL-70B EURUS-70B-SFT + KTO + NCA|56.7 58.6 14.4 70.7 65.7 20.0 71.3 56.9 15.6 77.4 71.7 20.0 39.0 52.6 15.0 75.6 74.2 33.3 76.8 68.2 26.1 79.3 71.9 33.3|34.9 12.0 8.4 63.5 70.1 65.0 41.0 17.9 74.0 84.0 65.4 43.4 18.5 79.5 79.1 46.1 29.2 18.8 76.1 79.4 62.2 45.9 15.9 86.6 82.8 58.1 40.6 28.0 86.3 88.5 62.2 41.3 30.6 90.4 89.0 62.8 41.7 32.6 89.5 90.3|74.5 78.9 78.0 66.7 59.9 79.9 80.8 80.0|24.0 52.7 53.4 26.8 15.7 49.2 46.4 49.2|3.7 14.2 30.9 41.8 27.2 38.2 30.9 12.0 14.0 0.4 31.6 40.4 39.0 49.8 38.2 39.6|36.3 53.5 52.2 46.3 40.8 57.1 58.4 59.0|
|---|---|---|---|---|---|---|
Proprietary Models
|GPT-3.5 Turbo GPT-4|76.8 82.5 23.3 85.4 83.5 41.8|61.2 37.8 35.6 83.0 90.6 85.6 69.7 52.4 94.8 92.6|70.1 86.7|56.6 79.7|29.4 36.9 59.6 65.8|57.0 74.8|
|---|---|---|---|---|---|---|
-----
**4.1** **Results**
Results are shown in Table 3. We summarize the takeaways as follows:
**EURUS, both the 7B and 70B variants, achieve the best overall performance among**
**open-source models of similar sizes. EURUS even outperform specialized models in**
**corresponding domains in many cases. Notably, EURUS-7B outperforms baselines that**
are 5× larger and EURUS-70B achieves better performance than GPT-3.5 Turbo. EURUS’s
instruction-following performance is among the best general-purpose models, substantially
better than specialized ones.
**Preference learning with ULTRAINTERACT can further improve the performance, espe-**
**cially in math and the multi-turn ability. KTO and NCA consistently improve the models’**
performance in all five math benchmarks and mult-turn evaluations, while their effects
vary in others. Since SFT models only use the single-turn data from ULTRAINTERACT while
preference learning uses the multi-turn ones, the improvements in interaction ability should
also be attributed to ULTRAINTERACT rather than the algorithms alone. Surprisingly, we
observe that DPO hurts model performance on most benchmarks. DPO training of our
70B model fails since the rewards go down to −∞. We analyze this phenomenon in §6.1.
**5** **Evaluation of EURUS-RM-7B**
**Evaluation Setup.** We evaluate EURUS-RM-7B on three RM benchmarks, RewardBench (Lambert et al., 2024), AutoJ (Li et al., 2023a), and MT-Bench (Zheng et al., 2023).
Aiming for a more realistic OOD evalation, we exclude the “prior sets” split from RewardBench, since many baselines train on the datasets that this split contains. We compare with
PairRM (Jiang et al., 2023b), Starling-RM-7B/34B (Zhu et al., 2023), UltraRM-13B (Cui et al.,
2023), GPT-3.5 Turbo, and GPT-4. To further explore EURUS-RM-7B’s potential in improving
models’ performance through reranking, we use it to rerank Mistral-7B-Instruct-v0.2’s
responses on HumanEval, MBPP, GSM8K, and MATH. We report the results of random
sampling, self-consistency, and Starling-RM-34B as baselines.
**5.1** **Results**
Table 4 summarizes reward modeling performance, and Figure 4 plots some reranking
results with others in Appendix D.1.
**EURUS-RM-7B stands out as the best 7B RM overall, and achieves similar or better**
**performance than much larger baselines. Particularly, it outperforms GPT-4 in certain**
**tasks. EURUS-RM-7B achieves a better correlation with human experts than all existing**
models on AutoJ and MT-Bench, and it achieves comparable performance to the 5× larger
Starling-RM-34B on RewardBench. On RewardBench, EURUS-RM-7B outperforms all
baselines on the “Chat-Hard” split while achieving very competitive performance on the
“Reasoning” split. Across the AutoJ splits, EURUS-RM-7B outperforms nearly all existing
models, with the only exception being GPT-4’s results on Coding.
**Our training objective is beneficial in improving RM performance on hard problems and**
**reasoning. Table 4 shows that optimizing LDR improves RM’s reasoning ability, but BT**
modeling is still beneficial in equipping RM with abilities in general chatting as suggested
in the “Chat-Hard” column, though its effect on reasoning may vary.
**ULTRAINTERACT is compatible with other datasets like UltraFeedback and UltraSafety,**
**and mixing these datasets can balance different RM abilities. Improving RM’s capa-**
bilities in reasoning with ULTRAINTERACT does not sacrifice others, which indicates that
ULTRAINTERACT can be a great ingredient for the training data mixture of reward models.
**EURUS-RM-7B improves LLMs’ reasoning performance by a large margin through rerank-**
**ing. EURUS-RM-7B consistently improves pass@1 accuracy across all tasks and performs**
better than 5× larger baseline Starling-RM-34B. Also, EURUS-RM-7B’s reranking performance scales well with #responses per instruction, except a slight decrease in HumanEval
-----
MBPP
44
42
40
38
36
Pass@1 Accuracy47.5
50
16
14
12
10
#Samples Per Instruction #Samples Per Instruction
1 2 4 8 16 1 2 4 8 16 1 2 4 8 16 1 2 4 8 16
|Col1|Hum|Col3|Col4|MB|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
|2 424 Pas||8 10|3162 1 Pas||
||6|||4|
|7.5 GSM8K|Col2|Col3|Col4|
|---|---|---|---|
|7.5 5.0 GSM8K MATH 2.5 17.5 Accuracy 0.0 15.0 12.5 7.5 Pass@1 10.0 2 4 6 87.5 10 12 14 16||||
|2.5||||
|0.0 7.5||||
|||||
|2 4||||
||6 87.5 10 Pas||16|
|||12 14||
HumanEval MBPP
44
48 42
46 40
44 38
2 Pass@1 Accuracy424 6 8 10 Pass@1 Accuracy3612 14 16
#Samples Per Instruction
#Responses Per Instruction #Responses Per Instruction #Responses Per Instruction #Responses Per Instruction
Self-Consistency Starling-RM-34B Eurus-RM-7B
Figure 4: Results on reranking Mistral-7B-Instruct-v0.2’s responses. Full results in Table 9.
Table 4: Results on reward modeling benchmarks. UF: UltraFeedback; US: UltraSafety. The
best performance in each benchmark is in bold and the second best one is underlined. Most
baseline results are from Jiang et al. (2023b) and Lambert et al. (2024).
Reward Bench AutoJ
Model MT-Bench
Chat Chat-Hard Safety Reasoning Avg. Code Math Others Overall
PairRM 90.2 53.0 31.5 60.0 58.7 58.3 52.8 58.9 59.1 59.0
Starling-RM-7B 98.0 43.4 88.6 74.6 76.2 59.2 47.2 61.4 60.8 56.8
Starling-RM-34B 96.9 59.0 89.9 90.3 **84.0** 65.8 54.2 62.3 62.6 60.4
UltraRM-13B 96.1 55.3 45.8 82.0 69.8 55.0 43.1 59.6 59.9 56.0
GPT-3.5 Turbo - - - - - 36.6 40.3 41.2 42.7 57.1
GPT -4 - - - - - 69.2 51.4 61.4 61.9 63.9
EURUS-RM-7B 96.5 65.3 80.7 87.0 82.4 67.5 62.5 63.6 64.5 **72.9**
w/ow/ow/o US L LDRBT 96.596.896.4 58.559.966.2 67.779.583.8 77.584.281.7 73.380.878.3 66.767.564.2 66.759.761.1 64.764.865.0 **65.765.065.6** 72.672.672.8
w/o UF + US 95.1 61.1 63.7 73.4 78.0 55.8 58.3 59.0 58.7 67.2
when increasing response number form 8 to 16. In contrast, Starling-RM-34B suffers from
severe performance drop on HumanEval and it consistently hurts model accuracy on MATH.
**6** **Analysis**
Figure 5: Reward patterns of EURUS-7B preference learning with DPO, KTO, and NCA.
DPO KTO NCA
3
4
2 2
2 1
0 0.4 0.16
Value -1.26 Value 0 Value 0
2 Margins 2 Margins 1 Margins
Rewards/Chosen Rewards/Chosen 2 Rewards/Chosen
4 Rewards/Rejected 4 Rewards/Rejected Rewards/Rejected
3
0 20 40 60 80 100 0 20 40 60 80 100 0 20 40 60 80 100
Steps (%) Steps (%) Steps (%)
For all algorithms, the rewards of rejected data keep decreasing and the margins between
chosen and rejected data keep increasing. However, the rewards of chosen data decrease
below zero in DPO while keeping increasing and staying positive in KTO and NCA. The
absolute values of the reward in the last step (in red) of the three algorithms positively
correlate with their performance in Table 3.
**6.1** **Explicit Reward as A Proxy? Hypothesis for Preference Learning in Reasoning**
We investigate the reason why DPO behaves differently than KTO and NCA. We start by
empirically inspecting the rewards throughout the preference learning process, as shown
in Figure 5. Rewards for chosen rejected data both keep decreasing through DPO, though
the rewards for chosen data is still higher hence the loss decreases. In KTO and NCA, the
rewards of chosen data keep increasing with those of rejected data decreasing.
Therefore, we hypothesize it is the distinction in the trend of rewards that leads to the performance gap between DPO and the other two algorithms. This distinction can be attributed
to that DPO, derived from the Bradley-Terry model, only optimizes the relative differences
between chosen and rejected data overlooking the absolute values of the rewards. This is a
non-issue in alignment with general human values where preference is “relative” and there
-----
can be many valid answers to the same input. However, in reasoning tasks, the space of correct answers is much smaller than that of incorrect ones. Further, we notice that the rewards
of chosen data in the last training step follow the ranking order of KTO > NCA > DPO,
positively correlate with their performance trends. Therefore, we believe that increasing the
rewards of the chosen data is especially beneficial in preference learning for reasoning tasks.
**6.2** **Ablation Study**
We study the impact of ULTRAINTERACT Table 5: Ablation study of SFT data.
and other open-source alignment data **Model** **Coding** **Math** **BBH** **IFEval** **Avg.**
on EURUS-7B-SFT’s performance. We **EURUS-7B-SFT** 44.9 58.5 64.6 44.0 53.6
consider three settings: (1) With original **Ground-truthOpen-source Only** 33.931.2 46.133.5 64.465.3 42.943.6 44.037.0
**ground-truth answers,** which replaces **ULTRAINTERACT Only** 37.3 56.2 67.0 17.4 47.7
the generated actions with ground-truth
rationales and answers from the original datasets. If no rationales are available, we use those
from ULTRAINTERACT. (2) Open-source data only. (3)ULTRAINTERACT only. We evaluate
with the same setting as §4 and report the averaged scores. See full results in Appendix E.
In Table 5, EURUS outperforms the “Grouth-truth” model on all tasks, confirming the
advantage of ULTRAINTERACT’s designs of divide-and-conquer and code-as-action patterns, in line with conclusions of concurrent work (Chen et al., 2024b; Wang et al., 2024).
Training only on open-source data without ULTRAINTERACT greatly hurts the reasoning
performance, confirming the effectiveness of ULTRAINTERACT. Meanwhile, training only
on ULTRAINTERACT suffers a performance drop except for BBH, especially in instruction
following. We attribute the performance drop to a worse instruction-following ability. This
suggests the necessity of mixing ULTRAINTERACT with other alignment data for better
all-around supervised fine-tuning.
**7** **Related Work**
**Open LLMs in Reasoning. Open-source LLMs have shown remarkable progress in building**
_specialists that excel in mathematics reasoning (Luo et al., 2023a; Yue et al., 2023; Toshniwal_
et al., 2024) or coding abilities (Roziere et al., 2023; Wei et al., 2023; Guo et al., 2024a; Zheng
et al., 2024). On the contrary, mastering general reasoning capabilities still challenges open
models, while the most advanced ones (DeepSeek-AI, 2024; Bai et al., 2023; Touvron et al.,
2023; Jiang et al., 2024) are well behind proprietary models. More, these cutting-edge
open general-purpose models maintain their alignment recipes confidential, which further
hinders the replication and development of open-source reasoning models.
**Preference Learning for Reasoning. Aligning language models from human or AI pref-**
erences has emerged as a prevalent approach in the open-source community (Tunstall
et al., 2023; Bai et al., 2023) with the proposal of DPO (Rafailov et al., 2023) and high-quality
preference datasets (Cui et al., 2023; Zhu et al., 2023). Different from open-domain chatbots,
preference learning is largely underexplored in complex reasoning. Recent research showed
performance degradation when applying DPO on reasoning tasks, but some newly proposed algorithms demonstrated a positive effect (Ethayarajh et al., 2024; Chen et al., 2024a;
Mitra et al., 2024; Shao et al., 2024). However, a deep understanding of preference learning,
specifically its efficacy on complex reasoning, is not yet established.
**8** **Conclusion**
We strive to narrow the huge gap between open-source models and proprietary models from
the perspective of alignment. Our work pushes the boundaries of open-source reasoning
generalists by (1) releasing a high-quality multi-turn reasoning dataset ULTRAINTERACT
with preference trees, (2) introducing EURUS-series LLMs which achieve new SOTA on
challenging reasoning benchmarks and (3) providing insights on preference learning for
reasoning through analysis, leading to new reward modeling objectives as well as a powerful
reward model for reasoning.
-----
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with
operation-based formalisms. In Proc. of NAACL-HLT, 2019.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David
Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with
large language models. ArXiv preprint, abs/2108.07732, 2021.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu,
Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng
Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao,
Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang,
Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen
technical report. ArXiv preprint, abs/2309.16609, 2023.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma,
Dawn Drain, Stanislav Fort, Deep Ganguli, T. J. Henighan, Nicholas Joseph, Saurav
Kadavath, John Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac HatfieldDodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt,
Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Christopher Olah, Benjamin Mann, and Jared Kaplan. Training a helpful and
harmless assistant with reinforcement learning from human feedback. ArXiv preprint,
abs/2204.05862, 2022.
Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the
method of paired comparisons. Biometrika, 39, 1952.
Huayu Chen, Guande He, Hang Su, and Jun Zhu. Noise contrastive alignment of language
models with explicit rewards. ArXiv preprint, abs/2402.05369, 2024a.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto,
Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray,
Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin,
Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings,
Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji,
Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh
Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage,
Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish,
Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code,
2021.
Wenhu Chen, Ming Yin, Max W.F. Ku, Yixin Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi
Wang, and Pan Lu. Theoremqa: A theorem-driven question answering dataset. ArXiv
preprint, abs/2305.12524, 2023.
Zehui Chen, Kuikun Liu, Qiuchen Wang, Wenwei Zhang, Jiangning Liu, Dahua Lin, Kai
Chen, and Feng Zhao. Agent-flan: Designing data and methods of effective agent tuning
for large language models. volume abs/2403.12881, 2024b.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers
to solve math word problems. volume abs/2110.14168, 2021.
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie,
Zhiyuan Liu, and Maosong Sun. Ultrafeedback: Boosting language models with highquality feedback. ArXiv preprint, abs/2310.01377, 2023.
-----
DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism.
ArXiv preprint, abs/2401.02954, 2024.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu,
Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality
instructional conversations. In Conference on Empirical Methods in Natural Language
Processing, 2023.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto:
Model alignment as prospect theoretic optimization. ArXiv preprint, abs/2402.01306,
2024.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller
language models towards multi-step reasoning. In Proceedings of the International
Conference on Machine Learning, 2023.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did
aristotle use a laptop? a question answering benchmark with implicit reasoning strategies.
Transactions of the Association for Computational Linguistics, 9, 2021.
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen,
Xiao Bi, Yu Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. Deepseek-coder:
When the large language model meets programming - the rise of code intelligence. ArXiv
preprint, abs/2401.14196, 2024a.
Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun,
Ruobing Xie, Jie Zhou, Yankai Lin, Zhiyuan Liu, and Maosong Sun. Controllable preference optimization: Toward controllable multi-objective alignment. ArXiv preprint,
abs/2402.19085, 2024b.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo,
Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge
competence with apps. In Thirty-fifth Conference on Neural Information Processing
Systems Datasets and Benchmarks Track (Round 2), 2021a.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math
dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets
and Benchmarks Track (Round 2), 2021b.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh
Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile
Saulnier, et al. Mistral 7b. ArXiv preprint, abs/2310.06825, 2023a.
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary,
Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian
Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, L’elio Renard Lavaud,
Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang,
Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibaut Lavril, Thomas Wang,´
Timoth´ee Lacroix, and William El Sayed. Mixtral of experts. 2024.
Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. In Annual Meeting of the
Association for Computational Linguistics, 2023b.
Nathan Lambert, Valentina Pyatkin, Jacob Daniel Morrison, Lester James Validad Miranda,
Bill Yuchen Lin, Khyathi Raghavi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin
Choi, Noah A. Smith, and Hanna Hajishirzi. Rewardbench: Evaluating reward models
for language modeling. 2024.
Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. Generative
judge for evaluating alignment. ArXiv preprint, abs/2310.05470, 2023a.
-----
Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. Gsm-plus: A comprehensive benchmark for evaluating the robustness of llms as mathematical problem
solvers. ArXiv preprint, abs/2402.19255, 2024.
Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin,
and Ge Li. Taco: Topics in algorithmic code generation dataset. volume abs/2312.14852,
2023b.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Remi Leblond,´
Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter
Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang,
Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Mankowitz, Esme
Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol
Vinyals. Competition-level code generation with alphacode. volume abs/2203.07814,
2022.
Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and
”Teknium”. Openorca: An open dataset of gpt augmented flan reasoning traces.
[https://https://huggingface.co/Open-Orca/OpenOrca, 2023.](https://https://huggingface.co/Open-Orca/OpenOrca)
Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. What makes good data
for alignment? a comprehensive study of automatic data selection in instruction tuning.
2023.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit,
Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for
semi-structured mathematical reasoning. In Proceedings of ICLR, 2023.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo
Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering
mathematical reasoning for large language models via reinforced evol-instruct. ArXiv
preprint, abs/2308.09583, 2023a.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao,
Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language
models with evol-instruct, 2023b.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and
developing English math word problem solvers. In Proc. of ACL, 2020.
Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta
Baral, and Ashwin Kalyan. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proc. of ACL, 2022.
Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes,
Sahaj Agrawal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid
Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah. Orca
2: Teaching small language models how to reason. ArXiv preprint, abs/2311.11045, 2023.
Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math:
Unlocking the potential of slms in grade school math. ArXiv preprint, abs/2402.14830,
2024.
OpenAI. Gpt-4 technical report, 2023.
Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured
tables. In Proc. of ACL, 2015.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve
simple math word problems? In Proceedings of the 2021 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language
Technologies, 2021.
-----
Cheng Qian, Chi Han, Yi Ren Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. Creator: Tool
creation for disentangling abstract and concrete reasoning of large language models. In
Conference on Empirical Methods in Natural Language Processing, 2023.
Yujia Qin, Shi Liang, Yining Ye, Kunlun Zhu, Lan Yan, Ya-Ting Lu, Yankai Lin, Xin Cong,
Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Marc H.
Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. Toolllm: Facilitating large language
models to master 16000+ real-world apis. ArXiv preprint, abs/2307.16789, 2023.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward
model. ArXiv preprint, abs/2305.18290, 2023.
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,
Yossi Adi, Jingyu Liu, Tal Remez, Jer´ emy Rapin, et al. Code llama: Open foundation´
models for code. ArXiv preprint, abs/2308.12950, 2023.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li,
Y Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in
open language models. ArXiv preprint, abs/2402.03300, 2024.
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won¨
Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou,, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. ArXiv preprint,
abs/2210.09261, 2022.
Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor
Gitman. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. arXiv
preprint arXiv: Arxiv-2402.10176, 2024.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine
Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M.
Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,´
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami,
Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin
Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh
Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu,
Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin
Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina
Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela
Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey
Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models.
ArXiv preprint, abs/2307.09288, 2023.
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes
Belkada, Shengyi Huang, Leandro von Werra, Clementine Fourrier, Nathan Habib, et al.´
Zephyr: Direct distillation of lm alignment. ArXiv preprint, abs/2310.16944, 2023.
Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. Openchat: Advancing open-source language models with mixed-quality data. ArXiv preprint,
abs/2309.11235, 2023a.
Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji.
Mint: Evaluating llms in multi-turn interaction with tools and language feedback. ArXiv
preprint, abs/2309.10691, 2023b.
Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji.
Executable code actions elicit better llm agents. ArXiv preprint, abs/2402.01030, 2024.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language
models. ArXiv preprint, abs/2201.11903, 2022.
-----
Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. Magicoder: Source
code is all you need, 2023.
Martin Weyssow, Aton Kamanda, and Houari Sahraoui. Codeultrafeedback: An llm-as-ajudge dataset for aligning large language models to coding preferences. ArXiv preprint,
abs/2403.09032, 2024.
Wenda Xu, Guanglei Zhu, Xuandong Zhao, Liangming Pan, Lei Li, and William Yang Wang.
Perils of self-feedback: Self-bias amplifies in large language models. ArXiv preprint,
abs/2402.11436, 2024.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable
multi-hop question answering. In Proc. of EMNLP, 2018.
Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. Reclor: A reading comprehension
dataset requiring logical reasoning. In Proc. of ICLR, 2020.
Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi Ren Fung, Hao Peng, and Heng Ji. Craft:
Customizing llms by creating and retrieving from specialized toolsets. ArXiv preprint,
abs/2309.17428, 2023.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu
Chen. Mammoth: Building math generalist models through hybrid instruction tuning.
ArXiv preprint, abs/2309.05653, 2023.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao
Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Haotong Zhang, Joseph Gonzalez,
and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. ArXiv preprint,
abs/2306.05685, 2023.
Tianyu Zheng, Ge Zhang, Tianhao Shen, Xueling Liu, Bill Yuchen Lin, Jie Fu, Wenhu Chen,
and Xiang Yue. Opencodeinterpreter: Integrating code generation with execution and
refinement. ArXiv preprint, abs/2402.14658, 2024.
Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny
Zhou, and Le Hou. Instruction-following evaluation for large language models. ArXiv
preprint, abs/2311.07911, 2023.
Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. Starling-7b: Improving
llm helpfulness & harmlessness with rlaif, 2023.
-----
Table 6: ULTRAINTERACT covers a diverse set of datasets spanning three tasks.
**Task** **Datasets**
GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), MathQA (Amini et al., 2019),
**Math**
NumGlue (Mishra et al., 2022), TabMWP (Lu et al., 2023)
CodeContest (Li et al., 2022), TACO (Li et al., 2023b), WikiTableQuestions (Pasupat & Liang,
**Coding**
2015), Magicoder-Evol-Instruct (Luo et al., 2023b; Wei et al., 2023)
**Logic** ReClor (Yu et al., 2020), HotpotQA (Yang et al., 2018), StrategyQA (Geva et al., 2021)
**A** **Additional Details in ULTRAINTERACT Construction**
**A.1** **Dataset Details**
**Math. We adopt GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), MathQA**
(Amini et al., 2019), and NumGLUE (Mishra et al., 2022)for mathematic reasoning, and
include TabMWP (Lu et al., 2023) for tabular processing. We retain all the instructions
for all datasets except MathQA, NumGLUE, and TabMWP. MathQA divides problems
into different categories according to the topics and annotates the formula that indicates
the pattern needed to solve each problem. We apply stratified sampling to sample at
most five problems for each pattern and prioritize the problems that come from the longtail category. Numglue contains eight different reasoning tasks and we discard Task 5
(Reading Comprehension + Explicit Numerical Reasoning), Task 6 (Reading Comprehension
+ Implicit Numerical Reasoning), and Task 7 (Quantitative NLI) due to the simplicity Mishra
et al. (2022). For TabMWP, we only keep the questions with difficulty levels 4 and 5 since
the rest are too easy for current state-of-the-art models.
**Code. We focus on programming with Python for the simplicity of integration of the inter-**
preter. We use CodeContest (Li et al., 2022) and TACO (Li et al., 2023b), two competitionlevel coding datasets collected from various online platforms. We filter out the overlapped
questions. Note that part of the questions in TACO only contain ground-truth solutions
and do not contain test cases for evaluation, hence we apply GPT-4 to generate 12 test case
inputs (4 basic inputs, 4 edge cases, and 4 large numbers) for each question and then execute
the ground-truth solution snippets to produce outputs. Given that the two datasets mainly
focus on competition problems that may deviate from real-world daily uses, we exclusively
adopt Magicoder-Evol-Instruct (Luo et al., 2023b; Wei et al., 2023), the only dataset in our
selection that does not contain test cases or ground-truth solutions. We employ GPT-4 Turbo
to judge the correctness of generated code during interaction, and therefore we do not use
this dataset for preference learning since we cannot rigorously construct pairs of correct and
incorrect actions limited by the evaluation reliability. We also include WikiTableQuestions
(Pasupat & Liang, 2015) for table processing with code.
**Logical Reasoning. we use the multi-hop reasoning datasets HotpotQA (Yang et al., 2018)**
and StrategyQA (Geva et al., 2021), and the logical reasoning dataset ReClor (Yu et al.,
2020). We follow the setting of Wang et al. (2023b) and convert HotpotQA to a generation
task, removing the contexts and requiring LLMs to search relevant information using
Wikipedia API.
**A.2** **Details on Preference Tree Construction**
**Models Adopted for Incorrect Action Sampling.** We randomly sample one model
from Mistral-7B-Instruct-v0.2, DeepSeek-Coder-33B-Instruct, Mixtral-8x7B-Instruct, and
DeepSeek-LLM-67B-Chat to generate one incorrect action to pair with each correct one.
**Correct Action Generation Based on Ground Truth Annotations.**
We adopt GPT-3.5 Turbo as the generator to generate correct actions based on ground truth
considering the instruction-following ability. We provide different access to the ground
truth information for different tasks, specifically: (1) For coding, where test cases are black
boxes to reference solutions, we provide full access to the solution codes. The actor model
will add step marks and corresponding explanations to the ground-truth code to make it
-----
Table 7: Stats breakdown
|Human Annotation Task Dataset w/ Tool? # Prompts # Pairs # Correct Answers. Avg. Length Has Answer? Has Rationale?|Col2|
|---|---|
|Math|! 4,522 10,277 17,392 1,746.7 ! ! GSM8K % 7,257 10,879 15,752 823.3 ! ! ! 7,474 22,905 34,667 1,189.0 ! ! MATH % 7,471 25,765 36,005 1,735.0 ! ! ! 7,552 15,079 20,328 2,338.5 ! ! MathQA % 7,159 17,743 22,500 1,916.3 ! ! ! 3,020 3,601 5,717 1,474.6 ! % NumGLUE % 2,835 2,987 4,273 1,056.1 ! % TabMWP ! 3,117 4,135 6,083 842.6 ! %|
|Coding|CodeContest - 8,167 44,319 44,666 2,061.7 ! ! TACO - 9,016 50,877 58,191 2,143.5 ! ! WikiTableQuestions - 1,401 1,544 1,738 1,794.8 ! % Magicoder-Evol-Instruct - 10,374 0 10,238 687.1 % %|
|Logic|Reclor % 4,467 7,958 7,231 1,266.7 ! % HotpotQA ! 1,182 1,009 1,230 1,333.2 ! % StrategyQA ! 904 741 968 1,256.2 ! %|
easier to understand, or further refine the code for optimization. (2) For tool-free math
problems, to avoid the actor model directly copying the answers to pass the correctness
checking, we mask the answer numbers in the rationale before providing it to LLMs. This
approach can better ensure response quality since it encourages LLMs to generate responses
with complete reasoning chains with each step clearly marked. (3) For program-enhanced
math reasoning, we first translate the textual rationale into code. Then, we either directly
provide it to the actor model to generate plans, or ask the actor model to convert the code
into modularization programming and then make plans to create tools to solve problems.
**A.3** **Data Decomtamination**
We conduct careful decontamination. Firstly, for LeetCode, we apply the Exact Substring
Matching Algorithm[3] to compare with each instruction in the ULTRAINTERACT and find
no overlaps. For others, we perform 8-gram exact matching to compare ULTRAINTERACT
instructions with test sets of the same task. We remove those instructions that overlap 8
grams with any test sample.
**A.4** **Detailed Statistics**
In total, ULTRAINTERACT has 86K instructions and 220K action pairs. The Total # Pairs
does not equal Total # Turns in ULTRAINTERACT, since we fail to generate sufficient correct
actions for every incorrect action in multi-turn trajectories mainly due to a lack of sufficient
ground truth annotations. The total # pairs may not equal # correct answers, either, because
it is also difficult and unnecessary to sample incorrect actions for the correct ones for some
simple instructions. We present the specific information for each dataset. In particular, we
list information on human annotation in each dataset, which plays an important role in
correct action generation (§2.3 and Appendix A.2). All three steps of correct action sampling
methods mentioned in §2.3 can be applied to datasets that have rationales, while for datasets
only containing answers, only the first two steps are applicable. We do not apply any of the
three-step methods to generate correct answers for Magicoder, the only dataset without any
human annotation, to construct preference pairs.
**B** **Additional Details on Training EURUS Models**
**Supervised Fine-Tuning. We finetune base models for 1 epoch with a 2e-5 learning rate**
and 0.1 warmup ratio using a cosine scheduler. For EURUS-7B, we mix 32K UltraChat, 30K
ShareGPT, and 50K OpenOrca. For For EURUS-70B, we mix 63K UltraChat, 30K ShareGPT,
and 70K OpenOrca.
[3https://github.com/bigcode-project/bigcode-dataset/tree/main/decontamination](https://github.com/bigcode-project/bigcode-dataset/tree/main/decontamination)
-----
**Preference Learning. For hyperparameters, all β is set to 0.1, and λ+/λ−** in KTO is set to
1.33 as recommended. We finetune models for 1 epoch with a 5e-7 learning rate and 0.1
warmup ratio using a cosine scheduler.
**Reward Modeling. We train RM for 1 epoch with lr=1e-5 learning rate. We also use a cosine**
scheduler with a warmup ratio of 0.1.
Regarding pair augmentation, we scale up the pairs by matching every correct action for
each instruction with one incorrect action of other turns. This leads to NxN pairs of singleturn actions for a trajectory of depth N. We remove the action pairs consisting of nodes at
the same turn, as they are already part of the multi-turn trajectory pairs we included. Next,
to avoid overfitting on the training set, we only select instructions with NxN ≤ 10, and for
these instructions, we randomly sample at most 9 pairs with each action occurring no more
than 3 times. This leads to an augmentation of 240k single-turn action pairs.
**C** **Additional Evaluation Results of EURUS**
Table 8: MMLU and MT-Bench.
|Model|MMLU MT-Bench|
|---|---|
_∼7B_
|Mistral-7B-Instruct-v0.2 Zephyr-7B-β OpenChat-3.5-1210 Starling-LM-7B-α Magicoder-S-DS-6.7B OpenCI-DS-6.7B MAmmoTH-7B-Mistral WizardMath-7B-v1.1 OpenMath-Mistral-7B EURUS-7B-SFT + DPO + KTO + NCA|58.9 7.60 59.7 7.34 63.4 7.81 64.0 8.09 37.1 - 37.2 - 56.2 - 60.3 - 58.3 - 61.8 7.15 62.4 7.38 62.2 7.38 62.2 7.38|
|---|---|
_∼40B_
|Mixtral-8x7B-Instruct DeepSeek-Coder-33B-Ins|70.3 8.30 40.2 -|
|---|---|
_∼70B_
|CodeLLaMA-70B-Instruct DeepSeek-LM-67B-Chat QWen1.5-72B-Chat OpenCI-CL-70B OpenMath-CL-70B EURUS-70B-SFT + KTO + NCA|55.1 - 72.3 - 72.9 8.61 52.4 - 60.2 - 59.1 7.69 59.5 7.93 59.4 7.54|
|---|---|
Proprietary Models
**Detailed Setup in §4. For math, we test both textual**
reasoning and program-enhanced settings and report
the best performance of the two. All evaluations are
conducted in 0-shot CoT with two exceptions: BBH
uses 3 shots and IFEval does not use CoT. For MINT,
we select MATH, TheoremQA, and MMLU-math
from “reasoning” as a new “math” split. We also
evaluate 5-shot MMLU (Hendrycks et al., 2021a) for
STEM knowledge and MT-Bench (Zheng et al., 2023)
for conversation abilities to study whether EURUS
needs to trade off other capabilities for reasoning.
**Results. Results are shown in Table 8.**
On MMLU, EURUS outperforms baselines dedicated
to coding and math, and achieves higher results than
Mistral-Instruct-v0.2 and CodeLLaMA-70B-Instruct,
the official aligned versions of our base model built
by their authors. Compared to general-purpose baseline models, EURUS-7B achieves comparable performance with the top-performance OpenChat and
Starling-LM, though EURUS-70B does not achieve
the same level of performance as other generalpurpose models, which is expected due to the gap in
the base models since CodeLLaMA-70B has not been
intentionally optimized for knowledge.
|GPT-3.5 Turbo GPT-4|70.0 7.94 86.4 8.96|
|---|---|
On MT-Bench, we report baseline numbers from the official leaderboard[4]. EURUS matches
the performance of mainstream open-source general-purpose models, and EURUS-70B-KTO
further achieves the score of GPT-3.5 Turbo.
**D** **Detailed Results on Reward Modeling**
**D.1** **Additional Results on Reranking**
We present the full results on reranking in Table 9, where the conclusions are consistent
with those drawn from §D: (1) Our reward models always achieve the highest accuracy
on all test sets across different N, except when N=2 on HumanEval. (2) Both LBT and LDR
consistently help improve reranking performance on three test sets except for HumanEval,
where removing either of the objectives can prevent the accuracy from dropping when
increasing N from 8 to 16. (3) Modeling safety hurts reranking performance in reasoning.
[4https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
-----
Table 9: Detailed results of reranking Mistral-Instruct-v0.2’s responses on coding and math.
|Datasets|HumanEval|MBPP|GSM8K|MATH|
|---|---|---|---|---|
|N|2 4 8 16|2 4 8 16|2 4 8 16|2 4 8 16|
|Random Top Logits Self-Consistency Starling-RM-34B|41.5 39.0 40.2 39.6 43.3 43.3 43.3 43.3 43.3 42.7 42.1 40.9 47.6 47.0 49.4 45.7|33.1 33.6 34.3 30.1 35.3 35.3 35.3 35.3 35.3 36.3 36.6 37.1 37.8 38.8 39.6 40.4|45.0 43.1 44.5 40.2 45.7 45.7 45.7 45.7 45.7 49.5 52.2 52.8 49.1 52.8 56.0 56.5|11.5 11.3 10.0 8.5 12.1 12.1 12.1 12.1 12.1 13.8 15.8 16.8 6.5 7.2 7.7 7.7|
|EURUS-RM-7B w/o LDR w/o LBT w/o US w/o UF + US|44.5 45.7 47.6 47.0 45.7 44.5 46.3 50.0 45.1 44.5 47.0 48.2 45.7 47.0 49.4 50.6 43.9 43.3 47.0 46.3|39.3 42.6 43.4 43.9 39.3 42.4 42.4 42.1 38.6 40.6 39.6 40.1 39.3 41.1 41.4 42.9 36.3 38.1 36.6 35.3|49.8 53.7 56.3 57.3 49.4 53.2 55.4 56.3 49.1 52.5 55.2 57.8 49.4 53.8 57.4 58.7 49.4 52.3 54.6 57.2|14.3 16.2 17.1 17.3 14.2 16.1 17.0 16.9 14.3 16.3 17.2 17.1 14.5 16.6 17.2 17.5 14.3 16.5 17.4 17.4|
|Pass@N|62.8 73.8 88.4 92.7|42.4 48.1 52.6 58.6|54.9 64.1 73.2 80.4|16.9 22.7 28.9 35.5|
Table 10: Ablation Study.
|Model|Coding|Math|Reasoning|Ins-Following|Avg.|
|---|---|---|---|---|---|
||HumanEval MBPP LeetCode|GSM8K MATH TheoremQA SVAMP ASDiv|BBH|IFEval||
|EURUS-7B-SFT Ground-Truth Open-Source Only ULTRAINTERACT Only|55.5 59.1 20.0 46.3 46.4 8.9 38.4 44.1 11.1 46.3 50.1 15.6|73.7 32.6 20.0 82.2 84.1 62.2 15.0 9.6 75.1 68.8 45.3 10.8 9.3 52.7 49.4 67.6 30.9 20.1 80.4 82.0|64.6 64.4 65.3 67.0|44.0 42.9 43.6 17.4|53.6 44.0 37.0 47.7|
When removing UltraSafety from the training data, the RM achieves higher accuracies than
EURUS-RM-7B except on MBPP.
**E** **Detailed Ablation Results**
We present the full results of §5 in Table 10, with detailed metrics on all coding and math
datasets.
-----
| [
"Ning, Ding",
"Lifan, Yuan",
"Jia, Deng",
"Ganqu, Cui",
"Hao, Peng",
"Bowen, Zhou",
"Hanbin, Wang",
"Zhiyuan, Liu",
"Maosong, Sun",
"Xingyao, Wang",
"Boji, Shan",
"Huimin, Chen",
"Ruobing, Xie",
"Yankai, Lin",
"Zhenghao, Liu"
] | 2024-04-02T00:00:00 | ICLR 2025 Submission | false | 41 | 3 | null | http://arxiv.org/abs/2404.02078 | https://arxiv.org/abs/2404.02078 | https://www.semanticscholar.org/paper/ac2848656e68b60665e6bc3e28eb6c7d5bebb4b0 |
Learning to Prove Theorems by Learning to Generate Theorems | We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover and advances the state of the art of automated theorem proving in Metamath. Code is available at https://github.com/princeton-vl/MetaGen. | This work proposes to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover, and demonstrates that synthetic data from this approach improves the theorem provers and advances the state of the art of automated theorem proving in Metamath. | # Learning to Prove Theorems by Learning to Generate Theorems
**Mingzhe Wang**
Princeton University
```
[email protected]
```
**Abstract**
**Jia Deng**
Princeton University
```
[email protected]
```
We consider the task of automated theorem proving, a key AI task. Deep learning
has shown promise for training theorem provers, but there are limited humanwritten theorems and proofs available for supervised learning. To address this
limitation, we propose to learn a neural generator that automatically synthesizes
theorems and proofs for the purpose of training a theorem prover. Experiments on
real-world tasks demonstrate that synthetic data from our approach improves the
theorem prover and advances the state of the art of automated theorem proving in
[Metamath. Code is available at https://github.com/princeton-vl/MetaGen.](https://github.com/princeton-vl/MetaGen)
**1** **Introduction**
Automated theorem proving aims to automatically generate a proof given a conjecture (the target
theorem) and a knowledge base of known facts, all expressed in a formal language. Automated
theorem proving is useful in a wide range of applications, including the verification and synthesis of
software and hardware systems (Gu et al., 2016; Darvas et al., 2005; Kern & Greenstreet, 1999).
Automated theorem proving boils down to a search problem: finding the sequence of symbol
manipulations that generate a valid proof. The fundamental challenge lies in the explosion of search
space, in particular with long proofs and large knowledge bases. The success of theorem proving thus
relies on effective heuristics that guide the prover by deciding the next step the prover should take.
Deep learning has emerged as a promising approach to learning search heuristics in an automated
theorem prover (Irving et al., 2016; Whalen, 2016; Loos et al., 2017; Bansal et al., 2019a; Lee et al.,
2019). The search process fundamentally reduces to a sequence of actions on manipulating a set of
symbols. Thus a deep network can be trained to select the best action at each step.
A key challenge is how to train such networks. Prior work has used human-written theorems and
proofs to perform imitation learning and has shown promising results (Loos et al., 2017; Yang &
Deng, 2019; Whalen, 2016; Paliwal et al., 2019). The training data consists of theorems and proofs
manually written by human experts in a formal language, and the prover is trained to imitate the
proof steps demonstrated by humans.
However, relying on human-written data has a major drawback: such data has limited availability and
scalability. Writing theorems and proofs in a formal language requires highly specialized knowledge
and skills, including mathematics, computer programming, and proficiency in the particular formal
language. For a CS graduate student, it can take months to master a new formal language such as
Mizar, Metamath or HOLight (Wiedijk, 2003), after which it can take days to formalize a single page
of a math textbook. This makes it impractical to crowdsource human-written proofs at large scale.
In this paper, we propose to train a theorem prover using synthetic data. The basic idea is to construct
a generator that automatically synthesizes new theorems and their proofs, which serve to augment
human-written data for training the prover.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
-----
Figure 1: Left: A proof task. Middle: The proof tree of the theorem 3eqtri. Each leaf node is a
hypothesis and each internal node corresponds to a proof step. Right: The overview of our approach.
To generate a new theorem and its proof, the generator performs a sequence of symbol manipulations,
similar to a prover. It repeatedly applies inference rules on a set of existing theorems and combines
their proofs to form the proof of the new theorem. It is important to note that despite the similarity of
operations, the generator has a much easier task than the prover. The generator just needs to generate
_some new theorem of its own choice, whereas the prover needs to find the proof for a particular target_
theorem specified by someone else.
One challenge of generating synthetic theorems is that there are infinitely many possibilities but the
prover can only use a finite amount of them during training. Not all theorems are equally useful as
training data. Thus a key question is how to generate synthetic theorems that are more useful. To this
end we make the generator learnable by parameterizing it with deep networks.
We hypothesize that the generated data will be more useful if they are similar to human-written
data. Therefore we use human-written data to train a generator. We consider two scenarios. If the
human-written data consist of both theorem statements and their proofs, we train the generator to
follow the proof steps in the forward direction, so that a well-trained generator would derive theorems
humans tend to derive. If the human-written data consist of only theorem statements but not their
proofs, i.e. no human actions to imitate, we use reinforcement learning to let the generator discover
good actions that lead to synthetic theorems that are similar to the human-written theorems. To
measure similarity between synthetic theorems and human theorems, we use a discriminator trained
to distinguish the human theorems from synthetic ones, similar to GANs (Goodfellow et al., 2014).
We instantiate our approach in Metamath (Megill & Wheeler, 2019), a popular language for formal
mathematics, and with Holophrasm (Whalen, 2016), a Metamath neural prover. We propose a neural
theorem generator called “MetaGen”, which synthesizes new theorems and their proofs expressed
in the formalism of Metamath. To the best of our knowledge, MetaGen is the first neural generator
of synthetic training data for theorem proving. Experiments on real-world Metamath tasks show
that synthetic data from MetaGen can help train better provers, advancing the state of art in theorem
proving on Metamath.
**2** **Related Work**
**Automated theorem proving Our work is related to prior work on learning to prove theo-**
rems (Whalen, 2016; Gauthier et al., 2018; Bansal et al., 2019a; Yang & Deng, 2019; Loos et al.,
2017; Balunovic et al., 2018; Kaliszyk et al., 2018; Bansal et al., 2019b; Polu & Sutskever, 2020).
Our work directly builds off of Holophrasm (Whalen, 2016), a neural-augmented theorem prover for
Metamath. It contains three deep networks to generate actions and initial values to guide proof search
following the UCT algorithm (Kocsis & Szepesvári, 2006). Polu & Sutskever (2020) also build
the theorem prover for Metamath by adopting the GPT-like network architectures and pretraining
methods and generating proof steps autoregressively.
TacticToe (Gauthier et al., 2018), DeepHOL (Bansal et al., 2019a) and ASTactic (Yang & Deng, 2019)
are learning-based theorem provers based on interactive theorem provers HOL4 (Slind & Norrish,
2008), HOL Light (Harrison, 2009) and Coq (Bertot & Castéran, 2004) respectively. Paliwal et al.
(2019) improves DeepHOL by representing formulas as graphs. Loos et al. (2017) proposes to learn
clause selection by deep learning inside the first-order logic prover E (Schulz, 2002).
-----
All of these methods are orthogonal to our approach because all of their provers are learned from
human-written training data, whereas our prover is trained from human data augmented with synthetic
data. Our contribution is on the generation of synthetic data and using such data to train a prover.
Kaliszyk et al. (2018); Bansal et al. (2019a,b); Balunovic et al. (2018) use reinforcement learning
to train provers with only human-written theorems or SMT conjectures but not proofs. During
training, a prover collects rewards only upon finding full proofs. In contrast, we always train our
prover using imitation learning. Under the same setting with only human-written theorems but not
proofs, we use reinforcement learning to train our generator, whose reward is the similarity between
a generated theorem and human-written theorems, as measured by an adversarial discriminator. Our
reinforcement learning task is much easier because the reward is continuous and there are many ways
to generate theorems similar to human-written ones.
**Synthetic theorem generation Zombori et al. (2019); Fawzi et al. (2019) construct theorem provers**
by training on randomly generated synthetic theorems and evaluate the learned prover on synthetic
theorems. The main difference of our approach is that our generator is optimized through learning, as
opposed to random generation.
Kaliszyk et al. (2018); Jakub˚uv & Urban (2019); Urban et al. (2008); Kaliszyk et al. (2014); Piotrowski
& Urban (2018) train theorem provers iteratively. They repeatedly apply the trained prover on existing
human theorems and generate new machine proofs to train the prover further. In these methods, only
new proofs are synthesized and the synthetic proofs are only for existing human theorems; no new
theorems are synthesized. In contrast, our approach synthesizes both new theorems and new proofs
which could cover a much larger space of possible derivations than the proofs of existing human
theorems.
Urban (2004); Kaliszyk & Urban (2015); Kaliszyk et al. (2015) extract proof tasks from the proofs
of human-written theorems, such as the intermediate inference steps or their variants. That is, they
extract "sub-proofs" from existing proofs. In contrast, we generate entirely new theorems and new
proofs that are not part of any existing proofs.
Our work is also related to the line of work on conjecturing (Chvalovsky et al., 2019; Urban &`
Jakub˚uv, 2020; Colton, 2012), which aims to generate mathematical conjectures automatically. The
generated conjectures are not necessarily true, and their proofs are not required. In contrast, each of
our synthetic theorem is guaranteed to be correct and its proof is automatically available.
**Automatic goal generation by self-play Our work is similar to the line of work in reinforcement**
learning (Florensa et al., 2018; Sukhbaatar et al., 2017, 2018; Durugkar & Stone, 2018) that deploys
two agents in adversary self-play, where one agent to generate tasks for another agent to accomplish.
We pursue similar ideas in the new context of theorem proving by learning to generate synthetic
theorems to train the prover. Also of note is that we have no adversarial self-play. The goal of the
generator is to discover novel theorems similar to human-written ones, not to beat the prover.
Recently, Huang (2019) introduced a two-player game which encourages players to learn to predict
the consistency of logical formulas by self-play. These two players behave symmetrically and
compete with each other in the game. In contrast, our generator and prover execute different tasks,
and are co-operative. In addition, their game remains a theoretical proposal without any empirical
validation, whereas we have performed experiments on large-scale data.
**3** **Background on Metamath**
Metamath is a language for developing formal mathematics. It is one of the simplest formal systems.
It has only one inference rule, called substitution, but is universally applicable in formalizing a large
portion of mathematics [1] and different types of logic (Megill & Wheeler, 2019).
**Expression and theorem A basic building block of Metamath is expressions. An expression is a**
sequence of tokens that follows a set of grammar rules called “generating axioms”. A token is either a
constant or a variable. For example, x + 2 ∗ _y = y + (x + y) is an expression, where x and y are two_
variables. Each expression corresponds to a unique parse tree where each internal node represents a
generating axiom and each leaf node is a token.
1Its largest knowledge base, set.mm ranks 3rd in the "Formalizing 100 Theorems" challenge (Wiedijk, 2019).
-----
A theorem consists of a set of expressions, one expression as its assertion and zero or more expressions
as its hypotheses. The theorem can be understood to state that the hypotheses (e.g. x[2] = 1 and x > 0)
entail the assertion (e.g. x = 1). Some examples of theorems are shown in Figure 1.
**Substitution The only inference rule in Metamath is substitution, which transforms one expression**
by replacing each variable with a non-empty new expression. For example, the expression A = B
can be transformed to E + F = C ∗ _D by the substitution A →_ _E + F and B →_ _C ∗_ _D._
Given two expressions a and b, we say b can reach a or a is reachable from b if there exists a
substitution that transforms b to a. This is equivalent to saying that the parse tree of b can be obtained
by “trimming” the parse tree of a—repeatedly picking an internal node, removing all its descendants,
and replacing it with a variable node. Reachability can be checked by comparing parse trees; an
algorithm is described in the supplementary material.
**Proof step A proof step is the basic unit of reasoning. A proof step in Metamath has two parts: (1) a**
theorem and (2) a substitution that maps each variable in the theorem to a new expression. A proof
step serves to establish entailment between expressions based on the invoked theorem. For example,
let t be the theorem over1i, with the hypothesis A = B and the assertion (A F C) = (B F C),
where {A, B, C, F _} is the set of variables in t. Let φ be a substitution that maps each variable in t to_
a new expression: A → 2, B → (1 + 1), C → 2 and F → +. By replacing variables in t with their
corresponding expressions given by φ, we have a new hypothesis 2 = (1 + 1) and a new assertion
(2 + 2) = ((1 + 1) + 2) This proof step (t, φ) establishes that the new hypothesis 2 = (1 + 1) entails
the new assertion (2 + 2) = ((1 + 1) + 2) based on theorem t. The new assertion is called the
conclusion and the new hypothesis is called the precondition. Because a theorem has one assertion
and zero or more hypotheses, a proof step thus has one conclusion and zero or more preconditions.
**Proof A theorem is proved if we can construct a proof tree that connects the hypotheses of the**
theorem to its assertion through entailment. The root node of a proof tree is the assertion of the
theorem. Each leaf node of the tree is either a hypothesis of the theorem or empty. Each internal
node of the tree is an expression and is associated with a proof step that uses an pre-existing theorem,
together with an appropriate substitution, to establish the entailment of this internal node by its
child nodes. Note that if an internal node has an empty child, it means that the proof step has no
preconditions. An example proof tree is shown in Figure 1.
A proof is a sequence of proof steps that can be obtained by traversing a proof tree in pre-order. This
linearized proof is an equivalent to the tree representation. In this work we will use “proof” and
“proof tree” interchangeably.
**Corpus A corpus consists of a set of axioms and a sequence of theorems and their corresponding**
proofs. The proof of each theorem uses only the axioms and the preceding theorems.
**4** **Approach**
**Task setup We use the standard theorem proving setup in prior work Irving et al. (2016); Bansal**
et al. (2019a); Whalen (2016). A proof task consists of a target theorem (or “target” in short) to be
proved and a set of background theorems to be used as known facts. For each theorem in a corpus,
we construct a proof task using the theorem as the target theorem and all preceding theorems (i.e. the
theorems that humans had available when they were proving the target theorem) as the background
theorems. In other words, each theorem in the corpus corresponds to a unique proof task that uses
the theorem as the target. We randomly split all theorems into three disjoint sets: a training set, a
validation set, and a test set. Accordingly, we have three corresponding sets of proof tasks using the
theorems as targets. More details about this setup in the supplemental material.
**4.1** **Generator**
We propose MetaGen, a neural generator that performs forward reasoning to synthesize theorems. It
takes a set of training proof tasks as input and outputs a set of synthetic theorems. These synthetic
theorems are then combined with original training proof tasks to train the theorem prover (as shown
in the right of Fig. 1). The basic operation is generating a proof step—selecting an existing theorem
and constructing a substitution. From this single proof step we can derive a new theorem. Now, we
can treat this new theorem as an existing theorem and repeat to generate additional new theorems.
-----
One issue requiring special handling is avoiding generating “meaningless” theorems. A meaningless
theorem is one that includes falsehood in its hypotheses—as a result it is always provable regardless
what the assertion says. It is possible to generate such a theorem if we allow arbitrary substitutions in
constructing a proof step. For example, the hypothesis A = B can be substituted into 1 = 2. Such
theorems are valid but unlikely to be useful as training data.
To avoid meaningless theorems, in constructing a proof step, we require that each new hypothesis
produced by substitution must be identical to the full expression of a node in an existing proof tree
(either the root, a leaf, or an internal node), such as the five expressions in yellow boxes in Fig. 1.
This prevents introducing false expressions as hypotheses, provided that the existing proofs have no
false expressions. See supplemental material about more discussion on meaningless theorems
A second issue is generating new theorems with multi-step proofs. A single proof step gives a shallow
tree. To generate theorems with longer proofs, we “graft” this shallow tree with existing proof trees
or subtrees. For a leaf node e of the shallow tree, we can replace it with an existing proof tree (or
subtree) whose root node is also e. For example, suppose the shallow tree proves that x[2] = 1 and
_x > 0 entail x = 1, and there already exists another tree proving that x[3]_ _> 0 entails x > 0. Then we_
can join the two trees to generate a new tree proving that x[3] _> 0 and x[2]_ = 1 entail x = 1.
To generate theorems and proofs more similar to human-written ones, we impose an additional
constraint that a synthesized proof step can only invoke a theorem that has appeared as a background
theorem in a training proof task. This is because in the ground-truth proof for a proof task, only the
background theorems are invoked in proof steps. This means that we do not invoke any synthesized
theorems. To implement this constraint, the generator constructs proof steps using a restricted set of
“invocable” theorems pre-specified as input to the generator.
**Initializing existing proof trees The generator takes as input a set E of existing theorems and**
optionally their proof trees, and a set I of invocable theorems, where E and I are the union of the
target and background theorems of the training proof tasks respectively. To enable tree grafting, it
first builds a set G of existing proof trees. For every theorem in E, if its proof tree is available, for
every node e in its proof tree, we add to G the subtree that is rooted at e and contains all nodes below
_e. Otherwise, we add to G every hypothesis of this theorem as a single node proof tree._
Two proof trees are considered equivalent if they have the same root node and the same leaf nodes,
i.e. they prove the same theorem. Among equivalent trees, we only keep the smallest one. As a result,
_G contains all sub-proof trees from all the existing theorems that can be grafted to a new proof step._
**Generating new theorems To generate a new theorem, the key procedure is to construct a proof step**
and a set S of existing proof trees such that S is a subset of G and each precondition of this proof
step matches the root node of a proof tree in S. This is achieved in three steps as follows:
1. Pick an invocable theorem t ∈ _I according to the frequencies of invocable theorems being_
used in the proofs of the existing theorems.
2. Initialize the set S of proof trees as empty. Initialize the substitution φ for t as empty. For
each hypothesis h of theorem t, apply the current substitution φ to hypothesis h to obtain
the transformed expression h(φ), find all compatible proof trees, those whose root nodes are
reachable from h(φ)—h(φ) can be transformed to the root nodes by substitution, which can
be determined by comparing parse trees—and perform the following:
_• Select a compatible proof tree c using a relevance network (to be described later). For_
each variable that has not been substituted in h, update φ by assigning the variable a
substitute expression to match the root of c. Add tree c to set S.
If no compatible proof tree exists, go to Step 1 and rebuild this proof step from scratch.
3. If a variable appears in a hypothesis of t, its substitution has been determined by matching
this hypothesis with the root of a compatible proof tree. For the remaining variables that
appear exclusively in the assertion of t, use a subtitution network (to be described later) to
generate substitute expressions for them.
This proof step gives a one-step proof tree, which we expand to a multi-step proof tree by grafting the
trees in set S onto its leaves. This multi-step proof tree is added to G for subsequent generation. We
repeat this procedure to get a set of synthetic theorems (pseudo-code in the supplementary material).
-----
**Relevance network of generator The relevance network in step 2 is a deep network trained to pick**
a proof tree from a set of candidates by scoring and ranking them. It uses the same design as the
relevance network in Holophrasm Whalen (2016) (see Sec. 4.2) but has different inputs and purposes.
It takes two sequences of tokens as input. One input sequence represents the root and leaf nodes
of a proof tree. The other sequence consists of two parts. One part represents the leaf nodes of the
proof trees that have been selected for preceding hypotheses (the hypotheses are processed one by
one). The other part represents the assertion and hypotheses of the invocable theorem transformed by
the current substitution, except for the current hypothesis to be processed which is represented by a
special token. Two GRU encoders convert each input sequence to an embedding vector, followed by
a bilinear layer to output a score from the two vectors. In practice, we limit the number of candidate
trees to 2000 for tractability.
**Substitution network of generator The substitution network generates the substitution for a tar-**
get variable of an invocable theorem. It uses the same design as the “generation network” in
Holophrasm Whalen (2016) (see Sec. 4.2) but has different inputs and purposes. It is a sequence-tosequence model with the encoder-decoder GRU network. It takes as input the sequence of tokens that
represents the assertion of the invocable theorem and the leaf nodes of the existing proof trees that
have been selected to construct a proof step. The target variable is represented by a special token.
The network outputs a sequence of tokens, sampled one by one based on the softmax probabilities.
**Generator training We propose two strategies to train the relevance network and the substitution**
network, depending on the availability of human-written proofs.
Our generator can work without learnable parameters if we remove the two deep network and sample
new proof steps by randomly picking existing proof trees and generating substitutions. We call such
a generator as MetaGen-Rand.
Given human-written proofs, we train MetaGen-IL by imitation learning. Given a proof step (t, φ) in
a human-written proof tree s, each transformed hypothesis h(φ) of theorem t is an internal node of
tree s and is the root of a subtree; we train the relevance network to imitate this step by selecting this
subtree among a large set of candidates.
For a variable f that appears in the assertion but not the hypotheses of t, the substitution network is
trained to produce its human-written substitute expression φ(f ).
In the case of only human-written theorems but not their proofs, we can no longer perform imitation
learning. We instead use reinforcement learning. The objective is to learn actions to maximize the
similarity between the generated theorems and human-written theorems. We propose two reward
functions to evaluate a generated theorem and update the two deep networks toward the higher
rewards via the Reinforce algorithm Williams (1992).
The first reward function is the cross-entropy of a generated theorem given by a language model
trained from the human-written theorems. The generator from this reward is called MetaGen-RL-LM.
The second reward function is given by an adversarial loss similar to GAN (Goodfellow et al.,
2014)—a binary classifier to distinguish the human-written theorems from the generated ones. It
is pretrained to separate human-written theorems from the theorems generated by MetaGen-Rand,
and then updated on-the-fly to separate human-written theorems from the theorems generated by the
current generator. The generator is updated to minimize the adversarial loss. We call this generator
_MetaGen-RL-Adv._
More details about the deep networks of the generator are presented in the supplementary material.
**4.2** **Prover**
We use Holophrasm (Whalen, 2016) as our theorem prover and augment its training with synthetic
data. Given a proof task, Holophrasm conducts backward reasoning to prove the target theorem as
described in the supplementary material. For completeness we briefly summarize how Holophrasm
works and refer the reader to Whalen (2016) and the supplementary material for more details.
Holophrasm uses Monte Carlo Tree Search (MCTS) to explore multiple branches of actions to find a
proof tree. It involves three learnable deep networks: a payoff network to determine which branch is
-----
Table 1: Performance of the relevance network of the prover on validation data of iset.mm (top two
rows) and set.mm (starting from the third row).
**Human** **Synthetic** **Generator** **Model** **Top-1** **Top-5** **Top-20** **MRR**
**proofs** **proofs**
7123 (ISET) 0 - RELEVANCE 43.27 69.57 89.68 0.5535
7123 (ISET) 1M _MetaGen-IL_ RELEVANCE 45.10 71.00 89.46 0.5699
0 0 - TF-IDF 14.28 21.13 32.55 0.1877
0 0 - RELEVANCE 0.96 5.33 15.67 0.0445
0 300K _MetaGen-Rand_ RELEVANCE 24.22 37.27 49.92 0.3093
0 300K _MetaGen-RL-LM_ RELEVANCE 24.74 37.66 54.22 0.3182
0 300K _MetaGen-RL-Adv_ RELEVANCE 25.07 39.33 50.23 0.3242
2179 (10%) 0 - RELEVANCE 41.24 67.56 86.84 0.5356
2179 (10%) 1M _MetaGen-Rand_ RELEVANCE 45.44 70.13 88.33 0.5692
2179 (10%) 1M _MetaGen-IL_ RELEVANCE 46.10 71.12 89.38 0.5772
4358 (20%) 0 - RELEVANCE 47.02 72.45 89.48 0.5870
21786 (100%) 0 - RELEVANCE 51.52 78.56 93.41 0.6367
21786 (100%) 10M _MetaGen-Rand_ RELEVANCE 52.08 77.76 92.83 0.6375
21786 (100%) 10M _MetaGen-IL_ RELEVANCE 53.20 78.73 93.13 0.6474
more promising, a relevance network to pick a background theorem to construct a proof step, and a
_substitution network[2]_ to generate substitutions.
**4.3** **Applicability to other formal systems**
As is standard in related work Loos et al. (2017); Irving et al. (2016); Kaliszyk et al. (2018); Yang &
Deng (2019), we instantiate and validate our approach on a single formal system, but our approach is
applicable to other formal systems such as HOL Light, Coq and Isabelle.
Our approach can be applied to a new system under the following conditions: (1) the search heuristics
of the theorem prover can be trained by imitating ground truth proofs; (2) the proof of a theorem is a
tree of intermediate goals, and a proof steps demonstrate the entailment of a goal by its children; (3)
an intermediate goal in the proof is equivalent to a legal theorem. These conditions are satisfied by
the formal systems mentioned above.
To adapt our approach to a new system, the main effort is to rewrite the procedure of sampling proof
steps, by replacing substitution with inference rules of the new system. HOL Light, Coq and Isabelle
only provide tactics as inference rules to decompose a goal into subgoals for backward reasoning.
However, to generate new theorems, we need to execute the corresponding reverse tactics, which are
unavailable in their ML environments. We leave the experiments on these systems as future work.
**5** **Experiments**
**Dataset We experiment on two Metamath knowledge bases: iset.mm and set.mm. iset.mm**
formalizes intuitionistic logic and contains 463 axioms and 8916 theorems, which give rise to 8916
corresponding proof tasks. These proof tasks are divided into 7123 training tasks, 890 validation
tasks and 903 test tasks. We use the same version of set.mm as Whalen (2016). It formalizes the ZFC
set theory and contains 1099 axioms and 27218 theorems, which give rise to 27218 corresponding
proof tasks. These proof tasks are divided into 21786 training tasks, 2712 validation tasks and 2720
test tasks.
**Training protocol On set.mm, we control for the number of human proofs provided during training.**
Specifically, we compare our approach to baselines while including either 0%, 10%, or 100% of the
human proofs. We also report the baseline with 20% human proofs for comparison.
2called the generation network in Whalen (2016) but renamed here to avoid confusion with the generator.
-----
Table 2: Performance of the substitution network of the prover on validation data of iset.mm (top
two rows) and set.mm (starting from the third row).
**Human proofs** **Synthetic proofs** **Generator** **Model** **Prob** **Accuracy**
7123 (ISET) 0 - SUBSTITUTION 0.1723 49.45
7123 (ISET) 1M _MetaGen-IL_ SUBSTITUTION 0.2554 57.81
0 0 - LAUGUAGE MODEL 0.0032 9.06
0 0 - SUBSTITUTION 0.0008 0.01
0 300K _MetaGen-Rand_ SUBSTITUTION 0.0103 29.68
0 300K _MetaGen-RL-LM_ SUBSTITUTION 0.0181 24.33
0 300K _MetaGen-RL-Adv_ SUBSTITUTION 0.0186 31.38
2179 (10%) 0 - SUBSTITUTION 0.2738 58.91
2179 (10%) 1M _MetaGen-Rand_ SUBSTITUTION 0.3203 61.78
2179 (10%) 1M _MetaGen-IL_ SUBSTITUTION 0.3710 66.56
4358 (20%) 0 - SUBSTITUTION 0.3765 67.07
21786 (100%) 0 - SUBSTITUTION 0.6142 81.57
21786 (100%) 10M _MetaGen-Rand_ SUBSTITUTION 0.6439 81.85
21786 (100%) 10M _MetaGen-IL_ SUBSTITUTION 0.6847 83.90
Table 3: Number of theorems proved on test data of iset.mm (top two rows) and set.mm (starting
from the third row). †: without removing the trivial proof steps from the training data of the relevance
network.
**Human** **Synthetic** **Generator** **Prover** **Test proofs**
**proofs** **proofs** **found**
7123 (ISET) 0 - HOLOPHRASM 378
7123 (ISET) 1M _MetaGen-IL_ HOLOPHRASM 398
0 0 - TF-IDF & LM 312
0 0 - HOLOPHRASM 219
0 300K _MetaGen-Rand_ HOLOPHRASM 346
0 300K _MetaGen-RL-LM_ HOLOPHRASM 351
0 300K _MetaGen-RL-Adv_ HOLOPHRASM 357
2179 (10%) 0 - HOLOPHRASM 454
2179 (10%) 1M _MetaGen-Rand_ HOLOPHRASM 457
2179 (10%) 1M _MetaGen-IL_ HOLOPHRASM 472
4358 (20%) 0 - HOLOPHRASM 476
21786 (100%) 0 - HOLOPHRASM(’16) 388
21786 (100%) 0 - HOLOPHRASM 557
21786 (100%) 10M _MetaGen-Rand_ HOLOPHRASM 565
21786 (100%) 10M _MetaGen-IL_ HOLOPHRASM[†] 574
21786 (100%) 10M _MetaGen-IL_ HOLOPHRASM 600
**Implementation details We train the generator on the training set and use the trained generator to**
generate synthetic theorems and proofs. The prover is trained on both training and synthetic proofs.
On iset.mm, we generate 1M unique synthetic theorems. On set.mm, we generate 300K unique
theorems for the setting of 0% of human proofs (after discarding any duplicates) and 1M unique
theorems for 10% of the human training proofs. We generate 10M theorems for the setting of 100%
of human proofs, by generating 1M unique theorems a time (maximum allowed by memory limit)
and repeating 10 times.
During the training of the relevance network of the prover, we filter out the "trivial" proof steps. A
goal is trivial if it is reachable from the assertion of a background theorem b and b has no hypotheses,
because this goal can be decomposed by b without generating any new subgoals. By removing the
training proof steps that have trivial goals when we train the relevance network, the performance of
the prover is improved as shown in Tab. 3.
Please refer to the supplementary material for more details about the implementation and baselines.
-----
**5.1** **Results**
To validate the effectiveness of our theorem generator, we evaluate provers trained on the synthetic
data and compare them against various baselines.
**Relevance network of prover We evaluate how synthetic data can improve the relevance network**
of Holophrasm. The relevance network assigns a score to each candidate background theorem. We
use two metrics: (1) top-k accuracy defined as the percentage of times a groundtruth background
theorems is ranked in the top k and (2) mean reciprocal rank (MRR) of every groundtruth background
theorem among all candidates of its corresponding proof step. Both of them are the higher the better.
We evaluate the relevance network combined with different generators. We also evaluate with tf-idf
similarity between sequences of tokens. In Tab. 1, we see that synthetic data brings significant
improvement in all settings and the best performance is achieved with our trained generators.
**Substitution network of prover We evaluate how synthetic data can improve the substitution**
network of Holophrasm. The substitution network predicts the probability of each token at each
position under teacher forcing. We use two metrics: (1) accuracy, defined as the percentage of
times the tokens in the groundtruth substitutions have the highest probabilities and (2) the average
probability to generate the groundtruth substitutions normalized by its length. Tab. 2 reports the
results, including the result of a language model. In all settings, synthetic data brings significant
improvement. The best performance is achieved with our trained generators.
**Prover To evaluate the prover as a whole, we follow the same protocol of Whalen (2016) (more**
details in the supplementary material) and report the number of theorems proved. We compare
with the original Holophrasm prover proposed by Whalen (2016) trained by imitation learning on
human-written proofs only. With zero human-written proofs for prover training, we also evaluate
TF-IDF & LM, an ablated version of Holophrasm that needs no training proofs—we remove the
relevance network and instead pick a background theorem using tf-idf similarity; we replace the
substitution network with a language model of theorem statements.
As shown in Tab. 3, the performance of the prover shares the same pattern as the relevance and
substitution network. On both iset.mm and set.mm, the provers trained on synthetic data consistently
prove more theorems than the provers trained on human proofs only. On set.mm, with 10% human
proofs, the use of synthetic proofs almost achieve the same effect by doubling the number of human
proofs (472 vs 476 proved theorems). The provers trained with learnable generators perform better
than the provers trained with MetaGen-Rand.
Our GPU re-implementation of Holophrasm finds 557 proofs trained on 100% of human proofs, more
than the number reported in Whalen (2016). We believe this is due to the fact that our prover runs
faster on GPUs.
By removing the trivial proof steps from the training data of the relevance network of the prover, the
number of proved theorems on the test set increases from 574 to 600.
Polu & Sutskever (2020) demonstrate significant improvement on theorem proving of the set.mm
benchmark by using very large Transformer (Vaswani et al., 2017) models. Their model can prove
29.22% of test theorems (our percentage is 22.06%). We note a couple potential differences in
experimental setup, which may make our results not directly comparable. They appear to use a
different version of the set.mm knowledge base which has about 38k proofs (ours has 27218 proofs);
their evaluation protocol may be different (our prover has a time limit of 5 minutes for each run while
their time limit is not mentioned).
Please refer to the supplement material for the examples of synthetic theorems.
**6** **Conclusion**
We have proposed a neural generator that automatically synthesizes theorems and proofs for the
purpose of training a theorem prover. Experiments on real-world tasks have demonstrated that
synthetic data from our approach improves the theorem prover and advances the state of the art of
automated theorem proving in Metamath.
**Acknowledgements This work is partially supported by the National Science Foundation under**
Grant IIS-1903222 and the Office of Naval Research under Grant N00014-20-1-2634.
-----
**Broader Impact**
Our work addresses automated theorem proving. A successful automated theorem prover can help us
write programs that are provably correct, which is essential to safety-critical applications, such as
software for autonomous driving. On the other hand, since the correctness of the found proofs and
synthesized programs relies on the correctness of the underlying theorem prover, bugs in the prover
can lead to catastrophic failure.
**References**
Balunovic, M., Bielik, P., and Vechev, M. Learning to solve smt formulas. In Advances in Neural
_Information Processing Systems, pp. 10317–10328, 2018._
Bansal, K., Loos, S., Rabe, M., Szegedy, C., and Wilcox, S. Holist: An environment for machine
learning of higher order logic theorem proving. In International Conference on Machine Learning,
pp. 454–463, 2019a.
Bansal, K., Loos, S. M., Rabe, M. N., and Szegedy, C. Learning to reason in large theories without
imitation. arXiv preprint arXiv:1905.10501, 2019b.
Bertot, Y. and Castéran, P. Coq’art: the calculus of inductive constructions, 2004.
Chvalovsky, K., Gauthier, T., and Urban, J. First experiments with data driven conjecturing.` _4th_
_Conference on Artificial Intelligence and Theorem Proving, 2019._
Colton, S. Automated theory formation in pure mathematics. Springer Science & Business Media,
2012.
Darvas, Á., Hähnle, R., and Sands, D. A theorem proving approach to analysis of secure information
flow. In International Conference on Security in Pervasive Computing, pp. 193–209. Springer,
2005.
Durugkar, I. and Stone, P. Adversarial goal generation for intrinsic motivation. In Thirty-Second
_AAAI Conference on Artificial Intelligence, 2018._
Fawzi, A., Malinowski, M., Fawzi, H., and Fawzi, O. Learning dynamic polynomial proofs. In
_Advances in Neural Information Processing Systems, pp. 4181–4190, 2019._
Florensa, C., Held, D., Geng, X., and Abbeel, P. Automatic goal generation for reinforcement
learning agents. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference
_on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1515–1528,_
[Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.](http://proceedings.mlr.press/v80/florensa18a.html)
```
mlr.press/v80/florensa18a.html.
```
Gauthier, T., Kaliszyk, C., and Urban, J. Tactictoe: Learning to reason with hol4 tactics. arXiv
_preprint arXiv:1804.00595, 2018._
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and
Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp.
2672–2680, 2014.
Gu, R., Shao, Z., Chen, H., Wu, X. N., Kim, J., Sjöberg, V., and Costanzo, D. Certikos: An extensible
architecture for building certified concurrent {OS} kernels. In 12th {USENIX} Symposium on
_Operating Systems Design and Implementation ({OSDI} 16), pp. 653–669, 2016._
Harrison, J. HOL Light: An overview. In Berghofer, S., Nipkow, T., Urban, C., and Wenzel, M.
(eds.), Proceedings of the 22nd International Conference on Theorem Proving in Higher Order
_Logics, TPHOLs 2009, volume 5674 of Lecture Notes in Computer Science, pp. 60–66, Munich,_
Germany, 2009. Springer-Verlag.
Huang, D. On learning to prove. arXiv preprint arXiv:1904.11099, 2019.
-----
Irving, G., Szegedy, C., Alemi, A. A., Eén, N., Chollet, F., and Urban, J. Deepmath-deep sequence
models for premise selection. In Advances in Neural Information Processing Systems, pp. 2235–
2243, 2016.
Jakub˚uv, J. and Urban, J. Hammering mizar by learning clause guidance. _arXiv preprint_
_arXiv:1904.01677, 2019._
Kaliszyk, C. and Urban, J. Learning-assisted theorem proving with millions of lemmas. Journal of
_symbolic computation, 69:109–128, 2015._
Kaliszyk, C., Urban, J., and Vyskoˇcil, J. Machine learner for automated reasoning 0.4 and 0.5. arXiv
_preprint arXiv:1402.2359, 2014._
Kaliszyk, C., Urban, J., and Vyskoˇcil, J. Lemmatization for stronger reasoning in large theories. In
_International Symposium on Frontiers of Combining Systems, pp. 341–356. Springer, 2015._
Kaliszyk, C., Urban, J., Michalewski, H., and Olšák, M. Reinforcement learning of theorem proving.
In Advances in Neural Information Processing Systems, pp. 8822–8833, 2018.
Kern, C. and Greenstreet, M. R. Formal verification in hardware design: a survey. ACM Transactions
_on Design Automation of Electronic Systems (TODAES), 4(2):123–193, 1999._
Kocsis, L. and Szepesvári, C. Bandit based monte-carlo planning. In European conference on
_machine learning, pp. 282–293. Springer, 2006._
Lee, D., Szegedy, C., Rabe, M. N., Loos, S. M., and Bansal, K. Mathematical reasoning in latent
space. arXiv preprint arXiv:1909.11851, 2019.
Loos, S., Irving, G., Szegedy, C., and Kaliszyk, C. Deep network guided proof search. arXiv preprint
_arXiv:1701.06972, 2017._
Megill, N. and Wheeler, D. Metamath: A Computer Language for Mathematical Proofs. Lulu Press,
Morrisville, North Carolina, 2019. http://us.metamath.org/downloads/metamath.pdf.
Paliwal, A., Loos, S., Rabe, M., Bansal, K., and Szegedy, C. Graph representations for higher-order
logic and theorem proving. arXiv preprint arXiv:1905.10006, 2019.
Piotrowski, B. and Urban, J. Atpboost: Learning premise selection in binary setting with atp feedback.
In International Joint Conference on Automated Reasoning, pp. 566–574. Springer, 2018.
Polu, S. and Sutskever, I. Generative language modeling for automated theorem proving. arXiv
_preprint arXiv:2009.03393, 2020._
Schulz, S. E–a brainiac theorem prover. Ai Communications, 15(2, 3):111–126, 2002.
Slind, K. and Norrish, M. A brief overview of hol4. In International Conference on Theorem Proving
_in Higher Order Logics, pp. 28–32. Springer, 2008._
Sukhbaatar, S., Lin, Z., Kostrikov, I., Synnaeve, G., Szlam, A., and Fergus, R. Intrinsic motivation
and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407, 2017.
Sukhbaatar, S., Denton, E., Szlam, A., and Fergus, R. Learning goal embeddings via self-play for
hierarchical reinforcement learning. arXiv preprint arXiv:1811.09083, 2018.
Urban, J. Mptp–motivation, implementation, first experiments. Journal of Automated Reasoning, 33
(3-4):319–339, 2004.
Urban, J. and Jakub˚uv, J. First neural conjecturing datasets and experiments. arXiv preprint
_arXiv:2005.14664, 2020._
Urban, J., Sutcliffe, G., Pudlák, P., and Vyskoˇcil, J. Malarea sg1-machine learner for automated
reasoning with semantic guidance. In International Joint Conference on Automated Reasoning, pp.
441–456. Springer, 2008.
-----
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and
Polosukhin, I. Attention is all you need. In Advances in neural information processing systems, pp.
5998–6008, 2017.
Whalen, D. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv preprint
_arXiv:1608.02644, 2016._
Wiedijk, F. Formal proof sketches. In International Workshop on Types for Proofs and Programs, pp.
378–393. Springer, 2003.
[Wiedijk, F. Formalizing 100 theorems. http://www.cs.ru.nl/~freek/100/, 2019.](http://www.cs.ru.nl/~freek/100/)
Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine learning, 8(3-4):229–256, 1992.
Yang, K. and Deng, J. Learning to prove theorems via interacting with proof assistants. In Interna_tional Conference on Machine Learning, 2019._
Zombori, Z., Csiszárik, A., Michalewski, H., Kaliszyk, C., and Urban, J. Towards finding longer
proofs. arXiv preprint arXiv:1905.13100, 2019.
-----
| [
"Mingzhe, Wang",
"Jia, Deng"
] | 2020-01-01T00:00:00 | NeurIPS 2020 | true | 41 | 13 | [
"Holophrasm"
] | https://arxiv.org/abs/2002.07019 | https://arxiv.org/abs/2002.07019 | https://www.semanticscholar.org/paper/ea3e363b222c28abc695877de50a9929744eb2c8 |
ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system description) | We describe an implementation of gradient boosting and neural guidance of saturation-style automated theorem provers that does not depend on consistent symbol names across problems. For the gradient-boosting guidance, we manually create abstracted features by considering arity-based encodings of formulas. For the neural guidance, we use symbol-independent graph neural networks (GNNs) and their embedding of the terms and clauses. The two methods are efficiently implemented in the E prover and its ENIGMA learning-guided framework. To provide competitive real-time performance of the GNNs, we have developed a new context-based approach to evaluation of generated clauses in E. Clauses are evaluated jointly in larger batches and with respect to a large number of already selected clauses (context) by the GNN that estimates their collectively most useful subset in several rounds of message passing. This means that approximative inference rounds done by the GNN are efficiently interleaved with precise symbolic inference rounds done inside E. The methods are evaluated on the MPTP large-theory benchmark and shown to achieve comparable real-time performance to state-of-the-art symbol-based methods. The methods also show high complementarity, solving a large number of hard Mizar problems. | An implementation of gradient boosting and neural guidance of saturation-style automated theorem provers that does not depend on consistent symbol names across problems is described and is evaluated on the MPTP large-theory benchmark and shown to achieve comparable real-time performance to state-of-the-art symbol-based methods. | ## ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system description) [⋆]
Jan Jakub˚uv[1], Karel Chvalovsk´y[1], Miroslav Olˇs´ak[2], Bartosz Piotrowski[1][,][3],
Martin Suda[1], and Josef Urban[1]
1 Czech Technical University in Prague, Czechia
2 University of Innsbruck
3 University of Warsaw
**Abstract. We describe an implementation of gradient boosting and**
neural guidance of saturation-style automated theorem provers that does
not depend on consistent symbol names across problems. For the gradientboosting guidance, we manually create abstracted features by considering arity-based encodings of formulas. For the neural guidance, we use
symbol-independent graph neural networks (GNNs) and their embedding
of the terms and clauses. The two methods are efficiently implemented
in the E prover and its ENIGMA learning-guided framework.
To provide competitive real-time performance of the GNNs, we have developed a new context-based approach to evaluation of generated clauses
in E. Clauses are evaluated jointly in larger batches and with respect
to a large number of already selected clauses (context) by the GNN
that estimates their collectively most useful subset in several rounds of
message passing. This means that approximative inference rounds done
by the GNN are efficiently interleaved with precise symbolic inference
rounds done inside E. The methods are evaluated on the MPTP largetheory benchmark and shown to achieve comparable real-time performance to state-of-the-art symbol-based methods. The methods also show
high complementarity, solving a large number of hard Mizar problems.
**Keywords: Automated theorem proving · Machine Learning · Neural**
Networks · Decision Trees · Saturation-Style Proving
**1** **Introduction: Symbol Independent Inference Guidance**
In this work, we develop two symbol-independent (anonymous) inference guiding
methods for saturation-style automated theorem provers (ATPs) such as E [25]
and Vampire [20]. Both methods are based on learning clause classifiers from
_⋆_ Supported by the ERC Consolidator grant AI4REASON no. 649043 (JJ, BP, MS,
and JU), the Czech project AI&Reasoning CZ.02.1.01/0.0/0.0/15 003/0000466 and
the European Regional Development Fund (KC and JU), the ERC Project SMART
Starting Grant no. 714034 (MO), grant 2018/29/N/ST6/02903 of National Science
Center, Poland (BP), and the Czech Science Foundation project 20-06390Y (MS).
-----
2 Jan Jakub˚uv et al.
previous proofs within the ENIGMA framework [13,14,5] implemented in E. By
_symbol-independence we mean that no information about the symbol names is_
used by the learned guidance. In particular, if all symbols in a particular ATP
problem are consistently renamed to new symbols, the learned guidance will
result in the same proof search and the same proof modulo the renaming.
Symbol-independent guidance is an important challenge for learning-guided
ATP, addressed already in Schulz’s early work on learning guidance in E [23].
With ATPs being increasingly used and trained on large ITP libraries [3,2,16,18,6,8],
it is more and more rewarding to develop methods that learn to reason without relying on the particular terminology adopted in a single project. Initial
experiments in this direction using concept alignment [10] methods have already
shown performance improvements by transferring knowledge between the HOL
libraries [9]. Structural analogies (or even terminology duplications) are however
common already in a single large ITP library [17] and their automated detection
can lead to new proof ideas and a number of other interesting applications [11].
This system description first briefly introduces saturation-based ATP with
learned guidance (Section 2). Then we discuss symbol-independent learning and
guidance using abstract features and gradient boosting trees (Section 3) and
graph neural networks (Section 4). The implementation details are explained in
Section 5 and the methods are evaluated on the MPTP benchmark in Section 6.
**2** **Saturation Proving Guided by Machine Learning**
**Saturation-based Automated Theorem Provers (ATPs) such as E and**
Vampire are used to prove goals G using a set of axioms A. They clausify the formulas A _∪{¬G} and try to deduce contradiction using the given clause loop [22]_
as follows. The ATP maintains two sets of processed (P ) and unprocessed (U )
clauses. At each loop iteration, a given clause g from U is selected, moved to
_P_, and U is extended with new inferences from g and P . This process continues
until the contradiction is found, U becomes empty, or a resource limit is reached.
The search space grows quickly and selection of the right given clauses is critical.
**Learning Clause Selection over a set of related problems is a general method**
how to guide the proof search. Given a set of FOL problems P and initial ATP
strategy S, we can evaluate S over P obtaining training samples T . For each
successful proof search, training samples T contain the set of clauses processed
during the search. Positive clauses are those that were useful for the proof search
(they appeared in the final proof), while the remaining clauses were useless, forming the negative examples. Given the samples T, we can train a machine learning
_classifier M which predicts usefulness of clauses in future proof searches. Some_
clause classifiers are described in detail in Sections 3, 4, and 5.
**ATP Guidance By a Trained Classifier: Once a clause classifier M is**
trained, we can use it inside an ATP. An ATP strategy S is a collection of
proof search parameters such as term ordering, literal selection, and also given
clause selection mechanism. In E, the given clause selection is defined by a collection of clause weight functions which alternate to select the given clauses. Our
-----
ENIGMA Anonymous 3
ENIGMA framework uses two methods of plugging the trained classifier M into
_S. Either (1) we use M to select all given clauses (solo mode denoted S ⊙M), or_
(2) we combine predictions of M with clause selection mechanism from S so that
roughly 50% of the clauses is selected by M (cooperative mode denoted S ⊕M).
Proof search settings other than clause selection are inherited from S in both
the cases. See [5] for details. The phases of learning and ATP guidance can be
iterated in a learning/evaluation loop [29], yielding growing sets of proofs Ti and
stronger classifiers Mi trained over them. See [15] for such large experiment.
**3** **Clause Classification by Decision Trees**
**Clause Features are used by ENIGMA to represent clauses as sparse vectors**
for machine learners. They are based mainly on vertical/horizontal cuts of the
clause syntax tree. We use simple feature hashing to handle theories with large
number of symbols. A clause C is represented by the vector ϕC whose i-th index
stores the value of a feature with hash index i. Values of conflicting features
(mapped to the same index) are summed. Additionally, we embed conjecture
_features into the clause representation and we work with vector pairs (ϕC, ϕG)_
of size 2 ∗ _base, where ϕG is the feature vector of the current goal (conjecture)._
This allows us to provide goal-specific predictions. See [15] for more details.
**Gradient Boosting Decision Trees (GBDTs) implemented by the XGBoost**
library [4] currently provide the strongest ENIGMA classifiers. Their speed is
comparable to the previously used [14] weaker linear logistic classifier, implemented by the LIBLINEAR library [7]. In this work, we newly employ the
LightGBM [19] GBDT implementation. A decision tree is a binary tree whose
nodes contain Boolean conditions on values of different features. Given a feature
vector ϕC, the decision tree can be navigated from the root to the unique tree
leaf which contains the classification of clause C. GBDTs combine predictions
from a collection of follow-up decision trees. While inputs, outputs, and API
of XGBoost and LightGBM are compatible, each employ a different method of
tree construction. XGBoost constructs trees level-wise, while LightGBM leafwise. This implies that XGBoost trees are well-balanced. On the other hand,
LightGBM can produce much deeper trees and the tree depth limit is indeed an
important learning meta-parameter which must be additionally set.
**New Symbol-Independent Features: We develop a feature anonymization**
method based on symbol arities. Each function symbol name s with arity n is
substituted by a special name “fn”, while a predicate symbol name q with arity
_m is substituted by “pm”. Such features lose the ability to distinguish different_
symbol names, and many features are merged together. Vector representations
of two clauses with renamed symbols are clearly equal. Hence the underlying
machine learning method will provide equal predictions for such clauses. For
more detailed discussion and comparison with related work see Appendix B.
**New Statistics and Problem Features: To improve the ability to distinguish**
different anonymized clauses, we add the following features. Variable statistics of
-----
4 Jan Jakub˚uv et al.
clause C containing (1) the number of variables in C without repetitions, (2) the
number of variables with repetitions, (3) the number of variables with exactly
one occurrence, (4) the number of variables with more than one occurrence, (510) the number of occurrences of the most/least (and second/third most/least)
occurring variable. Symbol statistics do the same for symbols instead of variables.
Recall that we embed conjecture features in clause vector pair (ϕC, ϕG). As G
embeds information about the conjecture but not about the problem axioms,
we propose to additionally embed some statistics of the problem P that C and
_G come from. We use 22 problem features that E prover already computes for_
each input problem to choose a suitable strategy. These are (1) number of goals,
(2) number of axioms, (3) number of unit goals, etc. See E’s manual for more
details. Hence we work with vector triples (ϕC, ϕG, ϕP ).
**4** **Clause Classification by Graph Neural Network**
Another clause classifier newly added to ENIGMA is based on graph neural
networks (GNNs). We use the symbol-independent network architecture developed in [21] for premise selection. As [21] contains all the details, we only briefly
explain the basic ideas behind this architecture here.
**Hypergraph. Given a set of clauses C we create a directed hypergraph with**
three kinds of nodes that correspond to clauses, function and predicate symbols
_N_, and unique (sub)terms and literals U occurring in C, respectively. There are
two kinds of hyperedges that describe the relations between nodes according
to C. The first kind encodes literal occurrences in clauses by connecting the
corresponding nodes. The second hyperedge kind encodes the relations between
nodes from and . For example, for f (t1, . . ., tk) we loosely speaking
_N_ _U_ _∈U_
connect the nodes f and t1, . . ., tk with the node f (t1, . . ., tk) and
_∈N_ _∈U_
similarly for literals, where their polarity is also taken into account.
**Message-passing. The hypergraph describes the relation between various kinds**
of objects occurring in C. Every node in the hypergraph is initially assigned a
constant vector, called the embedding, based only on its kind (C, N, or U ). These
node embeddings are updated in a fixed number of message-passing rounds,
based on the embeddings of each node’s neighbors. The underlying idea of such
neural message-passing methods[4] is to make the node embeddings encode more
and more precisely the information about the connections (and thus various
properties) of the nodes. For this to work, we have to learn initial embeddings
for our three kinds of nodes and the update function.[5]
**Classification. After the message-passing phase, the final clause embeddings**
are available in the corresponding clause nodes. The estimated probability of
a clause being a good given clause is then computed by a neural network that
4 Graph convolutions are a generalization of the sliding window convolutions used for
aggregating neighborhood information in neural networks used for image recognition.
5 We learn individual components, which correspond to different kinds of hyperedges,
from which the update function is efficiently constructed.
-----
ENIGMA Anonymous 5
takes the final embedding of this clause and also aggregated final embeddings of
all clauses obtained from the negated conjecture.
**5** **Learning and Using the Classifiers, Implementation**
In order to use either GBDTs (Section 3) or GNNs (Section 4), a prediction
model must be learned. Learning starts with training samples T, that is, a set of
pairs (C[+], C[−]) of positive and negative clauses. For each training sample T ∈T,
we additionally know the source problem P and its conjecture G. Hence we can
consider one sample T ∈T as a quadruple (C[+], C[−], P, G) for convenience.
**GBDT. Given a training sample T = (C[+], C[−], P, G) ∈T, each clause C ∈**
_C[+]_ _∪C[−]_ is translated to the feature vector (ϕC, ϕG, ϕP ). Vectors where C ∈C[+]
are labeled as positive, and otherwise as negative. All the labeled vectors are fed
together to a GBDT trainer yielding model .
_DT_
When predicting a generated clause, the feature vector is computed and
_DT_
is asked for the prediction. GBDT’s binary predictions (positive/negative) are
turned into E’s clause weight (positives have weight 1 and negatives 10).
**GNN. Given T = (C[+], C[−], P, G) ∈T as above we construct a hypergraph for the**
set of clauses C[+]∪C[−]∪G. This hypergraph is translated to a tensor representation
(vectors and matrices), marking clause nodes as positive, negative, or goal. These
tensors are fed as input to our GNN training, yielding a GNN model . The
_NT_
training works in iterations, and contains one GNN per iteration epoch. Only
_NT_
one GNN from a selected epoch is used for predictions during the evaluation.
In evaluation, it is more efficient to compute predictions for several clauses
at once. This also improves prediction quality as the queried data resembles
more the training hypergraphs where multiple clauses are encoded at once as
well. During an ATP run on problem P with the conjecture G, we postpone
evaluation of newly inferred clauses until we reach a certain amount of clauses
_Q to query.[6]_ To resemble the training data even more, we add a fixed number of
the given clauses processed so far. We call these context clauses (X ). To evaluate
_Q, we construct the hypergraph for Q∪X ∪G, and mark clauses from G as goals._
Then model is asked for predictions on (predictions for are dropped).
_NT_ _Q_ _X_
The numeric predictions computed by are directly used as E’s weights.
_NT_
**Implementation & Performance. We use GBDTs implemented by the XG-**
Boost [4] and LightGBM [19] libraries. For GNN we use Tensorflow [1]. All the
libraries provide Python interfaces and C/C++ APIs. We use the Python interfaces for training and the C APIs for the evaluation in E. The Python interfaces
for XGBoost and LightGBM include the C APIs, while for Tensorflow this must
be manually compiled, which is further complicated by poor documentation.
The libraries support training both on CPUs and on GPUs. We train LightGBM on CPUs, and XGBoost and Tensorflow on GPUs. However, we always
evaluate on a single CPU as we aim at practical usability on standard hardware.
This is non-trivial and it distinguishes this work from evaluations done with
6 We may evaluate less than Q if E runs out of unevaluated unprocessed clauses.
-----
6 Jan Jakub˚uv et al.
large numbers of GPUs or TPUs and/or in prohibitively high real times. The
LightGBM training can be parallelized much better – with 60 CPUs it is much
faster than XGBoost on 4 GPUs. Neither using GPUs for LightGBM nor many
CPUs for XGBoost provided better training times. The GNN training is slower
than GBDT training and it is not easy to make Tensorflow evaluate reasonably
on a single CPU. It has to be compiled with all CPU optimizations and restricted
to a single thread, using Tensorflow’s poorly documented experimental C API.
**6** **Experimental Evaluation**
**Setup. We experimentally evaluate[7]** our GBDT and GNN guidance[8] on a large
benchmark of 57880 Mizar40 [18] problems[9] exported by MPTP [28]. Hence
this evaluation is compatible with our previous symbol-dependent work [15].
We evaluate GBDT and GNN separately. We start with a good-performing E
strategy S (see [5, Appendix A]) which solves 14 966 problems with a 10 s limit
per problem. This gives us training data T0 = eval(S) (see Section 5), and we
start three iterations of the learning/evaluation loop (see Section 2).
For GBDT, we train several models (with hash base 2[15]) and conduct a
small learning meta-parameters grid search. For XGBoost, we try different tree
depths (d ∈{9, 12, 16}), and for LightGBM various combinations of tree depths
and leaves count ((d, l) ∈{10, 20, 30, 40} × {1200, 1500, 1800}). We evaluate all
these models in a cooperative mode with S on a random (but fixed) 10% of all
problems (Appendix A). The best performing model is evaluated on the whole
benchmark in both cooperative (⊕) and solo (⊙) runs. These give us the next
samples _i+1. We perform three iterations and obtain models_ 0, 1, and 2.
_T_ _D_ _D_ _D_
For GNN, we train a model with 100 epochs, obtaining 100 different GNNs.
We evaluate GNNs from selected epochs (e ∈{10, 20, 50, 75, 100}) and we try
different settings of query (q) and context (c) sizes (see Section 5). In particular,
_q ranges over {64, 128, 192, 256, 512} and c over {512, 768, 1024, 1536}. All pos-_
sible combinations of (e, q, c) are again evaluated in a grid search on the small
benchmark subset (Appendix A), and the best performing model is selected for
the next iteration. We run three iterations and obtain models 0, 1, and 2.
_N_ _N_ _N_
**Results are presented in Table 1. For each model Di and Ni we show (1) true**
positive/negative rates, (2) training data sizes, (3) train times, and (4) the best
performing parameters from the grid search. Furthermore, for each model M we
show the performance of S ⊕M in (5) real and (6) abstract time. Details follow.
(1) Model accuracies are computed on samples extracted from problems newly
solved by each model, that is, on testing data not known during the training.
Columns TPR/TNR show accuracies on positive/negative testing samples. (2)
Train sizes measure the training data in millions of clauses. (4) Letter “X” stands
7 On a server with 36 hyperthreading Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
cores, 755 GB of memory, and 4 NVIDIA GeForce GTX 1080 Ti GPUs.
[8 Available at https://github.com/ai4reason/eprover-data/tree/master/IJCAR-20](https://github.com/ai4reason/eprover-data/tree/master/IJCAR-20)
[9 http://grid01.ciirc.cvut.cz/∼mptp/1147/MPTP2/problems small consist.tar.gz](http://grid01.ciirc.cvut.cz/~mptp/1147/MPTP2/problems_small_consist.tar.gz)
-----
ENIGMA Anonymous 7
**Table 1. Model training and evaluation for anonymous GBDTs (Di) and GNN (** _i)._
_N_
|[%] [%] size time params|S ⊕M +%|
|---|---|
|- - - - -|14 966 0.0|
|84.9 68.4 14M 2h29m X,d12 79.0 79.5 29M 4h33m X,d12 80.5 79.2 47M 40m L,d30,l1800|20 679 38.1 23 280 58.2 24 347 62.7|
TPR TNR training real time abstract time
_M_ [%] [%] size time params +% +%
_S ⊕M_ _S ⊕M_
_∅_ - - - - - 14 966 0.0 10 679 0.0
0 84.9 68.4 14M 2h29m X,d12 20 679 38.1 17 917 67.8
_D_
1 79.0 79.5 29M 4h33m X,d12 23 280 58.2 20 760 94.4
_D_
2 80.5 79.2 47M 40m L,d30,l1800 24 347 62.7 22 661 112.2
_D_
0 92.1 77.1 14M 17h e20,q128,c512 20 912 39.7 19 755 84.9
_N_
1 90.0 78.6 31M 1d19h e10,q128,c512 23 156 54.7 21 737 103.5
_N_
2 91.3 79.6 50M 1d 8h e50,q256,c768 23 262 55.4 22 169 107.6
_N_
for XGBoost models, while “L” for LightGBM. (5) For real time we use 10 s limit
per problem, and (6) in abstract time we limit the number of generated clauses
to 5000. We show the number of problems solved and the gain (in %) on S. The
abstract time evaluation is useful to assess the methods modulo the speed of the
implementation. The first row shows the performance of S without learning.
**Evaluation. The GNN models start better, but the GBDT models catch up and**
beat GNN in later iterations. The GBDT models show a significant gain even
in the 3rd iteration, while the GNN models start stagnating. The GNN models
report better testing accuracy, but their ATP performance is not as good.
For GBDTs, we see that the first two best models (D0 and D1) were produced
by XGBoost, while D2 by LightGBM. While both libraries can provide similar
results, LightGBM is significantly faster. For comparison, the training time for
XGBoost in the third iteration was 7 hours, that is, LightGBM is 10 times
faster. The higher speed of LightGBM can overcome the problems with more
complicated parameter settings, as more models can be trained and evaluated.
For GNNs, we observe higher training times and better models coming from
earlier epochs. The training in the 1st and 2nd iterations was done on 1 GPU,
while in the 3rd on 4 GPUs. The good abstract time performance indicates that
further gain could be obtained by a faster implementation. But note that this is
the first time that NNs have been made comparable to GBDTs in real time.
Figure 1 summarizes the results. On the left, we observe a slower start for
GNNs caused by the initial model loading. On the right, we see a decrease in
the number of processed clauses, which suggests that the guidance is effective.
**Complementarity. The twelve (solo and cooperative) versions of the methods**
compared in Figure 1 solve together 28271 problems, with the six GBDTs solving
25255 and the six GNNs solving 26571. All twenty methods tested by us solve
29118 problems, with the top-6 greedy cover solving (in 60 s) 28067 and the top15 greedy cover solving (in 150 s) 29039. The GNNs show higher complementarity
– varying the epoch as well as the size of the query and context produces many
new solutions. For example, the most complementary GNN method adds to the
best GNN method 1976 solutions. The GNNs are also quite complementary to
the GBDTs. The second (GNN) strategy in the greedy cover adds 2045 solutions
-----
8 Jan Jakub˚uv et al.
**Fig. 1. Left: the number of problems solved in time; Right: the number of processed**
clauses (the x-axis for S, and the y-axis for S ⊕D0 and S ⊕N0, respectively).
to the best (GBDT) strategy. Altogether, the twenty strategies solve (in 200 s)
2109 of the Mizar40 hard problems, i.e., the problems unsolved by any method
developed previously in [18].
**7** **Conclusion**
We have developed and evaluated symbol-independent GBDT and GNN ATP
guidance. This is the first time symbol-independent features and GNNs are
tightly integrated with E and provide good real-time results on a large corpus. Both the GBDT and GNN predictors display high ability to learn from
previous proof searches even in the symbol-independent setting.
To provide competitive real-time performance of the GNNs, we have developed context-based evaluation of generated clauses in E. This introduces a new
paradigm for clause ranking and selection in saturation-style proving. The generated clauses are not ranked immediately and independently of other clauses.
Instead, they are judged in larger batches and with respect to a large number of
already selected clauses (context) by a neural network that estimates their collectively most useful subset by several rounds of message passing. This also allows
new ways of parameterizing the search that result in complementary methods
with many new solutions.
The new GBDTs show even better performance than their symbol-dependent
versions from our previous work [15]. This is most likely because of the parameter
grid search and new features not used before. The union of the problems solved
by the twelve ENIGMA strategies (both ⊙ and ⊕) in real time adds up to 28 247.
When we add S to this portfolio we solve 28 271 problems. This shows that the
ENIGMA strategies learned quite well from S, not losing many solutions. When
we add eight more strategies developed here we solve 29 130 problems, of which
2109 are among the hard Mizar40. This is done in general in 200 s and without
any additional help from premise selection methods. Vampire in 300 seconds
-----
ENIGMA Anonymous 9
solves 27 842 problems. Future work includes joint evaluation of the system on
problems translated from different ITP libraries, similar to [9].
**8** **Acknowledgments**
We thank Stephan Schulz and Thibault Gauthier for discussing with us their
methods for symbol-independent term and formula matching.
**References**
1. Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig
Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing
Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster,
Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden,
Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow:
Large-scale machine learning on heterogeneous systems, 2015. Software available
from tensorflow.org.
2. Jasmin Christian Blanchette, David Greenaway, Cezary Kaliszyk, Daniel
K¨uhlwein, and Josef Urban. A learning-based fact selector for Isabelle/HOL. J.
_Autom. Reasoning, 57(3):219–244, 2016._
3. Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef
Urban. Hammering towards QED. J. Formalized Reasoning, 9(1):101–148, 2016.
4. Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In
_Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge_
_Discovery and Data Mining, KDD ’16, pages 785–794, New York, NY, USA, 2016._
ACM.
5. Karel Chvalovsk´y, Jan Jakubuv, Martin Suda, and Josef Urban. ENIGMA-NG:
efficient neural and gradient-boosted inference guidance for E. In Pascal Fontaine,
editor, Automated Deduction - CADE 27 - 27th International Conference on Au_tomated Deduction, Natal, Brazil, August 27-30, 2019, Proceedings, volume 11716_
of Lecture Notes in Computer Science, pages 197–215. Springer, 2019.
6. Lukasz Czajka and Cezary Kaliszyk. Hammer for Coq: Automation for dependent
type theory. J. Autom. Reasoning, 61(1-4):423–453, 2018.
7. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin.
Liblinear: A library for large linear classification. J. Mach. Learn. Res., 9:1871–
1874, June 2008.
8. Thibault Gauthier and Cezary Kaliszyk. Premise selection and external provers for
HOL4. In Xavier Leroy and Alwen Tiu, editors, Proceedings of the 2015 Conference
_on Certified Programs and Proofs, CPP 2015, Mumbai, India, January 15-17, 2015,_
pages 49–57. ACM, 2015.
9. Thibault Gauthier and Cezary Kaliszyk. Sharing HOL4 and HOL light proof
knowledge. In Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei
Voronkov, editors, Logic for Programming, Artificial Intelligence, and Reasoning _20th International Conference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015,_
_Proceedings, volume 9450 of Lecture Notes in Computer Science, pages 372–386._
Springer, 2015.
-----
10 Jan Jakub˚uv et al.
10. Thibault Gauthier and Cezary Kaliszyk. Aligning concepts across proof assistant
libraries. J. Symb. Comput., 90:89–123, 2019.
11. Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Initial experiments with
statistical conjecturing over large formal corpora. In Andrea Kohlhase, Paul Libbrecht, Bruce R. Miller, Adam Naumowicz, Walther Neuper, Pedro Quaresma,
Frank Wm. Tompa, and Martin Suda, editors, Joint Proceedings of the FM4M,
_MathUI, and ThEdu Workshops, Doctoral Program, and Work in Progress at the_
_Conference on Intelligent Computer Mathematics 2016 co-located with the 9th Con-_
_ference on Intelligent Computer Mathematics (CICM 2016), Bialystok, Poland,_
_July 25-29, 2016, volume 1785 of CEUR Workshop Proceedings, pages 219–228._
CEUR-WS.org, 2016.
12. Zarathustra Goertzel, Jan Jakub˚uv, and Josef Urban. ENIGMAWatch:
ProofWatch meets ENIGMA. In Serenella Cerrito and Andrei Popescu, editors,
_Automated Reasoning with Analytic Tableaux and Related Methods, pages 374–388,_
Cham, 2019. Springer International Publishing.
13. Jan Jakubuv and Josef Urban. ENIGMA: efficient learning-based inference guiding
machine. In Herman Geuvers, Matthew England, Osman Hasan, Florian Rabe,
and Olaf Teschke, editors, Intelligent Computer Mathematics - 10th International
_Conference, CICM 2017, Edinburgh, UK, July 17-21, 2017, Proceedings, volume_
10383 of Lecture Notes in Computer Science, pages 292–302. Springer, 2017.
14. Jan Jakubuv and Josef Urban. Enhancing ENIGMA given clause guidance. In
Florian Rabe, William M. Farmer, Grant O. Passmore, and Abdou Youssef, editors, Intelligent Computer Mathematics - 11th International Conference, CICM
_2018, Hagenberg, Austria, August 13-17, 2018, Proceedings, volume 11006 of Lec-_
_ture Notes in Computer Science, pages 118–124. Springer, 2018._
15. Jan Jakubuv and Josef Urban. Hammering Mizar by learning clause guidance. In
John Harrison, John O’Leary, and Andrew Tolmach, editors, 10th International
_Conference on Interactive Theorem Proving, ITP 2019, September 9-12, 2019,_
_Portland, OR, USA, volume 141 of LIPIcs, pages 34:1–34:8. Schloss Dagstuhl -_
Leibniz-Zentrum f¨ur Informatik, 2019.
16. Cezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with
Flyspeck. J. Autom. Reasoning, 53(2):173–213, 2014.
17. Cezary Kaliszyk and Josef Urban. HOL(y)Hammer: Online ATP service for HOL
Light. Mathematics in Computer Science, 9(1):5–22, 2015.
18. Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning,
55(3):245–256, 2015.
19. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei
Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree.
In NIPS, pages 3146–3154, 2017.
20. Laura Kov´acs and Andrei Voronkov. First-order theorem proving and Vampire. In
Natasha Sharygina and Helmut Veith, editors, CAV, volume 8044 of LNCS, pages
1–35. Springer, 2013.
21. Miroslav Ols´ak, Cezary Kaliszyk, and Josef Urban. Property invariant embedding
for automated reasoning. CoRR, abs/1911.12073, 2019.
22. Ross A. Overbeek. A new class of automated theorem-proving algorithms. _J._
_ACM, 21(2):191–200, April 1974._
23. Stephan Schulz. Learning search control knowledge for equational deduction, volume 230 of DISKI. Infix Akademische Verlagsgesellschaft, 2000.
24. Stephan Schulz. Learning search control knowledge for equational theorem proving. In Franz Baader, Gerhard Brewka, and Thomas Eiter, editors, KI 2001:
-----
ENIGMA Anonymous 11
_Advances in Artificial Intelligence, Joint German/Austrian Conference on AI, Vi-_
_enna, Austria, September 19-21, 2001, Proceedings, volume 2174 of Lecture Notes_
_in Computer Science, pages 320–334. Springer, 2001._
25. Stephan Schulz. E - A Brainiac Theorem Prover. AI Commun., 15(2-3):111–126,
2002.
26. Stephan Schulz. Fingerprint Indexing for Paramodulation and Rewriting. In Bernhard Gramlich, Ulrike Sattler, and Dale Miller, editors, Proc. of the 6st IJCAR,
_Manchester, volume 7364 of LNAI, pages 477–483. Springer, 2012._
27. Stephan Schulz. Simple and efficient clause subsumption with feature vector indexing. In Automated Reasoning and Mathematics, volume 7788 of Lecture Notes
_in Computer Science, pages 45–67. Springer, 2013._
28. Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. _J._
_Autom. Reasoning, 37(1-2):21–43, 2006._
29. Josef Urban, Geoff Sutcliffe, Petr Pudl´ak, and Jiˇr´ı Vyskoˇcil. MaLARea SG1 Machine Learner for Automated Reasoning with Semantic Guidance. In Alessandro
Armando, Peter Baumgartner, and Gilles Dowek, editors, IJCAR, volume 5195 of
_LNCS, pages 441–456. Springer, 2008._
30. Robert Veroff. Using hints to increase the effectiveness of an automated reasoning
program: Case studies. J. Autom. Reasoning, 16(3):223–239, 1996.
**A** **Additional Data From the Experiments**
This appendix presents additional data from the experiments in Section 6. Figure 3 shows the results of the grid search for GNN models on one tenth of all
benchmark problems done in order to find the best-performing parameters for
_query and context sizes. The x-axis plots the query size, the y-axis plots the_
context size, while the z-axis plots the ATP performance, that is, the number
of solved problems. Recall that the grid search was performed on a randomly
selected but fixed tenth of all benchmark problems with a 10 s real-time limit per
problem. For N0 and N1, there is a separate graph for each iteration, showing
only the best epochs. For 2, there are two graphs for models from epoch 20
_N_
and 50. Note how the later epoch 50 becomes more independent on the context
size. The ranges of the grid search parameters were extended in later iterations
when the best-performing value was at the graph edge.
Figure 4 shows the grid search results for the best LightGBM’s GBDT models
from iterations 1, 2, and 3 (denoted here 0, 1, and 2). The x-axis plots the
_D_ _D_ _D_
number of tree leaves, the y-axis plots the tree depth, while the z-axis plots
the number of solved problems. There are two models from the second iteration
( 1), showing the effect of different learning rate (η). Again, the ranges of meta_D_
parameters were updated in between the iterations by a human engineer.
Figure 5 shows the training accuracies and training loss for the LightGBM
model 2. Accuracies (TPR and TNR) of the training data are computed from
_D_
the first iteration ( 0). The values for loss (z) are inverted (1 _z) so that higher_
_T_ _−_
values correspond to better models which makes a visual comparison easier. We
can see a clear correlation between the accuracies and the loss, but not so clear
correlation with the ATP performance. The ATP performance of D2 is the same
as in Figure 4, repeated here for convenience.
-----
12 Jan Jakub˚uv et al.
Figure 2 compares the lengths of the discovered proofs. We can see that
there is no systematic difference in this metric between the base strategy and
the ENIGMA ones.
**Fig. 2. Scatter plots for the lengths of the discovered proofs (the x-axis for S, and the**
_y-axis for S ⊕D2 and S ⊕N2, respectively)._
Finally, we have compared the feature vectors of the symbol-dependent and
symbol-independent versions of the GBDTs. On the same data, we observe
roughly 2x more collisions. The symbol-independent version has around 1% of
colliding feature vectors, while the symbol-dependent version has 0.42%.
**B** **Discussion of Anonymization**
Our use of symbol-independent arity-based features for GBDTs differs from
Schulz’s anonymous clause patterns [24,23] (CPs) used in E for proof guidance
and from Gauthier and Kaliszyk’s (GK) anonymous abstractions used for their
concept alignments between ITP libraries [10] in two ways:
1. In both CP and GK, serial (de Bruijn-style) numbering of abstracted symbols of the same arity is used. I.e., the term h(g(a)) will get abstracted to
_F_ 11(F 12(F 01)). Our encoding is just F 1(F 1(F 0)). It is even more lossy,
because it is the same for h(h(a)).
2. ENIGMA with gradient boosting decision trees (GBDTs) can be (approximately) thought of as implementing weighted feature-based clause classification where the feature weights are learned. Whereas both in CP and GK,
exact matching is used after the abstraction is done.[10] In CP, this is used for
hint-style guidance of E. There, for clauses, such serial numbering however
isn’t stable under literal reordering and subsumption. Partial heuristics can
10 We thank Stephan Schulz for pointing out that although CPs used exact matching
by default, matching up to a certain depth was also implemented.
-----
ENIGMA Anonymous 13
be used, such as normalization based on a fixed global ordering done in both
CP and GK.
Addressing the latter issue (stability under reordering of literals and subsumption) leads to the NP hardness of (hint) matching/subsumption. I.e., the
abstracted subsumption task can be encoded as standard first-order subsumption for clauses where terms like F 11(F 12(F 01)) are encoded as
_apply1(X1, apply1(X2, apply0(X3))). The NP hardness of subsumption is how-_
ever here more serious in practice than in standard ATP because only applications behave as non-variable symbols during the matching.
Thus, the difference between our anonymous approach and CP is practically the same as between the standard symbol-based ENIGMA guidance and
standard hint-based [30] guidance. In the former the matching (actually, clause
classification) is approximate, weighted and learned, while with hints the clause
matching/classification is crisp, logic-rooted and preprogrammed, sometimes
running into the NP hardness issues. Our latest comparison [12] done over the
Mizar/MPTP corpus in the symbol-based setting showed better performance of
ENIGMA over using hints, most likely due to better generalization behavior of
ENIGMA based on the statistical (GBDT) learning.
Note also that the variable and symbol statistics features to some extent
alleviate the conflicts obtained with our encoding. E.g., h(g(a)) and h(h(a))
will have different symbol statistics (Section 3) features. To some extent, such
features are similar to Schulz’s feature vector and fingerprint indexing [27,26].
-----
14 Jan Jakub˚uv et al.
(N )
-----
ENIGMA Anonymous 15
**Fig. 4. Grid search results for LightGBM GBDT models (** _i)._
_D_
-----
16 Jan Jakub˚uv et al.
-----
| [
"Karel, Chvalovskỳ",
"Bartosz, Piotrowski",
"Jan, Jakubuv",
"Josef, Urban",
"Martin, Suda",
"Miroslav, Olšák"
] | 2020-02-01T00:00:00 | null | false | 40 | 2 | null | https://arxiv.org/abs/2002.05406 | https://arxiv.org/abs/2002.05406 | https://www.semanticscholar.org/paper/3aa259d8d87d0b28582d60d9726a13cedf19f77b |